* [PATCH net-next 0/7] cxgb4: new driver submission
@ 2010-02-18  1:37 Dimitris Michailidis
  2010-02-18  1:37 ` [PATCH net-next 1/7] cxgb4: Add register and message definitions Dimitris Michailidis
  2010-02-18  1:59 ` [PATCH net-next 0/7] cxgb4: new driver submission David Miller
  0 siblings, 2 replies; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev


The following 7 patches add cxgb4, a new driver for Chelsio's new 1G and 10G
cards.  At this time this is for review and comments; I'll be sending an
updated patch series once any review comments are incorporated.
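
A note on conventions: the register and message headers in this series use a
uniform S_/M_/V_/G_/F_ macro scheme (bit shift, field mask, make value, get
value, single-bit flag).  A minimal, purely illustrative sketch of composing
and decoding one such field, using the ULP_MODE macros from t4_msg.h below
(the example_* helper names are made up and not part of the driver):

	/* assumes t4_msg.h is included */
	static inline unsigned int example_make_ulp_mode(void)
	{
		/* place ULP_MODE_ISCSI (2) into bits 11:8 of an option word */
		return V_ULP_MODE(ULP_MODE_ISCSI);
	}

	static inline unsigned int example_get_ulp_mode(unsigned int opt0)
	{
		/* (opt0 >> S_ULP_MODE) & M_ULP_MODE */
		return G_ULP_MODE(opt0);
	}

In the message structures themselves the option words are big-endian on the
wire, e.g. the __be64 opt0 in struct cpl_pass_open_req.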

 drivers/net/Kconfig            |   25 +
 drivers/net/Makefile           |    1 +
 drivers/net/cxgb4/Makefile     |    7 +
 drivers/net/cxgb4/cxgb4.h      |  754 +++++++
 drivers/net/cxgb4/cxgb4_main.c | 4203 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/cxgb4/cxgb4_uld.h  |  230 +++
 drivers/net/cxgb4/l2t.c        |  627 ++++++
 drivers/net/cxgb4/l2t.h        |  110 ++
 drivers/net/cxgb4/sge.c        | 2463 +++++++++++++++++++++++
 drivers/net/cxgb4/t4_hw.c      | 3116 +++++++++++++++++++++++++++++
 drivers/net/cxgb4/t4_hw.h      |  129 ++
 drivers/net/cxgb4/t4_msg.h     | 1353 +++++++++++++
 drivers/net/cxgb4/t4_regs.h    | 2428 +++++++++++++++++++++++
 drivers/net/cxgb4/t4fw_api.h   | 3727 +++++++++++++++++++++++++++++++++++
 14 files changed, 19173 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/Makefile
 create mode 100644 drivers/net/cxgb4/cxgb4.h
 create mode 100644 drivers/net/cxgb4/cxgb4_main.c
 create mode 100644 drivers/net/cxgb4/cxgb4_uld.h
 create mode 100644 drivers/net/cxgb4/l2t.c
 create mode 100644 drivers/net/cxgb4/l2t.h
 create mode 100644 drivers/net/cxgb4/sge.c
 create mode 100644 drivers/net/cxgb4/t4_hw.c
 create mode 100644 drivers/net/cxgb4/t4_hw.h
 create mode 100644 drivers/net/cxgb4/t4_msg.h
 create mode 100644 drivers/net/cxgb4/t4_regs.h
 create mode 100644 drivers/net/cxgb4/t4fw_api.h


* [PATCH net-next 1/7] cxgb4: Add register and message definitions
  2010-02-18  1:37 [PATCH net-next 0/7] cxgb4: new driver submission Dimitris Michailidis
@ 2010-02-18  1:37 ` Dimitris Michailidis
  2010-02-18  1:37   ` [PATCH net-next 2/7] cxgb4: Add FW API definitions Dimitris Michailidis
  2010-02-18  1:59 ` [PATCH net-next 0/7] cxgb4: new driver submission David Miller
  1 sibling, 1 reply; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/cxgb4/t4_msg.h  | 1353 ++++++++++++++++++++++++
 drivers/net/cxgb4/t4_regs.h | 2428 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3781 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/t4_msg.h
 create mode 100644 drivers/net/cxgb4/t4_regs.h

diff --git a/drivers/net/cxgb4/t4_msg.h b/drivers/net/cxgb4/t4_msg.h
new file mode 100644
index 0000000..9d22c5c
--- /dev/null
+++ b/drivers/net/cxgb4/t4_msg.h
@@ -0,0 +1,1353 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __T4_MSG_H
+#define __T4_MSG_H
+
+#include <linux/types.h>
+
+enum {
+	CPL_PASS_OPEN_REQ     = 0x1,
+	CPL_PASS_ACCEPT_RPL   = 0x2,
+	CPL_ACT_OPEN_REQ      = 0x3,
+	CPL_SET_TCB_FIELD     = 0x5,
+	CPL_GET_TCB           = 0x6,
+	CPL_CLOSE_CON_REQ     = 0x8,
+	CPL_CLOSE_LISTSRV_REQ = 0x9,
+	CPL_ABORT_REQ         = 0xA,
+	CPL_ABORT_RPL         = 0xB,
+	CPL_RX_DATA_ACK       = 0xD,
+	CPL_TX_PKT            = 0xE,
+	CPL_L2T_WRITE_REQ     = 0x12,
+	CPL_TID_RELEASE       = 0x1A,
+
+	CPL_CLOSE_LISTSRV_RPL = 0x20,
+	CPL_L2T_WRITE_RPL     = 0x23,
+	CPL_PASS_OPEN_RPL     = 0x24,
+	CPL_ACT_OPEN_RPL      = 0x25,
+	CPL_PEER_CLOSE        = 0x26,
+	CPL_ABORT_REQ_RSS     = 0x2B,
+	CPL_ABORT_RPL_RSS     = 0x2D,
+
+	CPL_CLOSE_CON_RPL     = 0x32,
+	CPL_ISCSI_HDR         = 0x33,
+	CPL_RDMA_CQE          = 0x35,
+	CPL_RDMA_CQE_READ_RSP = 0x36,
+	CPL_RDMA_CQE_ERR      = 0x37,
+	CPL_RX_DATA           = 0x39,
+	CPL_SET_TCB_RPL       = 0x3A,
+	CPL_RX_PKT            = 0x3B,
+	CPL_RX_DDP_COMPLETE   = 0x3F,
+
+	CPL_ACT_ESTABLISH     = 0x40,
+	CPL_PASS_ESTABLISH    = 0x41,
+	CPL_RX_DATA_DDP       = 0x42,
+	CPL_PASS_ACCEPT_REQ   = 0x44,
+
+	CPL_RDMA_READ_REQ     = 0x60,
+
+	CPL_PASS_OPEN_REQ6    = 0x81,
+	CPL_ACT_OPEN_REQ6     = 0x83,
+
+	CPL_RDMA_TERMINATE    = 0xA2,
+	CPL_RDMA_WRITE        = 0xA4,
+	CPL_SGE_EGR_UPDATE    = 0xA5,
+
+	CPL_TRACE_PKT         = 0xB0,
+
+	CPL_FW4_MSG           = 0xC0,
+	CPL_FW4_PLD           = 0xC1,
+	CPL_FW4_ACK           = 0xC3,
+
+	CPL_FW6_MSG           = 0xE0,
+	CPL_FW6_PLD           = 0xE1,
+	CPL_TX_PKT_LSO        = 0xED,
+	CPL_TX_PKT_XT         = 0xEE,
+
+	NUM_CPL_CMDS    /* must be last and previous entries must be sorted */
+};
+
+enum CPL_error {
+	CPL_ERR_NONE               = 0,
+	CPL_ERR_TCAM_PARITY        = 1,
+	CPL_ERR_TCAM_FULL          = 3,
+	CPL_ERR_BAD_LENGTH         = 15,
+	CPL_ERR_BAD_ROUTE          = 18,
+	CPL_ERR_CONN_RESET         = 20,
+	CPL_ERR_CONN_EXIST_SYNRECV = 21,
+	CPL_ERR_CONN_EXIST         = 22,
+	CPL_ERR_ARP_MISS           = 23,
+	CPL_ERR_BAD_SYN            = 24,
+	CPL_ERR_CONN_TIMEDOUT      = 30,
+	CPL_ERR_XMIT_TIMEDOUT      = 31,
+	CPL_ERR_PERSIST_TIMEDOUT   = 32,
+	CPL_ERR_FINWAIT2_TIMEDOUT  = 33,
+	CPL_ERR_KEEPALIVE_TIMEDOUT = 34,
+	CPL_ERR_RTX_NEG_ADVICE     = 35,
+	CPL_ERR_PERSIST_NEG_ADVICE = 36,
+	CPL_ERR_ABORT_FAILED       = 42,
+	CPL_ERR_IWARP_FLM          = 50,
+};
+
+enum {
+	CPL_CONN_POLICY_ASK  = 1,
+};
+
+enum {
+	ULP_MODE_NONE          = 0,
+	ULP_MODE_ISCSI         = 2,
+	ULP_MODE_RDMA          = 4,
+	ULP_MODE_FCOE          = 6,
+};
+
+enum {
+	ULP_CRC_HEADER = 1 << 0,
+	ULP_CRC_DATA   = 1 << 1
+};
+
+enum {
+	CPL_PASS_OPEN_ACCEPT,
+	CPL_PASS_OPEN_REJECT,
+	CPL_PASS_OPEN_ACCEPT_TNL
+};
+
+enum {
+	CPL_ABORT_SEND_RST = 0,
+	CPL_ABORT_NO_RST,
+};
+
+enum {                     /* TX_PKT_XT checksum types */
+	TX_CSUM_TCP    = 0,
+	TX_CSUM_UDP    = 1,
+	TX_CSUM_CRC16  = 4,
+	TX_CSUM_CRC32  = 5,
+	TX_CSUM_CRC32C = 6,
+	TX_CSUM_FCOE   = 7,
+	TX_CSUM_TCPIP  = 8,
+	TX_CSUM_UDPIP  = 9,
+	TX_CSUM_TCPIP6 = 10,
+	TX_CSUM_UDPIP6 = 11,
+	TX_CSUM_IP     = 12,
+};
+
+enum {                     /* packet type in CPL_RX_PKT */
+	PKTYPE_XACT_UCAST = 0,
+	PKTYPE_HASH_UCAST = 1,
+	PKTYPE_XACT_MCAST = 2,
+	PKTYPE_HASH_MCAST = 3,
+	PKTYPE_PROMISC    = 4,
+	PKTYPE_HPROMISC   = 5,
+	PKTYPE_BCAST      = 6
+};
+
+enum {                     /* DMAC type in CPL_RX_PKT */
+	DATYPE_UCAST,
+	DATYPE_MCAST,
+	DATYPE_BCAST
+};
+
+enum {                     /* RSS hash type */
+	RSS_HASH_NONE = 0, /* no hash computed */
+	RSS_HASH_IP   = 1, /* IP or IPv6 2-tuple hash */
+	RSS_HASH_TCP  = 2, /* TCP 4-tuple hash */
+	RSS_HASH_UDP  = 3  /* UDP 4-tuple hash */
+};
+
+union opcode_tid {
+	__be32 opcode_tid;
+	__u8 opcode;
+};
+
+#define S_CPL_OPCODE    24
+#define V_CPL_OPCODE(x) ((x) << S_CPL_OPCODE)
+#define G_CPL_OPCODE(x) (((x) >> S_CPL_OPCODE) & 0xFF)
+#define G_TID(x)    ((x) & 0xFFFFFF)
+
+#define MK_OPCODE_TID(opcode, tid) (V_CPL_OPCODE(opcode) | (tid))
+
+#define OPCODE_TID(cmd) ((cmd)->ot.opcode_tid)
+
+/* extract the TID from a CPL command */
+#define GET_TID(cmd) (G_TID(ntohl(OPCODE_TID(cmd))))
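+/*
+ * Example: for a struct cpl_peer_close *cpl, GET_TID(cpl) returns the 24-bit
+ * TID and G_CPL_OPCODE(ntohl(OPCODE_TID(cpl))) returns the 8-bit opcode.
+ */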
+
+/* partitioning of TID fields that also carry a queue id */
+#define S_TID_TID    0
+#define M_TID_TID    0x3fff
+#define V_TID_TID(x) ((x) << S_TID_TID)
+#define G_TID_TID(x) (((x) >> S_TID_TID) & M_TID_TID)
+
+#define S_TID_QID    14
+#define M_TID_QID    0x3ff
+#define V_TID_QID(x) ((x) << S_TID_QID)
+#define G_TID_QID(x) (((x) >> S_TID_QID) & M_TID_QID)
+
+struct tcp_options {
+	__be16 mss;
+	__u8 wsf;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+	__u8 :4;
+	__u8 unknown:1;
+	__u8 :1;
+	__u8 sack:1;
+	__u8 tstamp:1;
+#else
+	__u8 tstamp:1;
+	__u8 sack:1;
+	__u8 :1;
+	__u8 unknown:1;
+	__u8 :4;
+#endif
+};
+
+struct rss_header {
+	__u8 opcode;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+	__u8 channel:2;
+	__u8 filter_hit:1;
+	__u8 filter_tid:1;
+	__u8 hash_type:2;
+	__u8 ipv6:1;
+	__u8 send2fw:1;
+#else
+	__u8 send2fw:1;
+	__u8 ipv6:1;
+	__u8 hash_type:2;
+	__u8 filter_tid:1;
+	__u8 filter_hit:1;
+	__u8 channel:2;
+#endif
+	__be16 qid;
+	__be32 hash_val;
+};
+
+#define S_HASHTYPE 20
+#define M_HASHTYPE 0x3
+#define G_HASHTYPE(x) (((x) >> S_HASHTYPE) & M_HASHTYPE)
+
+#define S_QNUM 0
+#define M_QNUM 0xFFFF
+#define G_QNUM(x) (((x) >> S_QNUM) & M_QNUM)
+
+struct work_request_hdr {
+	__be32 wr_hi;
+	__be32 wr_mid;
+	__be64 wr_lo;
+};
+
+/* wr_mid fields */
+#define S_WR_LEN16    0
+#define M_WR_LEN16    0xFF
+#define V_WR_LEN16(x) ((x) << S_WR_LEN16)
+#define G_WR_LEN16(x) (((x) >> S_WR_LEN16) & M_WR_LEN16)
+
+/* wr_hi fields */
+#define S_WR_OP    24
+#define M_WR_OP    0xFF
+#define V_WR_OP(x) ((__u64)(x) << S_WR_OP)
+#define G_WR_OP(x) (((x) >> S_WR_OP) & M_WR_OP)
+
+#define WR_HDR struct work_request_hdr wr
+
+/* option 0 fields */
+#define S_ACCEPT_MODE    0
+#define M_ACCEPT_MODE    0x3
+#define V_ACCEPT_MODE(x) ((x) << S_ACCEPT_MODE)
+#define G_ACCEPT_MODE(x) (((x) >> S_ACCEPT_MODE) & M_ACCEPT_MODE)
+
+#define S_TX_CHAN    2
+#define M_TX_CHAN    0x3
+#define V_TX_CHAN(x) ((x) << S_TX_CHAN)
+#define G_TX_CHAN(x) (((x) >> S_TX_CHAN) & M_TX_CHAN)
+
+#define S_NO_CONG    4
+#define V_NO_CONG(x) ((x) << S_NO_CONG)
+#define F_NO_CONG    V_NO_CONG(1U)
+
+#define S_DELACK    5
+#define V_DELACK(x) ((x) << S_DELACK)
+#define F_DELACK    V_DELACK(1U)
+
+#define S_NON_OFFLOAD    7
+#define V_NON_OFFLOAD(x) ((x) << S_NON_OFFLOAD)
+#define F_NON_OFFLOAD    V_NON_OFFLOAD(1U)
+
+#define S_ULP_MODE    8
+#define M_ULP_MODE    0xF
+#define V_ULP_MODE(x) ((x) << S_ULP_MODE)
+#define G_ULP_MODE(x) (((x) >> S_ULP_MODE) & M_ULP_MODE)
+
+#define S_RCV_BUFSIZ    12
+#define M_RCV_BUFSIZ    0x3FF
+#define V_RCV_BUFSIZ(x) ((x) << S_RCV_BUFSIZ)
+#define G_RCV_BUFSIZ(x) (((x) >> S_RCV_BUFSIZ) & M_RCV_BUFSIZ)
+
+#define S_DSCP    22
+#define M_DSCP    0x3F
+#define V_DSCP(x) ((x) << S_DSCP)
+#define G_DSCP(x) (((x) >> S_DSCP) & M_DSCP)
+
+#define S_SMAC_SEL    28
+#define M_SMAC_SEL    0xFF
+#define V_SMAC_SEL(x) ((__u64)(x) << S_SMAC_SEL)
+#define G_SMAC_SEL(x) (((x) >> S_SMAC_SEL) & M_SMAC_SEL)
+
+#define S_L2T_IDX    36
+#define M_L2T_IDX    0xFFF
+#define V_L2T_IDX(x) ((__u64)(x) << S_L2T_IDX)
+#define G_L2T_IDX(x) (((x) >> S_L2T_IDX) & M_L2T_IDX)
+
+#define S_TCAM_BYPASS    48
+#define V_TCAM_BYPASS(x) ((__u64)(x) << S_TCAM_BYPASS)
+#define F_TCAM_BYPASS    V_TCAM_BYPASS(1ULL)
+
+#define S_NAGLE    49
+#define V_NAGLE(x) ((__u64)(x) << S_NAGLE)
+#define F_NAGLE    V_NAGLE(1ULL)
+
+#define S_WND_SCALE    50
+#define M_WND_SCALE    0xF
+#define V_WND_SCALE(x) ((__u64)(x) << S_WND_SCALE)
+#define G_WND_SCALE(x) (((x) >> S_WND_SCALE) & M_WND_SCALE)
+
+#define S_KEEP_ALIVE    54
+#define V_KEEP_ALIVE(x) ((__u64)(x) << S_KEEP_ALIVE)
+#define F_KEEP_ALIVE    V_KEEP_ALIVE(1ULL)
+
+#define S_MAX_RT    55
+#define M_MAX_RT    0xF
+#define V_MAX_RT(x) ((__u64)(x) << S_MAX_RT)
+#define G_MAX_RT(x) (((x) >> S_MAX_RT) & M_MAX_RT)
+
+#define S_MAX_RT_OVERRIDE    59
+#define V_MAX_RT_OVERRIDE(x) ((__u64)(x) << S_MAX_RT_OVERRIDE)
+#define F_MAX_RT_OVERRIDE    V_MAX_RT_OVERRIDE(1ULL)
+
+#define S_MSS_IDX    60
+#define M_MSS_IDX    0xF
+#define V_MSS_IDX(x) ((__u64)(x) << S_MSS_IDX)
+#define G_MSS_IDX(x) (((x) >> S_MSS_IDX) & M_MSS_IDX)
+
+/* option 1 fields */
+#define S_SYN_RSS_ENABLE    0
+#define V_SYN_RSS_ENABLE(x) ((x) << S_SYN_RSS_ENABLE)
+#define F_SYN_RSS_ENABLE    V_SYN_RSS_ENABLE(1U)
+
+#define S_SYN_RSS_USE_HASH    1
+#define V_SYN_RSS_USE_HASH(x) ((x) << S_SYN_RSS_USE_HASH)
+#define F_SYN_RSS_USE_HASH    V_SYN_RSS_USE_HASH(1U)
+
+#define S_SYN_RSS_QUEUE    2
+#define M_SYN_RSS_QUEUE    0x3FF
+#define V_SYN_RSS_QUEUE(x) ((x) << S_SYN_RSS_QUEUE)
+#define G_SYN_RSS_QUEUE(x) (((x) >> S_SYN_RSS_QUEUE) & M_SYN_RSS_QUEUE)
+
+#define S_LISTEN_INTF    12
+#define M_LISTEN_INTF    0xFF
+#define V_LISTEN_INTF(x) ((x) << S_LISTEN_INTF)
+#define G_LISTEN_INTF(x) (((x) >> S_LISTEN_INTF) & M_LISTEN_INTF)
+
+#define S_LISTEN_FILTER    20
+#define V_LISTEN_FILTER(x) ((x) << S_LISTEN_FILTER)
+#define F_LISTEN_FILTER    V_LISTEN_FILTER(1U)
+
+#define S_CONN_POLICY    22
+#define M_CONN_POLICY    0x3
+#define V_CONN_POLICY(x) ((x) << S_CONN_POLICY)
+#define G_CONN_POLICY(x) (((x) >> S_CONN_POLICY) & M_CONN_POLICY)
+
+/* option 2 fields */
+#define S_RSS_QUEUE    0
+#define M_RSS_QUEUE    0x3FF
+#define V_RSS_QUEUE(x) ((x) << S_RSS_QUEUE)
+#define G_RSS_QUEUE(x) (((x) >> S_RSS_QUEUE) & M_RSS_QUEUE)
+
+#define S_RSS_QUEUE_VALID    10
+#define V_RSS_QUEUE_VALID(x) ((x) << S_RSS_QUEUE_VALID)
+#define F_RSS_QUEUE_VALID    V_RSS_QUEUE_VALID(1U)
+
+#define S_RX_COALESCE_VALID    11
+#define V_RX_COALESCE_VALID(x) ((x) << S_RX_COALESCE_VALID)
+#define F_RX_COALESCE_VALID    V_RX_COALESCE_VALID(1U)
+
+#define S_RX_COALESCE    12
+#define M_RX_COALESCE    0x3
+#define V_RX_COALESCE(x) ((x) << S_RX_COALESCE)
+#define G_RX_COALESCE(x) (((x) >> S_RX_COALESCE) & M_RX_COALESCE)
+
+#define S_CONG_CNTRL    14
+#define M_CONG_CNTRL    0x3
+#define V_CONG_CNTRL(x) ((x) << S_CONG_CNTRL)
+#define G_CONG_CNTRL(x) (((x) >> S_CONG_CNTRL) & M_CONG_CNTRL)
+
+#define S_PACE    16
+#define M_PACE    0x3
+#define V_PACE(x) ((x) << S_PACE)
+#define G_PACE(x) (((x) >> S_PACE) & M_PACE)
+
+#define S_CONG_CNTRL_VALID    18
+#define V_CONG_CNTRL_VALID(x) ((x) << S_CONG_CNTRL_VALID)
+#define F_CONG_CNTRL_VALID    V_CONG_CNTRL_VALID(1U)
+
+#define S_PACE_VALID    19
+#define V_PACE_VALID(x) ((x) << S_PACE_VALID)
+#define F_PACE_VALID    V_PACE_VALID(1U)
+
+#define S_RX_FC_DISABLE    20
+#define V_RX_FC_DISABLE(x) ((x) << S_RX_FC_DISABLE)
+#define F_RX_FC_DISABLE    V_RX_FC_DISABLE(1U)
+
+#define S_RX_FC_DDP    21
+#define V_RX_FC_DDP(x) ((x) << S_RX_FC_DDP)
+#define F_RX_FC_DDP    V_RX_FC_DDP(1U)
+
+#define S_RX_FC_VALID    22
+#define V_RX_FC_VALID(x) ((x) << S_RX_FC_VALID)
+#define F_RX_FC_VALID    V_RX_FC_VALID(1U)
+
+#define S_TX_QUEUE    23
+#define M_TX_QUEUE    0x7
+#define V_TX_QUEUE(x) ((x) << S_TX_QUEUE)
+#define G_TX_QUEUE(x) (((x) >> S_TX_QUEUE) & M_TX_QUEUE)
+
+#define S_RX_CHANNEL    26
+#define V_RX_CHANNEL(x) ((x) << S_RX_CHANNEL)
+#define F_RX_CHANNEL    V_RX_CHANNEL(1U)
+
+#define S_CCTRL_ECN    27
+#define V_CCTRL_ECN(x) ((x) << S_CCTRL_ECN)
+#define F_CCTRL_ECN    V_CCTRL_ECN(1U)
+
+#define S_WND_SCALE_EN    28
+#define V_WND_SCALE_EN(x) ((x) << S_WND_SCALE_EN)
+#define F_WND_SCALE_EN    V_WND_SCALE_EN(1U)
+
+#define S_TSTAMPS_EN    29
+#define V_TSTAMPS_EN(x) ((x) << S_TSTAMPS_EN)
+#define F_TSTAMPS_EN    V_TSTAMPS_EN(1U)
+
+#define S_SACK_EN    30
+#define V_SACK_EN(x) ((x) << S_SACK_EN)
+#define F_SACK_EN    V_SACK_EN(1U)
+
+struct cpl_pass_open_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 local_port;
+	__be16 peer_port;
+	__be32 local_ip;
+	__be32 peer_ip;
+	__be64 opt0;
+	__be64 opt1;
+};
+
+struct cpl_pass_open_req6 {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 local_port;
+	__be16 peer_port;
+	__be64 local_ip_hi;
+	__be64 local_ip_lo;
+	__be64 peer_ip_hi;
+	__be64 peer_ip_lo;
+	__be64 opt0;
+	__be64 opt1;
+};
+
+struct cpl_pass_open_rpl {
+	union opcode_tid ot;
+	__u8 rsvd[3];
+	__u8 status;
+};
+
+struct cpl_pass_establish {
+	union opcode_tid ot;
+	__be32 rsvd;
+	__be32 tos_stid;
+	__be16 mac_idx;
+	__be16 tcp_opt;
+	__be32 snd_isn;
+	__be32 rcv_isn;
+};
+
+/* cpl_pass_establish.tos_stid fields */
+#define S_PASS_OPEN_TID    0
+#define M_PASS_OPEN_TID    0xFFFFFF
+#define V_PASS_OPEN_TID(x) ((x) << S_PASS_OPEN_TID)
+#define G_PASS_OPEN_TID(x) (((x) >> S_PASS_OPEN_TID) & M_PASS_OPEN_TID)
+
+#define S_PASS_OPEN_TOS    24
+#define M_PASS_OPEN_TOS    0xFF
+#define V_PASS_OPEN_TOS(x) ((x) << S_PASS_OPEN_TOS)
+#define G_PASS_OPEN_TOS(x) (((x) >> S_PASS_OPEN_TOS) & M_PASS_OPEN_TOS)
+
+/* cpl_pass_establish.tcp_opt fields (also applies to cpl_act_establish) */
+#define G_TCPOPT_WSCALE_OK(x)  (((x) >> 5) & 1)
+#define G_TCPOPT_SACK(x)       (((x) >> 6) & 1)
+#define G_TCPOPT_TSTAMP(x)     (((x) >> 7) & 1)
+#define G_TCPOPT_SND_WSCALE(x) (((x) >> 8) & 0xf)
+#define G_TCPOPT_MSS(x)        (((x) >> 12) & 0xf)
+
+struct cpl_pass_accept_req {
+	union opcode_tid ot;
+	__be16 rsvd;
+	__be16 len;
+	__be32 hdr_len;
+	__be16 vlan;
+	__be16 l2info;
+	__be32 tos_stid;
+	struct tcp_options tcpopt;
+};
+
+/* cpl_pass_accept_req.hdr_len fields */
+#define S_SYN_RX_CHAN    0
+#define M_SYN_RX_CHAN    0xF
+#define V_SYN_RX_CHAN(x) ((x) << S_SYN_RX_CHAN)
+#define G_SYN_RX_CHAN(x) (((x) >> S_SYN_RX_CHAN) & M_SYN_RX_CHAN)
+
+#define S_TCP_HDR_LEN    10
+#define M_TCP_HDR_LEN    0x3F
+#define V_TCP_HDR_LEN(x) ((x) << S_TCP_HDR_LEN)
+#define G_TCP_HDR_LEN(x) (((x) >> S_TCP_HDR_LEN) & M_TCP_HDR_LEN)
+
+#define S_IP_HDR_LEN    16
+#define M_IP_HDR_LEN    0x3FF
+#define V_IP_HDR_LEN(x) ((x) << S_IP_HDR_LEN)
+#define G_IP_HDR_LEN(x) (((x) >> S_IP_HDR_LEN) & M_IP_HDR_LEN)
+
+#define S_ETH_HDR_LEN    26
+#define M_ETH_HDR_LEN    0x1F
+#define V_ETH_HDR_LEN(x) ((x) << S_ETH_HDR_LEN)
+#define G_ETH_HDR_LEN(x) (((x) >> S_ETH_HDR_LEN) & M_ETH_HDR_LEN)
+
+/* cpl_pass_accept_req.l2info fields */
+#define S_SYN_MAC_IDX    0
+#define M_SYN_MAC_IDX    0x1FF
+#define V_SYN_MAC_IDX(x) ((x) << S_SYN_MAC_IDX)
+#define G_SYN_MAC_IDX(x) (((x) >> S_SYN_MAC_IDX) & M_SYN_MAC_IDX)
+
+#define S_SYN_XACT_MATCH    9
+#define V_SYN_XACT_MATCH(x) ((x) << S_SYN_XACT_MATCH)
+#define F_SYN_XACT_MATCH    V_SYN_XACT_MATCH(1U)
+
+#define S_SYN_INTF    12
+#define M_SYN_INTF    0xF
+#define V_SYN_INTF(x) ((x) << S_SYN_INTF)
+#define G_SYN_INTF(x) (((x) >> S_SYN_INTF) & M_SYN_INTF)
+
+struct cpl_pass_accept_rpl {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 opt2;
+	__be64 opt0;
+};
+
+struct cpl_act_open_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 local_port;
+	__be16 peer_port;
+	__be32 local_ip;
+	__be32 peer_ip;
+	__be64 opt0;
+	__be32 params;
+	__be32 opt2;
+};
+
+struct cpl_act_open_req6 {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 local_port;
+	__be16 peer_port;
+	__be64 local_ip_hi;
+	__be64 local_ip_lo;
+	__be64 peer_ip_hi;
+	__be64 peer_ip_lo;
+	__be64 opt0;
+	__be32 params;
+	__be32 opt2;
+};
+
+struct cpl_act_open_rpl {
+	union opcode_tid ot;
+	__be32 atid_status;
+};
+
+/* cpl_act_open_rpl.atid_status fields */
+#define S_AOPEN_STATUS    0
+#define M_AOPEN_STATUS    0xFF
+#define V_AOPEN_STATUS(x) ((x) << S_AOPEN_STATUS)
+#define G_AOPEN_STATUS(x) (((x) >> S_AOPEN_STATUS) & M_AOPEN_STATUS)
+
+#define S_AOPEN_ATID    8
+#define M_AOPEN_ATID    0xFFFFFF
+#define V_AOPEN_ATID(x) ((x) << S_AOPEN_ATID)
+#define G_AOPEN_ATID(x) (((x) >> S_AOPEN_ATID) & M_AOPEN_ATID)
+
+struct cpl_act_establish {
+	union opcode_tid ot;
+	__be32 rsvd;
+	__be32 tos_atid;
+	__be16 mac_idx;
+	__be16 tcp_opt;
+	__be32 snd_isn;
+	__be32 rcv_isn;
+};
+
+struct cpl_get_tcb {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 reply_ctrl;
+	__be16 cookie;
+};
+
+/* cpl_get_tcb.reply_ctrl fields */
+#define S_QUEUENO    0
+#define M_QUEUENO    0x3FF
+#define V_QUEUENO(x) ((x) << S_QUEUENO)
+#define G_QUEUENO(x) (((x) >> S_QUEUENO) & M_QUEUENO)
+
+#define S_REPLY_CHAN    14
+#define V_REPLY_CHAN(x) ((x) << S_REPLY_CHAN)
+#define F_REPLY_CHAN    V_REPLY_CHAN(1U)
+
+#define S_NO_REPLY    15
+#define V_NO_REPLY(x) ((x) << S_NO_REPLY)
+#define F_NO_REPLY    V_NO_REPLY(1U)
+
+struct cpl_get_tcb_rpl {
+	union opcode_tid ot;
+	__u8 cookie;
+	__u8 status;
+	__be16 len;
+};
+
+struct cpl_set_tcb_field {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 reply_ctrl;
+	__be16 word_cookie;
+	__be64 mask;
+	__be64 val;
+};
+
+/* cpl_set_tcb_field.word_cookie fields */
+#define S_WORD    0
+#define M_WORD    0x1F
+#define V_WORD(x) ((x) << S_WORD)
+#define G_WORD(x) (((x) >> S_WORD) & M_WORD)
+
+#define S_COOKIE    5
+#define M_COOKIE    0x7
+#define V_COOKIE(x) ((x) << S_COOKIE)
+#define G_COOKIE(x) (((x) >> S_COOKIE) & M_COOKIE)
+
+struct cpl_set_tcb_rpl {
+	union opcode_tid ot;
+	__be16 rsvd;
+	__u8   cookie;
+	__u8   status;
+	__be64 oldval;
+};
+
+struct cpl_close_con_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd;
+};
+
+struct cpl_close_con_rpl {
+	union opcode_tid ot;
+	__u8  rsvd[3];
+	__u8  status;
+	__be32 snd_nxt;
+	__be32 rcv_nxt;
+};
+
+struct cpl_close_listsvr_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 reply_ctrl;
+	__be16 rsvd;
+};
+
+/* additional cpl_close_listsvr_req.reply_ctrl field */
+#define S_LISTSVR_IPV6    14
+#define V_LISTSVR_IPV6(x) ((x) << S_LISTSVR_IPV6)
+#define F_LISTSVR_IPV6    V_LISTSVR_IPV6(1U)
+
+struct cpl_close_listsvr_rpl {
+	union opcode_tid ot;
+	__u8 rsvd[3];
+	__u8 status;
+};
+
+struct cpl_abort_req_rss {
+	union opcode_tid ot;
+	__u8  rsvd[3];
+	__u8  status;
+};
+
+struct cpl_abort_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd0;
+	__u8  rsvd1;
+	__u8  cmd;
+	__u8  rsvd2[6];
+};
+
+struct cpl_abort_rpl_rss {
+	union opcode_tid ot;
+	__u8  rsvd[3];
+	__u8  status;
+};
+
+struct cpl_abort_rpl {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd0;
+	__u8  rsvd1;
+	__u8  cmd;
+	__u8  rsvd2[6];
+};
+
+struct cpl_peer_close {
+	union opcode_tid ot;
+	__be32 rcv_nxt;
+};
+
+struct cpl_tid_release {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd;
+};
+
+struct cpl_tx_pkt_core {
+	__be32 ctrl0;
+	__be16 pack;
+	__be16 len;
+	__be64 ctrl1;
+};
+
+struct cpl_tx_pkt {
+	WR_HDR;
+	struct cpl_tx_pkt_core c;
+};
+
+#define cpl_tx_pkt_xt cpl_tx_pkt
+
+/* cpl_tx_pkt_core.ctrl0 fields */
+#define S_TXPKT_VF    0
+#define M_TXPKT_VF    0xFF
+#define V_TXPKT_VF(x) ((x) << S_TXPKT_VF)
+#define G_TXPKT_VF(x) (((x) >> S_TXPKT_VF) & M_TXPKT_VF)
+
+#define S_TXPKT_PF    8
+#define M_TXPKT_PF    0x7
+#define V_TXPKT_PF(x) ((x) << S_TXPKT_PF)
+#define G_TXPKT_PF(x) (((x) >> S_TXPKT_PF) & M_TXPKT_PF)
+
+#define S_TXPKT_VF_VLD    11
+#define V_TXPKT_VF_VLD(x) ((x) << S_TXPKT_VF_VLD)
+#define F_TXPKT_VF_VLD    V_TXPKT_VF_VLD(1U)
+
+#define S_TXPKT_OVLAN_IDX    12
+#define M_TXPKT_OVLAN_IDX    0xF
+#define V_TXPKT_OVLAN_IDX(x) ((x) << S_TXPKT_OVLAN_IDX)
+#define G_TXPKT_OVLAN_IDX(x) (((x) >> S_TXPKT_OVLAN_IDX) & M_TXPKT_OVLAN_IDX)
+
+#define S_TXPKT_INTF    16
+#define M_TXPKT_INTF    0xF
+#define V_TXPKT_INTF(x) ((x) << S_TXPKT_INTF)
+#define G_TXPKT_INTF(x) (((x) >> S_TXPKT_INTF) & M_TXPKT_INTF)
+
+#define S_TXPKT_SPECIAL_STAT    20
+#define V_TXPKT_SPECIAL_STAT(x) ((x) << S_TXPKT_SPECIAL_STAT)
+#define F_TXPKT_SPECIAL_STAT    V_TXPKT_SPECIAL_STAT(1U)
+
+#define S_TXPKT_INS_OVLAN    21
+#define V_TXPKT_INS_OVLAN(x) ((x) << S_TXPKT_INS_OVLAN)
+#define F_TXPKT_INS_OVLAN    V_TXPKT_INS_OVLAN(1U)
+
+#define S_TXPKT_STAT_DIS    22
+#define V_TXPKT_STAT_DIS(x) ((x) << S_TXPKT_STAT_DIS)
+#define F_TXPKT_STAT_DIS    V_TXPKT_STAT_DIS(1U)
+
+#define S_TXPKT_LOOPBACK    23
+#define V_TXPKT_LOOPBACK(x) ((x) << S_TXPKT_LOOPBACK)
+#define F_TXPKT_LOOPBACK    V_TXPKT_LOOPBACK(1U)
+
+#define S_TXPKT_OPCODE    24
+#define M_TXPKT_OPCODE    0xFF
+#define V_TXPKT_OPCODE(x) ((x) << S_TXPKT_OPCODE)
+#define G_TXPKT_OPCODE(x) (((x) >> S_TXPKT_OPCODE) & M_TXPKT_OPCODE)
+
+/* cpl_tx_pkt_core.ctrl1 fields */
+#define S_TXPKT_SA_IDX    0
+#define M_TXPKT_SA_IDX    0xFFF
+#define V_TXPKT_SA_IDX(x) ((x) << S_TXPKT_SA_IDX)
+#define G_TXPKT_SA_IDX(x) (((x) >> S_TXPKT_SA_IDX) & M_TXPKT_SA_IDX)
+
+#define S_TXPKT_CSUM_END    12
+#define M_TXPKT_CSUM_END    0xFF
+#define V_TXPKT_CSUM_END(x) ((x) << S_TXPKT_CSUM_END)
+#define G_TXPKT_CSUM_END(x) (((x) >> S_TXPKT_CSUM_END) & M_TXPKT_CSUM_END)
+
+#define S_TXPKT_CSUM_START    20
+#define M_TXPKT_CSUM_START    0x3FF
+#define V_TXPKT_CSUM_START(x) ((x) << S_TXPKT_CSUM_START)
+#define G_TXPKT_CSUM_START(x) (((x) >> S_TXPKT_CSUM_START) & M_TXPKT_CSUM_START)
+
+#define S_TXPKT_IPHDR_LEN    20
+#define M_TXPKT_IPHDR_LEN    0x3FFF
+#define V_TXPKT_IPHDR_LEN(x) ((__u64)(x) << S_TXPKT_IPHDR_LEN)
+#define G_TXPKT_IPHDR_LEN(x) (((x) >> S_TXPKT_IPHDR_LEN) & M_TXPKT_IPHDR_LEN)
+
+#define S_TXPKT_CSUM_LOC    30
+#define M_TXPKT_CSUM_LOC    0x3FF
+#define V_TXPKT_CSUM_LOC(x) ((__u64)(x) << S_TXPKT_CSUM_LOC)
+#define G_TXPKT_CSUM_LOC(x) (((x) >> S_TXPKT_CSUM_LOC) & M_TXPKT_CSUM_LOC)
+
+#define S_TXPKT_ETHHDR_LEN    34
+#define M_TXPKT_ETHHDR_LEN    0x3F
+#define V_TXPKT_ETHHDR_LEN(x) ((__u64)(x) << S_TXPKT_ETHHDR_LEN)
+#define G_TXPKT_ETHHDR_LEN(x) (((x) >> S_TXPKT_ETHHDR_LEN) & M_TXPKT_ETHHDR_LEN)
+
+#define S_TXPKT_CSUM_TYPE    40
+#define M_TXPKT_CSUM_TYPE    0xF
+#define V_TXPKT_CSUM_TYPE(x) ((__u64)(x) << S_TXPKT_CSUM_TYPE)
+#define G_TXPKT_CSUM_TYPE(x) (((x) >> S_TXPKT_CSUM_TYPE) & M_TXPKT_CSUM_TYPE)
+
+#define S_TXPKT_VLAN    44
+#define M_TXPKT_VLAN    0xFFFF
+#define V_TXPKT_VLAN(x) ((__u64)(x) << S_TXPKT_VLAN)
+#define G_TXPKT_VLAN(x) (((x) >> S_TXPKT_VLAN) & M_TXPKT_VLAN)
+
+#define S_TXPKT_VLAN_VLD    60
+#define V_TXPKT_VLAN_VLD(x) ((__u64)(x) << S_TXPKT_VLAN_VLD)
+#define F_TXPKT_VLAN_VLD    V_TXPKT_VLAN_VLD(1ULL)
+
+#define S_TXPKT_IPSEC    61
+#define V_TXPKT_IPSEC(x) ((__u64)(x) << S_TXPKT_IPSEC)
+#define F_TXPKT_IPSEC    V_TXPKT_IPSEC(1ULL)
+
+#define S_TXPKT_IPCSUM_DIS    62
+#define V_TXPKT_IPCSUM_DIS(x) ((__u64)(x) << S_TXPKT_IPCSUM_DIS)
+#define F_TXPKT_IPCSUM_DIS    V_TXPKT_IPCSUM_DIS(1ULL)
+
+#define S_TXPKT_L4CSUM_DIS    63
+#define V_TXPKT_L4CSUM_DIS(x) ((__u64)(x) << S_TXPKT_L4CSUM_DIS)
+#define F_TXPKT_L4CSUM_DIS    V_TXPKT_L4CSUM_DIS(1ULL)
+
+struct cpl_tx_pkt_lso {
+	WR_HDR;
+	__be32 lso_ctrl;
+	__be16 ipid_ofst;
+	__be16 mss;
+	__be32 seqno_offset;
+	__be32 len;
+	/* encapsulated CPL (TX_PKT, TX_PKT_XT or TX_DATA) follows here */
+};
+
+/* cpl_tx_pkt_lso.lso_ctrl fields */
+#define S_LSO_TCPHDR_LEN    0
+#define M_LSO_TCPHDR_LEN    0xF
+#define V_LSO_TCPHDR_LEN(x) ((x) << S_LSO_TCPHDR_LEN)
+#define G_LSO_TCPHDR_LEN(x) (((x) >> S_LSO_TCPHDR_LEN) & M_LSO_TCPHDR_LEN)
+
+#define S_LSO_IPHDR_LEN    4
+#define M_LSO_IPHDR_LEN    0xFFF
+#define V_LSO_IPHDR_LEN(x) ((x) << S_LSO_IPHDR_LEN)
+#define G_LSO_IPHDR_LEN(x) (((x) >> S_LSO_IPHDR_LEN) & M_LSO_IPHDR_LEN)
+
+#define S_LSO_ETHHDR_LEN    16
+#define M_LSO_ETHHDR_LEN    0xF
+#define V_LSO_ETHHDR_LEN(x) ((x) << S_LSO_ETHHDR_LEN)
+#define G_LSO_ETHHDR_LEN(x) (((x) >> S_LSO_ETHHDR_LEN) & M_LSO_ETHHDR_LEN)
+
+#define S_LSO_IPV6    20
+#define V_LSO_IPV6(x) ((x) << S_LSO_IPV6)
+#define F_LSO_IPV6    V_LSO_IPV6(1U)
+
+#define S_LSO_OFLD_ENCAP    21
+#define V_LSO_OFLD_ENCAP(x) ((x) << S_LSO_OFLD_ENCAP)
+#define F_LSO_OFLD_ENCAP    V_LSO_OFLD_ENCAP(1U)
+
+#define S_LSO_LAST_SLICE    22
+#define V_LSO_LAST_SLICE(x) ((x) << S_LSO_LAST_SLICE)
+#define F_LSO_LAST_SLICE    V_LSO_LAST_SLICE(1U)
+
+#define S_LSO_FIRST_SLICE    23
+#define V_LSO_FIRST_SLICE(x) ((x) << S_LSO_FIRST_SLICE)
+#define F_LSO_FIRST_SLICE    V_LSO_FIRST_SLICE(1U)
+
+#define S_LSO_OPCODE    24
+#define M_LSO_OPCODE    0xFF
+#define V_LSO_OPCODE(x) ((x) << S_LSO_OPCODE)
+#define G_LSO_OPCODE(x) (((x) >> S_LSO_OPCODE) & M_LSO_OPCODE)
+
+/* cpl_tx_pkt_lso.mss fields */
+#define S_LSO_MSS    0
+#define M_LSO_MSS    0x3FFF
+#define V_LSO_MSS(x) ((x) << S_LSO_MSS)
+#define G_LSO_MSS(x) (((x) >> S_LSO_MSS) & M_LSO_MSS)
+
+#define S_LSO_IPID_SPLIT    15
+#define V_LSO_IPID_SPLIT(x) ((x) << S_LSO_IPID_SPLIT)
+#define F_LSO_IPID_SPLIT    V_LSO_IPID_SPLIT(1U)
+
+struct cpl_iscsi_hdr {
+	union opcode_tid ot;
+	__be16 pdu_len_ddp;
+	__be16 len;
+	__be32 seq;
+	__be16 urg;
+	__u8 rsvd;
+	__u8 status;
+};
+
+/* cpl_iscsi_hdr.pdu_len_ddp fields */
+#define S_ISCSI_PDU_LEN    0
+#define M_ISCSI_PDU_LEN    0x7FFF
+#define V_ISCSI_PDU_LEN(x) ((x) << S_ISCSI_PDU_LEN)
+#define G_ISCSI_PDU_LEN(x) (((x) >> S_ISCSI_PDU_LEN) & M_ISCSI_PDU_LEN)
+
+#define S_ISCSI_DDP    15
+#define V_ISCSI_DDP(x) ((x) << S_ISCSI_DDP)
+#define F_ISCSI_DDP    V_ISCSI_DDP(1U)
+
+struct cpl_rx_data {
+	union opcode_tid ot;
+	__be16 rsvd;
+	__be16 len;
+	__be32 seq;
+	__be16 urg;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+	__u8 dack_mode:2;
+	__u8 psh:1;
+	__u8 heartbeat:1;
+	__u8 ddp_off:1;
+	__u8 :3;
+#else
+	__u8 :3;
+	__u8 ddp_off:1;
+	__u8 heartbeat:1;
+	__u8 psh:1;
+	__u8 dack_mode:2;
+#endif
+	__u8 status;
+};
+
+struct cpl_rx_data_ack {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 credit_dack;
+};
+
+/* cpl_rx_data_ack.credit_dack fields */
+#define S_RX_CREDITS    0
+#define M_RX_CREDITS    0x3FFFFFF
+#define V_RX_CREDITS(x) ((x) << S_RX_CREDITS)
+#define G_RX_CREDITS(x) (((x) >> S_RX_CREDITS) & M_RX_CREDITS)
+
+#define S_RX_MODULATE_TX    26
+#define V_RX_MODULATE_TX(x) ((x) << S_RX_MODULATE_TX)
+#define F_RX_MODULATE_TX    V_RX_MODULATE_TX(1U)
+
+#define S_RX_MODULATE_RX    27
+#define V_RX_MODULATE_RX(x) ((x) << S_RX_MODULATE_RX)
+#define F_RX_MODULATE_RX    V_RX_MODULATE_RX(1U)
+
+#define S_RX_FORCE_ACK    28
+#define V_RX_FORCE_ACK(x) ((x) << S_RX_FORCE_ACK)
+#define F_RX_FORCE_ACK    V_RX_FORCE_ACK(1U)
+
+#define S_RX_DACK_MODE    29
+#define M_RX_DACK_MODE    0x3
+#define V_RX_DACK_MODE(x) ((x) << S_RX_DACK_MODE)
+#define G_RX_DACK_MODE(x) (((x) >> S_RX_DACK_MODE) & M_RX_DACK_MODE)
+
+#define S_RX_DACK_CHANGE    31
+#define V_RX_DACK_CHANGE(x) ((x) << S_RX_DACK_CHANGE)
+#define F_RX_DACK_CHANGE    V_RX_DACK_CHANGE(1U)
+
+struct cpl_rx_pkt {
+	__u8 opcode;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+	__u8 iff:4;
+	__u8 csum_calc:1;
+	__u8 ipmi_pkt:1;
+	__u8 vlan_ex:1;
+	__u8 ip_frag:1;
+#else
+	__u8 ip_frag:1;
+	__u8 vlan_ex:1;
+	__u8 ipmi_pkt:1;
+	__u8 csum_calc:1;
+	__u8 iff:4;
+#endif
+	__be16 csum;
+	__be16 vlan;
+	__be16 len;
+	__be32 l2info;
+	__be16 hdr_len;
+	__be16 err_vec;
+};
+
+/* rx_pkt.l2info fields */
+#define S_RX_ETHHDR_LEN    0
+#define M_RX_ETHHDR_LEN    0x1F
+#define V_RX_ETHHDR_LEN(x) ((x) << S_RX_ETHHDR_LEN)
+#define G_RX_ETHHDR_LEN(x) (((x) >> S_RX_ETHHDR_LEN) & M_RX_ETHHDR_LEN)
+
+#define S_RX_PKTYPE    5
+#define M_RX_PKTYPE    0x7
+#define V_RX_PKTYPE(x) ((x) << S_RX_PKTYPE)
+#define G_RX_PKTYPE(x) (((x) >> S_RX_PKTYPE) & M_RX_PKTYPE)
+
+#define S_RX_MACIDX    8
+#define M_RX_MACIDX    0x1FF
+#define V_RX_MACIDX(x) ((x) << S_RX_MACIDX)
+#define G_RX_MACIDX(x) (((x) >> S_RX_MACIDX) & M_RX_MACIDX)
+
+#define S_RX_DATYPE    18
+#define M_RX_DATYPE    0x3
+#define V_RX_DATYPE(x) ((x) << S_RX_DATYPE)
+#define G_RX_DATYPE(x) (((x) >> S_RX_DATYPE) & M_RX_DATYPE)
+
+#define S_RXF_PSH    20
+#define V_RXF_PSH(x) ((x) << S_RXF_PSH)
+#define F_RXF_PSH    V_RXF_PSH(1U)
+
+#define S_RXF_SYN    21
+#define V_RXF_SYN(x) ((x) << S_RXF_SYN)
+#define F_RXF_SYN    V_RXF_SYN(1U)
+
+#define S_RXF_UDP    22
+#define V_RXF_UDP(x) ((x) << S_RXF_UDP)
+#define F_RXF_UDP    V_RXF_UDP(1U)
+
+#define S_RXF_TCP    23
+#define V_RXF_TCP(x) ((x) << S_RXF_TCP)
+#define F_RXF_TCP    V_RXF_TCP(1U)
+
+#define S_RXF_IP    24
+#define V_RXF_IP(x) ((x) << S_RXF_IP)
+#define F_RXF_IP    V_RXF_IP(1U)
+
+#define S_RXF_IP6    25
+#define V_RXF_IP6(x) ((x) << S_RXF_IP6)
+#define F_RXF_IP6    V_RXF_IP6(1U)
+
+#define S_RXF_SYN_COOKIE    26
+#define V_RXF_SYN_COOKIE(x) ((x) << S_RXF_SYN_COOKIE)
+#define F_RXF_SYN_COOKIE    V_RXF_SYN_COOKIE(1U)
+
+#define S_RXF_FCOE    26
+#define V_RXF_FCOE(x) ((x) << S_RXF_FCOE)
+#define F_RXF_FCOE    V_RXF_FCOE(1U)
+
+#define S_RXF_LRO    27
+#define V_RXF_LRO(x) ((x) << S_RXF_LRO)
+#define F_RXF_LRO    V_RXF_LRO(1U)
+
+#define S_RX_CHAN    28
+#define M_RX_CHAN    0xF
+#define V_RX_CHAN(x) ((x) << S_RX_CHAN)
+#define G_RX_CHAN(x) (((x) >> S_RX_CHAN) & M_RX_CHAN)
+
+/* rx_pkt.hdr_len fields */
+#define S_RX_TCPHDR_LEN    0
+#define M_RX_TCPHDR_LEN    0x3F
+#define V_RX_TCPHDR_LEN(x) ((x) << S_RX_TCPHDR_LEN)
+#define G_RX_TCPHDR_LEN(x) (((x) >> S_RX_TCPHDR_LEN) & M_RX_TCPHDR_LEN)
+
+#define S_RX_IPHDR_LEN    6
+#define M_RX_IPHDR_LEN    0x3FF
+#define V_RX_IPHDR_LEN(x) ((x) << S_RX_IPHDR_LEN)
+#define G_RX_IPHDR_LEN(x) (((x) >> S_RX_IPHDR_LEN) & M_RX_IPHDR_LEN)
+
+/* rx_pkt.err_vec fields */
+#define S_RXERR_OR    0
+#define V_RXERR_OR(x) ((x) << S_RXERR_OR)
+#define F_RXERR_OR    V_RXERR_OR(1U)
+
+#define S_RXERR_MAC    1
+#define V_RXERR_MAC(x) ((x) << S_RXERR_MAC)
+#define F_RXERR_MAC    V_RXERR_MAC(1U)
+
+#define S_RXERR_IPVERS    2
+#define V_RXERR_IPVERS(x) ((x) << S_RXERR_IPVERS)
+#define F_RXERR_IPVERS    V_RXERR_IPVERS(1U)
+
+#define S_RXERR_FRAG    3
+#define V_RXERR_FRAG(x) ((x) << S_RXERR_FRAG)
+#define F_RXERR_FRAG    V_RXERR_FRAG(1U)
+
+#define S_RXERR_ATTACK    4
+#define V_RXERR_ATTACK(x) ((x) << S_RXERR_ATTACK)
+#define F_RXERR_ATTACK    V_RXERR_ATTACK(1U)
+
+#define S_RXERR_ETHHDR_LEN    5
+#define V_RXERR_ETHHDR_LEN(x) ((x) << S_RXERR_ETHHDR_LEN)
+#define F_RXERR_ETHHDR_LEN    V_RXERR_ETHHDR_LEN(1U)
+
+#define S_RXERR_IPHDR_LEN    6
+#define V_RXERR_IPHDR_LEN(x) ((x) << S_RXERR_IPHDR_LEN)
+#define F_RXERR_IPHDR_LEN    V_RXERR_IPHDR_LEN(1U)
+
+#define S_RXERR_TCPHDR_LEN    7
+#define V_RXERR_TCPHDR_LEN(x) ((x) << S_RXERR_TCPHDR_LEN)
+#define F_RXERR_TCPHDR_LEN    V_RXERR_TCPHDR_LEN(1U)
+
+#define S_RXERR_PKT_LEN    8
+#define V_RXERR_PKT_LEN(x) ((x) << S_RXERR_PKT_LEN)
+#define F_RXERR_PKT_LEN    V_RXERR_PKT_LEN(1U)
+
+#define S_RXERR_TCP_OPT    9
+#define V_RXERR_TCP_OPT(x) ((x) << S_RXERR_TCP_OPT)
+#define F_RXERR_TCP_OPT    V_RXERR_TCP_OPT(1U)
+
+#define S_RXERR_IPCSUM    12
+#define V_RXERR_IPCSUM(x) ((x) << S_RXERR_IPCSUM)
+#define F_RXERR_IPCSUM    V_RXERR_IPCSUM(1U)
+
+#define S_RXERR_CSUM    13
+#define V_RXERR_CSUM(x) ((x) << S_RXERR_CSUM)
+#define F_RXERR_CSUM    V_RXERR_CSUM(1U)
+
+#define S_RXERR_PING    14
+#define V_RXERR_PING(x) ((x) << S_RXERR_PING)
+#define F_RXERR_PING    V_RXERR_PING(1U)
+
+struct cpl_trace_pkt {
+	__u8 opcode;
+	__u8 intf;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+	__u8 runt:4;
+	__u8 filter_hit:4;
+	__u8 :6;
+	__u8 err:1;
+	__u8 trunc:1;
+#else
+	__u8 filter_hit:4;
+	__u8 runt:4;
+	__u8 trunc:1;
+	__u8 err:1;
+	__u8 :6;
+#endif
+	__be16 rsvd;
+	__be16 len;
+	__be64 tstamp;
+};
+
+struct cpl_l2t_write_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 params;
+	__be16 l2t_idx;
+	__be16 vlan;
+	__u8   dst_mac[6];
+};
+
+/* cpl_l2t_write_req.params fields */
+#define S_L2T_W_INFO    2
+#define M_L2T_W_INFO    0x3F
+#define V_L2T_W_INFO(x) ((x) << S_L2T_W_INFO)
+#define G_L2T_W_INFO(x) (((x) >> S_L2T_W_INFO) & M_L2T_W_INFO)
+
+#define S_L2T_W_PORT    8
+#define M_L2T_W_PORT    0xF
+#define V_L2T_W_PORT(x) ((x) << S_L2T_W_PORT)
+#define G_L2T_W_PORT(x) (((x) >> S_L2T_W_PORT) & M_L2T_W_PORT)
+
+#define S_L2T_W_NOREPLY    15
+#define V_L2T_W_NOREPLY(x) ((x) << S_L2T_W_NOREPLY)
+#define F_L2T_W_NOREPLY    V_L2T_W_NOREPLY(1U)
+
+struct cpl_l2t_write_rpl {
+	union opcode_tid ot;
+	__u8 status;
+	__u8 rsvd[3];
+};
+
+struct cpl_rdma_terminate {
+	union opcode_tid ot;
+	__be16 rsvd;
+	__be16 len;
+};
+
+struct cpl_sge_egr_update {
+	__be32 opcode_qid;
+	__be16 cidx;
+	__be16 pidx;
+};
+
+/* cpl_sge_egr_update.opcode_qid fields */
+#define S_EGR_QID    0
+#define M_EGR_QID    0x1FFFF
+#define V_EGR_QID(x) ((x) << S_EGR_QID)
+#define G_EGR_QID(x) (((x) >> S_EGR_QID) & M_EGR_QID)
+
+struct cpl_fw4_pld {
+	u8 opcode;
+	u8 rsvd0[3];
+	u8 type;
+	u8 rsvd1;
+	__be16 len;
+	__be64 data;
+	__be64 rsvd2;
+};
+
+struct cpl_fw6_pld {
+	u8 opcode;
+	u8 rsvd[5];
+	__be16 len;
+	__be64 data[4];
+};
+
+struct cpl_fw4_msg {
+	u8 opcode;
+	u8 type;
+	__be16 rsvd0;
+	__be32 rsvd1;
+	__be64 data[2];
+};
+
+struct cpl_fw4_ack {
+	union opcode_tid ot;
+	u8 credits;
+	u8 rsvd0[2];
+	u8 seq_vld;
+	__be32 snd_nxt;
+	__be32 snd_una;
+	__be64 rsvd1;
+};
+
+struct cpl_fw6_msg {
+	u8 opcode;
+	u8 type;
+	__be16 rsvd0;
+	__be32 rsvd1;
+	__be64 data[4];
+};
+
+/* ULP_TX opcodes */
+enum {
+	ULP_TX_MEM_READ = 2,
+	ULP_TX_MEM_WRITE = 3,
+	ULP_TX_PKT = 4
+};
+
+enum {
+	ULP_TX_SC_NOOP = 0x80,
+	ULP_TX_SC_IMM  = 0x81,
+	ULP_TX_SC_DSGL = 0x82,
+	ULP_TX_SC_ISGL = 0x83
+};
+
+#define S_ULPTX_CMD    24
+#define M_ULPTX_CMD    0xFF
+#define V_ULPTX_CMD(x) ((x) << S_ULPTX_CMD)
+
+#define S_ULPTX_LEN16    0
+#define M_ULPTX_LEN16    0xFF
+#define V_ULPTX_LEN16(x) ((x) << S_ULPTX_LEN16)
+
+struct ulptx_sge_pair {
+	__be32 len[2];
+	__be64 addr[2];
+};
+
+struct ulptx_sgl {
+	__be32 cmd_nsge;
+	__be32 len0;
+	__be64 addr0;
+	struct ulptx_sge_pair sge[0];
+};
+
+struct ulptx_isge {
+	__be32 stag;
+	__be32 len;
+	__be64 target_ofst;
+};
+
+struct ulptx_isgl {
+	__be32 cmd_nisge;
+	__be32 rsvd;
+	struct ulptx_isge sge[0];
+};
+
+#define S_ULPTX_NSGE    0
+#define M_ULPTX_NSGE    0xFFFF
+#define V_ULPTX_NSGE(x) ((x) << S_ULPTX_NSGE)
+
+struct ulp_mem_io {
+	WR_HDR;
+	__be32 cmd;
+	__be32 len16;             /* command length */
+	__be32 dlen;              /* data length in 32-byte units */
+	__be32 lock_addr;
+};
+
+/* additional ulp_mem_io.cmd fields */
+#define S_ULP_MEMIO_ORDER    23
+#define V_ULP_MEMIO_ORDER(x) ((x) << S_ULP_MEMIO_ORDER)
+#define F_ULP_MEMIO_ORDER    V_ULP_MEMIO_ORDER(1U)
+
+/* ulp_mem_io.lock_addr fields */
+#define S_ULP_MEMIO_ADDR    0
+#define M_ULP_MEMIO_ADDR    0x7FFFFFF
+#define V_ULP_MEMIO_ADDR(x) ((x) << S_ULP_MEMIO_ADDR)
+
+#define S_ULP_MEMIO_LOCK    31
+#define V_ULP_MEMIO_LOCK(x) ((x) << S_ULP_MEMIO_LOCK)
+#define F_ULP_MEMIO_LOCK    V_ULP_MEMIO_LOCK(1U)
+
+/* ulp_mem_io.dlen fields */
+#define S_ULP_MEMIO_DATA_LEN    0
+#define M_ULP_MEMIO_DATA_LEN    0x1F
+#define V_ULP_MEMIO_DATA_LEN(x) ((x) << S_ULP_MEMIO_DATA_LEN)
+
+struct ulp_txpkt {
+	__be32 cmd_dest;
+	__be32 len;
+};
+
+/* ulp_txpkt.cmd_dest fields */
+#define S_ULP_TXPKT_DEST    16
+#define M_ULP_TXPKT_DEST    0x3
+#define V_ULP_TXPKT_DEST(x) ((x) << S_ULP_TXPKT_DEST)
+
+#endif  /* __T4_MSG_H */
diff --git a/drivers/net/cxgb4/t4_regs.h b/drivers/net/cxgb4/t4_regs.h
new file mode 100644
index 0000000..4e7981a
--- /dev/null
+++ b/drivers/net/cxgb4/t4_regs.h
@@ -0,0 +1,2428 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __T4_REGS_H
+#define __T4_REGS_H
+
+#define MYPF_BASE 0x1b000
+#define MYPF_REG(reg_addr) (MYPF_BASE + (reg_addr))
+
+#define PF0_BASE 0x1e000
+#define PF0_REG(reg_addr) (PF0_BASE + (reg_addr))
+
+#define PF_STRIDE 0x400
+#define PF_BASE(idx) (PF0_BASE + (idx) * PF_STRIDE)
+#define PF_REG(idx, reg) (PF_BASE(idx) + (reg))
+
+#define MYPORT_BASE 0x1c000
+#define MYPORT_REG(reg_addr) (MYPORT_BASE + (reg_addr))
+
+#define PORT0_BASE 0x20000
+#define PORT0_REG(reg_addr) (PORT0_BASE + (reg_addr))
+
+#define PORT1_BASE 0x22000
+#define PORT1_REG(reg_addr) (PORT1_BASE + (reg_addr))
+
+#define PORT2_BASE 0x24000
+#define PORT2_REG(reg_addr) (PORT2_BASE + (reg_addr))
+
+#define PORT3_BASE 0x26000
+#define PORT3_REG(reg_addr) (PORT3_BASE + (reg_addr))
+
+#define PORT_STRIDE 0x2000
+#define PORT_BASE(idx) (PORT0_BASE + (idx) * PORT_STRIDE)
+#define PORT_REG(idx, reg) (PORT_BASE(idx) + (reg))
+
+#define EDC_STRIDE (EDC_1_BASE_ADDR - EDC_0_BASE_ADDR)
+#define EDC_REG(reg, idx) (reg + EDC_STRIDE * idx)
+
+#define PCIE_MEM_ACCESS_REG(reg_addr, idx) ((reg_addr) + (idx) * 8)
+#define NUM_PCIE_MEM_ACCESS_INSTANCES 8
+
+#define PCIE_MAILBOX_REG(reg_addr, idx) ((reg_addr) + (idx) * 8)
+#define NUM_PCIE_MAILBOX_INSTANCES 1
+
+#define MC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
+#define NUM_MC_BIST_STATUS_INSTANCES 18
+
+#define EDC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
+#define NUM_EDC_BIST_STATUS_INSTANCES 18
+
+#define CIM_PF_MAILBOX_DATA(idx) (A_CIM_PF_MAILBOX_DATA + (idx) * 4)
+#define NUM_CIM_PF_MAILBOX_DATA_INSTANCES 16
+
+#define MPS_TRC_FILTER_MATCH_CTL_A(idx) (A_MPS_TRC_FILTER_MATCH_CTL_A + (idx) * 4)
+#define NUM_MPS_TRC_FILTER_MATCH_CTL_A_INSTANCES 4
+
+#define MPS_TRC_FILTER_MATCH_CTL_B(idx) (A_MPS_TRC_FILTER_MATCH_CTL_B + (idx) * 4)
+#define NUM_MPS_TRC_FILTER_MATCH_CTL_B_INSTANCES 4
+
+#define MPS_TRC_FILTER0_MATCH(idx) (A_MPS_TRC_FILTER0_MATCH + (idx) * 4)
+#define NUM_MPS_TRC_FILTER0_MATCH_INSTANCES 28
+
+#define MPS_TRC_FILTER0_DONT_CARE(idx) (A_MPS_TRC_FILTER0_DONT_CARE + (idx) * 4)
+#define NUM_MPS_TRC_FILTER0_DONT_CARE_INSTANCES 28
+
+#define MPS_TRC_FILTER1_MATCH(idx) (A_MPS_TRC_FILTER1_MATCH + (idx) * 4)
+#define NUM_MPS_TRC_FILTER1_MATCH_INSTANCES 28
+
+/* registers for module SGE */
+#define SGE_BASE_ADDR 0x1000
+
+#define A_SGE_PF_KDOORBELL 0x0
+
+#define S_QID    15
+#define M_QID    0x1ffffU
+#define V_QID(x) ((x) << S_QID)
+#define G_QID(x) (((x) >> S_QID) & M_QID)
+
+#define S_DBPRIO    14
+#define V_DBPRIO(x) ((x) << S_DBPRIO)
+#define F_DBPRIO    V_DBPRIO(1U)
+
+#define S_PIDX    0
+#define M_PIDX    0x3fffU
+#define V_PIDX(x) ((x) << S_PIDX)
+#define G_PIDX(x) (((x) >> S_PIDX) & M_PIDX)
+
+#define A_SGE_PF_GTS 0x4
+
+#define S_INGRESSQID    16
+#define M_INGRESSQID    0xffffU
+#define V_INGRESSQID(x) ((x) << S_INGRESSQID)
+#define G_INGRESSQID(x) (((x) >> S_INGRESSQID) & M_INGRESSQID)
+
+#define S_TIMERREG    13
+#define M_TIMERREG    0x7U
+#define V_TIMERREG(x) ((x) << S_TIMERREG)
+#define G_TIMERREG(x) (((x) >> S_TIMERREG) & M_TIMERREG)
+
+#define E_TIMERIDX_NONE     6
+#define E_TIMERIDX_CRED_UPD 7
+
+#define S_SEINTARM    12
+#define V_SEINTARM(x) ((x) << S_SEINTARM)
+#define F_SEINTARM    V_SEINTARM(1U)
+
+#define S_CIDXINC    0
+#define M_CIDXINC    0xfffU
+#define V_CIDXINC(x) ((x) << S_CIDXINC)
+#define G_CIDXINC(x) (((x) >> S_CIDXINC) & M_CIDXINC)
+
+#define A_SGE_CONTROL 0x1008
+
+#define S_IGRALLCPLTOFL    31
+#define V_IGRALLCPLTOFL(x) ((x) << S_IGRALLCPLTOFL)
+#define F_IGRALLCPLTOFL    V_IGRALLCPLTOFL(1U)
+
+#define S_FLSPLITMIN    22
+#define M_FLSPLITMIN    0x1ffU
+#define V_FLSPLITMIN(x) ((x) << S_FLSPLITMIN)
+#define G_FLSPLITMIN(x) (((x) >> S_FLSPLITMIN) & M_FLSPLITMIN)
+
+#define S_FLSPLITMODE    20
+#define M_FLSPLITMODE    0x3U
+#define V_FLSPLITMODE(x) ((x) << S_FLSPLITMODE)
+#define G_FLSPLITMODE(x) (((x) >> S_FLSPLITMODE) & M_FLSPLITMODE)
+
+#define S_DCASYSTYPE    19
+#define V_DCASYSTYPE(x) ((x) << S_DCASYSTYPE)
+#define F_DCASYSTYPE    V_DCASYSTYPE(1U)
+
+#define S_RXPKTCPLMODE    18
+#define V_RXPKTCPLMODE(x) ((x) << S_RXPKTCPLMODE)
+#define F_RXPKTCPLMODE    V_RXPKTCPLMODE(1U)
+
+#define S_EGRSTATUSPAGESIZE    17
+#define V_EGRSTATUSPAGESIZE(x) ((x) << S_EGRSTATUSPAGESIZE)
+#define F_EGRSTATUSPAGESIZE    V_EGRSTATUSPAGESIZE(1U)
+
+#define S_INGHINTENABLE1    15
+#define V_INGHINTENABLE1(x) ((x) << S_INGHINTENABLE1)
+#define F_INGHINTENABLE1    V_INGHINTENABLE1(1U)
+
+#define S_INGHINTENABLE0    14
+#define V_INGHINTENABLE0(x) ((x) << S_INGHINTENABLE0)
+#define F_INGHINTENABLE0    V_INGHINTENABLE0(1U)
+
+#define S_INGINTCOMPAREIDX    13
+#define V_INGINTCOMPAREIDX(x) ((x) << S_INGINTCOMPAREIDX)
+#define F_INGINTCOMPAREIDX    V_INGINTCOMPAREIDX(1U)
+
+#define S_PKTSHIFT    10
+#define M_PKTSHIFT    0x7U
+#define V_PKTSHIFT(x) ((x) << S_PKTSHIFT)
+#define G_PKTSHIFT(x) (((x) >> S_PKTSHIFT) & M_PKTSHIFT)
+
+#define S_INGPCIEBOUNDARY    7
+#define M_INGPCIEBOUNDARY    0x7U
+#define V_INGPCIEBOUNDARY(x) ((x) << S_INGPCIEBOUNDARY)
+#define G_INGPCIEBOUNDARY(x) (((x) >> S_INGPCIEBOUNDARY) & M_INGPCIEBOUNDARY)
+
+#define S_INGPADBOUNDARY    4
+#define M_INGPADBOUNDARY    0x7U
+#define V_INGPADBOUNDARY(x) ((x) << S_INGPADBOUNDARY)
+#define G_INGPADBOUNDARY(x) (((x) >> S_INGPADBOUNDARY) & M_INGPADBOUNDARY)
+
+#define S_EGRPCIEBOUNDARY    1
+#define M_EGRPCIEBOUNDARY    0x7U
+#define V_EGRPCIEBOUNDARY(x) ((x) << S_EGRPCIEBOUNDARY)
+#define G_EGRPCIEBOUNDARY(x) (((x) >> S_EGRPCIEBOUNDARY) & M_EGRPCIEBOUNDARY)
+
+#define S_GLOBALENABLE    0
+#define V_GLOBALENABLE(x) ((x) << S_GLOBALENABLE)
+#define F_GLOBALENABLE    V_GLOBALENABLE(1U)
+
+#define A_SGE_HOST_PAGE_SIZE 0x100c
+
+#define S_HOSTPAGESIZEPF7    28
+#define M_HOSTPAGESIZEPF7    0xfU
+#define V_HOSTPAGESIZEPF7(x) ((x) << S_HOSTPAGESIZEPF7)
+#define G_HOSTPAGESIZEPF7(x) (((x) >> S_HOSTPAGESIZEPF7) & M_HOSTPAGESIZEPF7)
+
+#define S_HOSTPAGESIZEPF6    24
+#define M_HOSTPAGESIZEPF6    0xfU
+#define V_HOSTPAGESIZEPF6(x) ((x) << S_HOSTPAGESIZEPF6)
+#define G_HOSTPAGESIZEPF6(x) (((x) >> S_HOSTPAGESIZEPF6) & M_HOSTPAGESIZEPF6)
+
+#define S_HOSTPAGESIZEPF5    20
+#define M_HOSTPAGESIZEPF5    0xfU
+#define V_HOSTPAGESIZEPF5(x) ((x) << S_HOSTPAGESIZEPF5)
+#define G_HOSTPAGESIZEPF5(x) (((x) >> S_HOSTPAGESIZEPF5) & M_HOSTPAGESIZEPF5)
+
+#define S_HOSTPAGESIZEPF4    16
+#define M_HOSTPAGESIZEPF4    0xfU
+#define V_HOSTPAGESIZEPF4(x) ((x) << S_HOSTPAGESIZEPF4)
+#define G_HOSTPAGESIZEPF4(x) (((x) >> S_HOSTPAGESIZEPF4) & M_HOSTPAGESIZEPF4)
+
+#define S_HOSTPAGESIZEPF3    12
+#define M_HOSTPAGESIZEPF3    0xfU
+#define V_HOSTPAGESIZEPF3(x) ((x) << S_HOSTPAGESIZEPF3)
+#define G_HOSTPAGESIZEPF3(x) (((x) >> S_HOSTPAGESIZEPF3) & M_HOSTPAGESIZEPF3)
+
+#define S_HOSTPAGESIZEPF2    8
+#define M_HOSTPAGESIZEPF2    0xfU
+#define V_HOSTPAGESIZEPF2(x) ((x) << S_HOSTPAGESIZEPF2)
+#define G_HOSTPAGESIZEPF2(x) (((x) >> S_HOSTPAGESIZEPF2) & M_HOSTPAGESIZEPF2)
+
+#define S_HOSTPAGESIZEPF1    4
+#define M_HOSTPAGESIZEPF1    0xfU
+#define V_HOSTPAGESIZEPF1(x) ((x) << S_HOSTPAGESIZEPF1)
+#define G_HOSTPAGESIZEPF1(x) (((x) >> S_HOSTPAGESIZEPF1) & M_HOSTPAGESIZEPF1)
+
+#define S_HOSTPAGESIZEPF0    0
+#define M_HOSTPAGESIZEPF0    0xfU
+#define V_HOSTPAGESIZEPF0(x) ((x) << S_HOSTPAGESIZEPF0)
+#define G_HOSTPAGESIZEPF0(x) (((x) >> S_HOSTPAGESIZEPF0) & M_HOSTPAGESIZEPF0)
+
+#define A_SGE_EGRESS_QUEUES_PER_PAGE_PF 0x1010
+
+#define S_QUEUESPERPAGEPF7    28
+#define M_QUEUESPERPAGEPF7    0xfU
+#define V_QUEUESPERPAGEPF7(x) ((x) << S_QUEUESPERPAGEPF7)
+#define G_QUEUESPERPAGEPF7(x) (((x) >> S_QUEUESPERPAGEPF7) & M_QUEUESPERPAGEPF7)
+
+#define S_QUEUESPERPAGEPF6    24
+#define M_QUEUESPERPAGEPF6    0xfU
+#define V_QUEUESPERPAGEPF6(x) ((x) << S_QUEUESPERPAGEPF6)
+#define G_QUEUESPERPAGEPF6(x) (((x) >> S_QUEUESPERPAGEPF6) & M_QUEUESPERPAGEPF6)
+
+#define S_QUEUESPERPAGEPF5    20
+#define M_QUEUESPERPAGEPF5    0xfU
+#define V_QUEUESPERPAGEPF5(x) ((x) << S_QUEUESPERPAGEPF5)
+#define G_QUEUESPERPAGEPF5(x) (((x) >> S_QUEUESPERPAGEPF5) & M_QUEUESPERPAGEPF5)
+
+#define S_QUEUESPERPAGEPF4    16
+#define M_QUEUESPERPAGEPF4    0xfU
+#define V_QUEUESPERPAGEPF4(x) ((x) << S_QUEUESPERPAGEPF4)
+#define G_QUEUESPERPAGEPF4(x) (((x) >> S_QUEUESPERPAGEPF4) & M_QUEUESPERPAGEPF4)
+
+#define S_QUEUESPERPAGEPF3    12
+#define M_QUEUESPERPAGEPF3    0xfU
+#define V_QUEUESPERPAGEPF3(x) ((x) << S_QUEUESPERPAGEPF3)
+#define G_QUEUESPERPAGEPF3(x) (((x) >> S_QUEUESPERPAGEPF3) & M_QUEUESPERPAGEPF3)
+
+#define S_QUEUESPERPAGEPF2    8
+#define M_QUEUESPERPAGEPF2    0xfU
+#define V_QUEUESPERPAGEPF2(x) ((x) << S_QUEUESPERPAGEPF2)
+#define G_QUEUESPERPAGEPF2(x) (((x) >> S_QUEUESPERPAGEPF2) & M_QUEUESPERPAGEPF2)
+
+#define S_QUEUESPERPAGEPF1    4
+#define M_QUEUESPERPAGEPF1    0xfU
+#define V_QUEUESPERPAGEPF1(x) ((x) << S_QUEUESPERPAGEPF1)
+#define G_QUEUESPERPAGEPF1(x) (((x) >> S_QUEUESPERPAGEPF1) & M_QUEUESPERPAGEPF1)
+
+#define S_QUEUESPERPAGEPF0    0
+#define M_QUEUESPERPAGEPF0    0xfU
+#define V_QUEUESPERPAGEPF0(x) ((x) << S_QUEUESPERPAGEPF0)
+#define G_QUEUESPERPAGEPF0(x) (((x) >> S_QUEUESPERPAGEPF0) & M_QUEUESPERPAGEPF0)
+
+#define A_SGE_INT_CAUSE1 0x1024
+#define A_SGE_INT_CAUSE2 0x1030
+#define A_SGE_INT_CAUSE3 0x103c
+
+#define S_ERR_FLM_DBP    31
+#define V_ERR_FLM_DBP(x) ((x) << S_ERR_FLM_DBP)
+#define F_ERR_FLM_DBP    V_ERR_FLM_DBP(1U)
+
+#define S_ERR_FLM_IDMA1    30
+#define V_ERR_FLM_IDMA1(x) ((x) << S_ERR_FLM_IDMA1)
+#define F_ERR_FLM_IDMA1    V_ERR_FLM_IDMA1(1U)
+
+#define S_ERR_FLM_IDMA0    29
+#define V_ERR_FLM_IDMA0(x) ((x) << S_ERR_FLM_IDMA0)
+#define F_ERR_FLM_IDMA0    V_ERR_FLM_IDMA0(1U)
+
+#define S_ERR_FLM_HINT    28
+#define V_ERR_FLM_HINT(x) ((x) << S_ERR_FLM_HINT)
+#define F_ERR_FLM_HINT    V_ERR_FLM_HINT(1U)
+
+#define S_ERR_PCIE_ERROR3    27
+#define V_ERR_PCIE_ERROR3(x) ((x) << S_ERR_PCIE_ERROR3)
+#define F_ERR_PCIE_ERROR3    V_ERR_PCIE_ERROR3(1U)
+
+#define S_ERR_PCIE_ERROR2    26
+#define V_ERR_PCIE_ERROR2(x) ((x) << S_ERR_PCIE_ERROR2)
+#define F_ERR_PCIE_ERROR2    V_ERR_PCIE_ERROR2(1U)
+
+#define S_ERR_PCIE_ERROR1    25
+#define V_ERR_PCIE_ERROR1(x) ((x) << S_ERR_PCIE_ERROR1)
+#define F_ERR_PCIE_ERROR1    V_ERR_PCIE_ERROR1(1U)
+
+#define S_ERR_PCIE_ERROR0    24
+#define V_ERR_PCIE_ERROR0(x) ((x) << S_ERR_PCIE_ERROR0)
+#define F_ERR_PCIE_ERROR0    V_ERR_PCIE_ERROR0(1U)
+
+#define S_ERR_TIMER_ABOVE_MAX_QID    23
+#define V_ERR_TIMER_ABOVE_MAX_QID(x) ((x) << S_ERR_TIMER_ABOVE_MAX_QID)
+#define F_ERR_TIMER_ABOVE_MAX_QID    V_ERR_TIMER_ABOVE_MAX_QID(1U)
+
+#define S_ERR_CPL_EXCEED_IQE_SIZE    22
+#define V_ERR_CPL_EXCEED_IQE_SIZE(x) ((x) << S_ERR_CPL_EXCEED_IQE_SIZE)
+#define F_ERR_CPL_EXCEED_IQE_SIZE    V_ERR_CPL_EXCEED_IQE_SIZE(1U)
+
+#define S_ERR_INVALID_CIDX_INC    21
+#define V_ERR_INVALID_CIDX_INC(x) ((x) << S_ERR_INVALID_CIDX_INC)
+#define F_ERR_INVALID_CIDX_INC    V_ERR_INVALID_CIDX_INC(1U)
+
+#define S_ERR_ITP_TIME_PAUSED    20
+#define V_ERR_ITP_TIME_PAUSED(x) ((x) << S_ERR_ITP_TIME_PAUSED)
+#define F_ERR_ITP_TIME_PAUSED    V_ERR_ITP_TIME_PAUSED(1U)
+
+#define S_ERR_CPL_OPCODE_0    19
+#define V_ERR_CPL_OPCODE_0(x) ((x) << S_ERR_CPL_OPCODE_0)
+#define F_ERR_CPL_OPCODE_0    V_ERR_CPL_OPCODE_0(1U)
+
+#define S_ERR_DROPPED_DB    18
+#define V_ERR_DROPPED_DB(x) ((x) << S_ERR_DROPPED_DB)
+#define F_ERR_DROPPED_DB    V_ERR_DROPPED_DB(1U)
+
+#define S_ERR_DATA_CPL_ON_HIGH_QID1    17
+#define V_ERR_DATA_CPL_ON_HIGH_QID1(x) ((x) << S_ERR_DATA_CPL_ON_HIGH_QID1)
+#define F_ERR_DATA_CPL_ON_HIGH_QID1    V_ERR_DATA_CPL_ON_HIGH_QID1(1U)
+
+#define S_ERR_DATA_CPL_ON_HIGH_QID0    16
+#define V_ERR_DATA_CPL_ON_HIGH_QID0(x) ((x) << S_ERR_DATA_CPL_ON_HIGH_QID0)
+#define F_ERR_DATA_CPL_ON_HIGH_QID0    V_ERR_DATA_CPL_ON_HIGH_QID0(1U)
+
+#define S_ERR_BAD_DB_PIDX3    15
+#define V_ERR_BAD_DB_PIDX3(x) ((x) << S_ERR_BAD_DB_PIDX3)
+#define F_ERR_BAD_DB_PIDX3    V_ERR_BAD_DB_PIDX3(1U)
+
+#define S_ERR_BAD_DB_PIDX2    14
+#define V_ERR_BAD_DB_PIDX2(x) ((x) << S_ERR_BAD_DB_PIDX2)
+#define F_ERR_BAD_DB_PIDX2    V_ERR_BAD_DB_PIDX2(1U)
+
+#define S_ERR_BAD_DB_PIDX1    13
+#define V_ERR_BAD_DB_PIDX1(x) ((x) << S_ERR_BAD_DB_PIDX1)
+#define F_ERR_BAD_DB_PIDX1    V_ERR_BAD_DB_PIDX1(1U)
+
+#define S_ERR_BAD_DB_PIDX0    12
+#define V_ERR_BAD_DB_PIDX0(x) ((x) << S_ERR_BAD_DB_PIDX0)
+#define F_ERR_BAD_DB_PIDX0    V_ERR_BAD_DB_PIDX0(1U)
+
+#define S_ERR_ING_PCIE_CHAN    11
+#define V_ERR_ING_PCIE_CHAN(x) ((x) << S_ERR_ING_PCIE_CHAN)
+#define F_ERR_ING_PCIE_CHAN    V_ERR_ING_PCIE_CHAN(1U)
+
+#define S_ERR_ING_CTXT_PRIO    10
+#define V_ERR_ING_CTXT_PRIO(x) ((x) << S_ERR_ING_CTXT_PRIO)
+#define F_ERR_ING_CTXT_PRIO    V_ERR_ING_CTXT_PRIO(1U)
+
+#define S_ERR_EGR_CTXT_PRIO    9
+#define V_ERR_EGR_CTXT_PRIO(x) ((x) << S_ERR_EGR_CTXT_PRIO)
+#define F_ERR_EGR_CTXT_PRIO    V_ERR_EGR_CTXT_PRIO(1U)
+
+#define S_DBFIFO_HP_INT    8
+#define V_DBFIFO_HP_INT(x) ((x) << S_DBFIFO_HP_INT)
+#define F_DBFIFO_HP_INT    V_DBFIFO_HP_INT(1U)
+
+#define S_DBFIFO_LP_INT    7
+#define V_DBFIFO_LP_INT(x) ((x) << S_DBFIFO_LP_INT)
+#define F_DBFIFO_LP_INT    V_DBFIFO_LP_INT(1U)
+
+#define S_REG_ADDRESS_ERR    6
+#define V_REG_ADDRESS_ERR(x) ((x) << S_REG_ADDRESS_ERR)
+#define F_REG_ADDRESS_ERR    V_REG_ADDRESS_ERR(1U)
+
+#define S_INGRESS_SIZE_ERR    5
+#define V_INGRESS_SIZE_ERR(x) ((x) << S_INGRESS_SIZE_ERR)
+#define F_INGRESS_SIZE_ERR    V_INGRESS_SIZE_ERR(1U)
+
+#define S_EGRESS_SIZE_ERR    4
+#define V_EGRESS_SIZE_ERR(x) ((x) << S_EGRESS_SIZE_ERR)
+#define F_EGRESS_SIZE_ERR    V_EGRESS_SIZE_ERR(1U)
+
+#define S_ERR_INV_CTXT3    3
+#define V_ERR_INV_CTXT3(x) ((x) << S_ERR_INV_CTXT3)
+#define F_ERR_INV_CTXT3    V_ERR_INV_CTXT3(1U)
+
+#define S_ERR_INV_CTXT2    2
+#define V_ERR_INV_CTXT2(x) ((x) << S_ERR_INV_CTXT2)
+#define F_ERR_INV_CTXT2    V_ERR_INV_CTXT2(1U)
+
+#define S_ERR_INV_CTXT1    1
+#define V_ERR_INV_CTXT1(x) ((x) << S_ERR_INV_CTXT1)
+#define F_ERR_INV_CTXT1    V_ERR_INV_CTXT1(1U)
+
+#define S_ERR_INV_CTXT0    0
+#define V_ERR_INV_CTXT0(x) ((x) << S_ERR_INV_CTXT0)
+#define F_ERR_INV_CTXT0    V_ERR_INV_CTXT0(1U)
+
+#define A_SGE_INT_ENABLE3 0x1040
+#define A_SGE_FL_BUFFER_SIZE0 0x1044
+
+#define S_SIZE    4
+#define M_SIZE    0xfffffffU
+#define V_SIZE(x) ((x) << S_SIZE)
+#define G_SIZE(x) (((x) >> S_SIZE) & M_SIZE)
+
+#define A_SGE_FL_BUFFER_SIZE1 0x1048
+#define A_SGE_INGRESS_RX_THRESHOLD 0x10a0
+
+#define S_THRESHOLD_0    24
+#define M_THRESHOLD_0    0x3fU
+#define V_THRESHOLD_0(x) ((x) << S_THRESHOLD_0)
+#define G_THRESHOLD_0(x) (((x) >> S_THRESHOLD_0) & M_THRESHOLD_0)
+
+#define S_THRESHOLD_1    16
+#define M_THRESHOLD_1    0x3fU
+#define V_THRESHOLD_1(x) ((x) << S_THRESHOLD_1)
+#define G_THRESHOLD_1(x) (((x) >> S_THRESHOLD_1) & M_THRESHOLD_1)
+
+#define S_THRESHOLD_2    8
+#define M_THRESHOLD_2    0x3fU
+#define V_THRESHOLD_2(x) ((x) << S_THRESHOLD_2)
+#define G_THRESHOLD_2(x) (((x) >> S_THRESHOLD_2) & M_THRESHOLD_2)
+
+#define S_THRESHOLD_3    0
+#define M_THRESHOLD_3    0x3fU
+#define V_THRESHOLD_3(x) ((x) << S_THRESHOLD_3)
+#define G_THRESHOLD_3(x) (((x) >> S_THRESHOLD_3) & M_THRESHOLD_3)
+
+#define A_SGE_TIMER_VALUE_0_AND_1 0x10b8
+
+#define S_TIMERVALUE0    16
+#define M_TIMERVALUE0    0xffffU
+#define V_TIMERVALUE0(x) ((x) << S_TIMERVALUE0)
+#define G_TIMERVALUE0(x) (((x) >> S_TIMERVALUE0) & M_TIMERVALUE0)
+
+#define S_TIMERVALUE1    0
+#define M_TIMERVALUE1    0xffffU
+#define V_TIMERVALUE1(x) ((x) << S_TIMERVALUE1)
+#define G_TIMERVALUE1(x) (((x) >> S_TIMERVALUE1) & M_TIMERVALUE1)
+
+#define A_SGE_TIMER_VALUE_2_AND_3 0x10bc
+
+#define S_TIMERVALUE2    16
+#define M_TIMERVALUE2    0xffffU
+#define V_TIMERVALUE2(x) ((x) << S_TIMERVALUE2)
+#define G_TIMERVALUE2(x) (((x) >> S_TIMERVALUE2) & M_TIMERVALUE2)
+
+#define S_TIMERVALUE3    0
+#define M_TIMERVALUE3    0xffffU
+#define V_TIMERVALUE3(x) ((x) << S_TIMERVALUE3)
+#define G_TIMERVALUE3(x) (((x) >> S_TIMERVALUE3) & M_TIMERVALUE3)
+
+#define A_SGE_TIMER_VALUE_4_AND_5 0x10c0
+
+#define S_TIMERVALUE4    16
+#define M_TIMERVALUE4    0xffffU
+#define V_TIMERVALUE4(x) ((x) << S_TIMERVALUE4)
+#define G_TIMERVALUE4(x) (((x) >> S_TIMERVALUE4) & M_TIMERVALUE4)
+
+#define S_TIMERVALUE5    0
+#define M_TIMERVALUE5    0xffffU
+#define V_TIMERVALUE5(x) ((x) << S_TIMERVALUE5)
+#define G_TIMERVALUE5(x) (((x) >> S_TIMERVALUE5) & M_TIMERVALUE5)
+
+#define A_SGE_DEBUG_INDEX 0x10cc
+#define A_SGE_DEBUG_DATA_HIGH 0x10d0
+#define A_SGE_DEBUG_DATA_LOW 0x10d4
+#define A_SGE_INGRESS_QUEUES_PER_PAGE_PF 0x10f4
+#define A_SGE_CTXT_CMD 0x11fc
+
+#define S_BUSY    31
+#define V_BUSY(x) ((x) << S_BUSY)
+#define F_BUSY    V_BUSY(1U)
+
+#define S_CTXTOP    28
+#define M_CTXTOP    0x3U
+#define V_CTXTOP(x) ((x) << S_CTXTOP)
+#define G_CTXTOP(x) (((x) >> S_CTXTOP) & M_CTXTOP)
+
+#define S_CTXTTYPE    24
+#define M_CTXTTYPE    0x3U
+#define V_CTXTTYPE(x) ((x) << S_CTXTTYPE)
+#define G_CTXTTYPE(x) (((x) >> S_CTXTTYPE) & M_CTXTTYPE)
+
+#define S_CTXTQID    0
+#define M_CTXTQID    0x1ffffU
+#define V_CTXTQID(x) ((x) << S_CTXTQID)
+#define G_CTXTQID(x) (((x) >> S_CTXTQID) & M_CTXTQID)
+
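+/*
+ * Naming convention used for the field macros in this file: for a field
+ * FOO, S_FOO is its bit offset, M_FOO is its unshifted mask, V_FOO(x)
+ * shifts a value into position, G_FOO(x) extracts it, and F_FOO is the
+ * single-bit form.  As an illustration only, the SGE context command
+ * above could be composed and decoded as:
+ *
+ *	cmd = F_BUSY | V_CTXTTYPE(type) | V_CTXTQID(qid);
+ *	qid = G_CTXTQID(cmd);
+ */
+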
+/* registers for module PCIE */
+#define PCIE_BASE_ADDR 0x3000
+
+#define A_PCIE_PF_CLI 0x44
+#define A_PCIE_INT_CAUSE 0x3004
+
+#define S_NONFATALERR    30
+#define V_NONFATALERR(x) ((x) << S_NONFATALERR)
+#define F_NONFATALERR    V_NONFATALERR(1U)
+
+#define S_UNXSPLCPLERR    29
+#define V_UNXSPLCPLERR(x) ((x) << S_UNXSPLCPLERR)
+#define F_UNXSPLCPLERR    V_UNXSPLCPLERR(1U)
+
+#define S_PCIEPINT    28
+#define V_PCIEPINT(x) ((x) << S_PCIEPINT)
+#define F_PCIEPINT    V_PCIEPINT(1U)
+
+#define S_PCIESINT    27
+#define V_PCIESINT(x) ((x) << S_PCIESINT)
+#define F_PCIESINT    V_PCIESINT(1U)
+
+#define S_RPLPERR    26
+#define V_RPLPERR(x) ((x) << S_RPLPERR)
+#define F_RPLPERR    V_RPLPERR(1U)
+
+#define S_RXWRPERR    25
+#define V_RXWRPERR(x) ((x) << S_RXWRPERR)
+#define F_RXWRPERR    V_RXWRPERR(1U)
+
+#define S_RXCPLPERR    24
+#define V_RXCPLPERR(x) ((x) << S_RXCPLPERR)
+#define F_RXCPLPERR    V_RXCPLPERR(1U)
+
+#define S_PIOTAGPERR    23
+#define V_PIOTAGPERR(x) ((x) << S_PIOTAGPERR)
+#define F_PIOTAGPERR    V_PIOTAGPERR(1U)
+
+#define S_MATAGPERR    22
+#define V_MATAGPERR(x) ((x) << S_MATAGPERR)
+#define F_MATAGPERR    V_MATAGPERR(1U)
+
+#define S_INTXCLRPERR    21
+#define V_INTXCLRPERR(x) ((x) << S_INTXCLRPERR)
+#define F_INTXCLRPERR    V_INTXCLRPERR(1U)
+
+#define S_FIDPERR    20
+#define V_FIDPERR(x) ((x) << S_FIDPERR)
+#define F_FIDPERR    V_FIDPERR(1U)
+
+#define S_CFGSNPPERR    19
+#define V_CFGSNPPERR(x) ((x) << S_CFGSNPPERR)
+#define F_CFGSNPPERR    V_CFGSNPPERR(1U)
+
+#define S_HRSPPERR    18
+#define V_HRSPPERR(x) ((x) << S_HRSPPERR)
+#define F_HRSPPERR    V_HRSPPERR(1U)
+
+#define S_HREQPERR    17
+#define V_HREQPERR(x) ((x) << S_HREQPERR)
+#define F_HREQPERR    V_HREQPERR(1U)
+
+#define S_HCNTPERR    16
+#define V_HCNTPERR(x) ((x) << S_HCNTPERR)
+#define F_HCNTPERR    V_HCNTPERR(1U)
+
+#define S_DRSPPERR    15
+#define V_DRSPPERR(x) ((x) << S_DRSPPERR)
+#define F_DRSPPERR    V_DRSPPERR(1U)
+
+#define S_DREQPERR    14
+#define V_DREQPERR(x) ((x) << S_DREQPERR)
+#define F_DREQPERR    V_DREQPERR(1U)
+
+#define S_DCNTPERR    13
+#define V_DCNTPERR(x) ((x) << S_DCNTPERR)
+#define F_DCNTPERR    V_DCNTPERR(1U)
+
+#define S_CRSPPERR    12
+#define V_CRSPPERR(x) ((x) << S_CRSPPERR)
+#define F_CRSPPERR    V_CRSPPERR(1U)
+
+#define S_CREQPERR    11
+#define V_CREQPERR(x) ((x) << S_CREQPERR)
+#define F_CREQPERR    V_CREQPERR(1U)
+
+#define S_CCNTPERR    10
+#define V_CCNTPERR(x) ((x) << S_CCNTPERR)
+#define F_CCNTPERR    V_CCNTPERR(1U)
+
+#define S_TARTAGPERR    9
+#define V_TARTAGPERR(x) ((x) << S_TARTAGPERR)
+#define F_TARTAGPERR    V_TARTAGPERR(1U)
+
+#define S_PIOREQPERR    8
+#define V_PIOREQPERR(x) ((x) << S_PIOREQPERR)
+#define F_PIOREQPERR    V_PIOREQPERR(1U)
+
+#define S_PIOCPLPERR    7
+#define V_PIOCPLPERR(x) ((x) << S_PIOCPLPERR)
+#define F_PIOCPLPERR    V_PIOCPLPERR(1U)
+
+#define S_MSIXDIPERR    6
+#define V_MSIXDIPERR(x) ((x) << S_MSIXDIPERR)
+#define F_MSIXDIPERR    V_MSIXDIPERR(1U)
+
+#define S_MSIXDATAPERR    5
+#define V_MSIXDATAPERR(x) ((x) << S_MSIXDATAPERR)
+#define F_MSIXDATAPERR    V_MSIXDATAPERR(1U)
+
+#define S_MSIXADDRHPERR    4
+#define V_MSIXADDRHPERR(x) ((x) << S_MSIXADDRHPERR)
+#define F_MSIXADDRHPERR    V_MSIXADDRHPERR(1U)
+
+#define S_MSIXADDRLPERR    3
+#define V_MSIXADDRLPERR(x) ((x) << S_MSIXADDRLPERR)
+#define F_MSIXADDRLPERR    V_MSIXADDRLPERR(1U)
+
+#define S_MSIDATAPERR    2
+#define V_MSIDATAPERR(x) ((x) << S_MSIDATAPERR)
+#define F_MSIDATAPERR    V_MSIDATAPERR(1U)
+
+#define S_MSIADDRHPERR    1
+#define V_MSIADDRHPERR(x) ((x) << S_MSIADDRHPERR)
+#define F_MSIADDRHPERR    V_MSIADDRHPERR(1U)
+
+#define S_MSIADDRLPERR    0
+#define V_MSIADDRLPERR(x) ((x) << S_MSIADDRLPERR)
+#define F_MSIADDRLPERR    V_MSIADDRLPERR(1U)
+
+#define A_PCIE_NONFAT_ERR 0x3010
+#define A_PCIE_MEM_ACCESS_BASE_WIN 0x3068
+
+#define S_PCIEOFST    10
+#define M_PCIEOFST    0x3fffffU
+#define V_PCIEOFST(x) ((x) << S_PCIEOFST)
+#define G_PCIEOFST(x) (((x) >> S_PCIEOFST) & M_PCIEOFST)
+
+#define S_BIR    8
+#define M_BIR    0x3U
+#define V_BIR(x) ((x) << S_BIR)
+#define G_BIR(x) (((x) >> S_BIR) & M_BIR)
+
+#define S_WINDOW    0
+#define M_WINDOW    0xffU
+#define V_WINDOW(x) ((x) << S_WINDOW)
+#define G_WINDOW(x) (((x) >> S_WINDOW) & M_WINDOW)
+
+#define A_PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS 0x5908
+
+#define S_RNPP    31
+#define V_RNPP(x) ((x) << S_RNPP)
+#define F_RNPP    V_RNPP(1U)
+
+#define S_RPCP    29
+#define V_RPCP(x) ((x) << S_RPCP)
+#define F_RPCP    V_RPCP(1U)
+
+#define S_RCIP    27
+#define V_RCIP(x) ((x) << S_RCIP)
+#define F_RCIP    V_RCIP(1U)
+
+#define S_RCCP    26
+#define V_RCCP(x) ((x) << S_RCCP)
+#define F_RCCP    V_RCCP(1U)
+
+#define S_RFTP    23
+#define V_RFTP(x) ((x) << S_RFTP)
+#define F_RFTP    V_RFTP(1U)
+
+#define S_PTRP    20
+#define V_PTRP(x) ((x) << S_PTRP)
+#define F_PTRP    V_PTRP(1U)
+
+#define A_PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS 0x59a4
+
+#define S_TPCP    30
+#define V_TPCP(x) ((x) << S_TPCP)
+#define F_TPCP    V_TPCP(1U)
+
+#define S_TNPP    29
+#define V_TNPP(x) ((x) << S_TNPP)
+#define F_TNPP    V_TNPP(1U)
+
+#define S_TFTP    28
+#define V_TFTP(x) ((x) << S_TFTP)
+#define F_TFTP    V_TFTP(1U)
+
+#define S_TCAP    27
+#define V_TCAP(x) ((x) << S_TCAP)
+#define F_TCAP    V_TCAP(1U)
+
+#define S_TCIP    26
+#define V_TCIP(x) ((x) << S_TCIP)
+#define F_TCIP    V_TCIP(1U)
+
+#define S_RCAP    25
+#define V_RCAP(x) ((x) << S_RCAP)
+#define F_RCAP    V_RCAP(1U)
+
+#define S_PLUP    23
+#define V_PLUP(x) ((x) << S_PLUP)
+#define F_PLUP    V_PLUP(1U)
+
+#define S_PLDN    22
+#define V_PLDN(x) ((x) << S_PLDN)
+#define F_PLDN    V_PLDN(1U)
+
+#define S_OTDD    21
+#define V_OTDD(x) ((x) << S_OTDD)
+#define F_OTDD    V_OTDD(1U)
+
+#define S_GTRP    20
+#define V_GTRP(x) ((x) << S_GTRP)
+#define F_GTRP    V_GTRP(1U)
+
+#define S_RDPE    18
+#define V_RDPE(x) ((x) << S_RDPE)
+#define F_RDPE    V_RDPE(1U)
+
+#define S_TDCE    17
+#define V_TDCE(x) ((x) << S_TDCE)
+#define F_TDCE    V_TDCE(1U)
+
+#define S_TDUE    16
+#define V_TDUE(x) ((x) << S_TDUE)
+#define F_TDUE    V_TDUE(1U)
+
+/* registers for module MC */
+#define MC_BASE_ADDR 0x6200
+
+#define A_MC_INT_CAUSE 0x7518
+
+#define S_ECC_UE_INT_CAUSE    2
+#define V_ECC_UE_INT_CAUSE(x) ((x) << S_ECC_UE_INT_CAUSE)
+#define F_ECC_UE_INT_CAUSE    V_ECC_UE_INT_CAUSE(1U)
+
+#define S_ECC_CE_INT_CAUSE    1
+#define V_ECC_CE_INT_CAUSE(x) ((x) << S_ECC_CE_INT_CAUSE)
+#define F_ECC_CE_INT_CAUSE    V_ECC_CE_INT_CAUSE(1U)
+
+#define S_PERR_INT_CAUSE    0
+#define V_PERR_INT_CAUSE(x) ((x) << S_PERR_INT_CAUSE)
+#define F_PERR_INT_CAUSE    V_PERR_INT_CAUSE(1U)
+
+#define A_MC_ECC_STATUS 0x751c
+
+#define S_ECC_CECNT    16
+#define M_ECC_CECNT    0xffffU
+#define V_ECC_CECNT(x) ((x) << S_ECC_CECNT)
+#define G_ECC_CECNT(x) (((x) >> S_ECC_CECNT) & M_ECC_CECNT)
+
+#define S_ECC_UECNT    0
+#define M_ECC_UECNT    0xffffU
+#define V_ECC_UECNT(x) ((x) << S_ECC_UECNT)
+#define G_ECC_UECNT(x) (((x) >> S_ECC_UECNT) & M_ECC_UECNT)
+
+#define A_MC_BIST_CMD 0x7600
+
+#define S_START_BIST    31
+#define V_START_BIST(x) ((x) << S_START_BIST)
+#define F_START_BIST    V_START_BIST(1U)
+
+#define S_BIST_CMD_GAP    8
+#define M_BIST_CMD_GAP    0xffU
+#define V_BIST_CMD_GAP(x) ((x) << S_BIST_CMD_GAP)
+#define G_BIST_CMD_GAP(x) (((x) >> S_BIST_CMD_GAP) & M_BIST_CMD_GAP)
+
+#define S_BIST_OPCODE    0
+#define M_BIST_OPCODE    0x3U
+#define V_BIST_OPCODE(x) ((x) << S_BIST_OPCODE)
+#define G_BIST_OPCODE(x) (((x) >> S_BIST_OPCODE) & M_BIST_OPCODE)
+
+#define A_MC_BIST_CMD_ADDR 0x7604
+#define A_MC_BIST_CMD_LEN 0x7608
+#define A_MC_BIST_DATA_PATTERN 0x760c
+
+#define S_BIST_DATA_TYPE    0
+#define M_BIST_DATA_TYPE    0xfU
+#define V_BIST_DATA_TYPE(x) ((x) << S_BIST_DATA_TYPE)
+#define G_BIST_DATA_TYPE(x) (((x) >> S_BIST_DATA_TYPE) & M_BIST_DATA_TYPE)
+
+#define A_MC_BIST_STATUS_RDATA 0x7688
+
+/* registers for module MA */
+#define MA_BASE_ADDR 0x7700
+
+#define A_MA_EXT_MEMORY_BAR 0x77c8
+
+#define S_EXT_MEM_BASE    16
+#define M_EXT_MEM_BASE    0xfffU
+#define V_EXT_MEM_BASE(x) ((x) << S_EXT_MEM_BASE)
+#define G_EXT_MEM_BASE(x) (((x) >> S_EXT_MEM_BASE) & M_EXT_MEM_BASE)
+
+#define S_EXT_MEM_SIZE    0
+#define M_EXT_MEM_SIZE    0xfffU
+#define V_EXT_MEM_SIZE(x) ((x) << S_EXT_MEM_SIZE)
+#define G_EXT_MEM_SIZE(x) (((x) >> S_EXT_MEM_SIZE) & M_EXT_MEM_SIZE)
+
+#define A_MA_TARGET_MEM_ENABLE 0x77d8
+
+#define S_HMA_ENABLE    3
+#define V_HMA_ENABLE(x) ((x) << S_HMA_ENABLE)
+#define F_HMA_ENABLE    V_HMA_ENABLE(1U)
+
+#define S_EXT_MEM_ENABLE    2
+#define V_EXT_MEM_ENABLE(x) ((x) << S_EXT_MEM_ENABLE)
+#define F_EXT_MEM_ENABLE    V_EXT_MEM_ENABLE(1U)
+
+#define S_EDRAM1_ENABLE    1
+#define V_EDRAM1_ENABLE(x) ((x) << S_EDRAM1_ENABLE)
+#define F_EDRAM1_ENABLE    V_EDRAM1_ENABLE(1U)
+
+#define S_EDRAM0_ENABLE    0
+#define V_EDRAM0_ENABLE(x) ((x) << S_EDRAM0_ENABLE)
+#define F_EDRAM0_ENABLE    V_EDRAM0_ENABLE(1U)
+
+#define A_MA_INT_CAUSE 0x77e0
+
+#define S_MEM_PERR_INT_CAUSE    1
+#define V_MEM_PERR_INT_CAUSE(x) ((x) << S_MEM_PERR_INT_CAUSE)
+#define F_MEM_PERR_INT_CAUSE    V_MEM_PERR_INT_CAUSE(1U)
+
+#define S_MEM_WRAP_INT_CAUSE    0
+#define V_MEM_WRAP_INT_CAUSE(x) ((x) << S_MEM_WRAP_INT_CAUSE)
+#define F_MEM_WRAP_INT_CAUSE    V_MEM_WRAP_INT_CAUSE(1U)
+
+#define A_MA_INT_WRAP_STATUS 0x77e4
+
+#define S_MEM_WRAP_ADDRESS    4
+#define M_MEM_WRAP_ADDRESS    0xfffffffU
+#define V_MEM_WRAP_ADDRESS(x) ((x) << S_MEM_WRAP_ADDRESS)
+#define G_MEM_WRAP_ADDRESS(x) (((x) >> S_MEM_WRAP_ADDRESS) & M_MEM_WRAP_ADDRESS)
+
+#define S_MEM_WRAP_CLIENT_NUM    0
+#define M_MEM_WRAP_CLIENT_NUM    0xfU
+#define V_MEM_WRAP_CLIENT_NUM(x) ((x) << S_MEM_WRAP_CLIENT_NUM)
+#define G_MEM_WRAP_CLIENT_NUM(x) (((x) >> S_MEM_WRAP_CLIENT_NUM) & M_MEM_WRAP_CLIENT_NUM)
+
+#define A_MA_PARITY_ERROR_STATUS 0x77f4
+
+/* registers for module EDC_0 */
+#define EDC_0_BASE_ADDR 0x7900
+
+#define A_EDC_BIST_CMD 0x7904
+#define A_EDC_BIST_CMD_ADDR 0x7908
+#define A_EDC_BIST_CMD_LEN 0x790c
+#define A_EDC_BIST_DATA_PATTERN 0x7910
+#define A_EDC_BIST_STATUS_RDATA 0x7928
+#define A_EDC_INT_CAUSE 0x7978
+
+#define S_ECC_UE_PAR    5
+#define V_ECC_UE_PAR(x) ((x) << S_ECC_UE_PAR)
+#define F_ECC_UE_PAR    V_ECC_UE_PAR(1U)
+
+#define S_ECC_CE_PAR    4
+#define V_ECC_CE_PAR(x) ((x) << S_ECC_CE_PAR)
+#define F_ECC_CE_PAR    V_ECC_CE_PAR(1U)
+
+#define S_PERR_PAR_CAUSE    3
+#define V_PERR_PAR_CAUSE(x) ((x) << S_PERR_PAR_CAUSE)
+#define F_PERR_PAR_CAUSE    V_PERR_PAR_CAUSE(1U)
+
+#define A_EDC_ECC_STATUS 0x797c
+
+/* registers for module EDC_1 */
+#define EDC_1_BASE_ADDR 0x7980
+
+/* registers for module CIM */
+#define CIM_BASE_ADDR 0x7b00
+
+#define A_CIM_PF_MAILBOX_DATA 0x240
+#define A_CIM_PF_MAILBOX_CTRL 0x280
+
+#define S_MBGENERIC    4
+#define M_MBGENERIC    0xfffffffU
+#define V_MBGENERIC(x) ((x) << S_MBGENERIC)
+#define G_MBGENERIC(x) (((x) >> S_MBGENERIC) & M_MBGENERIC)
+
+#define S_MBMSGVALID    3
+#define V_MBMSGVALID(x) ((x) << S_MBMSGVALID)
+#define F_MBMSGVALID    V_MBMSGVALID(1U)
+
+#define S_MBINTREQ    2
+#define V_MBINTREQ(x) ((x) << S_MBINTREQ)
+#define F_MBINTREQ    V_MBINTREQ(1U)
+
+#define S_MBOWNER    0
+#define M_MBOWNER    0x3U
+#define V_MBOWNER(x) ((x) << S_MBOWNER)
+#define G_MBOWNER(x) (((x) >> S_MBOWNER) & M_MBOWNER)
+
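+/*
+ * Illustration only, not a statement of the firmware handshake: a caller
+ * holding the control word could check it before touching the data
+ * registers at A_CIM_PF_MAILBOX_DATA, e.g.
+ *
+ *	valid = (ctrl & F_MBMSGVALID) && G_MBOWNER(ctrl) == owner;
+ */
+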
+#define A_CIM_PF_HOST_INT_CAUSE 0x28c
+
+#define S_MBMSGRDYINT    19
+#define V_MBMSGRDYINT(x) ((x) << S_MBMSGRDYINT)
+#define F_MBMSGRDYINT    V_MBMSGRDYINT(1U)
+
+#define A_CIM_HOST_INT_CAUSE 0x7b2c
+
+#define S_TIEQOUTPARERRINT    20
+#define V_TIEQOUTPARERRINT(x) ((x) << S_TIEQOUTPARERRINT)
+#define F_TIEQOUTPARERRINT    V_TIEQOUTPARERRINT(1U)
+
+#define S_TIEQINPARERRINT    19
+#define V_TIEQINPARERRINT(x) ((x) << S_TIEQINPARERRINT)
+#define F_TIEQINPARERRINT    V_TIEQINPARERRINT(1U)
+
+#define S_MBHOSTPARERR    18
+#define V_MBHOSTPARERR(x) ((x) << S_MBHOSTPARERR)
+#define F_MBHOSTPARERR    V_MBHOSTPARERR(1U)
+
+#define S_MBUPPARERR    17
+#define V_MBUPPARERR(x) ((x) << S_MBUPPARERR)
+#define F_MBUPPARERR    V_MBUPPARERR(1U)
+
+#define S_IBQTP0PARERR    16
+#define V_IBQTP0PARERR(x) ((x) << S_IBQTP0PARERR)
+#define F_IBQTP0PARERR    V_IBQTP0PARERR(1U)
+
+#define S_IBQTP1PARERR    15
+#define V_IBQTP1PARERR(x) ((x) << S_IBQTP1PARERR)
+#define F_IBQTP1PARERR    V_IBQTP1PARERR(1U)
+
+#define S_IBQULPPARERR    14
+#define V_IBQULPPARERR(x) ((x) << S_IBQULPPARERR)
+#define F_IBQULPPARERR    V_IBQULPPARERR(1U)
+
+#define S_IBQSGELOPARERR    13
+#define V_IBQSGELOPARERR(x) ((x) << S_IBQSGELOPARERR)
+#define F_IBQSGELOPARERR    V_IBQSGELOPARERR(1U)
+
+#define S_IBQSGEHIPARERR    12
+#define V_IBQSGEHIPARERR(x) ((x) << S_IBQSGEHIPARERR)
+#define F_IBQSGEHIPARERR    V_IBQSGEHIPARERR(1U)
+
+#define S_IBQNCSIPARERR    11
+#define V_IBQNCSIPARERR(x) ((x) << S_IBQNCSIPARERR)
+#define F_IBQNCSIPARERR    V_IBQNCSIPARERR(1U)
+
+#define S_OBQULP0PARERR    10
+#define V_OBQULP0PARERR(x) ((x) << S_OBQULP0PARERR)
+#define F_OBQULP0PARERR    V_OBQULP0PARERR(1U)
+
+#define S_OBQULP1PARERR    9
+#define V_OBQULP1PARERR(x) ((x) << S_OBQULP1PARERR)
+#define F_OBQULP1PARERR    V_OBQULP1PARERR(1U)
+
+#define S_OBQULP2PARERR    8
+#define V_OBQULP2PARERR(x) ((x) << S_OBQULP2PARERR)
+#define F_OBQULP2PARERR    V_OBQULP2PARERR(1U)
+
+#define S_OBQULP3PARERR    7
+#define V_OBQULP3PARERR(x) ((x) << S_OBQULP3PARERR)
+#define F_OBQULP3PARERR    V_OBQULP3PARERR(1U)
+
+#define S_OBQSGEPARERR    6
+#define V_OBQSGEPARERR(x) ((x) << S_OBQSGEPARERR)
+#define F_OBQSGEPARERR    V_OBQSGEPARERR(1U)
+
+#define S_OBQNCSIPARERR    5
+#define V_OBQNCSIPARERR(x) ((x) << S_OBQNCSIPARERR)
+#define F_OBQNCSIPARERR    V_OBQNCSIPARERR(1U)
+
+#define S_TIMER1INT    3
+#define V_TIMER1INT(x) ((x) << S_TIMER1INT)
+#define F_TIMER1INT    V_TIMER1INT(1U)
+
+#define S_TIMER0INT    2
+#define V_TIMER0INT(x) ((x) << S_TIMER0INT)
+#define F_TIMER0INT    V_TIMER0INT(1U)
+
+#define S_PREFDROPINT    1
+#define V_PREFDROPINT(x) ((x) << S_PREFDROPINT)
+#define F_PREFDROPINT    V_PREFDROPINT(1U)
+
+#define S_UPACCNONZERO    0
+#define V_UPACCNONZERO(x) ((x) << S_UPACCNONZERO)
+#define F_UPACCNONZERO    V_UPACCNONZERO(1U)
+
+#define A_CIM_HOST_UPACC_INT_CAUSE 0x7b34
+
+#define S_EEPROMWRINT    30
+#define V_EEPROMWRINT(x) ((x) << S_EEPROMWRINT)
+#define F_EEPROMWRINT    V_EEPROMWRINT(1U)
+
+#define S_TIMEOUTMAINT    29
+#define V_TIMEOUTMAINT(x) ((x) << S_TIMEOUTMAINT)
+#define F_TIMEOUTMAINT    V_TIMEOUTMAINT(1U)
+
+#define S_TIMEOUTINT    28
+#define V_TIMEOUTINT(x) ((x) << S_TIMEOUTINT)
+#define F_TIMEOUTINT    V_TIMEOUTINT(1U)
+
+#define S_RSPOVRLOOKUPINT    27
+#define V_RSPOVRLOOKUPINT(x) ((x) << S_RSPOVRLOOKUPINT)
+#define F_RSPOVRLOOKUPINT    V_RSPOVRLOOKUPINT(1U)
+
+#define S_REQOVRLOOKUPINT    26
+#define V_REQOVRLOOKUPINT(x) ((x) << S_REQOVRLOOKUPINT)
+#define F_REQOVRLOOKUPINT    V_REQOVRLOOKUPINT(1U)
+
+#define S_BLKWRPLINT    25
+#define V_BLKWRPLINT(x) ((x) << S_BLKWRPLINT)
+#define F_BLKWRPLINT    V_BLKWRPLINT(1U)
+
+#define S_BLKRDPLINT    24
+#define V_BLKRDPLINT(x) ((x) << S_BLKRDPLINT)
+#define F_BLKRDPLINT    V_BLKRDPLINT(1U)
+
+#define S_SGLWRPLINT    23
+#define V_SGLWRPLINT(x) ((x) << S_SGLWRPLINT)
+#define F_SGLWRPLINT    V_SGLWRPLINT(1U)
+
+#define S_SGLRDPLINT    22
+#define V_SGLRDPLINT(x) ((x) << S_SGLRDPLINT)
+#define F_SGLRDPLINT    V_SGLRDPLINT(1U)
+
+#define S_BLKWRCTLINT    21
+#define V_BLKWRCTLINT(x) ((x) << S_BLKWRCTLINT)
+#define F_BLKWRCTLINT    V_BLKWRCTLINT(1U)
+
+#define S_BLKRDCTLINT    20
+#define V_BLKRDCTLINT(x) ((x) << S_BLKRDCTLINT)
+#define F_BLKRDCTLINT    V_BLKRDCTLINT(1U)
+
+#define S_SGLWRCTLINT    19
+#define V_SGLWRCTLINT(x) ((x) << S_SGLWRCTLINT)
+#define F_SGLWRCTLINT    V_SGLWRCTLINT(1U)
+
+#define S_SGLRDCTLINT    18
+#define V_SGLRDCTLINT(x) ((x) << S_SGLRDCTLINT)
+#define F_SGLRDCTLINT    V_SGLRDCTLINT(1U)
+
+#define S_BLKWREEPROMINT    17
+#define V_BLKWREEPROMINT(x) ((x) << S_BLKWREEPROMINT)
+#define F_BLKWREEPROMINT    V_BLKWREEPROMINT(1U)
+
+#define S_BLKRDEEPROMINT    16
+#define V_BLKRDEEPROMINT(x) ((x) << S_BLKRDEEPROMINT)
+#define F_BLKRDEEPROMINT    V_BLKRDEEPROMINT(1U)
+
+#define S_SGLWREEPROMINT    15
+#define V_SGLWREEPROMINT(x) ((x) << S_SGLWREEPROMINT)
+#define F_SGLWREEPROMINT    V_SGLWREEPROMINT(1U)
+
+#define S_SGLRDEEPROMINT    14
+#define V_SGLRDEEPROMINT(x) ((x) << S_SGLRDEEPROMINT)
+#define F_SGLRDEEPROMINT    V_SGLRDEEPROMINT(1U)
+
+#define S_BLKWRFLASHINT    13
+#define V_BLKWRFLASHINT(x) ((x) << S_BLKWRFLASHINT)
+#define F_BLKWRFLASHINT    V_BLKWRFLASHINT(1U)
+
+#define S_BLKRDFLASHINT    12
+#define V_BLKRDFLASHINT(x) ((x) << S_BLKRDFLASHINT)
+#define F_BLKRDFLASHINT    V_BLKRDFLASHINT(1U)
+
+#define S_SGLWRFLASHINT    11
+#define V_SGLWRFLASHINT(x) ((x) << S_SGLWRFLASHINT)
+#define F_SGLWRFLASHINT    V_SGLWRFLASHINT(1U)
+
+#define S_SGLRDFLASHINT    10
+#define V_SGLRDFLASHINT(x) ((x) << S_SGLRDFLASHINT)
+#define F_SGLRDFLASHINT    V_SGLRDFLASHINT(1U)
+
+#define S_BLKWRBOOTINT    9
+#define V_BLKWRBOOTINT(x) ((x) << S_BLKWRBOOTINT)
+#define F_BLKWRBOOTINT    V_BLKWRBOOTINT(1U)
+
+#define S_BLKRDBOOTINT    8
+#define V_BLKRDBOOTINT(x) ((x) << S_BLKRDBOOTINT)
+#define F_BLKRDBOOTINT    V_BLKRDBOOTINT(1U)
+
+#define S_SGLWRBOOTINT    7
+#define V_SGLWRBOOTINT(x) ((x) << S_SGLWRBOOTINT)
+#define F_SGLWRBOOTINT    V_SGLWRBOOTINT(1U)
+
+#define S_SGLRDBOOTINT    6
+#define V_SGLRDBOOTINT(x) ((x) << S_SGLRDBOOTINT)
+#define F_SGLRDBOOTINT    V_SGLRDBOOTINT(1U)
+
+#define S_ILLWRBEINT    5
+#define V_ILLWRBEINT(x) ((x) << S_ILLWRBEINT)
+#define F_ILLWRBEINT    V_ILLWRBEINT(1U)
+
+#define S_ILLRDBEINT    4
+#define V_ILLRDBEINT(x) ((x) << S_ILLRDBEINT)
+#define F_ILLRDBEINT    V_ILLRDBEINT(1U)
+
+#define S_ILLRDINT    3
+#define V_ILLRDINT(x) ((x) << S_ILLRDINT)
+#define F_ILLRDINT    V_ILLRDINT(1U)
+
+#define S_ILLWRINT    2
+#define V_ILLWRINT(x) ((x) << S_ILLWRINT)
+#define F_ILLWRINT    V_ILLWRINT(1U)
+
+#define S_ILLTRANSINT    1
+#define V_ILLTRANSINT(x) ((x) << S_ILLTRANSINT)
+#define F_ILLTRANSINT    V_ILLTRANSINT(1U)
+
+#define S_RSVDSPACEINT    0
+#define V_RSVDSPACEINT(x) ((x) << S_RSVDSPACEINT)
+#define F_RSVDSPACEINT    V_RSVDSPACEINT(1U)
+
+#define A_CIM_HOST_ACC_CTRL 0x7b50
+
+#define S_HOSTBUSY    17
+#define V_HOSTBUSY(x) ((x) << S_HOSTBUSY)
+#define F_HOSTBUSY    V_HOSTBUSY(1U)
+
+#define S_HOSTWRITE    16
+#define V_HOSTWRITE(x) ((x) << S_HOSTWRITE)
+#define F_HOSTWRITE    V_HOSTWRITE(1U)
+
+#define S_HOSTADDR    0
+#define M_HOSTADDR    0xffffU
+#define V_HOSTADDR(x) ((x) << S_HOSTADDR)
+#define G_HOSTADDR(x) (((x) >> S_HOSTADDR) & M_HOSTADDR)
+
+#define A_CIM_HOST_ACC_DATA 0x7b54
+
+/* registers for module TP */
+#define TP_BASE_ADDR 0x7d00
+
+#define A_TP_OUT_CONFIG 0x7d04
+
+#define S_VLANEXTENABLEPORT3    15
+#define V_VLANEXTENABLEPORT3(x) ((x) << S_VLANEXTENABLEPORT3)
+#define F_VLANEXTENABLEPORT3    V_VLANEXTENABLEPORT3(1U)
+
+#define S_VLANEXTENABLEPORT2    14
+#define V_VLANEXTENABLEPORT2(x) ((x) << S_VLANEXTENABLEPORT2)
+#define F_VLANEXTENABLEPORT2    V_VLANEXTENABLEPORT2(1U)
+
+#define S_VLANEXTENABLEPORT1    13
+#define V_VLANEXTENABLEPORT1(x) ((x) << S_VLANEXTENABLEPORT1)
+#define F_VLANEXTENABLEPORT1    V_VLANEXTENABLEPORT1(1U)
+
+#define S_VLANEXTENABLEPORT0    12
+#define V_VLANEXTENABLEPORT0(x) ((x) << S_VLANEXTENABLEPORT0)
+#define F_VLANEXTENABLEPORT0    V_VLANEXTENABLEPORT0(1U)
+
+#define A_TP_TIMER_RESOLUTION 0x7d90
+
+#define S_TIMERRESOLUTION    16
+#define M_TIMERRESOLUTION    0xffU
+#define V_TIMERRESOLUTION(x) ((x) << S_TIMERRESOLUTION)
+#define G_TIMERRESOLUTION(x) (((x) >> S_TIMERRESOLUTION) & M_TIMERRESOLUTION)
+
+#define S_TIMESTAMPRESOLUTION    8
+#define M_TIMESTAMPRESOLUTION    0xffU
+#define V_TIMESTAMPRESOLUTION(x) ((x) << S_TIMESTAMPRESOLUTION)
+#define G_TIMESTAMPRESOLUTION(x) (((x) >> S_TIMESTAMPRESOLUTION) & M_TIMESTAMPRESOLUTION)
+
+#define S_DELAYEDACKRESOLUTION    0
+#define M_DELAYEDACKRESOLUTION    0xffU
+#define V_DELAYEDACKRESOLUTION(x) ((x) << S_DELAYEDACKRESOLUTION)
+#define G_DELAYEDACKRESOLUTION(x) (((x) >> S_DELAYEDACKRESOLUTION) & M_DELAYEDACKRESOLUTION)
+
+#define A_TP_SHIFT_CNT 0x7dc0
+
+#define S_SYNSHIFTMAX    24
+#define M_SYNSHIFTMAX    0xffU
+#define V_SYNSHIFTMAX(x) ((x) << S_SYNSHIFTMAX)
+#define G_SYNSHIFTMAX(x) (((x) >> S_SYNSHIFTMAX) & M_SYNSHIFTMAX)
+
+#define S_RXTSHIFTMAXR1    20
+#define M_RXTSHIFTMAXR1    0xfU
+#define V_RXTSHIFTMAXR1(x) ((x) << S_RXTSHIFTMAXR1)
+#define G_RXTSHIFTMAXR1(x) (((x) >> S_RXTSHIFTMAXR1) & M_RXTSHIFTMAXR1)
+
+#define S_RXTSHIFTMAXR2    16
+#define M_RXTSHIFTMAXR2    0xfU
+#define V_RXTSHIFTMAXR2(x) ((x) << S_RXTSHIFTMAXR2)
+#define G_RXTSHIFTMAXR2(x) (((x) >> S_RXTSHIFTMAXR2) & M_RXTSHIFTMAXR2)
+
+#define S_PERSHIFTBACKOFFMAX    12
+#define M_PERSHIFTBACKOFFMAX    0xfU
+#define V_PERSHIFTBACKOFFMAX(x) ((x) << S_PERSHIFTBACKOFFMAX)
+#define G_PERSHIFTBACKOFFMAX(x) (((x) >> S_PERSHIFTBACKOFFMAX) & M_PERSHIFTBACKOFFMAX)
+
+#define S_PERSHIFTMAX    8
+#define M_PERSHIFTMAX    0xfU
+#define V_PERSHIFTMAX(x) ((x) << S_PERSHIFTMAX)
+#define G_PERSHIFTMAX(x) (((x) >> S_PERSHIFTMAX) & M_PERSHIFTMAX)
+
+#define S_KEEPALIVEMAXR1    4
+#define M_KEEPALIVEMAXR1    0xfU
+#define V_KEEPALIVEMAXR1(x) ((x) << S_KEEPALIVEMAXR1)
+#define G_KEEPALIVEMAXR1(x) (((x) >> S_KEEPALIVEMAXR1) & M_KEEPALIVEMAXR1)
+
+#define S_KEEPALIVEMAXR2    0
+#define M_KEEPALIVEMAXR2    0xfU
+#define V_KEEPALIVEMAXR2(x) ((x) << S_KEEPALIVEMAXR2)
+#define G_KEEPALIVEMAXR2(x) (((x) >> S_KEEPALIVEMAXR2) & M_KEEPALIVEMAXR2)
+
+#define A_TP_CCTRL_TABLE 0x7ddc
+
+#define S_ROWINDEX    16
+#define M_ROWINDEX    0xffffU
+#define V_ROWINDEX(x) ((x) << S_ROWINDEX)
+#define G_ROWINDEX(x) (((x) >> S_ROWINDEX) & M_ROWINDEX)
+
+#define S_ROWVALUE    0
+#define M_ROWVALUE    0xffffU
+#define V_ROWVALUE(x) ((x) << S_ROWVALUE)
+#define G_ROWVALUE(x) (((x) >> S_ROWVALUE) & M_ROWVALUE)
+
+#define A_TP_MTU_TABLE 0x7de4
+
+#define S_MTUINDEX    24
+#define M_MTUINDEX    0xffU
+#define V_MTUINDEX(x) ((x) << S_MTUINDEX)
+#define G_MTUINDEX(x) (((x) >> S_MTUINDEX) & M_MTUINDEX)
+
+#define S_MTUWIDTH    16
+#define M_MTUWIDTH    0xfU
+#define V_MTUWIDTH(x) ((x) << S_MTUWIDTH)
+#define G_MTUWIDTH(x) (((x) >> S_MTUWIDTH) & M_MTUWIDTH)
+
+#define S_MTUVALUE    0
+#define M_MTUVALUE    0x3fffU
+#define V_MTUVALUE(x) ((x) << S_MTUVALUE)
+#define G_MTUVALUE(x) (((x) >> S_MTUVALUE) & M_MTUVALUE)
+
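+/*
+ * Hypothetical MTU-table update combining the fields above (sketch only;
+ * t4_write32() is a placeholder for the driver's register write helper):
+ *
+ *	t4_write32(adap, A_TP_MTU_TABLE,
+ *		   V_MTUINDEX(idx) | V_MTUWIDTH(width) | V_MTUVALUE(mtu));
+ */
+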
+#define A_TP_RSS_LKP_TABLE 0x7dec
+
+#define S_LKPTBLROWVLD    31
+#define V_LKPTBLROWVLD(x) ((x) << S_LKPTBLROWVLD)
+#define F_LKPTBLROWVLD    V_LKPTBLROWVLD(1U)
+
+#define S_LKPTBLROWIDX    20
+#define M_LKPTBLROWIDX    0x3ffU
+#define V_LKPTBLROWIDX(x) ((x) << S_LKPTBLROWIDX)
+#define G_LKPTBLROWIDX(x) (((x) >> S_LKPTBLROWIDX) & M_LKPTBLROWIDX)
+
+#define S_LKPTBLQUEUE1    10
+#define M_LKPTBLQUEUE1    0x3ffU
+#define V_LKPTBLQUEUE1(x) ((x) << S_LKPTBLQUEUE1)
+#define G_LKPTBLQUEUE1(x) (((x) >> S_LKPTBLQUEUE1) & M_LKPTBLQUEUE1)
+
+#define S_LKPTBLQUEUE0    0
+#define M_LKPTBLQUEUE0    0x3ffU
+#define V_LKPTBLQUEUE0(x) ((x) << S_LKPTBLQUEUE0)
+#define G_LKPTBLQUEUE0(x) (((x) >> S_LKPTBLQUEUE0) & M_LKPTBLQUEUE0)
+
+#define A_TP_RSS_CONFIG 0x7df0
+
+#define S_TNL4TUPENIPV6    31
+#define V_TNL4TUPENIPV6(x) ((x) << S_TNL4TUPENIPV6)
+#define F_TNL4TUPENIPV6    V_TNL4TUPENIPV6(1U)
+
+#define S_TNL2TUPENIPV6    30
+#define V_TNL2TUPENIPV6(x) ((x) << S_TNL2TUPENIPV6)
+#define F_TNL2TUPENIPV6    V_TNL2TUPENIPV6(1U)
+
+#define S_TNL4TUPENIPV4    29
+#define V_TNL4TUPENIPV4(x) ((x) << S_TNL4TUPENIPV4)
+#define F_TNL4TUPENIPV4    V_TNL4TUPENIPV4(1U)
+
+#define S_TNL2TUPENIPV4    28
+#define V_TNL2TUPENIPV4(x) ((x) << S_TNL2TUPENIPV4)
+#define F_TNL2TUPENIPV4    V_TNL2TUPENIPV4(1U)
+
+#define S_TNLTCPSEL    27
+#define V_TNLTCPSEL(x) ((x) << S_TNLTCPSEL)
+#define F_TNLTCPSEL    V_TNLTCPSEL(1U)
+
+#define S_TNLIP6SEL    26
+#define V_TNLIP6SEL(x) ((x) << S_TNLIP6SEL)
+#define F_TNLIP6SEL    V_TNLIP6SEL(1U)
+
+#define S_TNLVRTSEL    25
+#define V_TNLVRTSEL(x) ((x) << S_TNLVRTSEL)
+#define F_TNLVRTSEL    V_TNLVRTSEL(1U)
+
+#define S_TNLMAPEN    24
+#define V_TNLMAPEN(x) ((x) << S_TNLMAPEN)
+#define F_TNLMAPEN    V_TNLMAPEN(1U)
+
+#define S_OFDHASHSAVE    19
+#define V_OFDHASHSAVE(x) ((x) << S_OFDHASHSAVE)
+#define F_OFDHASHSAVE    V_OFDHASHSAVE(1U)
+
+#define S_OFDVRTSEL    18
+#define V_OFDVRTSEL(x) ((x) << S_OFDVRTSEL)
+#define F_OFDVRTSEL    V_OFDVRTSEL(1U)
+
+#define S_OFDMAPEN    17
+#define V_OFDMAPEN(x) ((x) << S_OFDMAPEN)
+#define F_OFDMAPEN    V_OFDMAPEN(1U)
+
+#define S_OFDLKPEN    16
+#define V_OFDLKPEN(x) ((x) << S_OFDLKPEN)
+#define F_OFDLKPEN    V_OFDLKPEN(1U)
+
+#define S_SYN4TUPENIPV6    15
+#define V_SYN4TUPENIPV6(x) ((x) << S_SYN4TUPENIPV6)
+#define F_SYN4TUPENIPV6    V_SYN4TUPENIPV6(1U)
+
+#define S_SYN2TUPENIPV6    14
+#define V_SYN2TUPENIPV6(x) ((x) << S_SYN2TUPENIPV6)
+#define F_SYN2TUPENIPV6    V_SYN2TUPENIPV6(1U)
+
+#define S_SYN4TUPENIPV4    13
+#define V_SYN4TUPENIPV4(x) ((x) << S_SYN4TUPENIPV4)
+#define F_SYN4TUPENIPV4    V_SYN4TUPENIPV4(1U)
+
+#define S_SYN2TUPENIPV4    12
+#define V_SYN2TUPENIPV4(x) ((x) << S_SYN2TUPENIPV4)
+#define F_SYN2TUPENIPV4    V_SYN2TUPENIPV4(1U)
+
+#define S_SYNIP6SEL    11
+#define V_SYNIP6SEL(x) ((x) << S_SYNIP6SEL)
+#define F_SYNIP6SEL    V_SYNIP6SEL(1U)
+
+#define S_SYNVRTSEL    10
+#define V_SYNVRTSEL(x) ((x) << S_SYNVRTSEL)
+#define F_SYNVRTSEL    V_SYNVRTSEL(1U)
+
+#define S_SYNMAPEN    9
+#define V_SYNMAPEN(x) ((x) << S_SYNMAPEN)
+#define F_SYNMAPEN    V_SYNMAPEN(1U)
+
+#define S_SYNLKPEN    8
+#define V_SYNLKPEN(x) ((x) << S_SYNLKPEN)
+#define F_SYNLKPEN    V_SYNLKPEN(1U)
+
+#define S_CHANNELENABLE    7
+#define V_CHANNELENABLE(x) ((x) << S_CHANNELENABLE)
+#define F_CHANNELENABLE    V_CHANNELENABLE(1U)
+
+#define S_PORTENABLE    6
+#define V_PORTENABLE(x) ((x) << S_PORTENABLE)
+#define F_PORTENABLE    V_PORTENABLE(1U)
+
+#define S_TNLALLLOOKUP    5
+#define V_TNLALLLOOKUP(x) ((x) << S_TNLALLLOOKUP)
+#define F_TNLALLLOOKUP    V_TNLALLLOOKUP(1U)
+
+#define S_VIRTENABLE    4
+#define V_VIRTENABLE(x) ((x) << S_VIRTENABLE)
+#define F_VIRTENABLE    V_VIRTENABLE(1U)
+
+#define S_CONGESTIONENABLE    3
+#define V_CONGESTIONENABLE(x) ((x) << S_CONGESTIONENABLE)
+#define F_CONGESTIONENABLE    V_CONGESTIONENABLE(1U)
+
+#define S_HASHTOEPLITZ    2
+#define V_HASHTOEPLITZ(x) ((x) << S_HASHTOEPLITZ)
+#define F_HASHTOEPLITZ    V_HASHTOEPLITZ(1U)
+
+#define S_UDPENABLE    1
+#define V_UDPENABLE(x) ((x) << S_UDPENABLE)
+#define F_UDPENABLE    V_UDPENABLE(1U)
+
+#define S_DISABLE    0
+#define V_DISABLE(x) ((x) << S_DISABLE)
+#define F_DISABLE    V_DISABLE(1U)
+
+#define A_TP_RSS_CONFIG_OFD 0x7df8
+
+#define S_MASKSIZE    28
+#define M_MASKSIZE    0xfU
+#define V_MASKSIZE(x) ((x) << S_MASKSIZE)
+#define G_MASKSIZE(x) (((x) >> S_MASKSIZE) & M_MASKSIZE)
+
+#define S_RRCPLMAPEN    20
+#define V_RRCPLMAPEN(x) ((x) << S_RRCPLMAPEN)
+#define F_RRCPLMAPEN    V_RRCPLMAPEN(1U)
+
+#define S_RRCPLQUEWIDTH    16
+#define M_RRCPLQUEWIDTH    0xfU
+#define V_RRCPLQUEWIDTH(x) ((x) << S_RRCPLQUEWIDTH)
+#define G_RRCPLQUEWIDTH(x) (((x) >> S_RRCPLQUEWIDTH) & M_RRCPLQUEWIDTH)
+
+#define A_TP_PIO_ADDR 0x7e40
+#define A_TP_PIO_DATA 0x7e44
+#define A_TP_MIB_INDEX 0x7e50
+#define A_TP_MIB_DATA 0x7e54
+#define A_TP_INT_CAUSE 0x7e74
+
+#define S_FLMTXFLSTEMPTY    30
+#define V_FLMTXFLSTEMPTY(x) ((x) << S_FLMTXFLSTEMPTY)
+#define F_FLMTXFLSTEMPTY    V_FLMTXFLSTEMPTY(1U)
+
+#define A_TP_INGRESS_CONFIG 0x141
+
+#define S_VNIC    11
+#define V_VNIC(x) ((x) << S_VNIC)
+#define F_VNIC    V_VNIC(1U)
+
+#define S_CSUM_HAS_PSEUDO_HDR    10
+#define V_CSUM_HAS_PSEUDO_HDR(x) ((x) << S_CSUM_HAS_PSEUDO_HDR)
+#define F_CSUM_HAS_PSEUDO_HDR    V_CSUM_HAS_PSEUDO_HDR(1U)
+
+#define S_RM_OVLAN    9
+#define V_RM_OVLAN(x) ((x) << S_RM_OVLAN)
+#define F_RM_OVLAN    V_RM_OVLAN(1U)
+
+#define S_LOOKUPEVERYPKT    8
+#define V_LOOKUPEVERYPKT(x) ((x) << S_LOOKUPEVERYPKT)
+#define F_LOOKUPEVERYPKT    V_LOOKUPEVERYPKT(1U)
+
+#define A_TP_MIB_MAC_IN_ERR_0 0x0
+#define A_TP_MIB_TCP_OUT_RST 0xc
+#define A_TP_MIB_TCP_IN_SEG_HI 0x10
+#define A_TP_MIB_TCP_IN_SEG_LO 0x11
+#define A_TP_MIB_TCP_OUT_SEG_HI 0x12
+#define A_TP_MIB_TCP_OUT_SEG_LO 0x13
+#define A_TP_MIB_TCP_RXT_SEG_HI 0x14
+#define A_TP_MIB_TCP_RXT_SEG_LO 0x15
+#define A_TP_MIB_TNL_CNG_DROP_0 0x18
+#define A_TP_MIB_TCP_V6IN_ERR_0 0x28
+#define A_TP_MIB_TCP_V6OUT_RST 0x2c
+#define A_TP_MIB_OFD_ARP_DROP 0x36
+#define A_TP_MIB_TNL_DROP_0 0x44
+#define A_TP_MIB_OFD_VLN_DROP_0 0x58
+
+/* registers for module ULP_TX */
+#define ULP_TX_BASE_ADDR 0x8dc0
+
+#define A_ULP_TX_INT_CAUSE 0x8dcc
+
+#define S_PBL_BOUND_ERR_CH3    31
+#define V_PBL_BOUND_ERR_CH3(x) ((x) << S_PBL_BOUND_ERR_CH3)
+#define F_PBL_BOUND_ERR_CH3    V_PBL_BOUND_ERR_CH3(1U)
+
+#define S_PBL_BOUND_ERR_CH2    30
+#define V_PBL_BOUND_ERR_CH2(x) ((x) << S_PBL_BOUND_ERR_CH2)
+#define F_PBL_BOUND_ERR_CH2    V_PBL_BOUND_ERR_CH2(1U)
+
+#define S_PBL_BOUND_ERR_CH1    29
+#define V_PBL_BOUND_ERR_CH1(x) ((x) << S_PBL_BOUND_ERR_CH1)
+#define F_PBL_BOUND_ERR_CH1    V_PBL_BOUND_ERR_CH1(1U)
+
+#define S_PBL_BOUND_ERR_CH0    28
+#define V_PBL_BOUND_ERR_CH0(x) ((x) << S_PBL_BOUND_ERR_CH0)
+#define F_PBL_BOUND_ERR_CH0    V_PBL_BOUND_ERR_CH0(1U)
+
+/* registers for module PM_RX */
+#define PM_RX_BASE_ADDR 0x8fc0
+
+#define A_PM_RX_INT_CAUSE 0x8fdc
+
+#define S_ZERO_E_CMD_ERROR    22
+#define V_ZERO_E_CMD_ERROR(x) ((x) << S_ZERO_E_CMD_ERROR)
+#define F_ZERO_E_CMD_ERROR    V_ZERO_E_CMD_ERROR(1U)
+
+#define S_OCSPI_PAR_ERROR    3
+#define V_OCSPI_PAR_ERROR(x) ((x) << S_OCSPI_PAR_ERROR)
+#define F_OCSPI_PAR_ERROR    V_OCSPI_PAR_ERROR(1U)
+
+#define S_DB_OPTIONS_PAR_ERROR    2
+#define V_DB_OPTIONS_PAR_ERROR(x) ((x) << S_DB_OPTIONS_PAR_ERROR)
+#define F_DB_OPTIONS_PAR_ERROR    V_DB_OPTIONS_PAR_ERROR(1U)
+
+#define S_IESPI_PAR_ERROR    1
+#define V_IESPI_PAR_ERROR(x) ((x) << S_IESPI_PAR_ERROR)
+#define F_IESPI_PAR_ERROR    V_IESPI_PAR_ERROR(1U)
+
+#define S_E_PCMD_PAR_ERROR    0
+#define V_E_PCMD_PAR_ERROR(x) ((x) << S_E_PCMD_PAR_ERROR)
+#define F_E_PCMD_PAR_ERROR    V_E_PCMD_PAR_ERROR(1U)
+
+/* registers for module PM_TX */
+#define PM_TX_BASE_ADDR 0x8fe0
+
+#define A_PM_TX_INT_CAUSE 0x8ffc
+
+#define S_PCMD_LEN_OVFL0    31
+#define V_PCMD_LEN_OVFL0(x) ((x) << S_PCMD_LEN_OVFL0)
+#define F_PCMD_LEN_OVFL0    V_PCMD_LEN_OVFL0(1U)
+
+#define S_PCMD_LEN_OVFL1    30
+#define V_PCMD_LEN_OVFL1(x) ((x) << S_PCMD_LEN_OVFL1)
+#define F_PCMD_LEN_OVFL1    V_PCMD_LEN_OVFL1(1U)
+
+#define S_PCMD_LEN_OVFL2    29
+#define V_PCMD_LEN_OVFL2(x) ((x) << S_PCMD_LEN_OVFL2)
+#define F_PCMD_LEN_OVFL2    V_PCMD_LEN_OVFL2(1U)
+
+#define S_ZERO_C_CMD_ERROR    28
+#define V_ZERO_C_CMD_ERROR(x) ((x) << S_ZERO_C_CMD_ERROR)
+#define F_ZERO_C_CMD_ERROR    V_ZERO_C_CMD_ERROR(1U)
+
+#define S_OESPI_PAR_ERROR    3
+#define V_OESPI_PAR_ERROR(x) ((x) << S_OESPI_PAR_ERROR)
+#define F_OESPI_PAR_ERROR    V_OESPI_PAR_ERROR(1U)
+
+#define S_ICSPI_PAR_ERROR    1
+#define V_ICSPI_PAR_ERROR(x) ((x) << S_ICSPI_PAR_ERROR)
+#define F_ICSPI_PAR_ERROR    V_ICSPI_PAR_ERROR(1U)
+
+#define S_C_PCMD_PAR_ERROR    0
+#define V_C_PCMD_PAR_ERROR(x) ((x) << S_C_PCMD_PAR_ERROR)
+#define F_C_PCMD_PAR_ERROR    V_C_PCMD_PAR_ERROR(1U)
+
+/* registers for module MPS */
+#define MPS_BASE_ADDR 0x9000
+
+#define A_MPS_PORT_STAT_TX_PORT_BYTES_L 0x400
+#define A_MPS_PORT_STAT_TX_PORT_BYTES_H 0x404
+#define A_MPS_PORT_STAT_TX_PORT_FRAMES_L 0x408
+#define A_MPS_PORT_STAT_TX_PORT_FRAMES_H 0x40c
+#define A_MPS_PORT_STAT_TX_PORT_BCAST_L 0x410
+#define A_MPS_PORT_STAT_TX_PORT_BCAST_H 0x414
+#define A_MPS_PORT_STAT_TX_PORT_MCAST_L 0x418
+#define A_MPS_PORT_STAT_TX_PORT_MCAST_H 0x41c
+#define A_MPS_PORT_STAT_TX_PORT_UCAST_L 0x420
+#define A_MPS_PORT_STAT_TX_PORT_UCAST_H 0x424
+#define A_MPS_PORT_STAT_TX_PORT_ERROR_L 0x428
+#define A_MPS_PORT_STAT_TX_PORT_ERROR_H 0x42c
+#define A_MPS_PORT_STAT_TX_PORT_64B_L 0x430
+#define A_MPS_PORT_STAT_TX_PORT_64B_H 0x434
+#define A_MPS_PORT_STAT_TX_PORT_65B_127B_L 0x438
+#define A_MPS_PORT_STAT_TX_PORT_65B_127B_H 0x43c
+#define A_MPS_PORT_STAT_TX_PORT_128B_255B_L 0x440
+#define A_MPS_PORT_STAT_TX_PORT_128B_255B_H 0x444
+#define A_MPS_PORT_STAT_TX_PORT_256B_511B_L 0x448
+#define A_MPS_PORT_STAT_TX_PORT_256B_511B_H 0x44c
+#define A_MPS_PORT_STAT_TX_PORT_512B_1023B_L 0x450
+#define A_MPS_PORT_STAT_TX_PORT_512B_1023B_H 0x454
+#define A_MPS_PORT_STAT_TX_PORT_1024B_1518B_L 0x458
+#define A_MPS_PORT_STAT_TX_PORT_1024B_1518B_H 0x45c
+#define A_MPS_PORT_STAT_TX_PORT_1519B_MAX_L 0x460
+#define A_MPS_PORT_STAT_TX_PORT_1519B_MAX_H 0x464
+#define A_MPS_PORT_STAT_TX_PORT_DROP_L 0x468
+#define A_MPS_PORT_STAT_TX_PORT_DROP_H 0x46c
+#define A_MPS_PORT_STAT_TX_PORT_PAUSE_L 0x470
+#define A_MPS_PORT_STAT_TX_PORT_PAUSE_H 0x474
+#define A_MPS_PORT_STAT_TX_PORT_PPP0_L 0x478
+#define A_MPS_PORT_STAT_TX_PORT_PPP0_H 0x47c
+#define A_MPS_PORT_STAT_TX_PORT_PPP1_L 0x480
+#define A_MPS_PORT_STAT_TX_PORT_PPP1_H 0x484
+#define A_MPS_PORT_STAT_TX_PORT_PPP2_L 0x488
+#define A_MPS_PORT_STAT_TX_PORT_PPP2_H 0x48c
+#define A_MPS_PORT_STAT_TX_PORT_PPP3_L 0x490
+#define A_MPS_PORT_STAT_TX_PORT_PPP3_H 0x494
+#define A_MPS_PORT_STAT_TX_PORT_PPP4_L 0x498
+#define A_MPS_PORT_STAT_TX_PORT_PPP4_H 0x49c
+#define A_MPS_PORT_STAT_TX_PORT_PPP5_L 0x4a0
+#define A_MPS_PORT_STAT_TX_PORT_PPP5_H 0x4a4
+#define A_MPS_PORT_STAT_TX_PORT_PPP6_L 0x4a8
+#define A_MPS_PORT_STAT_TX_PORT_PPP6_H 0x4ac
+#define A_MPS_PORT_STAT_TX_PORT_PPP7_L 0x4b0
+#define A_MPS_PORT_STAT_TX_PORT_PPP7_H 0x4b4
+#define A_MPS_PORT_STAT_LB_PORT_BYTES_L 0x4c0
+#define A_MPS_PORT_STAT_LB_PORT_BYTES_H 0x4c4
+#define A_MPS_PORT_STAT_LB_PORT_FRAMES_L 0x4c8
+#define A_MPS_PORT_STAT_LB_PORT_FRAMES_H 0x4cc
+#define A_MPS_PORT_STAT_LB_PORT_BCAST_L 0x4d0
+#define A_MPS_PORT_STAT_LB_PORT_BCAST_H 0x4d4
+#define A_MPS_PORT_STAT_LB_PORT_MCAST_L 0x4d8
+#define A_MPS_PORT_STAT_LB_PORT_MCAST_H 0x4dc
+#define A_MPS_PORT_STAT_LB_PORT_UCAST_L 0x4e0
+#define A_MPS_PORT_STAT_LB_PORT_UCAST_H 0x4e4
+#define A_MPS_PORT_STAT_LB_PORT_ERROR_L 0x4e8
+#define A_MPS_PORT_STAT_LB_PORT_ERROR_H 0x4ec
+#define A_MPS_PORT_STAT_LB_PORT_64B_L 0x4f0
+#define A_MPS_PORT_STAT_LB_PORT_64B_H 0x4f4
+#define A_MPS_PORT_STAT_LB_PORT_65B_127B_L 0x4f8
+#define A_MPS_PORT_STAT_LB_PORT_65B_127B_H 0x4fc
+#define A_MPS_PORT_STAT_LB_PORT_128B_255B_L 0x500
+#define A_MPS_PORT_STAT_LB_PORT_128B_255B_H 0x504
+#define A_MPS_PORT_STAT_LB_PORT_256B_511B_L 0x508
+#define A_MPS_PORT_STAT_LB_PORT_256B_511B_H 0x50c
+#define A_MPS_PORT_STAT_LB_PORT_512B_1023B_L 0x510
+#define A_MPS_PORT_STAT_LB_PORT_512B_1023B_H 0x514
+#define A_MPS_PORT_STAT_LB_PORT_1024B_1518B_L 0x518
+#define A_MPS_PORT_STAT_LB_PORT_1024B_1518B_H 0x51c
+#define A_MPS_PORT_STAT_LB_PORT_1519B_MAX_L 0x520
+#define A_MPS_PORT_STAT_LB_PORT_1519B_MAX_H 0x524
+#define A_MPS_PORT_STAT_LB_PORT_DROP_FRAMES 0x528
+#define A_MPS_PORT_STAT_RX_PORT_BYTES_L 0x540
+#define A_MPS_PORT_STAT_RX_PORT_BYTES_H 0x544
+#define A_MPS_PORT_STAT_RX_PORT_FRAMES_L 0x548
+#define A_MPS_PORT_STAT_RX_PORT_FRAMES_H 0x54c
+#define A_MPS_PORT_STAT_RX_PORT_BCAST_L 0x550
+#define A_MPS_PORT_STAT_RX_PORT_BCAST_H 0x554
+#define A_MPS_PORT_STAT_RX_PORT_MCAST_L 0x558
+#define A_MPS_PORT_STAT_RX_PORT_MCAST_H 0x55c
+#define A_MPS_PORT_STAT_RX_PORT_UCAST_L 0x560
+#define A_MPS_PORT_STAT_RX_PORT_UCAST_H 0x564
+#define A_MPS_PORT_STAT_RX_PORT_MTU_ERROR_L 0x568
+#define A_MPS_PORT_STAT_RX_PORT_MTU_ERROR_H 0x56c
+#define A_MPS_PORT_STAT_RX_PORT_MTU_CRC_ERROR_L 0x570
+#define A_MPS_PORT_STAT_RX_PORT_MTU_CRC_ERROR_H 0x574
+#define A_MPS_PORT_STAT_RX_PORT_CRC_ERROR_L 0x578
+#define A_MPS_PORT_STAT_RX_PORT_CRC_ERROR_H 0x57c
+#define A_MPS_PORT_STAT_RX_PORT_LEN_ERROR_L 0x580
+#define A_MPS_PORT_STAT_RX_PORT_LEN_ERROR_H 0x584
+#define A_MPS_PORT_STAT_RX_PORT_SYM_ERROR_L 0x588
+#define A_MPS_PORT_STAT_RX_PORT_SYM_ERROR_H 0x58c
+#define A_MPS_PORT_STAT_RX_PORT_64B_L 0x590
+#define A_MPS_PORT_STAT_RX_PORT_64B_H 0x594
+#define A_MPS_PORT_STAT_RX_PORT_65B_127B_L 0x598
+#define A_MPS_PORT_STAT_RX_PORT_65B_127B_H 0x59c
+#define A_MPS_PORT_STAT_RX_PORT_128B_255B_L 0x5a0
+#define A_MPS_PORT_STAT_RX_PORT_128B_255B_H 0x5a4
+#define A_MPS_PORT_STAT_RX_PORT_256B_511B_L 0x5a8
+#define A_MPS_PORT_STAT_RX_PORT_256B_511B_H 0x5ac
+#define A_MPS_PORT_STAT_RX_PORT_512B_1023B_L 0x5b0
+#define A_MPS_PORT_STAT_RX_PORT_512B_1023B_H 0x5b4
+#define A_MPS_PORT_STAT_RX_PORT_1024B_1518B_L 0x5b8
+#define A_MPS_PORT_STAT_RX_PORT_1024B_1518B_H 0x5bc
+#define A_MPS_PORT_STAT_RX_PORT_1519B_MAX_L 0x5c0
+#define A_MPS_PORT_STAT_RX_PORT_1519B_MAX_H 0x5c4
+#define A_MPS_PORT_STAT_RX_PORT_PAUSE_L 0x5c8
+#define A_MPS_PORT_STAT_RX_PORT_PAUSE_H 0x5cc
+#define A_MPS_PORT_STAT_RX_PORT_PPP0_L 0x5d0
+#define A_MPS_PORT_STAT_RX_PORT_PPP0_H 0x5d4
+#define A_MPS_PORT_STAT_RX_PORT_PPP1_L 0x5d8
+#define A_MPS_PORT_STAT_RX_PORT_PPP1_H 0x5dc
+#define A_MPS_PORT_STAT_RX_PORT_PPP2_L 0x5e0
+#define A_MPS_PORT_STAT_RX_PORT_PPP2_H 0x5e4
+#define A_MPS_PORT_STAT_RX_PORT_PPP3_L 0x5e8
+#define A_MPS_PORT_STAT_RX_PORT_PPP3_H 0x5ec
+#define A_MPS_PORT_STAT_RX_PORT_PPP4_L 0x5f0
+#define A_MPS_PORT_STAT_RX_PORT_PPP4_H 0x5f4
+#define A_MPS_PORT_STAT_RX_PORT_PPP5_L 0x5f8
+#define A_MPS_PORT_STAT_RX_PORT_PPP5_H 0x5fc
+#define A_MPS_PORT_STAT_RX_PORT_PPP6_L 0x600
+#define A_MPS_PORT_STAT_RX_PORT_PPP6_H 0x604
+#define A_MPS_PORT_STAT_RX_PORT_PPP7_L 0x608
+#define A_MPS_PORT_STAT_RX_PORT_PPP7_H 0x60c
+#define A_MPS_PORT_STAT_RX_PORT_LESS_64B_L 0x610
+#define A_MPS_PORT_STAT_RX_PORT_LESS_64B_H 0x614
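+
+/*
+ * The per-port statistics above are laid out as consecutive _L/_H 32-bit
+ * word pairs.  Assuming each pair forms one 64-bit counter, a reader
+ * could combine them as follows (read32() is only a placeholder for the
+ * register accessor):
+ *
+ *	frames = ((u64)read32(A_MPS_PORT_STAT_RX_PORT_FRAMES_H) << 32) |
+ *		 read32(A_MPS_PORT_STAT_RX_PORT_FRAMES_L);
+ */
+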
+#define A_MPS_CMN_CTL 0x9000
+
+#define S_DETECT8023    3
+#define V_DETECT8023(x) ((x) << S_DETECT8023)
+#define F_DETECT8023    V_DETECT8023(1U)
+
+#define S_VFDIRECTACCESS    2
+#define V_VFDIRECTACCESS(x) ((x) << S_VFDIRECTACCESS)
+#define F_VFDIRECTACCESS    V_VFDIRECTACCESS(1U)
+
+#define S_NUMPORTS    0
+#define M_NUMPORTS    0x3U
+#define V_NUMPORTS(x) ((x) << S_NUMPORTS)
+#define G_NUMPORTS(x) (((x) >> S_NUMPORTS) & M_NUMPORTS)
+
+#define A_MPS_INT_CAUSE 0x9008
+
+#define S_STATINT    5
+#define V_STATINT(x) ((x) << S_STATINT)
+#define F_STATINT    V_STATINT(1U)
+
+#define S_TXINT    4
+#define V_TXINT(x) ((x) << S_TXINT)
+#define F_TXINT    V_TXINT(1U)
+
+#define S_RXINT    3
+#define V_RXINT(x) ((x) << S_RXINT)
+#define F_RXINT    V_RXINT(1U)
+
+#define S_TRCINT    2
+#define V_TRCINT(x) ((x) << S_TRCINT)
+#define F_TRCINT    V_TRCINT(1U)
+
+#define S_CLSINT    1
+#define V_CLSINT(x) ((x) << S_CLSINT)
+#define F_CLSINT    V_CLSINT(1U)
+
+#define S_PLINT    0
+#define V_PLINT(x) ((x) << S_PLINT)
+#define F_PLINT    V_PLINT(1U)
+
+#define A_MPS_TX_INT_CAUSE 0x9408
+
+#define S_PORTERR    16
+#define V_PORTERR(x) ((x) << S_PORTERR)
+#define F_PORTERR    V_PORTERR(1U)
+
+#define S_FRMERR    15
+#define V_FRMERR(x) ((x) << S_FRMERR)
+#define F_FRMERR    V_FRMERR(1U)
+
+#define S_SECNTERR    14
+#define V_SECNTERR(x) ((x) << S_SECNTERR)
+#define F_SECNTERR    V_SECNTERR(1U)
+
+#define S_BUBBLE    13
+#define V_BUBBLE(x) ((x) << S_BUBBLE)
+#define F_BUBBLE    V_BUBBLE(1U)
+
+#define S_TXDESCFIFO    9
+#define M_TXDESCFIFO    0xfU
+#define V_TXDESCFIFO(x) ((x) << S_TXDESCFIFO)
+#define G_TXDESCFIFO(x) (((x) >> S_TXDESCFIFO) & M_TXDESCFIFO)
+
+#define S_TXDATAFIFO    5
+#define M_TXDATAFIFO    0xfU
+#define V_TXDATAFIFO(x) ((x) << S_TXDATAFIFO)
+#define G_TXDATAFIFO(x) (((x) >> S_TXDATAFIFO) & M_TXDATAFIFO)
+
+#define S_NCSIFIFO    4
+#define V_NCSIFIFO(x) ((x) << S_NCSIFIFO)
+#define F_NCSIFIFO    V_NCSIFIFO(1U)
+
+#define S_TPFIFO    0
+#define M_TPFIFO    0xfU
+#define V_TPFIFO(x) ((x) << S_TPFIFO)
+#define G_TPFIFO(x) (((x) >> S_TPFIFO) & M_TPFIFO)
+
+#define A_MPS_STAT_PERR_INT_CAUSE_SRAM 0x9614
+#define A_MPS_STAT_PERR_INT_CAUSE_TX_FIFO 0x9620
+
+#define S_TX    12
+#define M_TX    0xffU
+#define V_TX(x) ((x) << S_TX)
+#define G_TX(x) (((x) >> S_TX) & M_TX)
+
+#define S_TXPAUSEFIFO    8
+#define M_TXPAUSEFIFO    0xfU
+#define V_TXPAUSEFIFO(x) ((x) << S_TXPAUSEFIFO)
+#define G_TXPAUSEFIFO(x) (((x) >> S_TXPAUSEFIFO) & M_TXPAUSEFIFO)
+
+#define S_DROP    0
+#define M_DROP    0xffU
+#define V_DROP(x) ((x) << S_DROP)
+#define G_DROP(x) (((x) >> S_DROP) & M_DROP)
+
+#define A_MPS_STAT_PERR_INT_CAUSE_RX_FIFO 0x962c
+
+#define S_PAUSEFIFO    20
+#define M_PAUSEFIFO    0xfU
+#define V_PAUSEFIFO(x) ((x) << S_PAUSEFIFO)
+#define G_PAUSEFIFO(x) (((x) >> S_PAUSEFIFO) & M_PAUSEFIFO)
+
+#define S_LPBK    16
+#define M_LPBK    0xfU
+#define V_LPBK(x) ((x) << S_LPBK)
+#define G_LPBK(x) (((x) >> S_LPBK) & M_LPBK)
+
+#define S_NQ    8
+#define M_NQ    0xffU
+#define V_NQ(x) ((x) << S_NQ)
+#define G_NQ(x) (((x) >> S_NQ) & M_NQ)
+
+#define S_PV    4
+#define M_PV    0xfU
+#define V_PV(x) ((x) << S_PV)
+#define G_PV(x) (((x) >> S_PV) & M_PV)
+
+#define S_MAC    0
+#define M_MAC    0xfU
+#define V_MAC(x) ((x) << S_MAC)
+#define G_MAC(x) (((x) >> S_MAC) & M_MAC)
+
+#define A_MPS_STAT_RX_BG_0_MAC_DROP_FRAME_L 0x9640
+#define A_MPS_STAT_RX_BG_0_MAC_DROP_FRAME_H 0x9644
+#define A_MPS_STAT_RX_BG_1_MAC_DROP_FRAME_L 0x9648
+#define A_MPS_STAT_RX_BG_1_MAC_DROP_FRAME_H 0x964c
+#define A_MPS_STAT_RX_BG_2_MAC_DROP_FRAME_L 0x9650
+#define A_MPS_STAT_RX_BG_2_MAC_DROP_FRAME_H 0x9654
+#define A_MPS_STAT_RX_BG_3_MAC_DROP_FRAME_L 0x9658
+#define A_MPS_STAT_RX_BG_3_MAC_DROP_FRAME_H 0x965c
+#define A_MPS_STAT_RX_BG_0_LB_DROP_FRAME_L 0x9660
+#define A_MPS_STAT_RX_BG_0_LB_DROP_FRAME_H 0x9664
+#define A_MPS_STAT_RX_BG_1_LB_DROP_FRAME_L 0x9668
+#define A_MPS_STAT_RX_BG_1_LB_DROP_FRAME_H 0x966c
+#define A_MPS_STAT_RX_BG_2_LB_DROP_FRAME_L 0x9670
+#define A_MPS_STAT_RX_BG_2_LB_DROP_FRAME_H 0x9674
+#define A_MPS_STAT_RX_BG_3_LB_DROP_FRAME_L 0x9678
+#define A_MPS_STAT_RX_BG_3_LB_DROP_FRAME_H 0x967c
+#define A_MPS_STAT_RX_BG_0_MAC_TRUNC_FRAME_L 0x9680
+#define A_MPS_STAT_RX_BG_0_MAC_TRUNC_FRAME_H 0x9684
+#define A_MPS_STAT_RX_BG_1_MAC_TRUNC_FRAME_L 0x9688
+#define A_MPS_STAT_RX_BG_1_MAC_TRUNC_FRAME_H 0x968c
+#define A_MPS_STAT_RX_BG_2_MAC_TRUNC_FRAME_L 0x9690
+#define A_MPS_STAT_RX_BG_2_MAC_TRUNC_FRAME_H 0x9694
+#define A_MPS_STAT_RX_BG_3_MAC_TRUNC_FRAME_L 0x9698
+#define A_MPS_STAT_RX_BG_3_MAC_TRUNC_FRAME_H 0x969c
+#define A_MPS_STAT_RX_BG_0_LB_TRUNC_FRAME_L 0x96a0
+#define A_MPS_STAT_RX_BG_0_LB_TRUNC_FRAME_H 0x96a4
+#define A_MPS_STAT_RX_BG_1_LB_TRUNC_FRAME_L 0x96a8
+#define A_MPS_STAT_RX_BG_1_LB_TRUNC_FRAME_H 0x96ac
+#define A_MPS_STAT_RX_BG_2_LB_TRUNC_FRAME_L 0x96b0
+#define A_MPS_STAT_RX_BG_2_LB_TRUNC_FRAME_H 0x96b4
+#define A_MPS_STAT_RX_BG_3_LB_TRUNC_FRAME_L 0x96b8
+#define A_MPS_STAT_RX_BG_3_LB_TRUNC_FRAME_H 0x96bc
+#define A_MPS_TRC_CFG 0x9800
+
+#define S_TRCFIFOEMPTY    4
+#define V_TRCFIFOEMPTY(x) ((x) << S_TRCFIFOEMPTY)
+#define F_TRCFIFOEMPTY    V_TRCFIFOEMPTY(1U)
+
+#define S_TRCIGNOREDROPINPUT    3
+#define V_TRCIGNOREDROPINPUT(x) ((x) << S_TRCIGNOREDROPINPUT)
+#define F_TRCIGNOREDROPINPUT    V_TRCIGNOREDROPINPUT(1U)
+
+#define S_TRCKEEPDUPLICATES    2
+#define V_TRCKEEPDUPLICATES(x) ((x) << S_TRCKEEPDUPLICATES)
+#define F_TRCKEEPDUPLICATES    V_TRCKEEPDUPLICATES(1U)
+
+#define S_TRCEN    1
+#define V_TRCEN(x) ((x) << S_TRCEN)
+#define F_TRCEN    V_TRCEN(1U)
+
+#define S_TRCMULTIFILTER    0
+#define V_TRCMULTIFILTER(x) ((x) << S_TRCMULTIFILTER)
+#define F_TRCMULTIFILTER    V_TRCMULTIFILTER(1U)
+
+#define A_MPS_TRC_RSS_CONTROL 0x9808
+
+#define S_RSSCONTROL    16
+#define M_RSSCONTROL    0xffU
+#define V_RSSCONTROL(x) ((x) << S_RSSCONTROL)
+#define G_RSSCONTROL(x) (((x) >> S_RSSCONTROL) & M_RSSCONTROL)
+
+#define S_QUEUENUMBER    0
+#define M_QUEUENUMBER    0xffffU
+#define V_QUEUENUMBER(x) ((x) << S_QUEUENUMBER)
+#define G_QUEUENUMBER(x) (((x) >> S_QUEUENUMBER) & M_QUEUENUMBER)
+
+#define A_MPS_TRC_FILTER_MATCH_CTL_A 0x9810
+
+#define S_TFINVERTMATCH    24
+#define V_TFINVERTMATCH(x) ((x) << S_TFINVERTMATCH)
+#define F_TFINVERTMATCH    V_TFINVERTMATCH(1U)
+
+#define S_TFPKTTOOLARGE    23
+#define V_TFPKTTOOLARGE(x) ((x) << S_TFPKTTOOLARGE)
+#define F_TFPKTTOOLARGE    V_TFPKTTOOLARGE(1U)
+
+#define S_TFEN    22
+#define V_TFEN(x) ((x) << S_TFEN)
+#define F_TFEN    V_TFEN(1U)
+
+#define S_TFPORT    18
+#define M_TFPORT    0xfU
+#define V_TFPORT(x) ((x) << S_TFPORT)
+#define G_TFPORT(x) (((x) >> S_TFPORT) & M_TFPORT)
+
+#define S_TFDROP    17
+#define V_TFDROP(x) ((x) << S_TFDROP)
+#define F_TFDROP    V_TFDROP(1U)
+
+#define S_TFSOPEOPERR    16
+#define V_TFSOPEOPERR(x) ((x) << S_TFSOPEOPERR)
+#define F_TFSOPEOPERR    V_TFSOPEOPERR(1U)
+
+#define S_TFLENGTH    8
+#define M_TFLENGTH    0x1fU
+#define V_TFLENGTH(x) ((x) << S_TFLENGTH)
+#define G_TFLENGTH(x) (((x) >> S_TFLENGTH) & M_TFLENGTH)
+
+#define S_TFOFFSET    0
+#define M_TFOFFSET    0x1fU
+#define V_TFOFFSET(x) ((x) << S_TFOFFSET)
+#define G_TFOFFSET(x) (((x) >> S_TFOFFSET) & M_TFOFFSET)
+
+#define A_MPS_TRC_FILTER_MATCH_CTL_B 0x9820
+
+#define S_TFMINPKTSIZE    16
+#define M_TFMINPKTSIZE    0x1ffU
+#define V_TFMINPKTSIZE(x) ((x) << S_TFMINPKTSIZE)
+#define G_TFMINPKTSIZE(x) (((x) >> S_TFMINPKTSIZE) & M_TFMINPKTSIZE)
+
+#define S_TFCAPTUREMAX    0
+#define M_TFCAPTUREMAX    0x3fffU
+#define V_TFCAPTUREMAX(x) ((x) << S_TFCAPTUREMAX)
+#define G_TFCAPTUREMAX(x) (((x) >> S_TFCAPTUREMAX) & M_TFCAPTUREMAX)
+
+#define A_MPS_TRC_INT_CAUSE 0x985c
+
+#define S_TRCPLERRENB    9
+#define V_TRCPLERRENB(x) ((x) << S_TRCPLERRENB)
+#define F_TRCPLERRENB    V_TRCPLERRENB(1U)
+
+#define S_MISCPERR    8
+#define V_MISCPERR(x) ((x) << S_MISCPERR)
+#define F_MISCPERR    V_MISCPERR(1U)
+
+#define S_PKTFIFO    4
+#define M_PKTFIFO    0xfU
+#define V_PKTFIFO(x) ((x) << S_PKTFIFO)
+#define G_PKTFIFO(x) (((x) >> S_PKTFIFO) & M_PKTFIFO)
+
+#define S_FILTMEM    0
+#define M_FILTMEM    0xfU
+#define V_FILTMEM(x) ((x) << S_FILTMEM)
+#define G_FILTMEM(x) (((x) >> S_FILTMEM) & M_FILTMEM)
+
+#define A_MPS_TRC_FILTER0_MATCH 0x9c00
+#define A_MPS_TRC_FILTER0_DONT_CARE 0x9c80
+#define A_MPS_TRC_FILTER1_MATCH 0x9d00
+#define A_MPS_CLS_INT_CAUSE 0xd028
+
+#define S_PLERRENB    3
+#define V_PLERRENB(x) ((x) << S_PLERRENB)
+#define F_PLERRENB    V_PLERRENB(1U)
+
+#define S_HASHSRAM    2
+#define V_HASHSRAM(x) ((x) << S_HASHSRAM)
+#define F_HASHSRAM    V_HASHSRAM(1U)
+
+#define S_MATCHTCAM    1
+#define V_MATCHTCAM(x) ((x) << S_MATCHTCAM)
+#define F_MATCHTCAM    V_MATCHTCAM(1U)
+
+#define S_MATCHSRAM    0
+#define V_MATCHSRAM(x) ((x) << S_MATCHSRAM)
+#define F_MATCHSRAM    V_MATCHSRAM(1U)
+
+#define A_MPS_RX_PERR_INT_CAUSE 0x11074
+
+/* registers for module CPL_SWITCH */
+#define CPL_SWITCH_BASE_ADDR 0x19040
+
+#define A_CPL_INTR_CAUSE 0x19054
+
+#define S_CIM_OP_MAP_PERR    5
+#define V_CIM_OP_MAP_PERR(x) ((x) << S_CIM_OP_MAP_PERR)
+#define F_CIM_OP_MAP_PERR    V_CIM_OP_MAP_PERR(1U)
+
+#define S_CIM_OVFL_ERROR    4
+#define V_CIM_OVFL_ERROR(x) ((x) << S_CIM_OVFL_ERROR)
+#define F_CIM_OVFL_ERROR    V_CIM_OVFL_ERROR(1U)
+
+#define S_TP_FRAMING_ERROR    3
+#define V_TP_FRAMING_ERROR(x) ((x) << S_TP_FRAMING_ERROR)
+#define F_TP_FRAMING_ERROR    V_TP_FRAMING_ERROR(1U)
+
+#define S_SGE_FRAMING_ERROR    2
+#define V_SGE_FRAMING_ERROR(x) ((x) << S_SGE_FRAMING_ERROR)
+#define F_SGE_FRAMING_ERROR    V_SGE_FRAMING_ERROR(1U)
+
+#define S_CIM_FRAMING_ERROR    1
+#define V_CIM_FRAMING_ERROR(x) ((x) << S_CIM_FRAMING_ERROR)
+#define F_CIM_FRAMING_ERROR    V_CIM_FRAMING_ERROR(1U)
+
+#define S_ZERO_SWITCH_ERROR    0
+#define V_ZERO_SWITCH_ERROR(x) ((x) << S_ZERO_SWITCH_ERROR)
+#define F_ZERO_SWITCH_ERROR    V_ZERO_SWITCH_ERROR(1U)
+
+/* registers for module SMB */
+#define SMB_BASE_ADDR 0x19060
+
+#define A_SMB_INT_CAUSE 0x19090
+
+#define S_MSTTXFIFOPARINT    21
+#define V_MSTTXFIFOPARINT(x) ((x) << S_MSTTXFIFOPARINT)
+#define F_MSTTXFIFOPARINT    V_MSTTXFIFOPARINT(1U)
+
+#define S_MSTRXFIFOPARINT    20
+#define V_MSTRXFIFOPARINT(x) ((x) << S_MSTRXFIFOPARINT)
+#define F_MSTRXFIFOPARINT    V_MSTRXFIFOPARINT(1U)
+
+#define S_SLVFIFOPARINT    19
+#define V_SLVFIFOPARINT(x) ((x) << S_SLVFIFOPARINT)
+#define F_SLVFIFOPARINT    V_SLVFIFOPARINT(1U)
+
+/* registers for module ULP_RX */
+#define ULP_RX_BASE_ADDR 0x19150
+
+#define A_ULP_RX_INT_CAUSE 0x19158
+#define A_ULP_RX_ISCSI_TAGMASK 0x19164
+
+#define S_ISCSITAGMASK    6
+#define M_ISCSITAGMASK    0x3ffffffU
+#define V_ISCSITAGMASK(x) ((x) << S_ISCSITAGMASK)
+#define G_ISCSITAGMASK(x) (((x) >> S_ISCSITAGMASK) & M_ISCSITAGMASK)
+
+#define A_ULP_RX_ISCSI_PSZ 0x19168
+
+#define S_HPZ3    24
+#define M_HPZ3    0xfU
+#define V_HPZ3(x) ((x) << S_HPZ3)
+#define G_HPZ3(x) (((x) >> S_HPZ3) & M_HPZ3)
+
+#define S_HPZ2    16
+#define M_HPZ2    0xfU
+#define V_HPZ2(x) ((x) << S_HPZ2)
+#define G_HPZ2(x) (((x) >> S_HPZ2) & M_HPZ2)
+
+#define S_HPZ1    8
+#define M_HPZ1    0xfU
+#define V_HPZ1(x) ((x) << S_HPZ1)
+#define G_HPZ1(x) (((x) >> S_HPZ1) & M_HPZ1)
+
+#define S_HPZ0    0
+#define M_HPZ0    0xfU
+#define V_HPZ0(x) ((x) << S_HPZ0)
+#define G_HPZ0(x) (((x) >> S_HPZ0) & M_HPZ0)
+
+#define A_ULP_RX_TDDP_PSZ 0x19178
+
+/* registers for module SF */
+#define SF_BASE_ADDR 0x193f8
+
+#define A_SF_DATA 0x193f8
+#define A_SF_OP 0x193fc
+
+#define S_SF_LOCK    4
+#define V_SF_LOCK(x) ((x) << S_SF_LOCK)
+#define F_SF_LOCK    V_SF_LOCK(1U)
+
+#define S_CONT    3
+#define V_CONT(x) ((x) << S_CONT)
+#define F_CONT    V_CONT(1U)
+
+#define S_BYTECNT    1
+#define M_BYTECNT    0x3U
+#define V_BYTECNT(x) ((x) << S_BYTECNT)
+#define G_BYTECNT(x) (((x) >> S_BYTECNT) & M_BYTECNT)
+
+#define S_OP    0
+#define V_OP(x) ((x) << S_OP)
+#define F_OP    V_OP(1U)
+
+/* registers for module PL */
+#define PL_BASE_ADDR 0x19400
+
+#define A_PL_PF_INT_CAUSE 0x3c0
+
+#define S_PFSW    3
+#define V_PFSW(x) ((x) << S_PFSW)
+#define F_PFSW    V_PFSW(1U)
+
+#define S_PFSGE    2
+#define V_PFSGE(x) ((x) << S_PFSGE)
+#define F_PFSGE    V_PFSGE(1U)
+
+#define S_PFCIM    1
+#define V_PFCIM(x) ((x) << S_PFCIM)
+#define F_PFCIM    V_PFCIM(1U)
+
+#define S_PFMPS    0
+#define V_PFMPS(x) ((x) << S_PFMPS)
+#define F_PFMPS    V_PFMPS(1U)
+
+#define A_PL_PF_INT_ENABLE 0x3c4
+#define A_PL_PF_CTL 0x3c8
+
+#define S_SWINT    0
+#define V_SWINT(x) ((x) << S_SWINT)
+#define F_SWINT    V_SWINT(1U)
+
+#define A_PL_WHOAMI 0x19400
+
+#define S_PORTXMAP    24
+#define M_PORTXMAP    0x7U
+#define V_PORTXMAP(x) ((x) << S_PORTXMAP)
+#define G_PORTXMAP(x) (((x) >> S_PORTXMAP) & M_PORTXMAP)
+
+#define S_SOURCEBUS    16
+#define M_SOURCEBUS    0x3U
+#define V_SOURCEBUS(x) ((x) << S_SOURCEBUS)
+#define G_SOURCEBUS(x) (((x) >> S_SOURCEBUS) & M_SOURCEBUS)
+
+#define S_SOURCEPF    8
+#define M_SOURCEPF    0x7U
+#define V_SOURCEPF(x) ((x) << S_SOURCEPF)
+#define G_SOURCEPF(x) (((x) >> S_SOURCEPF) & M_SOURCEPF)
+
+#define S_ISVF    7
+#define V_ISVF(x) ((x) << S_ISVF)
+#define F_ISVF    V_ISVF(1U)
+
+#define S_VFID    0
+#define M_VFID    0x7fU
+#define V_VFID(x) ((x) << S_VFID)
+#define G_VFID(x) (((x) >> S_VFID) & M_VFID)
+
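+/*
+ * Illustrative WHOAMI decode only (read32() is a placeholder): a physical
+ * function could identify itself with
+ *
+ *	pf = G_SOURCEPF(read32(A_PL_WHOAMI));
+ *
+ * and, going by the field names, a virtual function would see F_ISVF set
+ * and its index in G_VFID().
+ */
+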
+#define A_PL_INT_CAUSE 0x1940c
+
+#define S_FLR    30
+#define V_FLR(x) ((x) << S_FLR)
+#define F_FLR    V_FLR(1U)
+
+#define S_SW_CIM    29
+#define V_SW_CIM(x) ((x) << S_SW_CIM)
+#define F_SW_CIM    V_SW_CIM(1U)
+
+#define S_UART    28
+#define V_UART(x) ((x) << S_UART)
+#define F_UART    V_UART(1U)
+
+#define S_ULP_TX    27
+#define V_ULP_TX(x) ((x) << S_ULP_TX)
+#define F_ULP_TX    V_ULP_TX(1U)
+
+#define S_SGE    26
+#define V_SGE(x) ((x) << S_SGE)
+#define F_SGE    V_SGE(1U)
+
+#define S_HMA    25
+#define V_HMA(x) ((x) << S_HMA)
+#define F_HMA    V_HMA(1U)
+
+#define S_CPL_SWITCH    24
+#define V_CPL_SWITCH(x) ((x) << S_CPL_SWITCH)
+#define F_CPL_SWITCH    V_CPL_SWITCH(1U)
+
+#define S_ULP_RX    23
+#define V_ULP_RX(x) ((x) << S_ULP_RX)
+#define F_ULP_RX    V_ULP_RX(1U)
+
+#define S_PM_RX    22
+#define V_PM_RX(x) ((x) << S_PM_RX)
+#define F_PM_RX    V_PM_RX(1U)
+
+#define S_PM_TX    21
+#define V_PM_TX(x) ((x) << S_PM_TX)
+#define F_PM_TX    V_PM_TX(1U)
+
+#define S_MA    20
+#define V_MA(x) ((x) << S_MA)
+#define F_MA    V_MA(1U)
+
+#define S_TP    19
+#define V_TP(x) ((x) << S_TP)
+#define F_TP    V_TP(1U)
+
+#define S_LE    18
+#define V_LE(x) ((x) << S_LE)
+#define F_LE    V_LE(1U)
+
+#define S_EDC1    17
+#define V_EDC1(x) ((x) << S_EDC1)
+#define F_EDC1    V_EDC1(1U)
+
+#define S_EDC0    16
+#define V_EDC0(x) ((x) << S_EDC0)
+#define F_EDC0    V_EDC0(1U)
+
+#define S_MC    15
+#define V_MC(x) ((x) << S_MC)
+#define F_MC    V_MC(1U)
+
+#define S_PCIE    14
+#define V_PCIE(x) ((x) << S_PCIE)
+#define F_PCIE    V_PCIE(1U)
+
+#define S_PMU    13
+#define V_PMU(x) ((x) << S_PMU)
+#define F_PMU    V_PMU(1U)
+
+#define S_XGMAC_KR1    12
+#define V_XGMAC_KR1(x) ((x) << S_XGMAC_KR1)
+#define F_XGMAC_KR1    V_XGMAC_KR1(1U)
+
+#define S_XGMAC_KR0    11
+#define V_XGMAC_KR0(x) ((x) << S_XGMAC_KR0)
+#define F_XGMAC_KR0    V_XGMAC_KR0(1U)
+
+#define S_XGMAC1    10
+#define V_XGMAC1(x) ((x) << S_XGMAC1)
+#define F_XGMAC1    V_XGMAC1(1U)
+
+#define S_XGMAC0    9
+#define V_XGMAC0(x) ((x) << S_XGMAC0)
+#define F_XGMAC0    V_XGMAC0(1U)
+
+#define S_SMB    8
+#define V_SMB(x) ((x) << S_SMB)
+#define F_SMB    V_SMB(1U)
+
+#define S_SF    7
+#define V_SF(x) ((x) << S_SF)
+#define F_SF    V_SF(1U)
+
+#define S_PL    6
+#define V_PL(x) ((x) << S_PL)
+#define F_PL    V_PL(1U)
+
+#define S_NCSI    5
+#define V_NCSI(x) ((x) << S_NCSI)
+#define F_NCSI    V_NCSI(1U)
+
+#define S_MPS    4
+#define V_MPS(x) ((x) << S_MPS)
+#define F_MPS    V_MPS(1U)
+
+#define S_MI    3
+#define V_MI(x) ((x) << S_MI)
+#define F_MI    V_MI(1U)
+
+#define S_DBG    2
+#define V_DBG(x) ((x) << S_DBG)
+#define F_DBG    V_DBG(1U)
+
+#define S_I2CM    1
+#define V_I2CM(x) ((x) << S_I2CM)
+#define F_I2CM    V_I2CM(1U)
+
+#define S_CIM    0
+#define V_CIM(x) ((x) << S_CIM)
+#define F_CIM    V_CIM(1U)
+
+#define A_PL_INT_MAP0 0x19414
+
+#define S_MAPNCSI    16
+#define M_MAPNCSI    0x1ffU
+#define V_MAPNCSI(x) ((x) << S_MAPNCSI)
+#define G_MAPNCSI(x) (((x) >> S_MAPNCSI) & M_MAPNCSI)
+
+#define S_MAPDEFAULT    0
+#define M_MAPDEFAULT    0x1ffU
+#define V_MAPDEFAULT(x) ((x) << S_MAPDEFAULT)
+#define G_MAPDEFAULT(x) (((x) >> S_MAPDEFAULT) & M_MAPDEFAULT)
+
+#define A_PL_RST 0x19428
+
+#define S_FATALPERREN    3
+#define V_FATALPERREN(x) ((x) << S_FATALPERREN)
+#define F_FATALPERREN    V_FATALPERREN(1U)
+
+#define S_SWINTCIM    2
+#define V_SWINTCIM(x) ((x) << S_SWINTCIM)
+#define F_SWINTCIM    V_SWINTCIM(1U)
+
+#define S_PIORST    1
+#define V_PIORST(x) ((x) << S_PIORST)
+#define F_PIORST    V_PIORST(1U)
+
+#define S_PIORSTMODE    0
+#define V_PIORSTMODE(x) ((x) << S_PIORSTMODE)
+#define F_PIORSTMODE    V_PIORSTMODE(1U)
+
+#define A_PL_PL_INT_CAUSE 0x19430
+
+#define S_PF_ENABLEERR    5
+#define V_PF_ENABLEERR(x) ((x) << S_PF_ENABLEERR)
+#define F_PF_ENABLEERR    V_PF_ENABLEERR(1U)
+
+#define S_FATALPERR    4
+#define V_FATALPERR(x) ((x) << S_FATALPERR)
+#define F_FATALPERR    V_FATALPERR(1U)
+
+#define S_INVALIDACCESS    3
+#define V_INVALIDACCESS(x) ((x) << S_INVALIDACCESS)
+#define F_INVALIDACCESS    V_INVALIDACCESS(1U)
+
+#define S_TIMEOUT    2
+#define V_TIMEOUT(x) ((x) << S_TIMEOUT)
+#define F_TIMEOUT    V_TIMEOUT(1U)
+
+#define S_PLERR    1
+#define V_PLERR(x) ((x) << S_PLERR)
+#define F_PLERR    V_PLERR(1U)
+
+#define S_PERRVFID    0
+#define V_PERRVFID(x) ((x) << S_PERRVFID)
+#define F_PERRVFID    V_PERRVFID(1U)
+
+#define A_PL_REV 0x1943c
+
+#define S_REV    0
+#define M_REV    0xfU
+#define V_REV(x) ((x) << S_REV)
+#define G_REV(x) (((x) >> S_REV) & M_REV)
+
+/* registers for module LE */
+#define LE_BASE_ADDR 0x19c00
+
+#define A_LE_DB_CONFIG 0x19c04
+
+#define S_HASHEN    20
+#define V_HASHEN(x) ((x) << S_HASHEN)
+#define F_HASHEN    V_HASHEN(1U)
+
+#define S_ASBOTHSRCHEN    18
+#define V_ASBOTHSRCHEN(x) ((x) << S_ASBOTHSRCHEN)
+#define F_ASBOTHSRCHEN    V_ASBOTHSRCHEN(1U)
+
+#define S_ASLIPCOMPEN    17
+#define V_ASLIPCOMPEN(x) ((x) << S_ASLIPCOMPEN)
+#define F_ASLIPCOMPEN    V_ASLIPCOMPEN(1U)
+
+#define S_FILTEREN    11
+#define V_FILTEREN(x) ((x) << S_FILTEREN)
+#define F_FILTEREN    V_FILTEREN(1U)
+
+#define A_LE_DB_SERVER_INDEX 0x19c18
+
+#define S_SRINDX    7
+#define M_SRINDX    0x3fU
+#define V_SRINDX(x) ((x) << S_SRINDX)
+#define G_SRINDX(x) (((x) >> S_SRINDX) & M_SRINDX)
+
+#define A_LE_DB_ACT_CNT_IPV4 0x19c20
+#define A_LE_DB_ACT_CNT_IPV6 0x19c24
+#define A_LE_DB_INT_CAUSE 0x19c3c
+
+#define S_REQQPARERR    16
+#define V_REQQPARERR(x) ((x) << S_REQQPARERR)
+#define F_REQQPARERR    V_REQQPARERR(1U)
+
+#define S_UNKNOWNCMD    15
+#define V_UNKNOWNCMD(x) ((x) << S_UNKNOWNCMD)
+#define F_UNKNOWNCMD    V_UNKNOWNCMD(1U)
+
+#define S_NFASRCHFAIL    8
+#define V_NFASRCHFAIL(x) ((x) << S_NFASRCHFAIL)
+#define F_NFASRCHFAIL    V_NFASRCHFAIL(1U)
+
+#define S_ACTRGNFULL    7
+#define V_ACTRGNFULL(x) ((x) << S_ACTRGNFULL)
+#define F_ACTRGNFULL    V_ACTRGNFULL(1U)
+
+#define S_PARITYERR    6
+#define V_PARITYERR(x) ((x) << S_PARITYERR)
+#define F_PARITYERR    V_PARITYERR(1U)
+
+#define S_LIPMISS    5
+#define V_LIPMISS(x) ((x) << S_LIPMISS)
+#define F_LIPMISS    V_LIPMISS(1U)
+
+#define S_LIP0    4
+#define V_LIP0(x) ((x) << S_LIP0)
+#define F_LIP0    V_LIP0(1U)
+
+#define A_LE_DB_TID_HASHBASE 0x19df8
+
+#define S_HASHBASE_ADDR    2
+#define M_HASHBASE_ADDR    0xfffffU
+#define V_HASHBASE_ADDR(x) ((x) << S_HASHBASE_ADDR)
+#define G_HASHBASE_ADDR(x) (((x) >> S_HASHBASE_ADDR) & M_HASHBASE_ADDR)
+
+/* registers for module NCSI */
+#define NCSI_BASE_ADDR 0x1a000
+
+#define A_NCSI_INT_CAUSE 0x1a0d8
+
+#define S_CIM_DM_PRTY_ERR    8
+#define V_CIM_DM_PRTY_ERR(x) ((x) << S_CIM_DM_PRTY_ERR)
+#define F_CIM_DM_PRTY_ERR    V_CIM_DM_PRTY_ERR(1U)
+
+#define S_MPS_DM_PRTY_ERR    7
+#define V_MPS_DM_PRTY_ERR(x) ((x) << S_MPS_DM_PRTY_ERR)
+#define F_MPS_DM_PRTY_ERR    V_MPS_DM_PRTY_ERR(1U)
+
+#define S_TXFIFO_PRTY_ERR    1
+#define V_TXFIFO_PRTY_ERR(x) ((x) << S_TXFIFO_PRTY_ERR)
+#define F_TXFIFO_PRTY_ERR    V_TXFIFO_PRTY_ERR(1U)
+
+#define S_RXFIFO_PRTY_ERR    0
+#define V_RXFIFO_PRTY_ERR(x) ((x) << S_RXFIFO_PRTY_ERR)
+#define F_RXFIFO_PRTY_ERR    V_RXFIFO_PRTY_ERR(1U)
+
+/* registers for module XGMAC */
+#define XGMAC_BASE_ADDR 0x0
+
+#define A_XGMAC_PORT_CFG2 0x1018
+
+#define S_INSTANCENUM    22
+#define M_INSTANCENUM    0x3U
+#define V_INSTANCENUM(x) ((x) << S_INSTANCENUM)
+#define G_INSTANCENUM(x) (((x) >> S_INSTANCENUM) & M_INSTANCENUM)
+
+#define S_STOPONPERR    21
+#define V_STOPONPERR(x) ((x) << S_STOPONPERR)
+#define F_STOPONPERR    V_STOPONPERR(1U)
+
+#define S_MACTXEN    20
+#define V_MACTXEN(x) ((x) << S_MACTXEN)
+#define F_MACTXEN    V_MACTXEN(1U)
+
+#define S_MACRXEN    19
+#define V_MACRXEN(x) ((x) << S_MACRXEN)
+#define F_MACRXEN    V_MACRXEN(1U)
+
+#define S_PATEN    18
+#define V_PATEN(x) ((x) << S_PATEN)
+#define F_PATEN    V_PATEN(1U)
+
+#define S_MAGICEN    17
+#define V_MAGICEN(x) ((x) << S_MAGICEN)
+#define F_MAGICEN    V_MAGICEN(1U)
+
+#define S_TX_IPG    4
+#define M_TX_IPG    0x1fffU
+#define V_TX_IPG(x) ((x) << S_TX_IPG)
+#define G_TX_IPG(x) (((x) >> S_TX_IPG) & M_TX_IPG)
+
+#define S_AEC_PMA_TX_READY    1
+#define V_AEC_PMA_TX_READY(x) ((x) << S_AEC_PMA_TX_READY)
+#define F_AEC_PMA_TX_READY    V_AEC_PMA_TX_READY(1U)
+
+#define S_AEC_PMA_RX_READY    0
+#define V_AEC_PMA_RX_READY(x) ((x) << S_AEC_PMA_RX_READY)
+#define F_AEC_PMA_RX_READY    V_AEC_PMA_RX_READY(1U)
+
+#define A_XGMAC_PORT_MAGIC_MACID_LO 0x1024
+#define A_XGMAC_PORT_MAGIC_MACID_HI 0x1028
+
+#define S_MAC_WOL_DA    0
+#define M_MAC_WOL_DA    0xffffU
+#define V_MAC_WOL_DA(x) ((x) << S_MAC_WOL_DA)
+#define G_MAC_WOL_DA(x) (((x) >> S_MAC_WOL_DA) & M_MAC_WOL_DA)
+
+#define A_XGMAC_PORT_EPIO_DATA0 0x10c0
+#define A_XGMAC_PORT_EPIO_DATA1 0x10c4
+#define A_XGMAC_PORT_EPIO_DATA2 0x10c8
+#define A_XGMAC_PORT_EPIO_DATA3 0x10cc
+#define A_XGMAC_PORT_EPIO_OP 0x10d0
+
+#define S_EPIOWR    8
+#define V_EPIOWR(x) ((x) << S_EPIOWR)
+#define F_EPIOWR    V_EPIOWR(1U)
+
+#define S_ADDRESS    0
+#define M_ADDRESS    0xffU
+#define V_ADDRESS(x) ((x) << S_ADDRESS)
+#define G_ADDRESS(x) (((x) >> S_ADDRESS) & M_ADDRESS)
+
+#define A_XGMAC_PORT_INT_CAUSE 0x10dc
+#endif /* __T4_REGS_H */
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH net-next 2/7] cxgb4: Add FW API definitions
  2010-02-18  1:37 ` [PATCH net-next 1/7] cxgb4: Add register and message definitions Dimitris Michailidis
@ 2010-02-18  1:37   ` Dimitris Michailidis
  2010-02-18  1:37     ` [PATCH net-next 3/7] cxgb4: Add HW and FW support code Dimitris Michailidis
  0 siblings, 1 reply; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/cxgb4/t4fw_api.h | 3727 ++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 3727 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/t4fw_api.h

diff --git a/drivers/net/cxgb4/t4fw_api.h b/drivers/net/cxgb4/t4fw_api.h
new file mode 100644
index 0000000..1768596
--- /dev/null
+++ b/drivers/net/cxgb4/t4fw_api.h
@@ -0,0 +1,3727 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2009-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _T4FW_INTERFACE_H_
+#define _T4FW_INTERFACE_H_
+
+/******************************************************************************
+ *   R E T U R N   V A L U E S
+ ********************************/
+
+enum fw_retval {
+	FW_SUCCESS	= 0,		/* completed successfully */
+	FW_EPERM	= 1,		/* operation not permitted */
+	FW_EIO		= 5,		/* input/output error; hw bad */
+	FW_ENOEXEC	= 8,		/* Exec format error; inv microcode */
+	FW_EAGAIN	= 11,		/* try again */
+	FW_ENOMEM	= 12,		/* out of memory */
+	FW_EBUSY	= 16,		/* resource busy */
+	FW_EINVAL	= 22,		/* invalid argument */
+	FW_ENOSYS	= 38,		/* functionality not implemented */
+	FW_EPROTO	= 71,		/* protocol error */
+};
+
+/******************************************************************************
+ *   W O R K   R E Q U E S T s
+ ********************************/
+
+enum fw_wr_opcodes {
+	FW_FILTER_WR                   = 0x02,
+	FW_ULPTX_WR                    = 0x04,
+	FW_TP_WR                       = 0x05,
+	FW_ETH_TX_PKT_WR               = 0x08,
+	FW_FLOWC_WR                    = 0x0a,
+	FW_OFLD_TX_DATA_WR             = 0x0b,
+	FW_CMD_WR                      = 0x10,
+	FW_ETH_TX_PKT_VM_WR            = 0x11,
+	FW_RI_RES_WR                   = 0x0c,
+	FW_RI_INIT_WR                  = 0x0d,
+	FW_RI_RDMA_WRITE_WR            = 0x14,
+	FW_RI_SEND_WR                  = 0x15,
+	FW_RI_RDMA_READ_WR             = 0x16,
+	FW_RI_RECV_WR                  = 0x17,
+	FW_RI_BIND_MW_WR               = 0x18,
+	FW_RI_FR_NSMR_WR               = 0x19,
+	FW_RI_INV_LSTAG_WR             = 0x1a,
+	FW_LASTC2E_WR                  = 0x40
+};
+
+/*
+ * Generic work request header flit0
+ */
+struct fw_wr_hdr {
+	__be32 hi;
+	__be32 lo;
+};
+
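+/*
+ * The field macros below follow the S_/M_/V_/G_/F_ convention: V_ places a
+ * value at its bit offset, G_ extracts it, and F_ is the single-bit form.
+ * As an illustration only (not the driver's exact code), a work request
+ * header might be composed as:
+ *
+ *	wr->hi = cpu_to_be32(V_FW_WR_OP(FW_ETH_TX_PKT_WR) | F_FW_WR_COMPL |
+ *			     V_FW_WR_IMMDLEN(imm_len));
+ *	wr->lo = cpu_to_be32(V_FW_WR_LEN16(len16));
+ */
+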
+#define S_FW_WR_OP		24
+#define M_FW_WR_OP		0xff
+#define V_FW_WR_OP(x)		((x) << S_FW_WR_OP)
+#define G_FW_WR_OP(x)		(((x) >> S_FW_WR_OP) & M_FW_WR_OP)
+
+#define S_FW_WR_ATOMIC		23
+#define M_FW_WR_ATOMIC		0x1
+#define V_FW_WR_ATOMIC(x)	((x) << S_FW_WR_ATOMIC)
+#define G_FW_WR_ATOMIC(x)	\
+    (((x) >> S_FW_WR_ATOMIC) & M_FW_WR_ATOMIC)
+#define F_FW_WR_ATOMIC		V_FW_WR_ATOMIC(1U)
+
+#define S_FW_WR_FLUSH     22
+#define M_FW_WR_FLUSH     0x1
+#define V_FW_WR_FLUSH(x)  ((x) << S_FW_WR_FLUSH)
+#define G_FW_WR_FLUSH(x)  \
+    (((x) >> S_FW_WR_FLUSH) & M_FW_WR_FLUSH)
+#define F_FW_WR_FLUSH     V_FW_WR_FLUSH(1U)
+
+#define S_FW_WR_COMPL     21
+#define M_FW_WR_COMPL     0x1
+#define V_FW_WR_COMPL(x)  ((x) << S_FW_WR_COMPL)
+#define G_FW_WR_COMPL(x)  \
+    (((x) >> S_FW_WR_COMPL) & M_FW_WR_COMPL)
+#define F_FW_WR_COMPL     V_FW_WR_COMPL(1U)
+
+#define S_FW_WR_IMMDLEN	0
+#define M_FW_WR_IMMDLEN	0xff
+#define V_FW_WR_IMMDLEN(x)	((x) << S_FW_WR_IMMDLEN)
+#define G_FW_WR_IMMDLEN(x)	\
+    (((x) >> S_FW_WR_IMMDLEN) & M_FW_WR_IMMDLEN)
+
+#define S_FW_WR_EQUIQ		31
+#define M_FW_WR_EQUIQ		0x1
+#define V_FW_WR_EQUIQ(x)	((x) << S_FW_WR_EQUIQ)
+#define G_FW_WR_EQUIQ(x)	(((x) >> S_FW_WR_EQUIQ) & M_FW_WR_EQUIQ)
+#define F_FW_WR_EQUIQ		V_FW_WR_EQUIQ(1U)
+
+#define S_FW_WR_EQUEQ		30
+#define M_FW_WR_EQUEQ		0x1
+#define V_FW_WR_EQUEQ(x)	((x) << S_FW_WR_EQUEQ)
+#define G_FW_WR_EQUEQ(x)	(((x) >> S_FW_WR_EQUEQ) & M_FW_WR_EQUEQ)
+#define F_FW_WR_EQUEQ		V_FW_WR_EQUEQ(1U)
+
+#define S_FW_WR_FLOWID		8
+#define M_FW_WR_FLOWID		0xfffff
+#define V_FW_WR_FLOWID(x)	((x) << S_FW_WR_FLOWID)
+#define G_FW_WR_FLOWID(x)	(((x) >> S_FW_WR_FLOWID) & M_FW_WR_FLOWID)
+
+#define S_FW_WR_LEN16		0
+#define M_FW_WR_LEN16		0xff
+#define V_FW_WR_LEN16(x)	((x) << S_FW_WR_LEN16)
+#define G_FW_WR_LEN16(x)	(((x) >> S_FW_WR_LEN16) & M_FW_WR_LEN16)
+
+struct fw_ulptx_wr {
+	__be32 op_to_compl;
+	__be32 flowid_len16;
+	__u64  cookie;
+};
+
+struct fw_tp_wr {
+	__be32 op_to_immdlen;
+	__be32 flowid_len16;
+	__u64  cookie;
+};
+
+struct fw_eth_tx_pkt_wr {
+	__be32 op_immdlen;
+	__be32 equiq_to_len16;
+	__be64 r3;
+};
+
+enum fw_flowc_mnem {
+	FW_FLOWC_MNEM_PFNVFN,		/* PFN [15:8] VFN [7:0] */
+	FW_FLOWC_MNEM_CH,
+	FW_FLOWC_MNEM_PORT,
+	FW_FLOWC_MNEM_IQID,
+	FW_FLOWC_MNEM_SNDNXT,
+	FW_FLOWC_MNEM_RCVNXT,
+	FW_FLOWC_MNEM_SNDBUF,
+	FW_FLOWC_MNEM_MSS,
+};
+
+struct fw_flowc_mnemval {
+	__u8   mnemonic;
+	__u8   r4[3];
+	__be32 val;
+};
+
+struct fw_flowc_wr {
+	__be32 op_to_nparams;
+	__be32 flowid_len16;
+	struct fw_flowc_mnemval mnemval[0];
+};
+
+#define S_FW_FLOWC_WR_NPARAMS		0
+#define M_FW_FLOWC_WR_NPARAMS		0xff
+#define V_FW_FLOWC_WR_NPARAMS(x)	((x) << S_FW_FLOWC_WR_NPARAMS)
+#define G_FW_FLOWC_WR_NPARAMS(x)	\
+    (((x) >> S_FW_FLOWC_WR_NPARAMS) & M_FW_FLOWC_WR_NPARAMS)
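+
+/*
+ * A FLOWC work request carries a variable number of (mnemonic, value)
+ * pairs; the nparams field of op_to_nparams gives the count and
+ * flowid_len16 sizes the request.  Illustrative fill of one pair
+ * (not taken from the driver; "pfvf" is a placeholder value):
+ *
+ *	flowc->mnemval[0].mnemonic = FW_FLOWC_MNEM_PFNVFN;
+ *	flowc->mnemval[0].val = cpu_to_be32(pfvf);
+ */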
+
+struct fw_ofld_tx_data_wr {
+	__be32 op_to_immdlen;
+	__be32 flowid_len16;
+	__be32 plen;
+	__be32 tunnel_to_proxy;
+};
+
+#define S_FW_OFLD_TX_DATA_WR_TUNNEL	19
+#define M_FW_OFLD_TX_DATA_WR_TUNNEL	0x1
+#define V_FW_OFLD_TX_DATA_WR_TUNNEL(x)	((x) << S_FW_OFLD_TX_DATA_WR_TUNNEL)
+#define G_FW_OFLD_TX_DATA_WR_TUNNEL(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_TUNNEL) & M_FW_OFLD_TX_DATA_WR_TUNNEL)
+#define F_FW_OFLD_TX_DATA_WR_TUNNEL	V_FW_OFLD_TX_DATA_WR_TUNNEL(1U)
+
+#define S_FW_OFLD_TX_DATA_WR_SAVE	18
+#define M_FW_OFLD_TX_DATA_WR_SAVE	0x1
+#define V_FW_OFLD_TX_DATA_WR_SAVE(x)	((x) << S_FW_OFLD_TX_DATA_WR_SAVE)
+#define G_FW_OFLD_TX_DATA_WR_SAVE(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_SAVE) & M_FW_OFLD_TX_DATA_WR_SAVE)
+#define F_FW_OFLD_TX_DATA_WR_SAVE	V_FW_OFLD_TX_DATA_WR_SAVE(1U)
+
+#define S_FW_OFLD_TX_DATA_WR_FLUSH	17
+#define M_FW_OFLD_TX_DATA_WR_FLUSH	0x1
+#define V_FW_OFLD_TX_DATA_WR_FLUSH(x)	((x) << S_FW_OFLD_TX_DATA_WR_FLUSH)
+#define G_FW_OFLD_TX_DATA_WR_FLUSH(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_FLUSH) & M_FW_OFLD_TX_DATA_WR_FLUSH)
+#define F_FW_OFLD_TX_DATA_WR_FLUSH	V_FW_OFLD_TX_DATA_WR_FLUSH(1U)
+
+#define S_FW_OFLD_TX_DATA_WR_URGENT	16
+#define M_FW_OFLD_TX_DATA_WR_URGENT	0x1
+#define V_FW_OFLD_TX_DATA_WR_URGENT(x)	((x) << S_FW_OFLD_TX_DATA_WR_URGENT)
+#define G_FW_OFLD_TX_DATA_WR_URGENT(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_URGENT) & M_FW_OFLD_TX_DATA_WR_URGENT)
+#define F_FW_OFLD_TX_DATA_WR_URGENT	V_FW_OFLD_TX_DATA_WR_URGENT(1U)
+
+#define S_FW_OFLD_TX_DATA_WR_MORE	15
+#define M_FW_OFLD_TX_DATA_WR_MORE	0x1
+#define V_FW_OFLD_TX_DATA_WR_MORE(x)	((x) << S_FW_OFLD_TX_DATA_WR_MORE)
+#define G_FW_OFLD_TX_DATA_WR_MORE(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_MORE) & M_FW_OFLD_TX_DATA_WR_MORE)
+#define F_FW_OFLD_TX_DATA_WR_MORE	V_FW_OFLD_TX_DATA_WR_MORE(1U)
+
+#define S_FW_OFLD_TX_DATA_WR_SHOVE	14
+#define M_FW_OFLD_TX_DATA_WR_SHOVE	0x1
+#define V_FW_OFLD_TX_DATA_WR_SHOVE(x)	((x) << S_FW_OFLD_TX_DATA_WR_SHOVE)
+#define G_FW_OFLD_TX_DATA_WR_SHOVE(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_SHOVE) & M_FW_OFLD_TX_DATA_WR_SHOVE)
+#define F_FW_OFLD_TX_DATA_WR_SHOVE	V_FW_OFLD_TX_DATA_WR_SHOVE(1U)
+
+#define S_FW_OFLD_TX_DATA_WR_ULPODE	10
+#define M_FW_OFLD_TX_DATA_WR_ULPODE	0xf
+#define V_FW_OFLD_TX_DATA_WR_ULPODE(x)	((x) << S_FW_OFLD_TX_DATA_WR_ULPODE)
+#define G_FW_OFLD_TX_DATA_WR_ULPODE(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_ULPODE) & M_FW_OFLD_TX_DATA_WR_ULPODE)
+
+#define S_FW_OFLD_TX_DATA_WR_ULPSUBMODE		6
+#define M_FW_OFLD_TX_DATA_WR_ULPSUBMODE		0xf
+#define V_FW_OFLD_TX_DATA_WR_ULPSUBMODE(x)	\
+    ((x) << S_FW_OFLD_TX_DATA_WR_ULPSUBMODE)
+#define G_FW_OFLD_TX_DATA_WR_ULPSUBMODE(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_ULPSUBMODE) & \
+     M_FW_OFLD_TX_DATA_WR_ULPSUBMODE)
+
+#define S_FW_OFLD_TX_DATA_WR_PROXY	5
+#define M_FW_OFLD_TX_DATA_WR_PROXY	0x1
+#define V_FW_OFLD_TX_DATA_WR_PROXY(x)	((x) << S_FW_OFLD_TX_DATA_WR_PROXY)
+#define G_FW_OFLD_TX_DATA_WR_PROXY(x)	\
+    (((x) >> S_FW_OFLD_TX_DATA_WR_PROXY) & M_FW_OFLD_TX_DATA_WR_PROXY)
+#define F_FW_OFLD_TX_DATA_WR_PROXY	V_FW_OFLD_TX_DATA_WR_PROXY(1U)
+
+struct fw_cmd_wr {
+	__be32 op_dma;
+	__be32 len16_pkd;
+	__be64 cookie_daddr;
+};
+
+#define S_FW_CMD_WR_DMA		17
+#define M_FW_CMD_WR_DMA		0x1
+#define V_FW_CMD_WR_DMA(x)	((x) << S_FW_CMD_WR_DMA)
+#define G_FW_CMD_WR_DMA(x)	(((x) >> S_FW_CMD_WR_DMA) & M_FW_CMD_WR_DMA)
+#define F_FW_CMD_WR_DMA	V_FW_CMD_WR_DMA(1U)
+
+struct fw_eth_tx_pkt_vm_wr {
+	__be32 op_immdlen;
+	__be32 equiq_to_len16;
+	__be32 r3[2];
+	__u8   ethmacdst[6];
+	__u8   ethmacsrc[6];
+	__be16 ethtype;
+	__be16 vlantci;
+};
+
+/******************************************************************************
+ *  C O M M A N D s
+ *********************/
+
+#ifdef PCTL
+#define FW_CMD_MAX_TIMEOUT	10000
+#else
+#define FW_CMD_MAX_TIMEOUT	2000
+#endif
+
+enum fw_cmd_opcodes {
+	FW_LDST_CMD                    = 0x01,
+	FW_RESET_CMD                   = 0x03,
+	FW_HELLO_CMD                   = 0x04,
+	FW_BYE_CMD                     = 0x05,
+	FW_INITIALIZE_CMD              = 0x06,
+	FW_CAPS_CONFIG_CMD             = 0x07,
+	FW_PARAMS_CMD                  = 0x08,
+	FW_PFVF_CMD                    = 0x09,
+	FW_IQ_CMD                      = 0x10,
+	FW_EQ_MNGT_CMD                 = 0x11,
+	FW_EQ_ETH_CMD                  = 0x12,
+	FW_EQ_CTRL_CMD                 = 0x13,
+	FW_EQ_OFLD_CMD                 = 0x21,
+	FW_VI_CMD                      = 0x14,
+	FW_VI_MAC_CMD                  = 0x15,
+	FW_VI_RXMODE_CMD               = 0x16,
+	FW_VI_ENABLE_CMD               = 0x17,
+	FW_VI_STATS_CMD                = 0x1a,
+	FW_ACL_MAC_CMD                 = 0x18,
+	FW_ACL_VLAN_CMD                = 0x19,
+	FW_PORT_CMD                    = 0x1b,
+	FW_PORT_STATS_CMD              = 0x1c,
+	FW_PORT_LB_STATS_CMD           = 0x1d,
+	FW_PORT_TRACE_CMD              = 0x1e,
+	FW_PORT_TRACE_MMAP_CMD         = 0x1f,
+	FW_RSS_IND_TBL_CMD             = 0x20,
+	FW_RSS_GLB_CONFIG_CMD          = 0x22,
+	FW_RSS_VI_CONFIG_CMD           = 0x23,
+	FW_LASTC2E_CMD                 = 0x40,
+	FW_ERROR_CMD                   = 0x80,
+	FW_DEBUG_CMD                   = 0x81,
+};
+
+enum fw_cmd_cap {
+	FW_CMD_CAP_PF                  = 0x01,
+	FW_CMD_CAP_DMAQ                = 0x02,
+	FW_CMD_CAP_PORT                = 0x04,
+	FW_CMD_CAP_PORTPROMISC         = 0x08,
+	FW_CMD_CAP_PORTSTATS           = 0x10,
+	FW_CMD_CAP_VF                  = 0x80,
+};
+
+/*
+ * Generic command header flit0
+ */
+struct fw_cmd_hdr {
+	__be32 hi;
+	__be32 lo;
+};
+
+#define S_FW_CMD_OP		24
+#define M_FW_CMD_OP		0xff
+#define V_FW_CMD_OP(x)		((x) << S_FW_CMD_OP)
+#define G_FW_CMD_OP(x)		(((x) >> S_FW_CMD_OP) & M_FW_CMD_OP)
+
+#define S_FW_CMD_REQUEST	23
+#define M_FW_CMD_REQUEST	0x1
+#define V_FW_CMD_REQUEST(x)	((x) << S_FW_CMD_REQUEST)
+#define G_FW_CMD_REQUEST(x)	(((x) >> S_FW_CMD_REQUEST) & M_FW_CMD_REQUEST)
+#define F_FW_CMD_REQUEST	V_FW_CMD_REQUEST(1U)
+
+#define S_FW_CMD_READ		22
+#define M_FW_CMD_READ		0x1
+#define V_FW_CMD_READ(x)	((x) << S_FW_CMD_READ)
+#define G_FW_CMD_READ(x)	(((x) >> S_FW_CMD_READ) & M_FW_CMD_READ)
+#define F_FW_CMD_READ		V_FW_CMD_READ(1U)
+
+#define S_FW_CMD_WRITE		21
+#define M_FW_CMD_WRITE		0x1
+#define V_FW_CMD_WRITE(x)	((x) << S_FW_CMD_WRITE)
+#define G_FW_CMD_WRITE(x)	(((x) >> S_FW_CMD_WRITE) & M_FW_CMD_WRITE)
+#define F_FW_CMD_WRITE		V_FW_CMD_WRITE(1U)
+
+#define S_FW_CMD_EXEC		20
+#define M_FW_CMD_EXEC		0x1
+#define V_FW_CMD_EXEC(x)	((x) << S_FW_CMD_EXEC)
+#define G_FW_CMD_EXEC(x)	(((x) >> S_FW_CMD_EXEC) & M_FW_CMD_EXEC)
+#define F_FW_CMD_EXEC		V_FW_CMD_EXEC(1U)
+
+#define S_FW_CMD_RAMASK		20
+#define M_FW_CMD_RAMASK		0xf
+#define V_FW_CMD_RAMASK(x)	((x) << S_FW_CMD_RAMASK)
+#define G_FW_CMD_RAMASK(x)	(((x) >> S_FW_CMD_RAMASK) & M_FW_CMD_RAMASK)
+
+#define S_FW_CMD_RETVAL		8
+#define M_FW_CMD_RETVAL		0xff
+#define V_FW_CMD_RETVAL(x)	((x) << S_FW_CMD_RETVAL)
+#define G_FW_CMD_RETVAL(x)	(((x) >> S_FW_CMD_RETVAL) & M_FW_CMD_RETVAL)
+
+#define S_FW_CMD_LEN16		0
+#define M_FW_CMD_LEN16		0xff
+#define V_FW_CMD_LEN16(x)	((x) << S_FW_CMD_LEN16)
+#define G_FW_CMD_LEN16(x)	(((x) >> S_FW_CMD_LEN16) & M_FW_CMD_LEN16)
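+
+/*
+ * Illustrative composition of the first flit of a firmware command (not
+ * taken from the driver itself): a read of the chip's capabilities might
+ * set up a struct fw_caps_config_cmd header roughly as
+ *
+ *	c.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
+ *				    F_FW_CMD_REQUEST | F_FW_CMD_READ);
+ *	c.retval_len16 = cpu_to_be32(V_FW_CMD_LEN16(sizeof(c) / 16));
+ */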
+
+/*
+ *	address spaces
+ */
+enum fw_ldst_addrspc {
+	FW_LDST_ADDRSPC_FIRMWARE  = 0x0001,
+	FW_LDST_ADDRSPC_SGE_EGRC  = 0x0008,
+	FW_LDST_ADDRSPC_SGE_INGC  = 0x0009,
+	FW_LDST_ADDRSPC_SGE_FLMC  = 0x000a,
+	FW_LDST_ADDRSPC_SGE_CONMC = 0x000b,
+	FW_LDST_ADDRSPC_TP_PIO    = 0x0010,
+	FW_LDST_ADDRSPC_TP_TM_PIO = 0x0011,
+	FW_LDST_ADDRSPC_TP_MIB    = 0x0012,
+	FW_LDST_ADDRSPC_MDIO      = 0x0018,
+	FW_LDST_ADDRSPC_MPS       = 0x0020,
+	FW_LDST_ADDRSPC_FUNC      = 0x0028
+};
+
+enum fw_ldst_mps_fid {
+	FW_LDST_MPS_ATRB,
+	FW_LDST_MPS_RPLC
+};
+
+enum fw_ldst_func_access_ctl {
+	FW_LDST_FUNC_ACC_CTL_VIID,
+	FW_LDST_FUNC_ACC_CTL_FID
+};
+
+enum fw_ldst_func_mod_index {
+	FW_LDST_FUNC_MPS
+};
+
+struct fw_ldst_cmd {
+	__be32 op_to_addrspace;
+	__be32 cycles_to_len16;
+	union fw_ldst {
+		struct fw_ldst_addrval {
+			__be32 addr;
+			__be32 val;
+		} addrval;
+		struct fw_ldst_idctxt {
+			__be32 physid;
+			__be32 msg_pkd;
+			__be32 ctxt_data7;
+			__be32 ctxt_data6;
+			__be32 ctxt_data5;
+			__be32 ctxt_data4;
+			__be32 ctxt_data3;
+			__be32 ctxt_data2;
+			__be32 ctxt_data1;
+			__be32 ctxt_data0;
+		} idctxt;
+		struct fw_ldst_mdio {
+			__be16 paddr_mmd;
+			__be16 raddr;
+			__be16 vctl;
+			__be16 rval;
+		} mdio;
+		struct fw_ldst_mps {
+			__be16 fid_ctl;
+			__be16 rplcpf_pkd;
+			__be32 rplc127_96;
+			__be32 rplc95_64;
+			__be32 rplc63_32;
+			__be32 rplc31_0;
+			__be32 atrb;
+			__be16 vlan[16];
+		} mps;
+		struct fw_ldst_func {
+			__u8   access_ctl;
+			__u8   mod_index;
+			__be16 ctl_id;
+			__be32 offset;
+			__be64 data0;
+			__be64 data1;
+		} func;
+	} u;
+};
+
+#define S_FW_LDST_CMD_ADDRSPACE		0
+#define M_FW_LDST_CMD_ADDRSPACE		0xff
+#define V_FW_LDST_CMD_ADDRSPACE(x)	((x) << S_FW_LDST_CMD_ADDRSPACE)
+#define G_FW_LDST_CMD_ADDRSPACE(x)	\
+    (((x) >> S_FW_LDST_CMD_ADDRSPACE) & M_FW_LDST_CMD_ADDRSPACE)
+
+#define S_FW_LDST_CMD_CYCLES	16
+#define M_FW_LDST_CMD_CYCLES	0xffff
+#define V_FW_LDST_CMD_CYCLES(x)	((x) << S_FW_LDST_CMD_CYCLES)
+#define G_FW_LDST_CMD_CYCLES(x)	\
+    (((x) >> S_FW_LDST_CMD_CYCLES) & M_FW_LDST_CMD_CYCLES)
+
+#define S_FW_LDST_CMD_MSG	31
+#define M_FW_LDST_CMD_MSG	0x1
+#define V_FW_LDST_CMD_MSG(x)	((x) << S_FW_LDST_CMD_MSG)
+#define G_FW_LDST_CMD_MSG(x)	\
+    (((x) >> S_FW_LDST_CMD_MSG) & M_FW_LDST_CMD_MSG)
+#define F_FW_LDST_CMD_MSG	V_FW_LDST_CMD_MSG(1U)
+
+#define S_FW_LDST_CMD_PADDR	8
+#define M_FW_LDST_CMD_PADDR	0x1f
+#define V_FW_LDST_CMD_PADDR(x)	((x) << S_FW_LDST_CMD_PADDR)
+#define G_FW_LDST_CMD_PADDR(x)	\
+    (((x) >> S_FW_LDST_CMD_PADDR) & M_FW_LDST_CMD_PADDR)
+
+#define S_FW_LDST_CMD_MMD	0
+#define M_FW_LDST_CMD_MMD	0x1f
+#define V_FW_LDST_CMD_MMD(x)	((x) << S_FW_LDST_CMD_MMD)
+#define G_FW_LDST_CMD_MMD(x)	\
+    (((x) >> S_FW_LDST_CMD_MMD) & M_FW_LDST_CMD_MMD)
+
+#define S_FW_LDST_CMD_FID	15
+#define M_FW_LDST_CMD_FID	0x1
+#define V_FW_LDST_CMD_FID(x)	((x) << S_FW_LDST_CMD_FID)
+#define G_FW_LDST_CMD_FID(x)	\
+    (((x) >> S_FW_LDST_CMD_FID) & M_FW_LDST_CMD_FID)
+#define F_FW_LDST_CMD_FID	V_FW_LDST_CMD_FID(1U)
+
+#define S_FW_LDST_CMD_CTL	0
+#define M_FW_LDST_CMD_CTL	0x7fff
+#define V_FW_LDST_CMD_CTL(x)	((x) << S_FW_LDST_CMD_CTL)
+#define G_FW_LDST_CMD_CTL(x)	\
+    (((x) >> S_FW_LDST_CMD_CTL) & M_FW_LDST_CMD_CTL)
+
+#define S_FW_LDST_CMD_RPLCPF	0
+#define M_FW_LDST_CMD_RPLCPF	0xff
+#define V_FW_LDST_CMD_RPLCPF(x)	((x) << S_FW_LDST_CMD_RPLCPF)
+#define G_FW_LDST_CMD_RPLCPF(x)	\
+    (((x) >> S_FW_LDST_CMD_RPLCPF) & M_FW_LDST_CMD_RPLCPF)
+
+struct fw_reset_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	__be32 val;
+	__be32 r3;
+};
+
+struct fw_hello_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	__be32 err_to_mbasyncnot;
+	__be32 fwrev;
+};
+
+#define S_FW_HELLO_CMD_ERR	31
+#define M_FW_HELLO_CMD_ERR	0x1
+#define V_FW_HELLO_CMD_ERR(x)	((x) << S_FW_HELLO_CMD_ERR)
+#define G_FW_HELLO_CMD_ERR(x)	\
+    (((x) >> S_FW_HELLO_CMD_ERR) & M_FW_HELLO_CMD_ERR)
+#define F_FW_HELLO_CMD_ERR	V_FW_HELLO_CMD_ERR(1U)
+
+#define S_FW_HELLO_CMD_INIT	30
+#define M_FW_HELLO_CMD_INIT	0x1
+#define V_FW_HELLO_CMD_INIT(x)	((x) << S_FW_HELLO_CMD_INIT)
+#define G_FW_HELLO_CMD_INIT(x)	\
+    (((x) >> S_FW_HELLO_CMD_INIT) & M_FW_HELLO_CMD_INIT)
+#define F_FW_HELLO_CMD_INIT	V_FW_HELLO_CMD_INIT(1U)
+
+#define S_FW_HELLO_CMD_MASTERDIS	29
+#define M_FW_HELLO_CMD_MASTERDIS	0x1
+#define V_FW_HELLO_CMD_MASTERDIS(x)	((x) << S_FW_HELLO_CMD_MASTERDIS)
+#define G_FW_HELLO_CMD_MASTERDIS(x)	\
+    (((x) >> S_FW_HELLO_CMD_MASTERDIS) & M_FW_HELLO_CMD_MASTERDIS)
+#define F_FW_HELLO_CMD_MASTERDIS	V_FW_HELLO_CMD_MASTERDIS(1U)
+
+#define S_FW_HELLO_CMD_MASTERFORCE	28
+#define M_FW_HELLO_CMD_MASTERFORCE	0x1
+#define V_FW_HELLO_CMD_MASTERFORCE(x)	((x) << S_FW_HELLO_CMD_MASTERFORCE)
+#define G_FW_HELLO_CMD_MASTERFORCE(x)	\
+    (((x) >> S_FW_HELLO_CMD_MASTERFORCE) & M_FW_HELLO_CMD_MASTERFORCE)
+#define F_FW_HELLO_CMD_MASTERFORCE	V_FW_HELLO_CMD_MASTERFORCE(1U)
+
+#define S_FW_HELLO_CMD_MBMASTER		24
+#define M_FW_HELLO_CMD_MBMASTER		0xf
+#define V_FW_HELLO_CMD_MBMASTER(x)	((x) << S_FW_HELLO_CMD_MBMASTER)
+#define G_FW_HELLO_CMD_MBMASTER(x)	\
+    (((x) >> S_FW_HELLO_CMD_MBMASTER) & M_FW_HELLO_CMD_MBMASTER)
+
+#define S_FW_HELLO_CMD_MBASYNCNOT	20
+#define M_FW_HELLO_CMD_MBASYNCNOT	0xf
+#define V_FW_HELLO_CMD_MBASYNCNOT(x)	((x) << S_FW_HELLO_CMD_MBASYNCNOT)
+#define G_FW_HELLO_CMD_MBASYNCNOT(x)	\
+    (((x) >> S_FW_HELLO_CMD_MBASYNCNOT) & M_FW_HELLO_CMD_MBASYNCNOT)
+
+struct fw_bye_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	__be64 r3;
+};
+
+struct fw_initialize_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	__be64 r3;
+};
+
+enum fw_caps_config_hm {
+	FW_CAPS_CONFIG_HM_PCIE		= 0x00000001,
+	FW_CAPS_CONFIG_HM_PL		= 0x00000002,
+	FW_CAPS_CONFIG_HM_SGE		= 0x00000004,
+	FW_CAPS_CONFIG_HM_CIM		= 0x00000008,
+	FW_CAPS_CONFIG_HM_ULPTX		= 0x00000010,
+	FW_CAPS_CONFIG_HM_TP		= 0x00000020,
+	FW_CAPS_CONFIG_HM_ULPRX		= 0x00000040,
+	FW_CAPS_CONFIG_HM_PMRX		= 0x00000080,
+	FW_CAPS_CONFIG_HM_PMTX		= 0x00000100,
+	FW_CAPS_CONFIG_HM_MC		= 0x00000200,
+	FW_CAPS_CONFIG_HM_LE		= 0x00000400,
+	FW_CAPS_CONFIG_HM_MPS		= 0x00000800,
+	FW_CAPS_CONFIG_HM_XGMAC		= 0x00001000,
+	FW_CAPS_CONFIG_HM_CPLSWITCH	= 0x00002000,
+	FW_CAPS_CONFIG_HM_T4DBG		= 0x00004000,
+	FW_CAPS_CONFIG_HM_MI		= 0x00008000,
+	FW_CAPS_CONFIG_HM_I2CM		= 0x00010000,
+	FW_CAPS_CONFIG_HM_NCSI		= 0x00020000,
+	FW_CAPS_CONFIG_HM_SMB		= 0x00040000,
+	FW_CAPS_CONFIG_HM_MA		= 0x00080000,
+	FW_CAPS_CONFIG_HM_EDRAM		= 0x00100000,
+	FW_CAPS_CONFIG_HM_PMU		= 0x00200000,
+	FW_CAPS_CONFIG_HM_UART		= 0x00400000,
+	FW_CAPS_CONFIG_HM_SF		= 0x00800000,
+};
+
+enum fw_caps_config_nbm {
+	FW_CAPS_CONFIG_NBM_IPMI		= 0x00000001,
+	FW_CAPS_CONFIG_NBM_NCSI		= 0x00000002,
+};
+
+enum fw_caps_config_link {
+	FW_CAPS_CONFIG_LINK_PPP		= 0x00000001,
+	FW_CAPS_CONFIG_LINK_QFC		= 0x00000002,
+	FW_CAPS_CONFIG_LINK_DCBX	= 0x00000004,
+};
+
+enum fw_caps_config_switch {
+	FW_CAPS_CONFIG_SWITCH_INGRESS	= 0x00000001,
+	FW_CAPS_CONFIG_SWITCH_EGRESS	= 0x00000002,
+};
+
+enum fw_caps_config_nic {
+	FW_CAPS_CONFIG_NIC		= 0x00000001,
+	FW_CAPS_CONFIG_NIC_VM		= 0x00000002,
+};
+
+enum fw_caps_config_ofld {
+	FW_CAPS_CONFIG_OFLD		= 0x00000001,
+};
+
+enum fw_caps_config_rdma {
+	FW_CAPS_CONFIG_RDMA_RDDP	= 0x00000001,
+	FW_CAPS_CONFIG_RDMA_RDMAC	= 0x00000002,
+};
+
+enum fw_caps_config_iscsi {
+	FW_CAPS_CONFIG_ISCSI_INITIATOR_PDU = 0x00000001,
+	FW_CAPS_CONFIG_ISCSI_TARGET_PDU = 0x00000002,
+	FW_CAPS_CONFIG_ISCSI_INITIATOR_CNXOFLD = 0x00000004,
+	FW_CAPS_CONFIG_ISCSI_TARGET_CNXOFLD = 0x00000008,
+};
+
+enum fw_caps_config_fcoe {
+	FW_CAPS_CONFIG_FCOE_INITIATOR	= 0x00000001,
+	FW_CAPS_CONFIG_FCOE_TARGET	= 0x00000002,
+};
+
+struct fw_caps_config_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	__be32 r2;
+	__be32 hwmbitmap;
+	__be16 nbmcaps;
+	__be16 linkcaps;
+	__be16 switchcaps;
+	__be16 r3;
+	__be16 niccaps;
+	__be16 ofldcaps;
+	__be16 rdmacaps;
+	__be16 r4;
+	__be16 iscsicaps;
+	__be16 fcoecaps;
+	__be32 r5;
+	__be64 r6;
+};
+
+/*
+ * params command mnemonics
+ */
+enum fw_params_mnem {
+	FW_PARAMS_MNEM_DEV		= 1,	/* device params */
+	FW_PARAMS_MNEM_PFVF		= 2,	/* function params */
+	FW_PARAMS_MNEM_REG		= 3,	/* limited register access */
+	FW_PARAMS_MNEM_DMAQ		= 4,	/* dma queue params */
+	FW_PARAMS_MNEM_LAST
+};
+
+/*
+ * device parameters
+ */
+enum fw_params_param_dev {
+	FW_PARAMS_PARAM_DEV_CCLK	= 0x00, /* chip core clock in khz */
+	FW_PARAMS_PARAM_DEV_PORTVEC	= 0x01, /* the port vector */
+	FW_PARAMS_PARAM_DEV_NTID	= 0x02, /* reads the number of TIDs
+						 * allocated by the device's
+						 * Lookup Engine
+						 */
+	FW_PARAMS_PARAM_DEV_FLOWC_BUFFIFO_SZ = 0x03,
+};
+
+/*
+ * physical and virtual function parameters
+ */
+enum fw_params_param_pfvf {
+	FW_PARAMS_PARAM_PFVF_RWXCAPS	= 0x00,
+	FW_PARAMS_PARAM_PFVF_ROUTE_START = 0x01,
+	FW_PARAMS_PARAM_PFVF_ROUTE_END = 0x02,
+	FW_PARAMS_PARAM_PFVF_CLIP_START = 0x03,
+	FW_PARAMS_PARAM_PFVF_CLIP_END = 0x04,
+	FW_PARAMS_PARAM_PFVF_FILTER_START = 0x05,
+	FW_PARAMS_PARAM_PFVF_FILTER_END = 0x06,
+	FW_PARAMS_PARAM_PFVF_SERVER_START = 0x07,
+	FW_PARAMS_PARAM_PFVF_SERVER_END = 0x08,
+	FW_PARAMS_PARAM_PFVF_TDDP_START = 0x09,
+	FW_PARAMS_PARAM_PFVF_TDDP_END = 0x0A,
+	FW_PARAMS_PARAM_PFVF_ISCSI_START = 0x0B,
+	FW_PARAMS_PARAM_PFVF_ISCSI_END = 0x0C,
+	FW_PARAMS_PARAM_PFVF_STAG_START = 0x0D,
+	FW_PARAMS_PARAM_PFVF_STAG_END = 0x0E,
+	FW_PARAMS_PARAM_PFVF_RQ_START = 0x1F,
+	FW_PARAMS_PARAM_PFVF_RQ_END	= 0x10,
+	FW_PARAMS_PARAM_PFVF_PBL_START = 0x11,
+	FW_PARAMS_PARAM_PFVF_PBL_END	= 0x12,
+	FW_PARAMS_PARAM_PFVF_L2T_START = 0x13,
+	FW_PARAMS_PARAM_PFVF_L2T_END = 0x14,
+};
+
+/*
+ * dma queue parameters
+ */
+enum fw_params_param_dmaq {
+	FW_PARAMS_PARAM_DMAQ_IQ_DCAEN_DCACPU = 0x00,
+	FW_PARAMS_PARAM_DMAQ_IQ_INTCNTTHRESH = 0x01,
+	FW_PARAMS_PARAM_DMAQ_EQ_CMPLIQID_MNGT = 0x10,
+	FW_PARAMS_PARAM_DMAQ_EQ_CMPLIQID_CTRL = 0x11,
+};
+
+#define S_FW_PARAMS_MNEM	24
+#define M_FW_PARAMS_MNEM	0xff
+#define V_FW_PARAMS_MNEM(x)	((x) << S_FW_PARAMS_MNEM)
+#define G_FW_PARAMS_MNEM(x)	\
+    (((x) >> S_FW_PARAMS_MNEM) & M_FW_PARAMS_MNEM)
+
+#define S_FW_PARAMS_PARAM_X	16
+#define M_FW_PARAMS_PARAM_X	0xff
+#define V_FW_PARAMS_PARAM_X(x) ((x) << S_FW_PARAMS_PARAM_X)
+#define G_FW_PARAMS_PARAM_X(x) \
+    (((x) >> S_FW_PARAMS_PARAM_X) & M_FW_PARAMS_PARAM_X)
+
+#define S_FW_PARAMS_PARAM_Y	8
+#define M_FW_PARAMS_PARAM_Y	0xff
+#define V_FW_PARAMS_PARAM_Y(x) ((x) << S_FW_PARAMS_PARAM_Y)
+#define G_FW_PARAMS_PARAM_Y(x) \
+    (((x) >> S_FW_PARAMS_PARAM_Y) & M_FW_PARAMS_PARAM_Y)
+
+#define S_FW_PARAMS_PARAM_Z	0
+#define M_FW_PARAMS_PARAM_Z	0xff
+#define V_FW_PARAMS_PARAM_Z(x) ((x) << S_FW_PARAMS_PARAM_Z)
+#define G_FW_PARAMS_PARAM_Z(x) \
+    (((x) >> S_FW_PARAMS_PARAM_Z) & M_FW_PARAMS_PARAM_Z)
+
+#define S_FW_PARAMS_PARAM_XYZ	0
+#define M_FW_PARAMS_PARAM_XYZ	0xffffff
+#define V_FW_PARAMS_PARAM_XYZ(x) ((x) << S_FW_PARAMS_PARAM_XYZ)
+#define G_FW_PARAMS_PARAM_XYZ(x) \
+    (((x) >> S_FW_PARAMS_PARAM_XYZ) & M_FW_PARAMS_PARAM_XYZ)
+
+#define S_FW_PARAMS_PARAM_YZ	0
+#define M_FW_PARAMS_PARAM_YZ	0xffff
+#define V_FW_PARAMS_PARAM_YZ(x) ((x) << S_FW_PARAMS_PARAM_YZ)
+#define G_FW_PARAMS_PARAM_YZ(x) \
+    (((x) >> S_FW_PARAMS_PARAM_YZ) & M_FW_PARAMS_PARAM_YZ)
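+
+/*
+ * A parameter identifier is built from the mnemonic and the X/Y/Z index
+ * fields above.  For example (illustrative), querying the device's port
+ * vector would use a mnem word of
+ *
+ *	V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
+ *	V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_PORTVEC)
+ */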
+
+struct fw_params_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	struct fw_params_param {
+		__be32 mnem;
+		__be32 val;
+	} param[7];
+};
+
+struct fw_pfvf_cmd {
+	__be32 op_to_vfn;
+	__be32 retval_len16;
+	__be32 niqflint_niq;
+	__be32 cmask_to_neq;
+	__be32 tc_to_nexactf;
+	__be32 r_caps_to_nethctrl;
+	__be16 nricq;
+	__be16 nriqp;
+	__be32 r4;
+};
+
+#define S_FW_PFVF_CMD_PFN	8
+#define M_FW_PFVF_CMD_PFN	0x7
+#define V_FW_PFVF_CMD_PFN(x)	((x) << S_FW_PFVF_CMD_PFN)
+#define G_FW_PFVF_CMD_PFN(x)	\
+    (((x) >> S_FW_PFVF_CMD_PFN) & M_FW_PFVF_CMD_PFN)
+
+#define S_FW_PFVF_CMD_VFN	0
+#define M_FW_PFVF_CMD_VFN	0xff
+#define V_FW_PFVF_CMD_VFN(x)	((x) << S_FW_PFVF_CMD_VFN)
+#define G_FW_PFVF_CMD_VFN(x)	\
+    (((x) >> S_FW_PFVF_CMD_VFN) & M_FW_PFVF_CMD_VFN)
+
+#define S_FW_PFVF_CMD_NIQFLINT		20
+#define M_FW_PFVF_CMD_NIQFLINT		0xfff
+#define V_FW_PFVF_CMD_NIQFLINT(x)	((x) << S_FW_PFVF_CMD_NIQFLINT)
+#define G_FW_PFVF_CMD_NIQFLINT(x)	\
+    (((x) >> S_FW_PFVF_CMD_NIQFLINT) & M_FW_PFVF_CMD_NIQFLINT)
+
+#define S_FW_PFVF_CMD_NIQ	0
+#define M_FW_PFVF_CMD_NIQ	0xfffff
+#define V_FW_PFVF_CMD_NIQ(x)	((x) << S_FW_PFVF_CMD_NIQ)
+#define G_FW_PFVF_CMD_NIQ(x)	\
+    (((x) >> S_FW_PFVF_CMD_NIQ) & M_FW_PFVF_CMD_NIQ)
+
+#define S_FW_PFVF_CMD_CMASK	24
+#define M_FW_PFVF_CMD_CMASK	0xf
+#define V_FW_PFVF_CMD_CMASK(x)	((x) << S_FW_PFVF_CMD_CMASK)
+#define G_FW_PFVF_CMD_CMASK(x)	\
+    (((x) >> S_FW_PFVF_CMD_CMASK) & M_FW_PFVF_CMD_CMASK)
+
+#define S_FW_PFVF_CMD_PMASK	20
+#define M_FW_PFVF_CMD_PMASK	0xf
+#define V_FW_PFVF_CMD_PMASK(x)	((x) << S_FW_PFVF_CMD_PMASK)
+#define G_FW_PFVF_CMD_PMASK(x)	\
+    (((x) >> S_FW_PFVF_CMD_PMASK) & M_FW_PFVF_CMD_PMASK)
+
+#define S_FW_PFVF_CMD_NEQ	0
+#define M_FW_PFVF_CMD_NEQ	0xfffff
+#define V_FW_PFVF_CMD_NEQ(x)	((x) << S_FW_PFVF_CMD_NEQ)
+#define G_FW_PFVF_CMD_NEQ(x)	\
+    (((x) >> S_FW_PFVF_CMD_NEQ) & M_FW_PFVF_CMD_NEQ)
+
+#define S_FW_PFVF_CMD_TC	24
+#define M_FW_PFVF_CMD_TC	0xff
+#define V_FW_PFVF_CMD_TC(x)	((x) << S_FW_PFVF_CMD_TC)
+#define G_FW_PFVF_CMD_TC(x)	(((x) >> S_FW_PFVF_CMD_TC) & M_FW_PFVF_CMD_TC)
+
+#define S_FW_PFVF_CMD_NVI	16
+#define M_FW_PFVF_CMD_NVI	0xff
+#define V_FW_PFVF_CMD_NVI(x)	((x) << S_FW_PFVF_CMD_NVI)
+#define G_FW_PFVF_CMD_NVI(x)	\
+    (((x) >> S_FW_PFVF_CMD_NVI) & M_FW_PFVF_CMD_NVI)
+
+#define S_FW_PFVF_CMD_NEXACTF		0
+#define M_FW_PFVF_CMD_NEXACTF		0xffff
+#define V_FW_PFVF_CMD_NEXACTF(x)	((x) << S_FW_PFVF_CMD_NEXACTF)
+#define G_FW_PFVF_CMD_NEXACTF(x)	\
+    (((x) >> S_FW_PFVF_CMD_NEXACTF) & M_FW_PFVF_CMD_NEXACTF)
+
+#define S_FW_PFVF_CMD_R_CAPS	24
+#define M_FW_PFVF_CMD_R_CAPS	0xff
+#define V_FW_PFVF_CMD_R_CAPS(x)	((x) << S_FW_PFVF_CMD_R_CAPS)
+#define G_FW_PFVF_CMD_R_CAPS(x)	\
+    (((x) >> S_FW_PFVF_CMD_R_CAPS) & M_FW_PFVF_CMD_R_CAPS)
+
+#define S_FW_PFVF_CMD_WX_CAPS		16
+#define M_FW_PFVF_CMD_WX_CAPS		0xff
+#define V_FW_PFVF_CMD_WX_CAPS(x)	((x) << S_FW_PFVF_CMD_WX_CAPS)
+#define G_FW_PFVF_CMD_WX_CAPS(x)	\
+    (((x) >> S_FW_PFVF_CMD_WX_CAPS) & M_FW_PFVF_CMD_WX_CAPS)
+
+#define S_FW_PFVF_CMD_NETHCTRL		0
+#define M_FW_PFVF_CMD_NETHCTRL		0xffff
+#define V_FW_PFVF_CMD_NETHCTRL(x)	((x) << S_FW_PFVF_CMD_NETHCTRL)
+#define G_FW_PFVF_CMD_NETHCTRL(x)	\
+    (((x) >> S_FW_PFVF_CMD_NETHCTRL) & M_FW_PFVF_CMD_NETHCTRL)
+
+/*
+ *	ingress queue type; the first 1K ingress queues can have 0, 1, or 2
+ *	associated free lists and an interrupt, all other ingress queues lack
+ *	these capabilities
+ */
+enum fw_iq_type {
+	FW_IQ_TYPE_FL_INT_CAP,
+	FW_IQ_TYPE_NO_FL_INT_CAP
+};
+
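+/*
+ * Sketch of how an ingress queue allocation might be flagged (illustrative,
+ * not the driver's exact sequence): the ALLOC and IQSTART bits are set in
+ * alloc_to_len16 together with the command length, e.g.
+ *
+ *	c.alloc_to_len16 = cpu_to_be32(F_FW_IQ_CMD_ALLOC |
+ *				       F_FW_IQ_CMD_IQSTART |
+ *				       V_FW_CMD_LEN16(sizeof(c) / 16));
+ */
+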
+struct fw_iq_cmd {
+	__be32 op_to_vfn;
+	__be32 alloc_to_len16;
+	__be16 physiqid;
+	__be16 iqid;
+	__be16 fl0id;
+	__be16 fl1id;
+	__be32 type_to_iqandstindex;
+	__be16 iqdroprss_to_iqesize;
+	__be16 iqsize;
+	__be64 iqaddr;
+	__be32 iqns_to_fl0congen;
+	__be16 fl0dcaen_to_fl0cidxfthresh;
+	__be16 fl0size;
+	__be64 fl0addr;
+	__be32 fl1cngchmap_to_fl1congen;
+	__be16 fl1dcaen_to_fl1cidxfthresh;
+	__be16 fl1size;
+	__be64 fl1addr;
+};
+
+#define S_FW_IQ_CMD_PFN		8
+#define M_FW_IQ_CMD_PFN		0x7
+#define V_FW_IQ_CMD_PFN(x)	((x) << S_FW_IQ_CMD_PFN)
+#define G_FW_IQ_CMD_PFN(x)	(((x) >> S_FW_IQ_CMD_PFN) & M_FW_IQ_CMD_PFN)
+
+#define S_FW_IQ_CMD_VFN		0
+#define M_FW_IQ_CMD_VFN		0xff
+#define V_FW_IQ_CMD_VFN(x)	((x) << S_FW_IQ_CMD_VFN)
+#define G_FW_IQ_CMD_VFN(x)	(((x) >> S_FW_IQ_CMD_VFN) & M_FW_IQ_CMD_VFN)
+
+#define S_FW_IQ_CMD_ALLOC	31
+#define M_FW_IQ_CMD_ALLOC	0x1
+#define V_FW_IQ_CMD_ALLOC(x)	((x) << S_FW_IQ_CMD_ALLOC)
+#define G_FW_IQ_CMD_ALLOC(x)	\
+    (((x) >> S_FW_IQ_CMD_ALLOC) & M_FW_IQ_CMD_ALLOC)
+#define F_FW_IQ_CMD_ALLOC	V_FW_IQ_CMD_ALLOC(1U)
+
+#define S_FW_IQ_CMD_FREE	30
+#define M_FW_IQ_CMD_FREE	0x1
+#define V_FW_IQ_CMD_FREE(x)	((x) << S_FW_IQ_CMD_FREE)
+#define G_FW_IQ_CMD_FREE(x)	(((x) >> S_FW_IQ_CMD_FREE) & M_FW_IQ_CMD_FREE)
+#define F_FW_IQ_CMD_FREE	V_FW_IQ_CMD_FREE(1U)
+
+#define S_FW_IQ_CMD_MODIFY	29
+#define M_FW_IQ_CMD_MODIFY	0x1
+#define V_FW_IQ_CMD_MODIFY(x)	((x) << S_FW_IQ_CMD_MODIFY)
+#define G_FW_IQ_CMD_MODIFY(x)	\
+    (((x) >> S_FW_IQ_CMD_MODIFY) & M_FW_IQ_CMD_MODIFY)
+#define F_FW_IQ_CMD_MODIFY	V_FW_IQ_CMD_MODIFY(1U)
+
+#define S_FW_IQ_CMD_IQSTART	28
+#define M_FW_IQ_CMD_IQSTART	0x1
+#define V_FW_IQ_CMD_IQSTART(x)	((x) << S_FW_IQ_CMD_IQSTART)
+#define G_FW_IQ_CMD_IQSTART(x)	\
+    (((x) >> S_FW_IQ_CMD_IQSTART) & M_FW_IQ_CMD_IQSTART)
+#define F_FW_IQ_CMD_IQSTART	V_FW_IQ_CMD_IQSTART(1U)
+
+#define S_FW_IQ_CMD_IQSTOP	27
+#define M_FW_IQ_CMD_IQSTOP	0x1
+#define V_FW_IQ_CMD_IQSTOP(x)	((x) << S_FW_IQ_CMD_IQSTOP)
+#define G_FW_IQ_CMD_IQSTOP(x)	\
+    (((x) >> S_FW_IQ_CMD_IQSTOP) & M_FW_IQ_CMD_IQSTOP)
+#define F_FW_IQ_CMD_IQSTOP	V_FW_IQ_CMD_IQSTOP(1U)
+
+#define S_FW_IQ_CMD_TYPE	29
+#define M_FW_IQ_CMD_TYPE	0x7
+#define V_FW_IQ_CMD_TYPE(x)	((x) << S_FW_IQ_CMD_TYPE)
+#define G_FW_IQ_CMD_TYPE(x)	(((x) >> S_FW_IQ_CMD_TYPE) & M_FW_IQ_CMD_TYPE)
+
+#define S_FW_IQ_CMD_IQASYNCH	28
+#define M_FW_IQ_CMD_IQASYNCH	0x1
+#define V_FW_IQ_CMD_IQASYNCH(x)	((x) << S_FW_IQ_CMD_IQASYNCH)
+#define G_FW_IQ_CMD_IQASYNCH(x)	\
+    (((x) >> S_FW_IQ_CMD_IQASYNCH) & M_FW_IQ_CMD_IQASYNCH)
+#define F_FW_IQ_CMD_IQASYNCH	V_FW_IQ_CMD_IQASYNCH(1U)
+
+#define S_FW_IQ_CMD_VIID	16
+#define M_FW_IQ_CMD_VIID	0xfff
+#define V_FW_IQ_CMD_VIID(x)	((x) << S_FW_IQ_CMD_VIID)
+#define G_FW_IQ_CMD_VIID(x)	(((x) >> S_FW_IQ_CMD_VIID) & M_FW_IQ_CMD_VIID)
+
+#define S_FW_IQ_CMD_IQANDST	15
+#define M_FW_IQ_CMD_IQANDST	0x1
+#define V_FW_IQ_CMD_IQANDST(x)	((x) << S_FW_IQ_CMD_IQANDST)
+#define G_FW_IQ_CMD_IQANDST(x)	\
+    (((x) >> S_FW_IQ_CMD_IQANDST) & M_FW_IQ_CMD_IQANDST)
+#define F_FW_IQ_CMD_IQANDST	V_FW_IQ_CMD_IQANDST(1U)
+
+#define S_FW_IQ_CMD_IQANUS	14
+#define M_FW_IQ_CMD_IQANUS	0x1
+#define V_FW_IQ_CMD_IQANUS(x)	((x) << S_FW_IQ_CMD_IQANUS)
+#define G_FW_IQ_CMD_IQANUS(x)	\
+    (((x) >> S_FW_IQ_CMD_IQANUS) & M_FW_IQ_CMD_IQANUS)
+#define F_FW_IQ_CMD_IQANUS	V_FW_IQ_CMD_IQANUS(1U)
+
+#define S_FW_IQ_CMD_IQANUD	12
+#define M_FW_IQ_CMD_IQANUD	0x3
+#define V_FW_IQ_CMD_IQANUD(x)	((x) << S_FW_IQ_CMD_IQANUD)
+#define G_FW_IQ_CMD_IQANUD(x)	\
+    (((x) >> S_FW_IQ_CMD_IQANUD) & M_FW_IQ_CMD_IQANUD)
+
+#define S_FW_IQ_CMD_IQANDSTINDEX	0
+#define M_FW_IQ_CMD_IQANDSTINDEX	0xfff
+#define V_FW_IQ_CMD_IQANDSTINDEX(x)	((x) << S_FW_IQ_CMD_IQANDSTINDEX)
+#define G_FW_IQ_CMD_IQANDSTINDEX(x)	\
+    (((x) >> S_FW_IQ_CMD_IQANDSTINDEX) & M_FW_IQ_CMD_IQANDSTINDEX)
+
+#define S_FW_IQ_CMD_IQDROPRSS		15
+#define M_FW_IQ_CMD_IQDROPRSS		0x1
+#define V_FW_IQ_CMD_IQDROPRSS(x)	((x) << S_FW_IQ_CMD_IQDROPRSS)
+#define G_FW_IQ_CMD_IQDROPRSS(x)	\
+    (((x) >> S_FW_IQ_CMD_IQDROPRSS) & M_FW_IQ_CMD_IQDROPRSS)
+#define F_FW_IQ_CMD_IQDROPRSS	V_FW_IQ_CMD_IQDROPRSS(1U)
+
+#define S_FW_IQ_CMD_IQGTSMODE		14
+#define M_FW_IQ_CMD_IQGTSMODE		0x1
+#define V_FW_IQ_CMD_IQGTSMODE(x)	((x) << S_FW_IQ_CMD_IQGTSMODE)
+#define G_FW_IQ_CMD_IQGTSMODE(x)	\
+    (((x) >> S_FW_IQ_CMD_IQGTSMODE) & M_FW_IQ_CMD_IQGTSMODE)
+#define F_FW_IQ_CMD_IQGTSMODE	V_FW_IQ_CMD_IQGTSMODE(1U)
+
+#define S_FW_IQ_CMD_IQPCIECH	12
+#define M_FW_IQ_CMD_IQPCIECH	0x3
+#define V_FW_IQ_CMD_IQPCIECH(x)	((x) << S_FW_IQ_CMD_IQPCIECH)
+#define G_FW_IQ_CMD_IQPCIECH(x)	\
+    (((x) >> S_FW_IQ_CMD_IQPCIECH) & M_FW_IQ_CMD_IQPCIECH)
+
+#define S_FW_IQ_CMD_IQDCAEN	11
+#define M_FW_IQ_CMD_IQDCAEN	0x1
+#define V_FW_IQ_CMD_IQDCAEN(x)	((x) << S_FW_IQ_CMD_IQDCAEN)
+#define G_FW_IQ_CMD_IQDCAEN(x)	\
+    (((x) >> S_FW_IQ_CMD_IQDCAEN) & M_FW_IQ_CMD_IQDCAEN)
+#define F_FW_IQ_CMD_IQDCAEN	V_FW_IQ_CMD_IQDCAEN(1U)
+
+#define S_FW_IQ_CMD_IQDCACPU	6
+#define M_FW_IQ_CMD_IQDCACPU	0x1f
+#define V_FW_IQ_CMD_IQDCACPU(x)	((x) << S_FW_IQ_CMD_IQDCACPU)
+#define G_FW_IQ_CMD_IQDCACPU(x)	\
+    (((x) >> S_FW_IQ_CMD_IQDCACPU) & M_FW_IQ_CMD_IQDCACPU)
+
+#define S_FW_IQ_CMD_IQINTCNTTHRESH	4
+#define M_FW_IQ_CMD_IQINTCNTTHRESH	0x3
+#define V_FW_IQ_CMD_IQINTCNTTHRESH(x)	((x) << S_FW_IQ_CMD_IQINTCNTTHRESH)
+#define G_FW_IQ_CMD_IQINTCNTTHRESH(x)	\
+    (((x) >> S_FW_IQ_CMD_IQINTCNTTHRESH) & M_FW_IQ_CMD_IQINTCNTTHRESH)
+
+#define S_FW_IQ_CMD_IQO		3
+#define M_FW_IQ_CMD_IQO		0x1
+#define V_FW_IQ_CMD_IQO(x)	((x) << S_FW_IQ_CMD_IQO)
+#define G_FW_IQ_CMD_IQO(x)	(((x) >> S_FW_IQ_CMD_IQO) & M_FW_IQ_CMD_IQO)
+#define F_FW_IQ_CMD_IQO	V_FW_IQ_CMD_IQO(1U)
+
+#define S_FW_IQ_CMD_IQCPRIO	2
+#define M_FW_IQ_CMD_IQCPRIO	0x1
+#define V_FW_IQ_CMD_IQCPRIO(x)	((x) << S_FW_IQ_CMD_IQCPRIO)
+#define G_FW_IQ_CMD_IQCPRIO(x)	\
+    (((x) >> S_FW_IQ_CMD_IQCPRIO) & M_FW_IQ_CMD_IQCPRIO)
+#define F_FW_IQ_CMD_IQCPRIO	V_FW_IQ_CMD_IQCPRIO(1U)
+
+#define S_FW_IQ_CMD_IQESIZE	0
+#define M_FW_IQ_CMD_IQESIZE	0x3
+#define V_FW_IQ_CMD_IQESIZE(x)	((x) << S_FW_IQ_CMD_IQESIZE)
+#define G_FW_IQ_CMD_IQESIZE(x)	\
+    (((x) >> S_FW_IQ_CMD_IQESIZE) & M_FW_IQ_CMD_IQESIZE)
+
+#define S_FW_IQ_CMD_IQNS	31
+#define M_FW_IQ_CMD_IQNS	0x1
+#define V_FW_IQ_CMD_IQNS(x)	((x) << S_FW_IQ_CMD_IQNS)
+#define G_FW_IQ_CMD_IQNS(x)	(((x) >> S_FW_IQ_CMD_IQNS) & M_FW_IQ_CMD_IQNS)
+#define F_FW_IQ_CMD_IQNS	V_FW_IQ_CMD_IQNS(1U)
+
+#define S_FW_IQ_CMD_IQRO	30
+#define M_FW_IQ_CMD_IQRO	0x1
+#define V_FW_IQ_CMD_IQRO(x)	((x) << S_FW_IQ_CMD_IQRO)
+#define G_FW_IQ_CMD_IQRO(x)	(((x) >> S_FW_IQ_CMD_IQRO) & M_FW_IQ_CMD_IQRO)
+#define F_FW_IQ_CMD_IQRO	V_FW_IQ_CMD_IQRO(1U)
+
+#define S_FW_IQ_CMD_IQFLINTIQHSEN	28
+#define M_FW_IQ_CMD_IQFLINTIQHSEN	0x3
+#define V_FW_IQ_CMD_IQFLINTIQHSEN(x)	((x) << S_FW_IQ_CMD_IQFLINTIQHSEN)
+#define G_FW_IQ_CMD_IQFLINTIQHSEN(x)	\
+    (((x) >> S_FW_IQ_CMD_IQFLINTIQHSEN) & M_FW_IQ_CMD_IQFLINTIQHSEN)
+
+#define S_FW_IQ_CMD_IQFLINTCONGEN	27
+#define M_FW_IQ_CMD_IQFLINTCONGEN	0x1
+#define V_FW_IQ_CMD_IQFLINTCONGEN(x)	((x) << S_FW_IQ_CMD_IQFLINTCONGEN)
+#define G_FW_IQ_CMD_IQFLINTCONGEN(x)	\
+    (((x) >> S_FW_IQ_CMD_IQFLINTCONGEN) & M_FW_IQ_CMD_IQFLINTCONGEN)
+#define F_FW_IQ_CMD_IQFLINTCONGEN	V_FW_IQ_CMD_IQFLINTCONGEN(1U)
+
+#define S_FW_IQ_CMD_IQFLINTISCSIC	26
+#define M_FW_IQ_CMD_IQFLINTISCSIC	0x1
+#define V_FW_IQ_CMD_IQFLINTISCSIC(x)	((x) << S_FW_IQ_CMD_IQFLINTISCSIC)
+#define G_FW_IQ_CMD_IQFLINTISCSIC(x)	\
+    (((x) >> S_FW_IQ_CMD_IQFLINTISCSIC) & M_FW_IQ_CMD_IQFLINTISCSIC)
+#define F_FW_IQ_CMD_IQFLINTISCSIC	V_FW_IQ_CMD_IQFLINTISCSIC(1U)
+
+#define S_FW_IQ_CMD_FL0CNGCHMAP		20
+#define M_FW_IQ_CMD_FL0CNGCHMAP		0x7
+#define V_FW_IQ_CMD_FL0CNGCHMAP(x)	((x) << S_FW_IQ_CMD_FL0CNGCHMAP)
+#define G_FW_IQ_CMD_FL0CNGCHMAP(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CNGCHMAP) & M_FW_IQ_CMD_FL0CNGCHMAP)
+
+#define S_FW_IQ_CMD_FL0CACHELOCK	15
+#define M_FW_IQ_CMD_FL0CACHELOCK	0x1
+#define V_FW_IQ_CMD_FL0CACHELOCK(x)	((x) << S_FW_IQ_CMD_FL0CACHELOCK)
+#define G_FW_IQ_CMD_FL0CACHELOCK(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CACHELOCK) & M_FW_IQ_CMD_FL0CACHELOCK)
+#define F_FW_IQ_CMD_FL0CACHELOCK	V_FW_IQ_CMD_FL0CACHELOCK(1U)
+
+#define S_FW_IQ_CMD_FL0DBP	14
+#define M_FW_IQ_CMD_FL0DBP	0x1
+#define V_FW_IQ_CMD_FL0DBP(x)	((x) << S_FW_IQ_CMD_FL0DBP)
+#define G_FW_IQ_CMD_FL0DBP(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0DBP) & M_FW_IQ_CMD_FL0DBP)
+#define F_FW_IQ_CMD_FL0DBP	V_FW_IQ_CMD_FL0DBP(1U)
+
+#define S_FW_IQ_CMD_FL0DATANS		13
+#define M_FW_IQ_CMD_FL0DATANS		0x1
+#define V_FW_IQ_CMD_FL0DATANS(x)	((x) << S_FW_IQ_CMD_FL0DATANS)
+#define G_FW_IQ_CMD_FL0DATANS(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0DATANS) & M_FW_IQ_CMD_FL0DATANS)
+#define F_FW_IQ_CMD_FL0DATANS	V_FW_IQ_CMD_FL0DATANS(1U)
+
+#define S_FW_IQ_CMD_FL0DATARO		12
+#define M_FW_IQ_CMD_FL0DATARO		0x1
+#define V_FW_IQ_CMD_FL0DATARO(x)	((x) << S_FW_IQ_CMD_FL0DATARO)
+#define G_FW_IQ_CMD_FL0DATARO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0DATARO) & M_FW_IQ_CMD_FL0DATARO)
+#define F_FW_IQ_CMD_FL0DATARO	V_FW_IQ_CMD_FL0DATARO(1U)
+
+#define S_FW_IQ_CMD_FL0CONGCIF		11
+#define M_FW_IQ_CMD_FL0CONGCIF		0x1
+#define V_FW_IQ_CMD_FL0CONGCIF(x)	((x) << S_FW_IQ_CMD_FL0CONGCIF)
+#define G_FW_IQ_CMD_FL0CONGCIF(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CONGCIF) & M_FW_IQ_CMD_FL0CONGCIF)
+#define F_FW_IQ_CMD_FL0CONGCIF	V_FW_IQ_CMD_FL0CONGCIF(1U)
+
+#define S_FW_IQ_CMD_FL0ONCHIP		10
+#define M_FW_IQ_CMD_FL0ONCHIP		0x1
+#define V_FW_IQ_CMD_FL0ONCHIP(x)	((x) << S_FW_IQ_CMD_FL0ONCHIP)
+#define G_FW_IQ_CMD_FL0ONCHIP(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0ONCHIP) & M_FW_IQ_CMD_FL0ONCHIP)
+#define F_FW_IQ_CMD_FL0ONCHIP	V_FW_IQ_CMD_FL0ONCHIP(1U)
+
+#define S_FW_IQ_CMD_FL0STATUSPGNS	9
+#define M_FW_IQ_CMD_FL0STATUSPGNS	0x1
+#define V_FW_IQ_CMD_FL0STATUSPGNS(x)	((x) << S_FW_IQ_CMD_FL0STATUSPGNS)
+#define G_FW_IQ_CMD_FL0STATUSPGNS(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0STATUSPGNS) & M_FW_IQ_CMD_FL0STATUSPGNS)
+#define F_FW_IQ_CMD_FL0STATUSPGNS	V_FW_IQ_CMD_FL0STATUSPGNS(1U)
+
+#define S_FW_IQ_CMD_FL0STATUSPGRO	8
+#define M_FW_IQ_CMD_FL0STATUSPGRO	0x1
+#define V_FW_IQ_CMD_FL0STATUSPGRO(x)	((x) << S_FW_IQ_CMD_FL0STATUSPGRO)
+#define G_FW_IQ_CMD_FL0STATUSPGRO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0STATUSPGRO) & M_FW_IQ_CMD_FL0STATUSPGRO)
+#define F_FW_IQ_CMD_FL0STATUSPGRO	V_FW_IQ_CMD_FL0STATUSPGRO(1U)
+
+#define S_FW_IQ_CMD_FL0FETCHNS		7
+#define M_FW_IQ_CMD_FL0FETCHNS		0x1
+#define V_FW_IQ_CMD_FL0FETCHNS(x)	((x) << S_FW_IQ_CMD_FL0FETCHNS)
+#define G_FW_IQ_CMD_FL0FETCHNS(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0FETCHNS) & M_FW_IQ_CMD_FL0FETCHNS)
+#define F_FW_IQ_CMD_FL0FETCHNS	V_FW_IQ_CMD_FL0FETCHNS(1U)
+
+#define S_FW_IQ_CMD_FL0FETCHRO		6
+#define M_FW_IQ_CMD_FL0FETCHRO		0x1
+#define V_FW_IQ_CMD_FL0FETCHRO(x)	((x) << S_FW_IQ_CMD_FL0FETCHRO)
+#define G_FW_IQ_CMD_FL0FETCHRO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0FETCHRO) & M_FW_IQ_CMD_FL0FETCHRO)
+#define F_FW_IQ_CMD_FL0FETCHRO	V_FW_IQ_CMD_FL0FETCHRO(1U)
+
+#define S_FW_IQ_CMD_FL0HOSTFCMODE	4
+#define M_FW_IQ_CMD_FL0HOSTFCMODE	0x3
+#define V_FW_IQ_CMD_FL0HOSTFCMODE(x)	((x) << S_FW_IQ_CMD_FL0HOSTFCMODE)
+#define G_FW_IQ_CMD_FL0HOSTFCMODE(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0HOSTFCMODE) & M_FW_IQ_CMD_FL0HOSTFCMODE)
+
+#define S_FW_IQ_CMD_FL0CPRIO	3
+#define M_FW_IQ_CMD_FL0CPRIO	0x1
+#define V_FW_IQ_CMD_FL0CPRIO(x)	((x) << S_FW_IQ_CMD_FL0CPRIO)
+#define G_FW_IQ_CMD_FL0CPRIO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CPRIO) & M_FW_IQ_CMD_FL0CPRIO)
+#define F_FW_IQ_CMD_FL0CPRIO	V_FW_IQ_CMD_FL0CPRIO(1U)
+
+#define S_FW_IQ_CMD_FL0PADEN	2
+#define M_FW_IQ_CMD_FL0PADEN	0x1
+#define V_FW_IQ_CMD_FL0PADEN(x)	((x) << S_FW_IQ_CMD_FL0PADEN)
+#define G_FW_IQ_CMD_FL0PADEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0PADEN) & M_FW_IQ_CMD_FL0PADEN)
+#define F_FW_IQ_CMD_FL0PADEN	V_FW_IQ_CMD_FL0PADEN(1U)
+
+#define S_FW_IQ_CMD_FL0PACKEN		1
+#define M_FW_IQ_CMD_FL0PACKEN		0x1
+#define V_FW_IQ_CMD_FL0PACKEN(x)	((x) << S_FW_IQ_CMD_FL0PACKEN)
+#define G_FW_IQ_CMD_FL0PACKEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0PACKEN) & M_FW_IQ_CMD_FL0PACKEN)
+#define F_FW_IQ_CMD_FL0PACKEN	V_FW_IQ_CMD_FL0PACKEN(1U)
+
+#define S_FW_IQ_CMD_FL0CONGEN		0
+#define M_FW_IQ_CMD_FL0CONGEN		0x1
+#define V_FW_IQ_CMD_FL0CONGEN(x)	((x) << S_FW_IQ_CMD_FL0CONGEN)
+#define G_FW_IQ_CMD_FL0CONGEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CONGEN) & M_FW_IQ_CMD_FL0CONGEN)
+#define F_FW_IQ_CMD_FL0CONGEN	V_FW_IQ_CMD_FL0CONGEN(1U)
+
+#define S_FW_IQ_CMD_FL0DCAEN	15
+#define M_FW_IQ_CMD_FL0DCAEN	0x1
+#define V_FW_IQ_CMD_FL0DCAEN(x)	((x) << S_FW_IQ_CMD_FL0DCAEN)
+#define G_FW_IQ_CMD_FL0DCAEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0DCAEN) & M_FW_IQ_CMD_FL0DCAEN)
+#define F_FW_IQ_CMD_FL0DCAEN	V_FW_IQ_CMD_FL0DCAEN(1U)
+
+#define S_FW_IQ_CMD_FL0DCACPU		10
+#define M_FW_IQ_CMD_FL0DCACPU		0x1f
+#define V_FW_IQ_CMD_FL0DCACPU(x)	((x) << S_FW_IQ_CMD_FL0DCACPU)
+#define G_FW_IQ_CMD_FL0DCACPU(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0DCACPU) & M_FW_IQ_CMD_FL0DCACPU)
+
+#define S_FW_IQ_CMD_FL0FBMIN	7
+#define M_FW_IQ_CMD_FL0FBMIN	0x7
+#define V_FW_IQ_CMD_FL0FBMIN(x)	((x) << S_FW_IQ_CMD_FL0FBMIN)
+#define G_FW_IQ_CMD_FL0FBMIN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0FBMIN) & M_FW_IQ_CMD_FL0FBMIN)
+
+#define S_FW_IQ_CMD_FL0FBMAX	4
+#define M_FW_IQ_CMD_FL0FBMAX	0x7
+#define V_FW_IQ_CMD_FL0FBMAX(x)	((x) << S_FW_IQ_CMD_FL0FBMAX)
+#define G_FW_IQ_CMD_FL0FBMAX(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0FBMAX) & M_FW_IQ_CMD_FL0FBMAX)
+
+#define S_FW_IQ_CMD_FL0CIDXFTHRESHO	3
+#define M_FW_IQ_CMD_FL0CIDXFTHRESHO	0x1
+#define V_FW_IQ_CMD_FL0CIDXFTHRESHO(x)	((x) << S_FW_IQ_CMD_FL0CIDXFTHRESHO)
+#define G_FW_IQ_CMD_FL0CIDXFTHRESHO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CIDXFTHRESHO) & M_FW_IQ_CMD_FL0CIDXFTHRESHO)
+#define F_FW_IQ_CMD_FL0CIDXFTHRESHO	V_FW_IQ_CMD_FL0CIDXFTHRESHO(1U)
+
+#define S_FW_IQ_CMD_FL0CIDXFTHRESH	0
+#define M_FW_IQ_CMD_FL0CIDXFTHRESH	0x7
+#define V_FW_IQ_CMD_FL0CIDXFTHRESH(x)	((x) << S_FW_IQ_CMD_FL0CIDXFTHRESH)
+#define G_FW_IQ_CMD_FL0CIDXFTHRESH(x)	\
+    (((x) >> S_FW_IQ_CMD_FL0CIDXFTHRESH) & M_FW_IQ_CMD_FL0CIDXFTHRESH)
+
+#define S_FW_IQ_CMD_FL1CNGCHMAP		20
+#define M_FW_IQ_CMD_FL1CNGCHMAP		0x7
+#define V_FW_IQ_CMD_FL1CNGCHMAP(x)	((x) << S_FW_IQ_CMD_FL1CNGCHMAP)
+#define G_FW_IQ_CMD_FL1CNGCHMAP(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CNGCHMAP) & M_FW_IQ_CMD_FL1CNGCHMAP)
+
+#define S_FW_IQ_CMD_FL1CACHELOCK	15
+#define M_FW_IQ_CMD_FL1CACHELOCK	0x1
+#define V_FW_IQ_CMD_FL1CACHELOCK(x)	((x) << S_FW_IQ_CMD_FL1CACHELOCK)
+#define G_FW_IQ_CMD_FL1CACHELOCK(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CACHELOCK) & M_FW_IQ_CMD_FL1CACHELOCK)
+#define F_FW_IQ_CMD_FL1CACHELOCK	V_FW_IQ_CMD_FL1CACHELOCK(1U)
+
+#define S_FW_IQ_CMD_FL1DBP	14
+#define M_FW_IQ_CMD_FL1DBP	0x1
+#define V_FW_IQ_CMD_FL1DBP(x)	((x) << S_FW_IQ_CMD_FL1DBP)
+#define G_FW_IQ_CMD_FL1DBP(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1DBP) & M_FW_IQ_CMD_FL1DBP)
+#define F_FW_IQ_CMD_FL1DBP	V_FW_IQ_CMD_FL1DBP(1U)
+
+#define S_FW_IQ_CMD_FL1DATANS		13
+#define M_FW_IQ_CMD_FL1DATANS		0x1
+#define V_FW_IQ_CMD_FL1DATANS(x)	((x) << S_FW_IQ_CMD_FL1DATANS)
+#define G_FW_IQ_CMD_FL1DATANS(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1DATANS) & M_FW_IQ_CMD_FL1DATANS)
+#define F_FW_IQ_CMD_FL1DATANS	V_FW_IQ_CMD_FL1DATANS(1U)
+
+#define S_FW_IQ_CMD_FL1DATARO		12
+#define M_FW_IQ_CMD_FL1DATARO		0x1
+#define V_FW_IQ_CMD_FL1DATARO(x)	((x) << S_FW_IQ_CMD_FL1DATARO)
+#define G_FW_IQ_CMD_FL1DATARO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1DATARO) & M_FW_IQ_CMD_FL1DATARO)
+#define F_FW_IQ_CMD_FL1DATARO	V_FW_IQ_CMD_FL1DATARO(1U)
+
+#define S_FW_IQ_CMD_FL1CONGCIF		11
+#define M_FW_IQ_CMD_FL1CONGCIF		0x1
+#define V_FW_IQ_CMD_FL1CONGCIF(x)	((x) << S_FW_IQ_CMD_FL1CONGCIF)
+#define G_FW_IQ_CMD_FL1CONGCIF(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CONGCIF) & M_FW_IQ_CMD_FL1CONGCIF)
+#define F_FW_IQ_CMD_FL1CONGCIF	V_FW_IQ_CMD_FL1CONGCIF(1U)
+
+#define S_FW_IQ_CMD_FL1ONCHIP		10
+#define M_FW_IQ_CMD_FL1ONCHIP		0x1
+#define V_FW_IQ_CMD_FL1ONCHIP(x)	((x) << S_FW_IQ_CMD_FL1ONCHIP)
+#define G_FW_IQ_CMD_FL1ONCHIP(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1ONCHIP) & M_FW_IQ_CMD_FL1ONCHIP)
+#define F_FW_IQ_CMD_FL1ONCHIP	V_FW_IQ_CMD_FL1ONCHIP(1U)
+
+#define S_FW_IQ_CMD_FL1STATUSPGNS	9
+#define M_FW_IQ_CMD_FL1STATUSPGNS	0x1
+#define V_FW_IQ_CMD_FL1STATUSPGNS(x)	((x) << S_FW_IQ_CMD_FL1STATUSPGNS)
+#define G_FW_IQ_CMD_FL1STATUSPGNS(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1STATUSPGNS) & M_FW_IQ_CMD_FL1STATUSPGNS)
+#define F_FW_IQ_CMD_FL1STATUSPGNS	V_FW_IQ_CMD_FL1STATUSPGNS(1U)
+
+#define S_FW_IQ_CMD_FL1STATUSPGRO	8
+#define M_FW_IQ_CMD_FL1STATUSPGRO	0x1
+#define V_FW_IQ_CMD_FL1STATUSPGRO(x)	((x) << S_FW_IQ_CMD_FL1STATUSPGRO)
+#define G_FW_IQ_CMD_FL1STATUSPGRO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1STATUSPGRO) & M_FW_IQ_CMD_FL1STATUSPGRO)
+#define F_FW_IQ_CMD_FL1STATUSPGRO	V_FW_IQ_CMD_FL1STATUSPGRO(1U)
+
+#define S_FW_IQ_CMD_FL1FETCHNS		7
+#define M_FW_IQ_CMD_FL1FETCHNS		0x1
+#define V_FW_IQ_CMD_FL1FETCHNS(x)	((x) << S_FW_IQ_CMD_FL1FETCHNS)
+#define G_FW_IQ_CMD_FL1FETCHNS(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1FETCHNS) & M_FW_IQ_CMD_FL1FETCHNS)
+#define F_FW_IQ_CMD_FL1FETCHNS	V_FW_IQ_CMD_FL1FETCHNS(1U)
+
+#define S_FW_IQ_CMD_FL1FETCHRO		6
+#define M_FW_IQ_CMD_FL1FETCHRO		0x1
+#define V_FW_IQ_CMD_FL1FETCHRO(x)	((x) << S_FW_IQ_CMD_FL1FETCHRO)
+#define G_FW_IQ_CMD_FL1FETCHRO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1FETCHRO) & M_FW_IQ_CMD_FL1FETCHRO)
+#define F_FW_IQ_CMD_FL1FETCHRO	V_FW_IQ_CMD_FL1FETCHRO(1U)
+
+#define S_FW_IQ_CMD_FL1HOSTFCMODE	4
+#define M_FW_IQ_CMD_FL1HOSTFCMODE	0x3
+#define V_FW_IQ_CMD_FL1HOSTFCMODE(x)	((x) << S_FW_IQ_CMD_FL1HOSTFCMODE)
+#define G_FW_IQ_CMD_FL1HOSTFCMODE(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1HOSTFCMODE) & M_FW_IQ_CMD_FL1HOSTFCMODE)
+
+#define S_FW_IQ_CMD_FL1CPRIO	3
+#define M_FW_IQ_CMD_FL1CPRIO	0x1
+#define V_FW_IQ_CMD_FL1CPRIO(x)	((x) << S_FW_IQ_CMD_FL1CPRIO)
+#define G_FW_IQ_CMD_FL1CPRIO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CPRIO) & M_FW_IQ_CMD_FL1CPRIO)
+#define F_FW_IQ_CMD_FL1CPRIO	V_FW_IQ_CMD_FL1CPRIO(1U)
+
+#define S_FW_IQ_CMD_FL1PADEN	2
+#define M_FW_IQ_CMD_FL1PADEN	0x1
+#define V_FW_IQ_CMD_FL1PADEN(x)	((x) << S_FW_IQ_CMD_FL1PADEN)
+#define G_FW_IQ_CMD_FL1PADEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1PADEN) & M_FW_IQ_CMD_FL1PADEN)
+#define F_FW_IQ_CMD_FL1PADEN	V_FW_IQ_CMD_FL1PADEN(1U)
+
+#define S_FW_IQ_CMD_FL1PACKEN		1
+#define M_FW_IQ_CMD_FL1PACKEN		0x1
+#define V_FW_IQ_CMD_FL1PACKEN(x)	((x) << S_FW_IQ_CMD_FL1PACKEN)
+#define G_FW_IQ_CMD_FL1PACKEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1PACKEN) & M_FW_IQ_CMD_FL1PACKEN)
+#define F_FW_IQ_CMD_FL1PACKEN	V_FW_IQ_CMD_FL1PACKEN(1U)
+
+#define S_FW_IQ_CMD_FL1CONGEN		0
+#define M_FW_IQ_CMD_FL1CONGEN		0x1
+#define V_FW_IQ_CMD_FL1CONGEN(x)	((x) << S_FW_IQ_CMD_FL1CONGEN)
+#define G_FW_IQ_CMD_FL1CONGEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CONGEN) & M_FW_IQ_CMD_FL1CONGEN)
+#define F_FW_IQ_CMD_FL1CONGEN	V_FW_IQ_CMD_FL1CONGEN(1U)
+
+#define S_FW_IQ_CMD_FL1DCAEN	15
+#define M_FW_IQ_CMD_FL1DCAEN	0x1
+#define V_FW_IQ_CMD_FL1DCAEN(x)	((x) << S_FW_IQ_CMD_FL1DCAEN)
+#define G_FW_IQ_CMD_FL1DCAEN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1DCAEN) & M_FW_IQ_CMD_FL1DCAEN)
+#define F_FW_IQ_CMD_FL1DCAEN	V_FW_IQ_CMD_FL1DCAEN(1U)
+
+#define S_FW_IQ_CMD_FL1DCACPU		10
+#define M_FW_IQ_CMD_FL1DCACPU		0x1f
+#define V_FW_IQ_CMD_FL1DCACPU(x)	((x) << S_FW_IQ_CMD_FL1DCACPU)
+#define G_FW_IQ_CMD_FL1DCACPU(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1DCACPU) & M_FW_IQ_CMD_FL1DCACPU)
+
+#define S_FW_IQ_CMD_FL1FBMIN	7
+#define M_FW_IQ_CMD_FL1FBMIN	0x7
+#define V_FW_IQ_CMD_FL1FBMIN(x)	((x) << S_FW_IQ_CMD_FL1FBMIN)
+#define G_FW_IQ_CMD_FL1FBMIN(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1FBMIN) & M_FW_IQ_CMD_FL1FBMIN)
+
+#define S_FW_IQ_CMD_FL1FBMAX	4
+#define M_FW_IQ_CMD_FL1FBMAX	0x7
+#define V_FW_IQ_CMD_FL1FBMAX(x)	((x) << S_FW_IQ_CMD_FL1FBMAX)
+#define G_FW_IQ_CMD_FL1FBMAX(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1FBMAX) & M_FW_IQ_CMD_FL1FBMAX)
+
+#define S_FW_IQ_CMD_FL1CIDXFTHRESHO	3
+#define M_FW_IQ_CMD_FL1CIDXFTHRESHO	0x1
+#define V_FW_IQ_CMD_FL1CIDXFTHRESHO(x)	((x) << S_FW_IQ_CMD_FL1CIDXFTHRESHO)
+#define G_FW_IQ_CMD_FL1CIDXFTHRESHO(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CIDXFTHRESHO) & M_FW_IQ_CMD_FL1CIDXFTHRESHO)
+#define F_FW_IQ_CMD_FL1CIDXFTHRESHO	V_FW_IQ_CMD_FL1CIDXFTHRESHO(1U)
+
+#define S_FW_IQ_CMD_FL1CIDXFTHRESH	0
+#define M_FW_IQ_CMD_FL1CIDXFTHRESH	0x7
+#define V_FW_IQ_CMD_FL1CIDXFTHRESH(x)	((x) << S_FW_IQ_CMD_FL1CIDXFTHRESH)
+#define G_FW_IQ_CMD_FL1CIDXFTHRESH(x)	\
+    (((x) >> S_FW_IQ_CMD_FL1CIDXFTHRESH) & M_FW_IQ_CMD_FL1CIDXFTHRESH)
+
+struct fw_eq_mngt_cmd {
+	__be32 op_to_vfn;
+	__be32 alloc_to_len16;
+	__be32 cmpliqid_eqid;
+	__be32 physeqid_pkd;
+	__be32 fetchszm_to_iqid;
+	__be32 dcaen_to_eqsize;
+	__be64 eqaddr;
+};
+
+#define S_FW_EQ_MNGT_CMD_PFN	8
+#define M_FW_EQ_MNGT_CMD_PFN	0x7
+#define V_FW_EQ_MNGT_CMD_PFN(x)	((x) << S_FW_EQ_MNGT_CMD_PFN)
+#define G_FW_EQ_MNGT_CMD_PFN(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_PFN) & M_FW_EQ_MNGT_CMD_PFN)
+
+#define S_FW_EQ_MNGT_CMD_VFN	0
+#define M_FW_EQ_MNGT_CMD_VFN	0xff
+#define V_FW_EQ_MNGT_CMD_VFN(x)	((x) << S_FW_EQ_MNGT_CMD_VFN)
+#define G_FW_EQ_MNGT_CMD_VFN(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_VFN) & M_FW_EQ_MNGT_CMD_VFN)
+
+#define S_FW_EQ_MNGT_CMD_ALLOC		31
+#define M_FW_EQ_MNGT_CMD_ALLOC		0x1
+#define V_FW_EQ_MNGT_CMD_ALLOC(x)	((x) << S_FW_EQ_MNGT_CMD_ALLOC)
+#define G_FW_EQ_MNGT_CMD_ALLOC(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_ALLOC) & M_FW_EQ_MNGT_CMD_ALLOC)
+#define F_FW_EQ_MNGT_CMD_ALLOC	V_FW_EQ_MNGT_CMD_ALLOC(1U)
+
+#define S_FW_EQ_MNGT_CMD_FREE		30
+#define M_FW_EQ_MNGT_CMD_FREE		0x1
+#define V_FW_EQ_MNGT_CMD_FREE(x)	((x) << S_FW_EQ_MNGT_CMD_FREE)
+#define G_FW_EQ_MNGT_CMD_FREE(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_FREE) & M_FW_EQ_MNGT_CMD_FREE)
+#define F_FW_EQ_MNGT_CMD_FREE	V_FW_EQ_MNGT_CMD_FREE(1U)
+
+#define S_FW_EQ_MNGT_CMD_MODIFY		29
+#define M_FW_EQ_MNGT_CMD_MODIFY		0x1
+#define V_FW_EQ_MNGT_CMD_MODIFY(x)	((x) << S_FW_EQ_MNGT_CMD_MODIFY)
+#define G_FW_EQ_MNGT_CMD_MODIFY(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_MODIFY) & M_FW_EQ_MNGT_CMD_MODIFY)
+#define F_FW_EQ_MNGT_CMD_MODIFY	V_FW_EQ_MNGT_CMD_MODIFY(1U)
+
+#define S_FW_EQ_MNGT_CMD_EQSTART	28
+#define M_FW_EQ_MNGT_CMD_EQSTART	0x1
+#define V_FW_EQ_MNGT_CMD_EQSTART(x)	((x) << S_FW_EQ_MNGT_CMD_EQSTART)
+#define G_FW_EQ_MNGT_CMD_EQSTART(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_EQSTART) & M_FW_EQ_MNGT_CMD_EQSTART)
+#define F_FW_EQ_MNGT_CMD_EQSTART	V_FW_EQ_MNGT_CMD_EQSTART(1U)
+
+#define S_FW_EQ_MNGT_CMD_EQSTOP		27
+#define M_FW_EQ_MNGT_CMD_EQSTOP		0x1
+#define V_FW_EQ_MNGT_CMD_EQSTOP(x)	((x) << S_FW_EQ_MNGT_CMD_EQSTOP)
+#define G_FW_EQ_MNGT_CMD_EQSTOP(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_EQSTOP) & M_FW_EQ_MNGT_CMD_EQSTOP)
+#define F_FW_EQ_MNGT_CMD_EQSTOP	V_FW_EQ_MNGT_CMD_EQSTOP(1U)
+
+#define S_FW_EQ_MNGT_CMD_CMPLIQID	20
+#define M_FW_EQ_MNGT_CMD_CMPLIQID	0xfff
+#define V_FW_EQ_MNGT_CMD_CMPLIQID(x)	((x) << S_FW_EQ_MNGT_CMD_CMPLIQID)
+#define G_FW_EQ_MNGT_CMD_CMPLIQID(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_CMPLIQID) & M_FW_EQ_MNGT_CMD_CMPLIQID)
+
+#define S_FW_EQ_MNGT_CMD_EQID		0
+#define M_FW_EQ_MNGT_CMD_EQID		0xfffff
+#define V_FW_EQ_MNGT_CMD_EQID(x)	((x) << S_FW_EQ_MNGT_CMD_EQID)
+#define G_FW_EQ_MNGT_CMD_EQID(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_EQID) & M_FW_EQ_MNGT_CMD_EQID)
+
+#define S_FW_EQ_MNGT_CMD_PHYSEQID	0
+#define M_FW_EQ_MNGT_CMD_PHYSEQID	0xfffff
+#define V_FW_EQ_MNGT_CMD_PHYSEQID(x)	((x) << S_FW_EQ_MNGT_CMD_PHYSEQID)
+#define G_FW_EQ_MNGT_CMD_PHYSEQID(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_PHYSEQID) & M_FW_EQ_MNGT_CMD_PHYSEQID)
+
+#define S_FW_EQ_MNGT_CMD_FETCHSZM	26
+#define M_FW_EQ_MNGT_CMD_FETCHSZM	0x1
+#define V_FW_EQ_MNGT_CMD_FETCHSZM(x)	((x) << S_FW_EQ_MNGT_CMD_FETCHSZM)
+#define G_FW_EQ_MNGT_CMD_FETCHSZM(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_FETCHSZM) & M_FW_EQ_MNGT_CMD_FETCHSZM)
+#define F_FW_EQ_MNGT_CMD_FETCHSZM	V_FW_EQ_MNGT_CMD_FETCHSZM(1U)
+
+#define S_FW_EQ_MNGT_CMD_STATUSPGNS	25
+#define M_FW_EQ_MNGT_CMD_STATUSPGNS	0x1
+#define V_FW_EQ_MNGT_CMD_STATUSPGNS(x)	((x) << S_FW_EQ_MNGT_CMD_STATUSPGNS)
+#define G_FW_EQ_MNGT_CMD_STATUSPGNS(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_STATUSPGNS) & M_FW_EQ_MNGT_CMD_STATUSPGNS)
+#define F_FW_EQ_MNGT_CMD_STATUSPGNS	V_FW_EQ_MNGT_CMD_STATUSPGNS(1U)
+
+#define S_FW_EQ_MNGT_CMD_STATUSPGRO	24
+#define M_FW_EQ_MNGT_CMD_STATUSPGRO	0x1
+#define V_FW_EQ_MNGT_CMD_STATUSPGRO(x)	((x) << S_FW_EQ_MNGT_CMD_STATUSPGRO)
+#define G_FW_EQ_MNGT_CMD_STATUSPGRO(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_STATUSPGRO) & M_FW_EQ_MNGT_CMD_STATUSPGRO)
+#define F_FW_EQ_MNGT_CMD_STATUSPGRO	V_FW_EQ_MNGT_CMD_STATUSPGRO(1U)
+
+#define S_FW_EQ_MNGT_CMD_FETCHNS	23
+#define M_FW_EQ_MNGT_CMD_FETCHNS	0x1
+#define V_FW_EQ_MNGT_CMD_FETCHNS(x)	((x) << S_FW_EQ_MNGT_CMD_FETCHNS)
+#define G_FW_EQ_MNGT_CMD_FETCHNS(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_FETCHNS) & M_FW_EQ_MNGT_CMD_FETCHNS)
+#define F_FW_EQ_MNGT_CMD_FETCHNS	V_FW_EQ_MNGT_CMD_FETCHNS(1U)
+
+#define S_FW_EQ_MNGT_CMD_FETCHRO	22
+#define M_FW_EQ_MNGT_CMD_FETCHRO	0x1
+#define V_FW_EQ_MNGT_CMD_FETCHRO(x)	((x) << S_FW_EQ_MNGT_CMD_FETCHRO)
+#define G_FW_EQ_MNGT_CMD_FETCHRO(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_FETCHRO) & M_FW_EQ_MNGT_CMD_FETCHRO)
+#define F_FW_EQ_MNGT_CMD_FETCHRO	V_FW_EQ_MNGT_CMD_FETCHRO(1U)
+
+#define S_FW_EQ_MNGT_CMD_HOSTFCMODE	20
+#define M_FW_EQ_MNGT_CMD_HOSTFCMODE	0x3
+#define V_FW_EQ_MNGT_CMD_HOSTFCMODE(x)	((x) << S_FW_EQ_MNGT_CMD_HOSTFCMODE)
+#define G_FW_EQ_MNGT_CMD_HOSTFCMODE(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_HOSTFCMODE) & M_FW_EQ_MNGT_CMD_HOSTFCMODE)
+
+#define S_FW_EQ_MNGT_CMD_CPRIO		19
+#define M_FW_EQ_MNGT_CMD_CPRIO		0x1
+#define V_FW_EQ_MNGT_CMD_CPRIO(x)	((x) << S_FW_EQ_MNGT_CMD_CPRIO)
+#define G_FW_EQ_MNGT_CMD_CPRIO(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_CPRIO) & M_FW_EQ_MNGT_CMD_CPRIO)
+#define F_FW_EQ_MNGT_CMD_CPRIO	V_FW_EQ_MNGT_CMD_CPRIO(1U)
+
+#define S_FW_EQ_MNGT_CMD_ONCHIP		18
+#define M_FW_EQ_MNGT_CMD_ONCHIP		0x1
+#define V_FW_EQ_MNGT_CMD_ONCHIP(x)	((x) << S_FW_EQ_MNGT_CMD_ONCHIP)
+#define G_FW_EQ_MNGT_CMD_ONCHIP(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_ONCHIP) & M_FW_EQ_MNGT_CMD_ONCHIP)
+#define F_FW_EQ_MNGT_CMD_ONCHIP	V_FW_EQ_MNGT_CMD_ONCHIP(1U)
+
+#define S_FW_EQ_MNGT_CMD_PCIECHN	16
+#define M_FW_EQ_MNGT_CMD_PCIECHN	0x3
+#define V_FW_EQ_MNGT_CMD_PCIECHN(x)	((x) << S_FW_EQ_MNGT_CMD_PCIECHN)
+#define G_FW_EQ_MNGT_CMD_PCIECHN(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_PCIECHN) & M_FW_EQ_MNGT_CMD_PCIECHN)
+
+#define S_FW_EQ_MNGT_CMD_IQID		0
+#define M_FW_EQ_MNGT_CMD_IQID		0xffff
+#define V_FW_EQ_MNGT_CMD_IQID(x)	((x) << S_FW_EQ_MNGT_CMD_IQID)
+#define G_FW_EQ_MNGT_CMD_IQID(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_IQID) & M_FW_EQ_MNGT_CMD_IQID)
+
+#define S_FW_EQ_MNGT_CMD_DCAEN		31
+#define M_FW_EQ_MNGT_CMD_DCAEN		0x1
+#define V_FW_EQ_MNGT_CMD_DCAEN(x)	((x) << S_FW_EQ_MNGT_CMD_DCAEN)
+#define G_FW_EQ_MNGT_CMD_DCAEN(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_DCAEN) & M_FW_EQ_MNGT_CMD_DCAEN)
+#define F_FW_EQ_MNGT_CMD_DCAEN	V_FW_EQ_MNGT_CMD_DCAEN(1U)
+
+#define S_FW_EQ_MNGT_CMD_DCACPU		26
+#define M_FW_EQ_MNGT_CMD_DCACPU		0x1f
+#define V_FW_EQ_MNGT_CMD_DCACPU(x)	((x) << S_FW_EQ_MNGT_CMD_DCACPU)
+#define G_FW_EQ_MNGT_CMD_DCACPU(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_DCACPU) & M_FW_EQ_MNGT_CMD_DCACPU)
+
+#define S_FW_EQ_MNGT_CMD_FBMIN		23
+#define M_FW_EQ_MNGT_CMD_FBMIN		0x7
+#define V_FW_EQ_MNGT_CMD_FBMIN(x)	((x) << S_FW_EQ_MNGT_CMD_FBMIN)
+#define G_FW_EQ_MNGT_CMD_FBMIN(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_FBMIN) & M_FW_EQ_MNGT_CMD_FBMIN)
+
+#define S_FW_EQ_MNGT_CMD_FBMAX		20
+#define M_FW_EQ_MNGT_CMD_FBMAX		0x7
+#define V_FW_EQ_MNGT_CMD_FBMAX(x)	((x) << S_FW_EQ_MNGT_CMD_FBMAX)
+#define G_FW_EQ_MNGT_CMD_FBMAX(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_FBMAX) & M_FW_EQ_MNGT_CMD_FBMAX)
+
+#define S_FW_EQ_MNGT_CMD_CIDXFTHRESHO		19
+#define M_FW_EQ_MNGT_CMD_CIDXFTHRESHO		0x1
+#define V_FW_EQ_MNGT_CMD_CIDXFTHRESHO(x)	\
+    ((x) << S_FW_EQ_MNGT_CMD_CIDXFTHRESHO)
+#define G_FW_EQ_MNGT_CMD_CIDXFTHRESHO(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_CIDXFTHRESHO) & M_FW_EQ_MNGT_CMD_CIDXFTHRESHO)
+#define F_FW_EQ_MNGT_CMD_CIDXFTHRESHO	V_FW_EQ_MNGT_CMD_CIDXFTHRESHO(1U)
+
+#define S_FW_EQ_MNGT_CMD_CIDXFTHRESH	16
+#define M_FW_EQ_MNGT_CMD_CIDXFTHRESH	0x7
+#define V_FW_EQ_MNGT_CMD_CIDXFTHRESH(x)	((x) << S_FW_EQ_MNGT_CMD_CIDXFTHRESH)
+#define G_FW_EQ_MNGT_CMD_CIDXFTHRESH(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_CIDXFTHRESH) & M_FW_EQ_MNGT_CMD_CIDXFTHRESH)
+
+#define S_FW_EQ_MNGT_CMD_EQSIZE		0
+#define M_FW_EQ_MNGT_CMD_EQSIZE		0xffff
+#define V_FW_EQ_MNGT_CMD_EQSIZE(x)	((x) << S_FW_EQ_MNGT_CMD_EQSIZE)
+#define G_FW_EQ_MNGT_CMD_EQSIZE(x)	\
+    (((x) >> S_FW_EQ_MNGT_CMD_EQSIZE) & M_FW_EQ_MNGT_CMD_EQSIZE)
+
+struct fw_eq_eth_cmd {
+	__be32 op_to_vfn;
+	__be32 alloc_to_len16;
+	__be32 eqid_pkd;
+	__be32 physeqid_pkd;
+	__be32 fetchszm_to_iqid;
+	__be32 dcaen_to_eqsize;
+	__be64 eqaddr;
+	__be32 viid_pkd;
+	__be32 r8_lo;
+	__be64 r9;
+};
+
+#define S_FW_EQ_ETH_CMD_PFN	8
+#define M_FW_EQ_ETH_CMD_PFN	0x7
+#define V_FW_EQ_ETH_CMD_PFN(x)	((x) << S_FW_EQ_ETH_CMD_PFN)
+#define G_FW_EQ_ETH_CMD_PFN(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_PFN) & M_FW_EQ_ETH_CMD_PFN)
+
+#define S_FW_EQ_ETH_CMD_VFN	0
+#define M_FW_EQ_ETH_CMD_VFN	0xff
+#define V_FW_EQ_ETH_CMD_VFN(x)	((x) << S_FW_EQ_ETH_CMD_VFN)
+#define G_FW_EQ_ETH_CMD_VFN(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_VFN) & M_FW_EQ_ETH_CMD_VFN)
+
+#define S_FW_EQ_ETH_CMD_ALLOC		31
+#define M_FW_EQ_ETH_CMD_ALLOC		0x1
+#define V_FW_EQ_ETH_CMD_ALLOC(x)	((x) << S_FW_EQ_ETH_CMD_ALLOC)
+#define G_FW_EQ_ETH_CMD_ALLOC(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_ALLOC) & M_FW_EQ_ETH_CMD_ALLOC)
+#define F_FW_EQ_ETH_CMD_ALLOC	V_FW_EQ_ETH_CMD_ALLOC(1U)
+
+#define S_FW_EQ_ETH_CMD_FREE	30
+#define M_FW_EQ_ETH_CMD_FREE	0x1
+#define V_FW_EQ_ETH_CMD_FREE(x)	((x) << S_FW_EQ_ETH_CMD_FREE)
+#define G_FW_EQ_ETH_CMD_FREE(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_FREE) & M_FW_EQ_ETH_CMD_FREE)
+#define F_FW_EQ_ETH_CMD_FREE	V_FW_EQ_ETH_CMD_FREE(1U)
+
+#define S_FW_EQ_ETH_CMD_MODIFY		29
+#define M_FW_EQ_ETH_CMD_MODIFY		0x1
+#define V_FW_EQ_ETH_CMD_MODIFY(x)	((x) << S_FW_EQ_ETH_CMD_MODIFY)
+#define G_FW_EQ_ETH_CMD_MODIFY(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_MODIFY) & M_FW_EQ_ETH_CMD_MODIFY)
+#define F_FW_EQ_ETH_CMD_MODIFY	V_FW_EQ_ETH_CMD_MODIFY(1U)
+
+#define S_FW_EQ_ETH_CMD_EQSTART		28
+#define M_FW_EQ_ETH_CMD_EQSTART		0x1
+#define V_FW_EQ_ETH_CMD_EQSTART(x)	((x) << S_FW_EQ_ETH_CMD_EQSTART)
+#define G_FW_EQ_ETH_CMD_EQSTART(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_EQSTART) & M_FW_EQ_ETH_CMD_EQSTART)
+#define F_FW_EQ_ETH_CMD_EQSTART	V_FW_EQ_ETH_CMD_EQSTART(1U)
+
+#define S_FW_EQ_ETH_CMD_EQSTOP		27
+#define M_FW_EQ_ETH_CMD_EQSTOP		0x1
+#define V_FW_EQ_ETH_CMD_EQSTOP(x)	((x) << S_FW_EQ_ETH_CMD_EQSTOP)
+#define G_FW_EQ_ETH_CMD_EQSTOP(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_EQSTOP) & M_FW_EQ_ETH_CMD_EQSTOP)
+#define F_FW_EQ_ETH_CMD_EQSTOP	V_FW_EQ_ETH_CMD_EQSTOP(1U)
+
+#define S_FW_EQ_ETH_CMD_EQID	0
+#define M_FW_EQ_ETH_CMD_EQID	0xfffff
+#define V_FW_EQ_ETH_CMD_EQID(x)	((x) << S_FW_EQ_ETH_CMD_EQID)
+#define G_FW_EQ_ETH_CMD_EQID(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_EQID) & M_FW_EQ_ETH_CMD_EQID)
+
+#define S_FW_EQ_ETH_CMD_PHYSEQID	0
+#define M_FW_EQ_ETH_CMD_PHYSEQID	0xfffff
+#define V_FW_EQ_ETH_CMD_PHYSEQID(x)	((x) << S_FW_EQ_ETH_CMD_PHYSEQID)
+#define G_FW_EQ_ETH_CMD_PHYSEQID(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_PHYSEQID) & M_FW_EQ_ETH_CMD_PHYSEQID)
+
+#define S_FW_EQ_ETH_CMD_FETCHSZM	26
+#define M_FW_EQ_ETH_CMD_FETCHSZM	0x1
+#define V_FW_EQ_ETH_CMD_FETCHSZM(x)	((x) << S_FW_EQ_ETH_CMD_FETCHSZM)
+#define G_FW_EQ_ETH_CMD_FETCHSZM(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_FETCHSZM) & M_FW_EQ_ETH_CMD_FETCHSZM)
+#define F_FW_EQ_ETH_CMD_FETCHSZM	V_FW_EQ_ETH_CMD_FETCHSZM(1U)
+
+#define S_FW_EQ_ETH_CMD_STATUSPGNS	25
+#define M_FW_EQ_ETH_CMD_STATUSPGNS	0x1
+#define V_FW_EQ_ETH_CMD_STATUSPGNS(x)	((x) << S_FW_EQ_ETH_CMD_STATUSPGNS)
+#define G_FW_EQ_ETH_CMD_STATUSPGNS(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_STATUSPGNS) & M_FW_EQ_ETH_CMD_STATUSPGNS)
+#define F_FW_EQ_ETH_CMD_STATUSPGNS	V_FW_EQ_ETH_CMD_STATUSPGNS(1U)
+
+#define S_FW_EQ_ETH_CMD_STATUSPGRO	24
+#define M_FW_EQ_ETH_CMD_STATUSPGRO	0x1
+#define V_FW_EQ_ETH_CMD_STATUSPGRO(x)	((x) << S_FW_EQ_ETH_CMD_STATUSPGRO)
+#define G_FW_EQ_ETH_CMD_STATUSPGRO(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_STATUSPGRO) & M_FW_EQ_ETH_CMD_STATUSPGRO)
+#define F_FW_EQ_ETH_CMD_STATUSPGRO	V_FW_EQ_ETH_CMD_STATUSPGRO(1U)
+
+#define S_FW_EQ_ETH_CMD_FETCHNS		23
+#define M_FW_EQ_ETH_CMD_FETCHNS		0x1
+#define V_FW_EQ_ETH_CMD_FETCHNS(x)	((x) << S_FW_EQ_ETH_CMD_FETCHNS)
+#define G_FW_EQ_ETH_CMD_FETCHNS(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_FETCHNS) & M_FW_EQ_ETH_CMD_FETCHNS)
+#define F_FW_EQ_ETH_CMD_FETCHNS	V_FW_EQ_ETH_CMD_FETCHNS(1U)
+
+#define S_FW_EQ_ETH_CMD_FETCHRO		22
+#define M_FW_EQ_ETH_CMD_FETCHRO		0x1
+#define V_FW_EQ_ETH_CMD_FETCHRO(x)	((x) << S_FW_EQ_ETH_CMD_FETCHRO)
+#define G_FW_EQ_ETH_CMD_FETCHRO(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_FETCHRO) & M_FW_EQ_ETH_CMD_FETCHRO)
+#define F_FW_EQ_ETH_CMD_FETCHRO	V_FW_EQ_ETH_CMD_FETCHRO(1U)
+
+#define S_FW_EQ_ETH_CMD_HOSTFCMODE	20
+#define M_FW_EQ_ETH_CMD_HOSTFCMODE	0x3
+#define V_FW_EQ_ETH_CMD_HOSTFCMODE(x)	((x) << S_FW_EQ_ETH_CMD_HOSTFCMODE)
+#define G_FW_EQ_ETH_CMD_HOSTFCMODE(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_HOSTFCMODE) & M_FW_EQ_ETH_CMD_HOSTFCMODE)
+
+#define S_FW_EQ_ETH_CMD_CPRIO		19
+#define M_FW_EQ_ETH_CMD_CPRIO		0x1
+#define V_FW_EQ_ETH_CMD_CPRIO(x)	((x) << S_FW_EQ_ETH_CMD_CPRIO)
+#define G_FW_EQ_ETH_CMD_CPRIO(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_CPRIO) & M_FW_EQ_ETH_CMD_CPRIO)
+#define F_FW_EQ_ETH_CMD_CPRIO	V_FW_EQ_ETH_CMD_CPRIO(1U)
+
+#define S_FW_EQ_ETH_CMD_ONCHIP		18
+#define M_FW_EQ_ETH_CMD_ONCHIP		0x1
+#define V_FW_EQ_ETH_CMD_ONCHIP(x)	((x) << S_FW_EQ_ETH_CMD_ONCHIP)
+#define G_FW_EQ_ETH_CMD_ONCHIP(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_ONCHIP) & M_FW_EQ_ETH_CMD_ONCHIP)
+#define F_FW_EQ_ETH_CMD_ONCHIP	V_FW_EQ_ETH_CMD_ONCHIP(1U)
+
+#define S_FW_EQ_ETH_CMD_PCIECHN		16
+#define M_FW_EQ_ETH_CMD_PCIECHN		0x3
+#define V_FW_EQ_ETH_CMD_PCIECHN(x)	((x) << S_FW_EQ_ETH_CMD_PCIECHN)
+#define G_FW_EQ_ETH_CMD_PCIECHN(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_PCIECHN) & M_FW_EQ_ETH_CMD_PCIECHN)
+
+#define S_FW_EQ_ETH_CMD_IQID	0
+#define M_FW_EQ_ETH_CMD_IQID	0xffff
+#define V_FW_EQ_ETH_CMD_IQID(x)	((x) << S_FW_EQ_ETH_CMD_IQID)
+#define G_FW_EQ_ETH_CMD_IQID(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_IQID) & M_FW_EQ_ETH_CMD_IQID)
+
+#define S_FW_EQ_ETH_CMD_DCAEN		31
+#define M_FW_EQ_ETH_CMD_DCAEN		0x1
+#define V_FW_EQ_ETH_CMD_DCAEN(x)	((x) << S_FW_EQ_ETH_CMD_DCAEN)
+#define G_FW_EQ_ETH_CMD_DCAEN(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_DCAEN) & M_FW_EQ_ETH_CMD_DCAEN)
+#define F_FW_EQ_ETH_CMD_DCAEN	V_FW_EQ_ETH_CMD_DCAEN(1U)
+
+#define S_FW_EQ_ETH_CMD_DCACPU		26
+#define M_FW_EQ_ETH_CMD_DCACPU		0x1f
+#define V_FW_EQ_ETH_CMD_DCACPU(x)	((x) << S_FW_EQ_ETH_CMD_DCACPU)
+#define G_FW_EQ_ETH_CMD_DCACPU(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_DCACPU) & M_FW_EQ_ETH_CMD_DCACPU)
+
+#define S_FW_EQ_ETH_CMD_FBMIN		23
+#define M_FW_EQ_ETH_CMD_FBMIN		0x7
+#define V_FW_EQ_ETH_CMD_FBMIN(x)	((x) << S_FW_EQ_ETH_CMD_FBMIN)
+#define G_FW_EQ_ETH_CMD_FBMIN(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_FBMIN) & M_FW_EQ_ETH_CMD_FBMIN)
+
+#define S_FW_EQ_ETH_CMD_FBMAX		20
+#define M_FW_EQ_ETH_CMD_FBMAX		0x7
+#define V_FW_EQ_ETH_CMD_FBMAX(x)	((x) << S_FW_EQ_ETH_CMD_FBMAX)
+#define G_FW_EQ_ETH_CMD_FBMAX(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_FBMAX) & M_FW_EQ_ETH_CMD_FBMAX)
+
+#define S_FW_EQ_ETH_CMD_CIDXFTHRESHO	19
+#define M_FW_EQ_ETH_CMD_CIDXFTHRESHO	0x1
+#define V_FW_EQ_ETH_CMD_CIDXFTHRESHO(x)	((x) << S_FW_EQ_ETH_CMD_CIDXFTHRESHO)
+#define G_FW_EQ_ETH_CMD_CIDXFTHRESHO(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_CIDXFTHRESHO) & M_FW_EQ_ETH_CMD_CIDXFTHRESHO)
+#define F_FW_EQ_ETH_CMD_CIDXFTHRESHO	V_FW_EQ_ETH_CMD_CIDXFTHRESHO(1U)
+
+#define S_FW_EQ_ETH_CMD_CIDXFTHRESH	16
+#define M_FW_EQ_ETH_CMD_CIDXFTHRESH	0x7
+#define V_FW_EQ_ETH_CMD_CIDXFTHRESH(x)	((x) << S_FW_EQ_ETH_CMD_CIDXFTHRESH)
+#define G_FW_EQ_ETH_CMD_CIDXFTHRESH(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_CIDXFTHRESH) & M_FW_EQ_ETH_CMD_CIDXFTHRESH)
+
+#define S_FW_EQ_ETH_CMD_EQSIZE		0
+#define M_FW_EQ_ETH_CMD_EQSIZE		0xffff
+#define V_FW_EQ_ETH_CMD_EQSIZE(x)	((x) << S_FW_EQ_ETH_CMD_EQSIZE)
+#define G_FW_EQ_ETH_CMD_EQSIZE(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_EQSIZE) & M_FW_EQ_ETH_CMD_EQSIZE)
+
+#define S_FW_EQ_ETH_CMD_VIID	16
+#define M_FW_EQ_ETH_CMD_VIID	0xfff
+#define V_FW_EQ_ETH_CMD_VIID(x)	((x) << S_FW_EQ_ETH_CMD_VIID)
+#define G_FW_EQ_ETH_CMD_VIID(x)	\
+    (((x) >> S_FW_EQ_ETH_CMD_VIID) & M_FW_EQ_ETH_CMD_VIID)
+
+struct fw_eq_ctrl_cmd {
+	__be32 op_to_vfn;
+	__be32 alloc_to_len16;
+	__be32 cmpliqid_eqid;
+	__be32 physeqid_pkd;
+	__be32 fetchszm_to_iqid;
+	__be32 dcaen_to_eqsize;
+	__be64 eqaddr;
+};
+
+#define S_FW_EQ_CTRL_CMD_PFN	8
+#define M_FW_EQ_CTRL_CMD_PFN	0x7
+#define V_FW_EQ_CTRL_CMD_PFN(x)	((x) << S_FW_EQ_CTRL_CMD_PFN)
+#define G_FW_EQ_CTRL_CMD_PFN(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_PFN) & M_FW_EQ_CTRL_CMD_PFN)
+
+#define S_FW_EQ_CTRL_CMD_VFN	0
+#define M_FW_EQ_CTRL_CMD_VFN	0xff
+#define V_FW_EQ_CTRL_CMD_VFN(x)	((x) << S_FW_EQ_CTRL_CMD_VFN)
+#define G_FW_EQ_CTRL_CMD_VFN(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_VFN) & M_FW_EQ_CTRL_CMD_VFN)
+
+#define S_FW_EQ_CTRL_CMD_ALLOC		31
+#define M_FW_EQ_CTRL_CMD_ALLOC		0x1
+#define V_FW_EQ_CTRL_CMD_ALLOC(x)	((x) << S_FW_EQ_CTRL_CMD_ALLOC)
+#define G_FW_EQ_CTRL_CMD_ALLOC(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_ALLOC) & M_FW_EQ_CTRL_CMD_ALLOC)
+#define F_FW_EQ_CTRL_CMD_ALLOC	V_FW_EQ_CTRL_CMD_ALLOC(1U)
+
+#define S_FW_EQ_CTRL_CMD_FREE		30
+#define M_FW_EQ_CTRL_CMD_FREE		0x1
+#define V_FW_EQ_CTRL_CMD_FREE(x)	((x) << S_FW_EQ_CTRL_CMD_FREE)
+#define G_FW_EQ_CTRL_CMD_FREE(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_FREE) & M_FW_EQ_CTRL_CMD_FREE)
+#define F_FW_EQ_CTRL_CMD_FREE	V_FW_EQ_CTRL_CMD_FREE(1U)
+
+#define S_FW_EQ_CTRL_CMD_MODIFY		29
+#define M_FW_EQ_CTRL_CMD_MODIFY		0x1
+#define V_FW_EQ_CTRL_CMD_MODIFY(x)	((x) << S_FW_EQ_CTRL_CMD_MODIFY)
+#define G_FW_EQ_CTRL_CMD_MODIFY(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_MODIFY) & M_FW_EQ_CTRL_CMD_MODIFY)
+#define F_FW_EQ_CTRL_CMD_MODIFY	V_FW_EQ_CTRL_CMD_MODIFY(1U)
+
+#define S_FW_EQ_CTRL_CMD_EQSTART	28
+#define M_FW_EQ_CTRL_CMD_EQSTART	0x1
+#define V_FW_EQ_CTRL_CMD_EQSTART(x)	((x) << S_FW_EQ_CTRL_CMD_EQSTART)
+#define G_FW_EQ_CTRL_CMD_EQSTART(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_EQSTART) & M_FW_EQ_CTRL_CMD_EQSTART)
+#define F_FW_EQ_CTRL_CMD_EQSTART	V_FW_EQ_CTRL_CMD_EQSTART(1U)
+
+#define S_FW_EQ_CTRL_CMD_EQSTOP		27
+#define M_FW_EQ_CTRL_CMD_EQSTOP		0x1
+#define V_FW_EQ_CTRL_CMD_EQSTOP(x)	((x) << S_FW_EQ_CTRL_CMD_EQSTOP)
+#define G_FW_EQ_CTRL_CMD_EQSTOP(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_EQSTOP) & M_FW_EQ_CTRL_CMD_EQSTOP)
+#define F_FW_EQ_CTRL_CMD_EQSTOP	V_FW_EQ_CTRL_CMD_EQSTOP(1U)
+
+#define S_FW_EQ_CTRL_CMD_CMPLIQID	20
+#define M_FW_EQ_CTRL_CMD_CMPLIQID	0xfff
+#define V_FW_EQ_CTRL_CMD_CMPLIQID(x)	((x) << S_FW_EQ_CTRL_CMD_CMPLIQID)
+#define G_FW_EQ_CTRL_CMD_CMPLIQID(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_CMPLIQID) & M_FW_EQ_CTRL_CMD_CMPLIQID)
+
+#define S_FW_EQ_CTRL_CMD_EQID		0
+#define M_FW_EQ_CTRL_CMD_EQID		0xfffff
+#define V_FW_EQ_CTRL_CMD_EQID(x)	((x) << S_FW_EQ_CTRL_CMD_EQID)
+#define G_FW_EQ_CTRL_CMD_EQID(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_EQID) & M_FW_EQ_CTRL_CMD_EQID)
+
+#define S_FW_EQ_CTRL_CMD_PHYSEQID	0
+#define M_FW_EQ_CTRL_CMD_PHYSEQID	0xfffff
+#define V_FW_EQ_CTRL_CMD_PHYSEQID(x)	((x) << S_FW_EQ_CTRL_CMD_PHYSEQID)
+#define G_FW_EQ_CTRL_CMD_PHYSEQID(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_PHYSEQID) & M_FW_EQ_CTRL_CMD_PHYSEQID)
+
+#define S_FW_EQ_CTRL_CMD_FETCHSZM	26
+#define M_FW_EQ_CTRL_CMD_FETCHSZM	0x1
+#define V_FW_EQ_CTRL_CMD_FETCHSZM(x)	((x) << S_FW_EQ_CTRL_CMD_FETCHSZM)
+#define G_FW_EQ_CTRL_CMD_FETCHSZM(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_FETCHSZM) & M_FW_EQ_CTRL_CMD_FETCHSZM)
+#define F_FW_EQ_CTRL_CMD_FETCHSZM	V_FW_EQ_CTRL_CMD_FETCHSZM(1U)
+
+#define S_FW_EQ_CTRL_CMD_STATUSPGNS	25
+#define M_FW_EQ_CTRL_CMD_STATUSPGNS	0x1
+#define V_FW_EQ_CTRL_CMD_STATUSPGNS(x)	((x) << S_FW_EQ_CTRL_CMD_STATUSPGNS)
+#define G_FW_EQ_CTRL_CMD_STATUSPGNS(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_STATUSPGNS) & M_FW_EQ_CTRL_CMD_STATUSPGNS)
+#define F_FW_EQ_CTRL_CMD_STATUSPGNS	V_FW_EQ_CTRL_CMD_STATUSPGNS(1U)
+
+#define S_FW_EQ_CTRL_CMD_STATUSPGRO	24
+#define M_FW_EQ_CTRL_CMD_STATUSPGRO	0x1
+#define V_FW_EQ_CTRL_CMD_STATUSPGRO(x)	((x) << S_FW_EQ_CTRL_CMD_STATUSPGRO)
+#define G_FW_EQ_CTRL_CMD_STATUSPGRO(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_STATUSPGRO) & M_FW_EQ_CTRL_CMD_STATUSPGRO)
+#define F_FW_EQ_CTRL_CMD_STATUSPGRO	V_FW_EQ_CTRL_CMD_STATUSPGRO(1U)
+
+#define S_FW_EQ_CTRL_CMD_FETCHNS	23
+#define M_FW_EQ_CTRL_CMD_FETCHNS	0x1
+#define V_FW_EQ_CTRL_CMD_FETCHNS(x)	((x) << S_FW_EQ_CTRL_CMD_FETCHNS)
+#define G_FW_EQ_CTRL_CMD_FETCHNS(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_FETCHNS) & M_FW_EQ_CTRL_CMD_FETCHNS)
+#define F_FW_EQ_CTRL_CMD_FETCHNS	V_FW_EQ_CTRL_CMD_FETCHNS(1U)
+
+#define S_FW_EQ_CTRL_CMD_FETCHRO	22
+#define M_FW_EQ_CTRL_CMD_FETCHRO	0x1
+#define V_FW_EQ_CTRL_CMD_FETCHRO(x)	((x) << S_FW_EQ_CTRL_CMD_FETCHRO)
+#define G_FW_EQ_CTRL_CMD_FETCHRO(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_FETCHRO) & M_FW_EQ_CTRL_CMD_FETCHRO)
+#define F_FW_EQ_CTRL_CMD_FETCHRO	V_FW_EQ_CTRL_CMD_FETCHRO(1U)
+
+#define S_FW_EQ_CTRL_CMD_HOSTFCMODE	20
+#define M_FW_EQ_CTRL_CMD_HOSTFCMODE	0x3
+#define V_FW_EQ_CTRL_CMD_HOSTFCMODE(x)	((x) << S_FW_EQ_CTRL_CMD_HOSTFCMODE)
+#define G_FW_EQ_CTRL_CMD_HOSTFCMODE(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_HOSTFCMODE) & M_FW_EQ_CTRL_CMD_HOSTFCMODE)
+
+#define S_FW_EQ_CTRL_CMD_CPRIO		19
+#define M_FW_EQ_CTRL_CMD_CPRIO		0x1
+#define V_FW_EQ_CTRL_CMD_CPRIO(x)	((x) << S_FW_EQ_CTRL_CMD_CPRIO)
+#define G_FW_EQ_CTRL_CMD_CPRIO(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_CPRIO) & M_FW_EQ_CTRL_CMD_CPRIO)
+#define F_FW_EQ_CTRL_CMD_CPRIO	V_FW_EQ_CTRL_CMD_CPRIO(1U)
+
+#define S_FW_EQ_CTRL_CMD_ONCHIP		18
+#define M_FW_EQ_CTRL_CMD_ONCHIP		0x1
+#define V_FW_EQ_CTRL_CMD_ONCHIP(x)	((x) << S_FW_EQ_CTRL_CMD_ONCHIP)
+#define G_FW_EQ_CTRL_CMD_ONCHIP(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_ONCHIP) & M_FW_EQ_CTRL_CMD_ONCHIP)
+#define F_FW_EQ_CTRL_CMD_ONCHIP	V_FW_EQ_CTRL_CMD_ONCHIP(1U)
+
+#define S_FW_EQ_CTRL_CMD_PCIECHN	16
+#define M_FW_EQ_CTRL_CMD_PCIECHN	0x3
+#define V_FW_EQ_CTRL_CMD_PCIECHN(x)	((x) << S_FW_EQ_CTRL_CMD_PCIECHN)
+#define G_FW_EQ_CTRL_CMD_PCIECHN(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_PCIECHN) & M_FW_EQ_CTRL_CMD_PCIECHN)
+
+#define S_FW_EQ_CTRL_CMD_IQID		0
+#define M_FW_EQ_CTRL_CMD_IQID		0xffff
+#define V_FW_EQ_CTRL_CMD_IQID(x)	((x) << S_FW_EQ_CTRL_CMD_IQID)
+#define G_FW_EQ_CTRL_CMD_IQID(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_IQID) & M_FW_EQ_CTRL_CMD_IQID)
+
+#define S_FW_EQ_CTRL_CMD_DCAEN		31
+#define M_FW_EQ_CTRL_CMD_DCAEN		0x1
+#define V_FW_EQ_CTRL_CMD_DCAEN(x)	((x) << S_FW_EQ_CTRL_CMD_DCAEN)
+#define G_FW_EQ_CTRL_CMD_DCAEN(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_DCAEN) & M_FW_EQ_CTRL_CMD_DCAEN)
+#define F_FW_EQ_CTRL_CMD_DCAEN	V_FW_EQ_CTRL_CMD_DCAEN(1U)
+
+#define S_FW_EQ_CTRL_CMD_DCACPU		26
+#define M_FW_EQ_CTRL_CMD_DCACPU		0x1f
+#define V_FW_EQ_CTRL_CMD_DCACPU(x)	((x) << S_FW_EQ_CTRL_CMD_DCACPU)
+#define G_FW_EQ_CTRL_CMD_DCACPU(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_DCACPU) & M_FW_EQ_CTRL_CMD_DCACPU)
+
+#define S_FW_EQ_CTRL_CMD_FBMIN		23
+#define M_FW_EQ_CTRL_CMD_FBMIN		0x7
+#define V_FW_EQ_CTRL_CMD_FBMIN(x)	((x) << S_FW_EQ_CTRL_CMD_FBMIN)
+#define G_FW_EQ_CTRL_CMD_FBMIN(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_FBMIN) & M_FW_EQ_CTRL_CMD_FBMIN)
+
+#define S_FW_EQ_CTRL_CMD_FBMAX		20
+#define M_FW_EQ_CTRL_CMD_FBMAX		0x7
+#define V_FW_EQ_CTRL_CMD_FBMAX(x)	((x) << S_FW_EQ_CTRL_CMD_FBMAX)
+#define G_FW_EQ_CTRL_CMD_FBMAX(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_FBMAX) & M_FW_EQ_CTRL_CMD_FBMAX)
+
+#define S_FW_EQ_CTRL_CMD_CIDXFTHRESHO		19
+#define M_FW_EQ_CTRL_CMD_CIDXFTHRESHO		0x1
+#define V_FW_EQ_CTRL_CMD_CIDXFTHRESHO(x)	\
+    ((x) << S_FW_EQ_CTRL_CMD_CIDXFTHRESHO)
+#define G_FW_EQ_CTRL_CMD_CIDXFTHRESHO(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_CIDXFTHRESHO) & M_FW_EQ_CTRL_CMD_CIDXFTHRESHO)
+#define F_FW_EQ_CTRL_CMD_CIDXFTHRESHO	V_FW_EQ_CTRL_CMD_CIDXFTHRESHO(1U)
+
+#define S_FW_EQ_CTRL_CMD_CIDXFTHRESH	16
+#define M_FW_EQ_CTRL_CMD_CIDXFTHRESH	0x7
+#define V_FW_EQ_CTRL_CMD_CIDXFTHRESH(x)	((x) << S_FW_EQ_CTRL_CMD_CIDXFTHRESH)
+#define G_FW_EQ_CTRL_CMD_CIDXFTHRESH(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_CIDXFTHRESH) & M_FW_EQ_CTRL_CMD_CIDXFTHRESH)
+
+#define S_FW_EQ_CTRL_CMD_EQSIZE		0
+#define M_FW_EQ_CTRL_CMD_EQSIZE		0xffff
+#define V_FW_EQ_CTRL_CMD_EQSIZE(x)	((x) << S_FW_EQ_CTRL_CMD_EQSIZE)
+#define G_FW_EQ_CTRL_CMD_EQSIZE(x)	\
+    (((x) >> S_FW_EQ_CTRL_CMD_EQSIZE) & M_FW_EQ_CTRL_CMD_EQSIZE)
+
+struct fw_eq_ofld_cmd {
+	__be32 op_to_vfn;
+	__be32 alloc_to_len16;
+	__be32 eqid_pkd;
+	__be32 physeqid_pkd;
+	__be32 fetchszm_to_iqid;
+	__be32 dcaen_to_eqsize;
+	__be64 eqaddr;
+};
+
+#define S_FW_EQ_OFLD_CMD_PFN	8
+#define M_FW_EQ_OFLD_CMD_PFN	0x7
+#define V_FW_EQ_OFLD_CMD_PFN(x)	((x) << S_FW_EQ_OFLD_CMD_PFN)
+#define G_FW_EQ_OFLD_CMD_PFN(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_PFN) & M_FW_EQ_OFLD_CMD_PFN)
+
+#define S_FW_EQ_OFLD_CMD_VFN	0
+#define M_FW_EQ_OFLD_CMD_VFN	0xff
+#define V_FW_EQ_OFLD_CMD_VFN(x)	((x) << S_FW_EQ_OFLD_CMD_VFN)
+#define G_FW_EQ_OFLD_CMD_VFN(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_VFN) & M_FW_EQ_OFLD_CMD_VFN)
+
+#define S_FW_EQ_OFLD_CMD_ALLOC		31
+#define M_FW_EQ_OFLD_CMD_ALLOC		0x1
+#define V_FW_EQ_OFLD_CMD_ALLOC(x)	((x) << S_FW_EQ_OFLD_CMD_ALLOC)
+#define G_FW_EQ_OFLD_CMD_ALLOC(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_ALLOC) & M_FW_EQ_OFLD_CMD_ALLOC)
+#define F_FW_EQ_OFLD_CMD_ALLOC	V_FW_EQ_OFLD_CMD_ALLOC(1U)
+
+#define S_FW_EQ_OFLD_CMD_FREE		30
+#define M_FW_EQ_OFLD_CMD_FREE		0x1
+#define V_FW_EQ_OFLD_CMD_FREE(x)	((x) << S_FW_EQ_OFLD_CMD_FREE)
+#define G_FW_EQ_OFLD_CMD_FREE(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_FREE) & M_FW_EQ_OFLD_CMD_FREE)
+#define F_FW_EQ_OFLD_CMD_FREE	V_FW_EQ_OFLD_CMD_FREE(1U)
+
+#define S_FW_EQ_OFLD_CMD_MODIFY		29
+#define M_FW_EQ_OFLD_CMD_MODIFY		0x1
+#define V_FW_EQ_OFLD_CMD_MODIFY(x)	((x) << S_FW_EQ_OFLD_CMD_MODIFY)
+#define G_FW_EQ_OFLD_CMD_MODIFY(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_MODIFY) & M_FW_EQ_OFLD_CMD_MODIFY)
+#define F_FW_EQ_OFLD_CMD_MODIFY	V_FW_EQ_OFLD_CMD_MODIFY(1U)
+
+#define S_FW_EQ_OFLD_CMD_EQSTART	28
+#define M_FW_EQ_OFLD_CMD_EQSTART	0x1
+#define V_FW_EQ_OFLD_CMD_EQSTART(x)	((x) << S_FW_EQ_OFLD_CMD_EQSTART)
+#define G_FW_EQ_OFLD_CMD_EQSTART(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_EQSTART) & M_FW_EQ_OFLD_CMD_EQSTART)
+#define F_FW_EQ_OFLD_CMD_EQSTART	V_FW_EQ_OFLD_CMD_EQSTART(1U)
+
+#define S_FW_EQ_OFLD_CMD_EQSTOP		27
+#define M_FW_EQ_OFLD_CMD_EQSTOP		0x1
+#define V_FW_EQ_OFLD_CMD_EQSTOP(x)	((x) << S_FW_EQ_OFLD_CMD_EQSTOP)
+#define G_FW_EQ_OFLD_CMD_EQSTOP(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_EQSTOP) & M_FW_EQ_OFLD_CMD_EQSTOP)
+#define F_FW_EQ_OFLD_CMD_EQSTOP	V_FW_EQ_OFLD_CMD_EQSTOP(1U)
+
+#define S_FW_EQ_OFLD_CMD_EQID		0
+#define M_FW_EQ_OFLD_CMD_EQID		0xfffff
+#define V_FW_EQ_OFLD_CMD_EQID(x)	((x) << S_FW_EQ_OFLD_CMD_EQID)
+#define G_FW_EQ_OFLD_CMD_EQID(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_EQID) & M_FW_EQ_OFLD_CMD_EQID)
+
+#define S_FW_EQ_OFLD_CMD_PHYSEQID	0
+#define M_FW_EQ_OFLD_CMD_PHYSEQID	0xfffff
+#define V_FW_EQ_OFLD_CMD_PHYSEQID(x)	((x) << S_FW_EQ_OFLD_CMD_PHYSEQID)
+#define G_FW_EQ_OFLD_CMD_PHYSEQID(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_PHYSEQID) & M_FW_EQ_OFLD_CMD_PHYSEQID)
+
+#define S_FW_EQ_OFLD_CMD_FETCHSZM	26
+#define M_FW_EQ_OFLD_CMD_FETCHSZM	0x1
+#define V_FW_EQ_OFLD_CMD_FETCHSZM(x)	((x) << S_FW_EQ_OFLD_CMD_FETCHSZM)
+#define G_FW_EQ_OFLD_CMD_FETCHSZM(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_FETCHSZM) & M_FW_EQ_OFLD_CMD_FETCHSZM)
+#define F_FW_EQ_OFLD_CMD_FETCHSZM	V_FW_EQ_OFLD_CMD_FETCHSZM(1U)
+
+#define S_FW_EQ_OFLD_CMD_STATUSPGNS	25
+#define M_FW_EQ_OFLD_CMD_STATUSPGNS	0x1
+#define V_FW_EQ_OFLD_CMD_STATUSPGNS(x)	((x) << S_FW_EQ_OFLD_CMD_STATUSPGNS)
+#define G_FW_EQ_OFLD_CMD_STATUSPGNS(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_STATUSPGNS) & M_FW_EQ_OFLD_CMD_STATUSPGNS)
+#define F_FW_EQ_OFLD_CMD_STATUSPGNS	V_FW_EQ_OFLD_CMD_STATUSPGNS(1U)
+
+#define S_FW_EQ_OFLD_CMD_STATUSPGRO	24
+#define M_FW_EQ_OFLD_CMD_STATUSPGRO	0x1
+#define V_FW_EQ_OFLD_CMD_STATUSPGRO(x)	((x) << S_FW_EQ_OFLD_CMD_STATUSPGRO)
+#define G_FW_EQ_OFLD_CMD_STATUSPGRO(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_STATUSPGRO) & M_FW_EQ_OFLD_CMD_STATUSPGRO)
+#define F_FW_EQ_OFLD_CMD_STATUSPGRO	V_FW_EQ_OFLD_CMD_STATUSPGRO(1U)
+
+#define S_FW_EQ_OFLD_CMD_FETCHNS	23
+#define M_FW_EQ_OFLD_CMD_FETCHNS	0x1
+#define V_FW_EQ_OFLD_CMD_FETCHNS(x)	((x) << S_FW_EQ_OFLD_CMD_FETCHNS)
+#define G_FW_EQ_OFLD_CMD_FETCHNS(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_FETCHNS) & M_FW_EQ_OFLD_CMD_FETCHNS)
+#define F_FW_EQ_OFLD_CMD_FETCHNS	V_FW_EQ_OFLD_CMD_FETCHNS(1U)
+
+#define S_FW_EQ_OFLD_CMD_FETCHRO	22
+#define M_FW_EQ_OFLD_CMD_FETCHRO	0x1
+#define V_FW_EQ_OFLD_CMD_FETCHRO(x)	((x) << S_FW_EQ_OFLD_CMD_FETCHRO)
+#define G_FW_EQ_OFLD_CMD_FETCHRO(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_FETCHRO) & M_FW_EQ_OFLD_CMD_FETCHRO)
+#define F_FW_EQ_OFLD_CMD_FETCHRO	V_FW_EQ_OFLD_CMD_FETCHRO(1U)
+
+#define S_FW_EQ_OFLD_CMD_HOSTFCMODE	20
+#define M_FW_EQ_OFLD_CMD_HOSTFCMODE	0x3
+#define V_FW_EQ_OFLD_CMD_HOSTFCMODE(x)	((x) << S_FW_EQ_OFLD_CMD_HOSTFCMODE)
+#define G_FW_EQ_OFLD_CMD_HOSTFCMODE(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_HOSTFCMODE) & M_FW_EQ_OFLD_CMD_HOSTFCMODE)
+
+#define S_FW_EQ_OFLD_CMD_CPRIO		19
+#define M_FW_EQ_OFLD_CMD_CPRIO		0x1
+#define V_FW_EQ_OFLD_CMD_CPRIO(x)	((x) << S_FW_EQ_OFLD_CMD_CPRIO)
+#define G_FW_EQ_OFLD_CMD_CPRIO(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_CPRIO) & M_FW_EQ_OFLD_CMD_CPRIO)
+#define F_FW_EQ_OFLD_CMD_CPRIO	V_FW_EQ_OFLD_CMD_CPRIO(1U)
+
+#define S_FW_EQ_OFLD_CMD_ONCHIP		18
+#define M_FW_EQ_OFLD_CMD_ONCHIP		0x1
+#define V_FW_EQ_OFLD_CMD_ONCHIP(x)	((x) << S_FW_EQ_OFLD_CMD_ONCHIP)
+#define G_FW_EQ_OFLD_CMD_ONCHIP(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_ONCHIP) & M_FW_EQ_OFLD_CMD_ONCHIP)
+#define F_FW_EQ_OFLD_CMD_ONCHIP	V_FW_EQ_OFLD_CMD_ONCHIP(1U)
+
+#define S_FW_EQ_OFLD_CMD_PCIECHN	16
+#define M_FW_EQ_OFLD_CMD_PCIECHN	0x3
+#define V_FW_EQ_OFLD_CMD_PCIECHN(x)	((x) << S_FW_EQ_OFLD_CMD_PCIECHN)
+#define G_FW_EQ_OFLD_CMD_PCIECHN(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_PCIECHN) & M_FW_EQ_OFLD_CMD_PCIECHN)
+
+#define S_FW_EQ_OFLD_CMD_IQID		0
+#define M_FW_EQ_OFLD_CMD_IQID		0xffff
+#define V_FW_EQ_OFLD_CMD_IQID(x)	((x) << S_FW_EQ_OFLD_CMD_IQID)
+#define G_FW_EQ_OFLD_CMD_IQID(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_IQID) & M_FW_EQ_OFLD_CMD_IQID)
+
+#define S_FW_EQ_OFLD_CMD_DCAEN		31
+#define M_FW_EQ_OFLD_CMD_DCAEN		0x1
+#define V_FW_EQ_OFLD_CMD_DCAEN(x)	((x) << S_FW_EQ_OFLD_CMD_DCAEN)
+#define G_FW_EQ_OFLD_CMD_DCAEN(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_DCAEN) & M_FW_EQ_OFLD_CMD_DCAEN)
+#define F_FW_EQ_OFLD_CMD_DCAEN	V_FW_EQ_OFLD_CMD_DCAEN(1U)
+
+#define S_FW_EQ_OFLD_CMD_DCACPU		26
+#define M_FW_EQ_OFLD_CMD_DCACPU		0x1f
+#define V_FW_EQ_OFLD_CMD_DCACPU(x)	((x) << S_FW_EQ_OFLD_CMD_DCACPU)
+#define G_FW_EQ_OFLD_CMD_DCACPU(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_DCACPU) & M_FW_EQ_OFLD_CMD_DCACPU)
+
+#define S_FW_EQ_OFLD_CMD_FBMIN		23
+#define M_FW_EQ_OFLD_CMD_FBMIN		0x7
+#define V_FW_EQ_OFLD_CMD_FBMIN(x)	((x) << S_FW_EQ_OFLD_CMD_FBMIN)
+#define G_FW_EQ_OFLD_CMD_FBMIN(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_FBMIN) & M_FW_EQ_OFLD_CMD_FBMIN)
+
+#define S_FW_EQ_OFLD_CMD_FBMAX		20
+#define M_FW_EQ_OFLD_CMD_FBMAX		0x7
+#define V_FW_EQ_OFLD_CMD_FBMAX(x)	((x) << S_FW_EQ_OFLD_CMD_FBMAX)
+#define G_FW_EQ_OFLD_CMD_FBMAX(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_FBMAX) & M_FW_EQ_OFLD_CMD_FBMAX)
+
+#define S_FW_EQ_OFLD_CMD_CIDXFTHRESHO		19
+#define M_FW_EQ_OFLD_CMD_CIDXFTHRESHO		0x1
+#define V_FW_EQ_OFLD_CMD_CIDXFTHRESHO(x)	\
+    ((x) << S_FW_EQ_OFLD_CMD_CIDXFTHRESHO)
+#define G_FW_EQ_OFLD_CMD_CIDXFTHRESHO(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_CIDXFTHRESHO) & M_FW_EQ_OFLD_CMD_CIDXFTHRESHO)
+#define F_FW_EQ_OFLD_CMD_CIDXFTHRESHO	V_FW_EQ_OFLD_CMD_CIDXFTHRESHO(1U)
+
+#define S_FW_EQ_OFLD_CMD_CIDXFTHRESH	16
+#define M_FW_EQ_OFLD_CMD_CIDXFTHRESH	0x7
+#define V_FW_EQ_OFLD_CMD_CIDXFTHRESH(x)	((x) << S_FW_EQ_OFLD_CMD_CIDXFTHRESH)
+#define G_FW_EQ_OFLD_CMD_CIDXFTHRESH(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_CIDXFTHRESH) & M_FW_EQ_OFLD_CMD_CIDXFTHRESH)
+
+#define S_FW_EQ_OFLD_CMD_EQSIZE		0
+#define M_FW_EQ_OFLD_CMD_EQSIZE		0xffff
+#define V_FW_EQ_OFLD_CMD_EQSIZE(x)	((x) << S_FW_EQ_OFLD_CMD_EQSIZE)
+#define G_FW_EQ_OFLD_CMD_EQSIZE(x)	\
+    (((x) >> S_FW_EQ_OFLD_CMD_EQSIZE) & M_FW_EQ_OFLD_CMD_EQSIZE)
+
+/*
+ * Macros for VIID parsing:
+ * VIID - [10:8] PFN, [7] VI Valid, [6:0] VI number
+ */
+#define S_FW_VIID_PFN		8
+#define M_FW_VIID_PFN		0x7
+#define V_FW_VIID_PFN(x)	((x) << S_FW_VIID_PFN)
+#define G_FW_VIID_PFN(x)	(((x) >> S_FW_VIID_PFN) & M_FW_VIID_PFN)
+
+#define S_FW_VIID_VIVLD		7
+#define M_FW_VIID_VIVLD		0x1
+#define V_FW_VIID_VIVLD(x)	((x) << S_FW_VIID_VIVLD)
+#define G_FW_VIID_VIVLD(x)	(((x) >> S_FW_VIID_VIVLD) & M_FW_VIID_VIVLD)
+
+#define S_FW_VIID_VIN		0
+#define M_FW_VIID_VIN		0x7f
+#define V_FW_VIID_VIN(x)	((x) << S_FW_VIID_VIN)
+#define G_FW_VIID_VIN(x)	(((x) >> S_FW_VIID_VIN) & M_FW_VIID_VIN)
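+
+/*
+ * Illustrative sketch only, not part of the firmware interface: pulling a
+ * VIID apart with the accessors above.  A viid of 0x2a3, for example,
+ * decodes to PFN 2, VIVLD 1, VIN 0x23.
+ */
+static inline void fw_viid_decode_example(unsigned int viid,
+					  unsigned int *pfn,
+					  unsigned int *vivld,
+					  unsigned int *vin)
+{
+	*pfn   = G_FW_VIID_PFN(viid);	/* PF number, bits [10:8] */
+	*vivld = G_FW_VIID_VIVLD(viid);	/* VI valid bit, bit [7] */
+	*vin   = G_FW_VIID_VIN(viid);	/* VI number, bits [6:0] */
+}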
+
+struct fw_vi_cmd {
+	__be32 op_to_vfn;
+	__be32 alloc_to_len16;
+	__be16 viid_pkd;
+	__u8   mac[6];
+	__u8   portid_pkd;
+	__u8   nmac;
+	__u8   nmac0[6];
+	__be16 rsssize_pkd;
+	__u8   nmac1[6];
+	__be16 r7;
+	__u8   nmac2[6];
+	__be16 r8;
+	__u8   nmac3[6];
+	__be64 r9;
+	__be64 r10;
+};
+
+#define S_FW_VI_CMD_PFN		8
+#define M_FW_VI_CMD_PFN		0x7
+#define V_FW_VI_CMD_PFN(x)	((x) << S_FW_VI_CMD_PFN)
+#define G_FW_VI_CMD_PFN(x)	(((x) >> S_FW_VI_CMD_PFN) & M_FW_VI_CMD_PFN)
+
+#define S_FW_VI_CMD_VFN		0
+#define M_FW_VI_CMD_VFN		0xff
+#define V_FW_VI_CMD_VFN(x)	((x) << S_FW_VI_CMD_VFN)
+#define G_FW_VI_CMD_VFN(x)	(((x) >> S_FW_VI_CMD_VFN) & M_FW_VI_CMD_VFN)
+
+#define S_FW_VI_CMD_ALLOC	31
+#define M_FW_VI_CMD_ALLOC	0x1
+#define V_FW_VI_CMD_ALLOC(x)	((x) << S_FW_VI_CMD_ALLOC)
+#define G_FW_VI_CMD_ALLOC(x)	\
+    (((x) >> S_FW_VI_CMD_ALLOC) & M_FW_VI_CMD_ALLOC)
+#define F_FW_VI_CMD_ALLOC	V_FW_VI_CMD_ALLOC(1U)
+
+#define S_FW_VI_CMD_FREE	30
+#define M_FW_VI_CMD_FREE	0x1
+#define V_FW_VI_CMD_FREE(x)	((x) << S_FW_VI_CMD_FREE)
+#define G_FW_VI_CMD_FREE(x)	(((x) >> S_FW_VI_CMD_FREE) & M_FW_VI_CMD_FREE)
+#define F_FW_VI_CMD_FREE	V_FW_VI_CMD_FREE(1U)
+
+#define S_FW_VI_CMD_VIID	0
+#define M_FW_VI_CMD_VIID	0xfff
+#define V_FW_VI_CMD_VIID(x)	((x) << S_FW_VI_CMD_VIID)
+#define G_FW_VI_CMD_VIID(x)	(((x) >> S_FW_VI_CMD_VIID) & M_FW_VI_CMD_VIID)
+
+#define S_FW_VI_CMD_PORTID	4
+#define M_FW_VI_CMD_PORTID	0xf
+#define V_FW_VI_CMD_PORTID(x)	((x) << S_FW_VI_CMD_PORTID)
+#define G_FW_VI_CMD_PORTID(x)	\
+    (((x) >> S_FW_VI_CMD_PORTID) & M_FW_VI_CMD_PORTID)
+
+#define S_FW_VI_CMD_RSSSIZE	0
+#define M_FW_VI_CMD_RSSSIZE	0x7ff
+#define V_FW_VI_CMD_RSSSIZE(x)	((x) << S_FW_VI_CMD_RSSSIZE)
+#define G_FW_VI_CMD_RSSSIZE(x)	\
+    (((x) >> S_FW_VI_CMD_RSSSIZE) & M_FW_VI_CMD_RSSSIZE)
+
+/* Special VI_MAC command index ids */
+#define FW_VI_MAC_ADD_MAC		0x3FF
+#define FW_VI_MAC_ADD_PERSIST_MAC	0x3FE
+#define FW_VI_MAC_MAC_BASED_FREE	0x3FD
+#define FW_CLS_TCAM_NUM_ENTRIES		336
+
+enum fw_vi_mac_result {
+	FW_VI_MAC_R_SUCCESS,
+	FW_VI_MAC_R_F_NONEXISTENT_NOMEM,
+	FW_VI_MAC_R_F_CONFL_PERSIST,
+	FW_VI_MAC_R_F_ACL_CHECK
+};
+
+struct fw_vi_mac_cmd {
+	__be32 op_to_viid;
+	__be32 freemacs_to_len16;
+	union fw_vi_mac {
+		struct fw_vi_mac_exact {
+			__be16 valid_to_idx;
+			__u8   macaddr[6];
+		} exact[7];
+		struct fw_vi_mac_hash {
+			__be64 hashvec;
+		} hash;
+	} u;
+};
+
+#define S_FW_VI_MAC_CMD_VIID	0
+#define M_FW_VI_MAC_CMD_VIID	0xfff
+#define V_FW_VI_MAC_CMD_VIID(x)	((x) << S_FW_VI_MAC_CMD_VIID)
+#define G_FW_VI_MAC_CMD_VIID(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_VIID) & M_FW_VI_MAC_CMD_VIID)
+
+#define S_FW_VI_MAC_CMD_FREEMACS	31
+#define M_FW_VI_MAC_CMD_FREEMACS	0x1
+#define V_FW_VI_MAC_CMD_FREEMACS(x)	((x) << S_FW_VI_MAC_CMD_FREEMACS)
+#define G_FW_VI_MAC_CMD_FREEMACS(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_FREEMACS) & M_FW_VI_MAC_CMD_FREEMACS)
+#define F_FW_VI_MAC_CMD_FREEMACS	V_FW_VI_MAC_CMD_FREEMACS(1U)
+
+#define S_FW_VI_MAC_CMD_HASHVECEN	23
+#define M_FW_VI_MAC_CMD_HASHVECEN	0x1
+#define V_FW_VI_MAC_CMD_HASHVECEN(x)	((x) << S_FW_VI_MAC_CMD_HASHVECEN)
+#define G_FW_VI_MAC_CMD_HASHVECEN(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_HASHVECEN) & M_FW_VI_MAC_CMD_HASHVECEN)
+#define F_FW_VI_MAC_CMD_HASHVECEN	V_FW_VI_MAC_CMD_HASHVECEN(1U)
+
+#define S_FW_VI_MAC_CMD_HASHUNIEN	22
+#define M_FW_VI_MAC_CMD_HASHUNIEN	0x1
+#define V_FW_VI_MAC_CMD_HASHUNIEN(x)	((x) << S_FW_VI_MAC_CMD_HASHUNIEN)
+#define G_FW_VI_MAC_CMD_HASHUNIEN(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_HASHUNIEN) & M_FW_VI_MAC_CMD_HASHUNIEN)
+#define F_FW_VI_MAC_CMD_HASHUNIEN	V_FW_VI_MAC_CMD_HASHUNIEN(1U)
+
+#define S_FW_VI_MAC_CMD_VALID		15
+#define M_FW_VI_MAC_CMD_VALID		0x1
+#define V_FW_VI_MAC_CMD_VALID(x)	((x) << S_FW_VI_MAC_CMD_VALID)
+#define G_FW_VI_MAC_CMD_VALID(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_VALID) & M_FW_VI_MAC_CMD_VALID)
+#define F_FW_VI_MAC_CMD_VALID	V_FW_VI_MAC_CMD_VALID(1U)
+
+#define S_FW_VI_MAC_CMD_PRIO	12
+#define M_FW_VI_MAC_CMD_PRIO	0x7
+#define V_FW_VI_MAC_CMD_PRIO(x)	((x) << S_FW_VI_MAC_CMD_PRIO)
+#define G_FW_VI_MAC_CMD_PRIO(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_PRIO) & M_FW_VI_MAC_CMD_PRIO)
+
+#define S_FW_VI_MAC_CMD_RESULT		10
+#define M_FW_VI_MAC_CMD_RESULT		0x3
+#define V_FW_VI_MAC_CMD_RESULT(x)	((x) << S_FW_VI_MAC_CMD_RESULT)
+#define G_FW_VI_MAC_CMD_RESULT(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_RESULT) & M_FW_VI_MAC_CMD_RESULT)
+
+#define S_FW_VI_MAC_CMD_IDX	0
+#define M_FW_VI_MAC_CMD_IDX	0x3ff
+#define V_FW_VI_MAC_CMD_IDX(x)	((x) << S_FW_VI_MAC_CMD_IDX)
+#define G_FW_VI_MAC_CMD_IDX(x)	\
+    (((x) >> S_FW_VI_MAC_CMD_IDX) & M_FW_VI_MAC_CMD_IDX)
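+
+/*
+ * Illustrative sketch only, not part of the firmware interface: the
+ * valid_to_idx word of an exact-match entry.  Using the special index
+ * FW_VI_MAC_ADD_MAC asks the firmware to pick a free exact-match slot.
+ * The value below is host order and would be converted with cpu_to_be16()
+ * before being written into struct fw_vi_mac_exact.
+ */
+static inline unsigned int fw_vi_mac_add_idx_example(void)
+{
+	return F_FW_VI_MAC_CMD_VALID |
+	       V_FW_VI_MAC_CMD_IDX(FW_VI_MAC_ADD_MAC);
+}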
+
+/* T4 max MTU supported */
+#define T4_MAX_MTU_SUPPORTED	9600
+#define FW_RXMODE_MTU_NO_CHG	65535
+
+struct fw_vi_rxmode_cmd {
+	__be32 op_to_viid;
+	__be32 retval_len16;
+	__be32 mtu_to_broadcasten;
+	__be32 r4_lo;
+};
+
+#define S_FW_VI_RXMODE_CMD_VIID		0
+#define M_FW_VI_RXMODE_CMD_VIID		0xfff
+#define V_FW_VI_RXMODE_CMD_VIID(x)	((x) << S_FW_VI_RXMODE_CMD_VIID)
+#define G_FW_VI_RXMODE_CMD_VIID(x)	\
+    (((x) >> S_FW_VI_RXMODE_CMD_VIID) & M_FW_VI_RXMODE_CMD_VIID)
+
+#define S_FW_VI_RXMODE_CMD_MTU		16
+#define M_FW_VI_RXMODE_CMD_MTU		0xffff
+#define V_FW_VI_RXMODE_CMD_MTU(x)	((x) << S_FW_VI_RXMODE_CMD_MTU)
+#define G_FW_VI_RXMODE_CMD_MTU(x)	\
+    (((x) >> S_FW_VI_RXMODE_CMD_MTU) & M_FW_VI_RXMODE_CMD_MTU)
+
+#define S_FW_VI_RXMODE_CMD_PROMISCEN	14
+#define M_FW_VI_RXMODE_CMD_PROMISCEN	0x3
+#define V_FW_VI_RXMODE_CMD_PROMISCEN(x)	((x) << S_FW_VI_RXMODE_CMD_PROMISCEN)
+#define G_FW_VI_RXMODE_CMD_PROMISCEN(x)	\
+    (((x) >> S_FW_VI_RXMODE_CMD_PROMISCEN) & M_FW_VI_RXMODE_CMD_PROMISCEN)
+
+#define S_FW_VI_RXMODE_CMD_ALLMULTIEN		12
+#define M_FW_VI_RXMODE_CMD_ALLMULTIEN		0x3
+#define V_FW_VI_RXMODE_CMD_ALLMULTIEN(x)	\
+    ((x) << S_FW_VI_RXMODE_CMD_ALLMULTIEN)
+#define G_FW_VI_RXMODE_CMD_ALLMULTIEN(x)	\
+    (((x) >> S_FW_VI_RXMODE_CMD_ALLMULTIEN) & M_FW_VI_RXMODE_CMD_ALLMULTIEN)
+
+#define S_FW_VI_RXMODE_CMD_BROADCASTEN		10
+#define M_FW_VI_RXMODE_CMD_BROADCASTEN		0x3
+#define V_FW_VI_RXMODE_CMD_BROADCASTEN(x)	\
+    ((x) << S_FW_VI_RXMODE_CMD_BROADCASTEN)
+#define G_FW_VI_RXMODE_CMD_BROADCASTEN(x)	\
+    (((x) >> S_FW_VI_RXMODE_CMD_BROADCASTEN) & M_FW_VI_RXMODE_CMD_BROADCASTEN)
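+
+/*
+ * Illustrative sketch only, not part of the firmware interface: an
+ * mtu_to_broadcasten word that turns on promiscuous, all-multicast and
+ * broadcast reception while leaving the MTU unchanged.  The value is
+ * host order and would be converted with cpu_to_be32() before being
+ * placed in struct fw_vi_rxmode_cmd.
+ */
+static inline unsigned int fw_vi_rxmode_word_example(void)
+{
+	/* cast keeps the 16-bit MTU field from shifting into the sign bit */
+	return V_FW_VI_RXMODE_CMD_MTU((__u32)FW_RXMODE_MTU_NO_CHG) |
+	       V_FW_VI_RXMODE_CMD_PROMISCEN(1) |
+	       V_FW_VI_RXMODE_CMD_ALLMULTIEN(1) |
+	       V_FW_VI_RXMODE_CMD_BROADCASTEN(1);
+}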
+
+struct fw_vi_enable_cmd {
+	__be32 op_to_viid;
+	__be32 ien_to_len16;
+	__be16 blinkdur;
+	__be16 r3;
+	__be32 r4;
+};
+
+#define S_FW_VI_ENABLE_CMD_VIID		0
+#define M_FW_VI_ENABLE_CMD_VIID		0xfff
+#define V_FW_VI_ENABLE_CMD_VIID(x)	((x) << S_FW_VI_ENABLE_CMD_VIID)
+#define G_FW_VI_ENABLE_CMD_VIID(x)	\
+    (((x) >> S_FW_VI_ENABLE_CMD_VIID) & M_FW_VI_ENABLE_CMD_VIID)
+
+#define S_FW_VI_ENABLE_CMD_IEN		31
+#define M_FW_VI_ENABLE_CMD_IEN		0x1
+#define V_FW_VI_ENABLE_CMD_IEN(x)	((x) << S_FW_VI_ENABLE_CMD_IEN)
+#define G_FW_VI_ENABLE_CMD_IEN(x)	\
+    (((x) >> S_FW_VI_ENABLE_CMD_IEN) & M_FW_VI_ENABLE_CMD_IEN)
+#define F_FW_VI_ENABLE_CMD_IEN	V_FW_VI_ENABLE_CMD_IEN(1U)
+
+#define S_FW_VI_ENABLE_CMD_EEN		30
+#define M_FW_VI_ENABLE_CMD_EEN		0x1
+#define V_FW_VI_ENABLE_CMD_EEN(x)	((x) << S_FW_VI_ENABLE_CMD_EEN)
+#define G_FW_VI_ENABLE_CMD_EEN(x)	\
+    (((x) >> S_FW_VI_ENABLE_CMD_EEN) & M_FW_VI_ENABLE_CMD_EEN)
+#define F_FW_VI_ENABLE_CMD_EEN	V_FW_VI_ENABLE_CMD_EEN(1U)
+
+#define S_FW_VI_ENABLE_CMD_LED		29
+#define M_FW_VI_ENABLE_CMD_LED		0x1
+#define V_FW_VI_ENABLE_CMD_LED(x)	((x) << S_FW_VI_ENABLE_CMD_LED)
+#define G_FW_VI_ENABLE_CMD_LED(x)	\
+    (((x) >> S_FW_VI_ENABLE_CMD_LED) & M_FW_VI_ENABLE_CMD_LED)
+#define F_FW_VI_ENABLE_CMD_LED	V_FW_VI_ENABLE_CMD_LED(1U)
+
+/* VI VF stats offset definitions */
+#define VI_VF_NUM_STATS	16
+enum fw_vi_stats_vf_index {
+	FW_VI_VF_STAT_TX_BCAST_BYTES_IX,
+	FW_VI_VF_STAT_TX_BCAST_FRAMES_IX,
+	FW_VI_VF_STAT_TX_MCAST_BYTES_IX,
+	FW_VI_VF_STAT_TX_MCAST_FRAMES_IX,
+	FW_VI_VF_STAT_TX_UCAST_BYTES_IX,
+	FW_VI_VF_STAT_TX_UCAST_FRAMES_IX,
+	FW_VI_VF_STAT_TX_DROP_FRAMES_IX,
+	FW_VI_VF_STAT_TX_OFLD_BYTES_IX,
+	FW_VI_VF_STAT_TX_OFLD_FRAMES_IX,
+	FW_VI_VF_STAT_RX_BCAST_BYTES_IX,
+	FW_VI_VF_STAT_RX_BCAST_FRAMES_IX,
+	FW_VI_VF_STAT_RX_MCAST_BYTES_IX,
+	FW_VI_VF_STAT_RX_MCAST_FRAMES_IX,
+	FW_VI_VF_STAT_RX_UCAST_BYTES_IX,
+	FW_VI_VF_STAT_RX_UCAST_FRAMES_IX,
+	FW_VI_VF_STAT_RX_ERR_FRAMES_IX
+};
+
+/* VI PF stats offset definitions */
+#define VI_PF_NUM_STATS	17
+enum fw_vi_stats_pf_index {
+	FW_VI_PF_STAT_TX_BCAST_BYTES_IX,
+	FW_VI_PF_STAT_TX_BCAST_FRAMES_IX,
+	FW_VI_PF_STAT_TX_MCAST_BYTES_IX,
+	FW_VI_PF_STAT_TX_MCAST_FRAMES_IX,
+	FW_VI_PF_STAT_TX_UCAST_BYTES_IX,
+	FW_VI_PF_STAT_TX_UCAST_FRAMES_IX,
+	FW_VI_PF_STAT_TX_OFLD_BYTES_IX,
+	FW_VI_PF_STAT_TX_OFLD_FRAMES_IX,
+	FW_VI_PF_STAT_RX_BYTES_IX,
+	FW_VI_PF_STAT_RX_FRAMES_IX,
+	FW_VI_PF_STAT_RX_BCAST_BYTES_IX,
+	FW_VI_PF_STAT_RX_BCAST_FRAMES_IX,
+	FW_VI_PF_STAT_RX_MCAST_BYTES_IX,
+	FW_VI_PF_STAT_RX_MCAST_FRAMES_IX,
+	FW_VI_PF_STAT_RX_UCAST_BYTES_IX,
+	FW_VI_PF_STAT_RX_UCAST_FRAMES_IX,
+	FW_VI_PF_STAT_RX_ERR_FRAMES_IX
+};
+
+struct fw_vi_stats_cmd {
+	__be32 op_to_viid;
+	__be32 retval_len16;
+	union fw_vi_stats {
+		struct fw_vi_stats_ctl {
+			__be16 nstats_ix;
+			__be16 r6;
+			__be32 r7;
+			__be64 stat0;
+			__be64 stat1;
+			__be64 stat2;
+			__be64 stat3;
+			__be64 stat4;
+			__be64 stat5;
+		} ctl;
+		struct fw_vi_stats_pf {
+			__be64 tx_bcast_bytes;
+			__be64 tx_bcast_frames;
+			__be64 tx_mcast_bytes;
+			__be64 tx_mcast_frames;
+			__be64 tx_ucast_bytes;
+			__be64 tx_ucast_frames;
+			__be64 tx_offload_bytes;
+			__be64 tx_offload_frames;
+			__be64 rx_pf_bytes;
+			__be64 rx_pf_frames;
+			__be64 rx_bcast_bytes;
+			__be64 rx_bcast_frames;
+			__be64 rx_mcast_bytes;
+			__be64 rx_mcast_frames;
+			__be64 rx_ucast_bytes;
+			__be64 rx_ucast_frames;
+			__be64 rx_err_frames;
+		} pf;
+		struct fw_vi_stats_vf {
+			__be64 tx_bcast_bytes;
+			__be64 tx_bcast_frames;
+			__be64 tx_mcast_bytes;
+			__be64 tx_mcast_frames;
+			__be64 tx_ucast_bytes;
+			__be64 tx_ucast_frames;
+			__be64 tx_drop_frames;
+			__be64 tx_offload_bytes;
+			__be64 tx_offload_frames;
+			__be64 rx_bcast_bytes;
+			__be64 rx_bcast_frames;
+			__be64 rx_mcast_bytes;
+			__be64 rx_mcast_frames;
+			__be64 rx_ucast_bytes;
+			__be64 rx_ucast_frames;
+			__be64 rx_err_frames;
+		} vf;
+	} u;
+};
+
+#define S_FW_VI_STATS_CMD_VIID		0
+#define M_FW_VI_STATS_CMD_VIID		0xfff
+#define V_FW_VI_STATS_CMD_VIID(x)	((x) << S_FW_VI_STATS_CMD_VIID)
+#define G_FW_VI_STATS_CMD_VIID(x)	\
+    (((x) >> S_FW_VI_STATS_CMD_VIID) & M_FW_VI_STATS_CMD_VIID)
+
+#define S_FW_VI_STATS_CMD_NSTATS	12
+#define M_FW_VI_STATS_CMD_NSTATS	0x7
+#define V_FW_VI_STATS_CMD_NSTATS(x)	((x) << S_FW_VI_STATS_CMD_NSTATS)
+#define G_FW_VI_STATS_CMD_NSTATS(x)	\
+    (((x) >> S_FW_VI_STATS_CMD_NSTATS) & M_FW_VI_STATS_CMD_NSTATS)
+
+#define S_FW_VI_STATS_CMD_IX	0
+#define M_FW_VI_STATS_CMD_IX	0x1f
+#define V_FW_VI_STATS_CMD_IX(x)	((x) << S_FW_VI_STATS_CMD_IX)
+#define G_FW_VI_STATS_CMD_IX(x)	\
+    (((x) >> S_FW_VI_STATS_CMD_IX) & M_FW_VI_STATS_CMD_IX)
+
+struct fw_acl_mac_cmd {
+	__be32 op_to_vfn;
+	__be32 en_to_len16;
+	__u8   nmac;
+	__u8   r3[7];
+	__be16 r4;
+	__u8   macaddr0[6];
+	__be16 r5;
+	__u8   macaddr1[6];
+	__be16 r6;
+	__u8   macaddr2[6];
+	__be16 r7;
+	__u8   macaddr3[6];
+};
+
+#define S_FW_ACL_MAC_CMD_PFN	8
+#define M_FW_ACL_MAC_CMD_PFN	0x7
+#define V_FW_ACL_MAC_CMD_PFN(x)	((x) << S_FW_ACL_MAC_CMD_PFN)
+#define G_FW_ACL_MAC_CMD_PFN(x)	\
+    (((x) >> S_FW_ACL_MAC_CMD_PFN) & M_FW_ACL_MAC_CMD_PFN)
+
+#define S_FW_ACL_MAC_CMD_VFN	0
+#define M_FW_ACL_MAC_CMD_VFN	0xff
+#define V_FW_ACL_MAC_CMD_VFN(x)	((x) << S_FW_ACL_MAC_CMD_VFN)
+#define G_FW_ACL_MAC_CMD_VFN(x)	\
+    (((x) >> S_FW_ACL_MAC_CMD_VFN) & M_FW_ACL_MAC_CMD_VFN)
+
+#define S_FW_ACL_MAC_CMD_EN	31
+#define M_FW_ACL_MAC_CMD_EN	0x1
+#define V_FW_ACL_MAC_CMD_EN(x)	((x) << S_FW_ACL_MAC_CMD_EN)
+#define G_FW_ACL_MAC_CMD_EN(x)	\
+    (((x) >> S_FW_ACL_MAC_CMD_EN) & M_FW_ACL_MAC_CMD_EN)
+#define F_FW_ACL_MAC_CMD_EN	V_FW_ACL_MAC_CMD_EN(1U)
+
+struct fw_acl_vlan_cmd {
+	__be32 op_to_vfn;
+	__be32 en_to_len16;
+	__u8   nvlan;
+	__u8   dropnovlan_fm;
+	__u8   r3_lo[6];
+	__be16 vlanid[16];
+};
+
+#define S_FW_ACL_VLAN_CMD_PFN		8
+#define M_FW_ACL_VLAN_CMD_PFN		0x7
+#define V_FW_ACL_VLAN_CMD_PFN(x)	((x) << S_FW_ACL_VLAN_CMD_PFN)
+#define G_FW_ACL_VLAN_CMD_PFN(x)	\
+    (((x) >> S_FW_ACL_VLAN_CMD_PFN) & M_FW_ACL_VLAN_CMD_PFN)
+
+#define S_FW_ACL_VLAN_CMD_VFN		0
+#define M_FW_ACL_VLAN_CMD_VFN		0xff
+#define V_FW_ACL_VLAN_CMD_VFN(x)	((x) << S_FW_ACL_VLAN_CMD_VFN)
+#define G_FW_ACL_VLAN_CMD_VFN(x)	\
+    (((x) >> S_FW_ACL_VLAN_CMD_VFN) & M_FW_ACL_VLAN_CMD_VFN)
+
+#define S_FW_ACL_VLAN_CMD_EN	31
+#define M_FW_ACL_VLAN_CMD_EN	0x1
+#define V_FW_ACL_VLAN_CMD_EN(x)	((x) << S_FW_ACL_VLAN_CMD_EN)
+#define G_FW_ACL_VLAN_CMD_EN(x)	\
+    (((x) >> S_FW_ACL_VLAN_CMD_EN) & M_FW_ACL_VLAN_CMD_EN)
+#define F_FW_ACL_VLAN_CMD_EN	V_FW_ACL_VLAN_CMD_EN(1U)
+
+#define S_FW_ACL_VLAN_CMD_DROPNOVLAN	7
+#define M_FW_ACL_VLAN_CMD_DROPNOVLAN	0x1
+#define V_FW_ACL_VLAN_CMD_DROPNOVLAN(x)	((x) << S_FW_ACL_VLAN_CMD_DROPNOVLAN)
+#define G_FW_ACL_VLAN_CMD_DROPNOVLAN(x)	\
+    (((x) >> S_FW_ACL_VLAN_CMD_DROPNOVLAN) & M_FW_ACL_VLAN_CMD_DROPNOVLAN)
+#define F_FW_ACL_VLAN_CMD_DROPNOVLAN	V_FW_ACL_VLAN_CMD_DROPNOVLAN(1U)
+
+#define S_FW_ACL_VLAN_CMD_FM	6
+#define M_FW_ACL_VLAN_CMD_FM	0x1
+#define V_FW_ACL_VLAN_CMD_FM(x)	((x) << S_FW_ACL_VLAN_CMD_FM)
+#define G_FW_ACL_VLAN_CMD_FM(x)	\
+    (((x) >> S_FW_ACL_VLAN_CMD_FM) & M_FW_ACL_VLAN_CMD_FM)
+#define F_FW_ACL_VLAN_CMD_FM	V_FW_ACL_VLAN_CMD_FM(1U)
+
+/* port capabilities bitmap */
+enum fw_port_cap {
+	FW_PORT_CAP_SPEED_100M		= 0x0001,
+	FW_PORT_CAP_SPEED_1G		= 0x0002,
+	FW_PORT_CAP_SPEED_2_5G		= 0x0004,
+	FW_PORT_CAP_SPEED_10G		= 0x0008,
+	FW_PORT_CAP_SPEED_40G		= 0x0010,
+	FW_PORT_CAP_SPEED_100G		= 0x0020,
+	FW_PORT_CAP_FC_RX		= 0x0040,
+	FW_PORT_CAP_FC_TX		= 0x0080,
+	FW_PORT_CAP_ANEG		= 0x0100,
+	FW_PORT_CAP_MDI_0		= 0x0200,
+	FW_PORT_CAP_MDI_1		= 0x0400,
+	FW_PORT_CAP_BEAN		= 0x0800,
+	FW_PORT_CAP_PMA_LPBK		= 0x1000,
+	FW_PORT_CAP_PCS_LPBK		= 0x2000,
+	FW_PORT_CAP_PHYXS_LPBK		= 0x4000,
+	FW_PORT_CAP_FAR_END_LPBK	= 0x8000,
+};
+
+enum fw_port_mdi {
+	FW_PORT_MDI_UNCHANGED,
+	FW_PORT_MDI_AUTO,
+	FW_PORT_MDI_F_STRAIGHT,
+	FW_PORT_MDI_F_CROSSOVER
+};
+
+#define S_FW_PORT_MDI		9
+#define M_FW_PORT_MDI		0x3
+#define V_FW_PORT_MDI(x)	(((x) & M_FW_PORT_MDI) << S_FW_PORT_MDI)
+#define G_FW_PORT_MDI(x)	(((x) >> S_FW_PORT_MDI) & M_FW_PORT_MDI)
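+
+/*
+ * Illustrative sketch only, not part of the firmware interface: a link
+ * capability word (e.g. the rcap field of an FW_PORT_ACTION_L1_CFG
+ * request) advertising 1G/10G, pause in both directions, autonegotiation
+ * and automatic MDI/MDI-X selection.  Host order; it would be converted
+ * with cpu_to_be32() before being sent to the firmware.
+ */
+static inline unsigned int fw_port_l1cfg_rcap_example(void)
+{
+	return FW_PORT_CAP_SPEED_1G | FW_PORT_CAP_SPEED_10G |
+	       FW_PORT_CAP_FC_RX | FW_PORT_CAP_FC_TX |
+	       FW_PORT_CAP_ANEG |
+	       V_FW_PORT_MDI(FW_PORT_MDI_AUTO);
+}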
+
+enum fw_port_action {
+	FW_PORT_ACTION_L1_CFG		= 0x0001,
+	FW_PORT_ACTION_L2_CFG		= 0x0002,
+	FW_PORT_ACTION_GET_PORT_INFO	= 0x0003,
+	FW_PORT_ACTION_L2_PPP_CFG	= 0x0004,
+	FW_PORT_ACTION_L2_DCB_CFG	= 0x0005,
+	FW_PORT_ACTION_LOW_PWR_TO_NORMAL = 0x0010,
+	FW_PORT_ACTION_L1_LOW_PWR_EN	= 0x0011,
+	FW_PORT_ACTION_L2_WOL_MODE_EN	= 0x0012,
+	FW_PORT_ACTION_LPBK_TO_NORMAL	= 0x0020,
+	FW_PORT_ACTION_L1_LPBK		= 0x0021,
+	FW_PORT_ACTION_L1_PMA_LPBK	= 0x0022,
+	FW_PORT_ACTION_L1_PCS_LPBK	= 0x0023,
+	FW_PORT_ACTION_L1_PHYXS_CSIDE_LPBK = 0x0024,
+	FW_PORT_ACTION_L1_PHYXS_ESIDE_LPBK = 0x0025,
+	FW_PORT_ACTION_PHY_RESET	= 0x0040,
+	FW_PORT_ACTION_PMA_RESET	= 0x0041,
+	FW_PORT_ACTION_PCS_RESET	= 0x0042,
+	FW_PORT_ACTION_PHYXS_RESET	= 0x0043,
+	FW_PORT_ACTION_DTEXS_REEST	= 0x0044,
+	FW_PORT_ACTION_AN_RESET		= 0x0045
+};
+
+enum fw_port_l2cfg_ctlbf {
+	FW_PORT_L2_CTLBF_OVLAN0	= 0x01,
+	FW_PORT_L2_CTLBF_OVLAN1	= 0x02,
+	FW_PORT_L2_CTLBF_OVLAN2	= 0x04,
+	FW_PORT_L2_CTLBF_OVLAN3	= 0x08,
+	FW_PORT_L2_CTLBF_IVLAN	= 0x10,
+	FW_PORT_L2_CTLBF_TXIPG	= 0x20
+};
+
+enum fw_port_dcb_cfg {
+	FW_PORT_DCB_CFG_PG	= 0x01,
+	FW_PORT_DCB_CFG_PFC	= 0x02,
+	FW_PORT_DCB_CFG_APPL	= 0x04
+};
+
+enum fw_port_dcb_cfg_rc {
+	FW_PORT_DCB_CFG_SUCCESS	= 0x0,
+	FW_PORT_DCB_CFG_ERROR	= 0x1
+};
+
+struct fw_port_cmd {
+	__be32 op_to_portid;
+	__be32 action_to_len16;
+	union fw_port {
+		struct fw_port_l1cfg {
+			__be32 rcap;
+			__be32 r;
+		} l1cfg;
+		struct fw_port_l2cfg {
+			__be16 ctlbf_to_ivlan0;
+			__be16 ivlantype;
+			__be32 txipg_pkd;
+			__be16 ovlan0mask;
+			__be16 ovlan0type;
+			__be16 ovlan1mask;
+			__be16 ovlan1type;
+			__be16 ovlan2mask;
+			__be16 ovlan2type;
+			__be16 ovlan3mask;
+			__be16 ovlan3type;
+		} l2cfg;
+		struct fw_port_info {
+			__be32 lstatus_to_modtype;
+			__be16 pcap;
+			__be16 acap;
+		} info;
+		struct fw_port_ppp {
+			__be32 pppen_to_ncsich;
+			__be32 r11;
+		} ppp;
+		struct fw_port_dcb {
+			__be16 cfg;
+			__u8   up_map;
+			__u8   sf_cfgrc;
+			__be16 prot_ix;
+			__u8   pe7_to_pe0;
+			__u8   numTCPFCs;
+			__be32 pgid0_to_pgid7;
+			__be32 numTCs_oui;
+			__u8   pgpc[8];
+		} dcb;
+	} u;
+};
+
+#define S_FW_PORT_CMD_READ	22
+#define M_FW_PORT_CMD_READ	0x1
+#define V_FW_PORT_CMD_READ(x)	((x) << S_FW_PORT_CMD_READ)
+#define G_FW_PORT_CMD_READ(x)	\
+    (((x) >> S_FW_PORT_CMD_READ) & M_FW_PORT_CMD_READ)
+#define F_FW_PORT_CMD_READ	V_FW_PORT_CMD_READ(1U)
+
+#define S_FW_PORT_CMD_PORTID	0
+#define M_FW_PORT_CMD_PORTID	0xf
+#define V_FW_PORT_CMD_PORTID(x)	((x) << S_FW_PORT_CMD_PORTID)
+#define G_FW_PORT_CMD_PORTID(x)	\
+    (((x) >> S_FW_PORT_CMD_PORTID) & M_FW_PORT_CMD_PORTID)
+
+#define S_FW_PORT_CMD_ACTION	16
+#define M_FW_PORT_CMD_ACTION	0xffff
+#define V_FW_PORT_CMD_ACTION(x)	((x) << S_FW_PORT_CMD_ACTION)
+#define G_FW_PORT_CMD_ACTION(x)	\
+    (((x) >> S_FW_PORT_CMD_ACTION) & M_FW_PORT_CMD_ACTION)
+
+#define S_FW_PORT_CMD_CTLBF	11
+#define M_FW_PORT_CMD_CTLBF	0x1f
+#define V_FW_PORT_CMD_CTLBF(x)	((x) << S_FW_PORT_CMD_CTLBF)
+#define G_FW_PORT_CMD_CTLBF(x)	\
+    (((x) >> S_FW_PORT_CMD_CTLBF) & M_FW_PORT_CMD_CTLBF)
+
+#define S_FW_PORT_CMD_OVLAN3	10
+#define M_FW_PORT_CMD_OVLAN3	0x1
+#define V_FW_PORT_CMD_OVLAN3(x)	((x) << S_FW_PORT_CMD_OVLAN3)
+#define G_FW_PORT_CMD_OVLAN3(x)	\
+    (((x) >> S_FW_PORT_CMD_OVLAN3) & M_FW_PORT_CMD_OVLAN3)
+#define F_FW_PORT_CMD_OVLAN3	V_FW_PORT_CMD_OVLAN3(1U)
+
+#define S_FW_PORT_CMD_OVLAN2	9
+#define M_FW_PORT_CMD_OVLAN2	0x1
+#define V_FW_PORT_CMD_OVLAN2(x)	((x) << S_FW_PORT_CMD_OVLAN2)
+#define G_FW_PORT_CMD_OVLAN2(x)	\
+    (((x) >> S_FW_PORT_CMD_OVLAN2) & M_FW_PORT_CMD_OVLAN2)
+#define F_FW_PORT_CMD_OVLAN2	V_FW_PORT_CMD_OVLAN2(1U)
+
+#define S_FW_PORT_CMD_OVLAN1	8
+#define M_FW_PORT_CMD_OVLAN1	0x1
+#define V_FW_PORT_CMD_OVLAN1(x)	((x) << S_FW_PORT_CMD_OVLAN1)
+#define G_FW_PORT_CMD_OVLAN1(x)	\
+    (((x) >> S_FW_PORT_CMD_OVLAN1) & M_FW_PORT_CMD_OVLAN1)
+#define F_FW_PORT_CMD_OVLAN1	V_FW_PORT_CMD_OVLAN1(1U)
+
+#define S_FW_PORT_CMD_OVLAN0	7
+#define M_FW_PORT_CMD_OVLAN0	0x1
+#define V_FW_PORT_CMD_OVLAN0(x)	((x) << S_FW_PORT_CMD_OVLAN0)
+#define G_FW_PORT_CMD_OVLAN0(x)	\
+    (((x) >> S_FW_PORT_CMD_OVLAN0) & M_FW_PORT_CMD_OVLAN0)
+#define F_FW_PORT_CMD_OVLAN0	V_FW_PORT_CMD_OVLAN0(1U)
+
+#define S_FW_PORT_CMD_IVLAN0	6
+#define M_FW_PORT_CMD_IVLAN0	0x1
+#define V_FW_PORT_CMD_IVLAN0(x)	((x) << S_FW_PORT_CMD_IVLAN0)
+#define G_FW_PORT_CMD_IVLAN0(x)	\
+    (((x) >> S_FW_PORT_CMD_IVLAN0) & M_FW_PORT_CMD_IVLAN0)
+#define F_FW_PORT_CMD_IVLAN0	V_FW_PORT_CMD_IVLAN0(1U)
+
+#define S_FW_PORT_CMD_TXIPG	19
+#define M_FW_PORT_CMD_TXIPG	0x1fff
+#define V_FW_PORT_CMD_TXIPG(x)	((x) << S_FW_PORT_CMD_TXIPG)
+#define G_FW_PORT_CMD_TXIPG(x)	\
+    (((x) >> S_FW_PORT_CMD_TXIPG) & M_FW_PORT_CMD_TXIPG)
+
+#define S_FW_PORT_CMD_LSTATUS		31
+#define M_FW_PORT_CMD_LSTATUS		0x1
+#define V_FW_PORT_CMD_LSTATUS(x)	((x) << S_FW_PORT_CMD_LSTATUS)
+#define G_FW_PORT_CMD_LSTATUS(x)	\
+    (((x) >> S_FW_PORT_CMD_LSTATUS) & M_FW_PORT_CMD_LSTATUS)
+#define F_FW_PORT_CMD_LSTATUS	V_FW_PORT_CMD_LSTATUS(1U)
+
+#define S_FW_PORT_CMD_LSPEED	24
+#define M_FW_PORT_CMD_LSPEED	0x3f
+#define V_FW_PORT_CMD_LSPEED(x)	((x) << S_FW_PORT_CMD_LSPEED)
+#define G_FW_PORT_CMD_LSPEED(x)	\
+    (((x) >> S_FW_PORT_CMD_LSPEED) & M_FW_PORT_CMD_LSPEED)
+
+#define S_FW_PORT_CMD_TXPAUSE		23
+#define M_FW_PORT_CMD_TXPAUSE		0x1
+#define V_FW_PORT_CMD_TXPAUSE(x)	((x) << S_FW_PORT_CMD_TXPAUSE)
+#define G_FW_PORT_CMD_TXPAUSE(x)	\
+    (((x) >> S_FW_PORT_CMD_TXPAUSE) & M_FW_PORT_CMD_TXPAUSE)
+#define F_FW_PORT_CMD_TXPAUSE	V_FW_PORT_CMD_TXPAUSE(1U)
+
+#define S_FW_PORT_CMD_RXPAUSE		22
+#define M_FW_PORT_CMD_RXPAUSE		0x1
+#define V_FW_PORT_CMD_RXPAUSE(x)	((x) << S_FW_PORT_CMD_RXPAUSE)
+#define G_FW_PORT_CMD_RXPAUSE(x)	\
+    (((x) >> S_FW_PORT_CMD_RXPAUSE) & M_FW_PORT_CMD_RXPAUSE)
+#define F_FW_PORT_CMD_RXPAUSE	V_FW_PORT_CMD_RXPAUSE(1U)
+
+#define S_FW_PORT_CMD_MDIOCAP		21
+#define M_FW_PORT_CMD_MDIOCAP		0x1
+#define V_FW_PORT_CMD_MDIOCAP(x)	((x) << S_FW_PORT_CMD_MDIOCAP)
+#define G_FW_PORT_CMD_MDIOCAP(x)	\
+    (((x) >> S_FW_PORT_CMD_MDIOCAP) & M_FW_PORT_CMD_MDIOCAP)
+#define F_FW_PORT_CMD_MDIOCAP	V_FW_PORT_CMD_MDIOCAP(1U)
+
+#define S_FW_PORT_CMD_MDIOADDR		16
+#define M_FW_PORT_CMD_MDIOADDR		0x1f
+#define V_FW_PORT_CMD_MDIOADDR(x)	((x) << S_FW_PORT_CMD_MDIOADDR)
+#define G_FW_PORT_CMD_MDIOADDR(x)	\
+    (((x) >> S_FW_PORT_CMD_MDIOADDR) & M_FW_PORT_CMD_MDIOADDR)
+
+#define S_FW_PORT_CMD_LPTXPAUSE		15
+#define M_FW_PORT_CMD_LPTXPAUSE		0x1
+#define V_FW_PORT_CMD_LPTXPAUSE(x)	((x) << S_FW_PORT_CMD_LPTXPAUSE)
+#define G_FW_PORT_CMD_LPTXPAUSE(x)	\
+    (((x) >> S_FW_PORT_CMD_LPTXPAUSE) & M_FW_PORT_CMD_LPTXPAUSE)
+#define F_FW_PORT_CMD_LPTXPAUSE	V_FW_PORT_CMD_LPTXPAUSE(1U)
+
+#define S_FW_PORT_CMD_LPRXPAUSE		14
+#define M_FW_PORT_CMD_LPRXPAUSE		0x1
+#define V_FW_PORT_CMD_LPRXPAUSE(x)	((x) << S_FW_PORT_CMD_LPRXPAUSE)
+#define G_FW_PORT_CMD_LPRXPAUSE(x)	\
+    (((x) >> S_FW_PORT_CMD_LPRXPAUSE) & M_FW_PORT_CMD_LPRXPAUSE)
+#define F_FW_PORT_CMD_LPRXPAUSE	V_FW_PORT_CMD_LPRXPAUSE(1U)
+
+#define S_FW_PORT_CMD_PTYPE	8
+#define M_FW_PORT_CMD_PTYPE	0x1f
+#define V_FW_PORT_CMD_PTYPE(x)	((x) << S_FW_PORT_CMD_PTYPE)
+#define G_FW_PORT_CMD_PTYPE(x)	\
+    (((x) >> S_FW_PORT_CMD_PTYPE) & M_FW_PORT_CMD_PTYPE)
+
+#define S_FW_PORT_CMD_MODTYPE		0
+#define M_FW_PORT_CMD_MODTYPE		0x1f
+#define V_FW_PORT_CMD_MODTYPE(x)	((x) << S_FW_PORT_CMD_MODTYPE)
+#define G_FW_PORT_CMD_MODTYPE(x)	\
+    (((x) >> S_FW_PORT_CMD_MODTYPE) & M_FW_PORT_CMD_MODTYPE)
+
+#define S_FW_PORT_CMD_PPPEN	31
+#define M_FW_PORT_CMD_PPPEN	0x1
+#define V_FW_PORT_CMD_PPPEN(x)	((x) << S_FW_PORT_CMD_PPPEN)
+#define G_FW_PORT_CMD_PPPEN(x)	\
+    (((x) >> S_FW_PORT_CMD_PPPEN) & M_FW_PORT_CMD_PPPEN)
+#define F_FW_PORT_CMD_PPPEN	V_FW_PORT_CMD_PPPEN(1U)
+
+#define S_FW_PORT_CMD_TPSRC	28
+#define M_FW_PORT_CMD_TPSRC	0x3
+#define V_FW_PORT_CMD_TPSRC(x)	((x) << S_FW_PORT_CMD_TPSRC)
+#define G_FW_PORT_CMD_TPSRC(x)	\
+    (((x) >> S_FW_PORT_CMD_TPSRC) & M_FW_PORT_CMD_TPSRC)
+
+#define S_FW_PORT_CMD_NCSISRC		24
+#define M_FW_PORT_CMD_NCSISRC		0x3
+#define V_FW_PORT_CMD_NCSISRC(x)	((x) << S_FW_PORT_CMD_NCSISRC)
+#define G_FW_PORT_CMD_NCSISRC(x)	\
+    (((x) >> S_FW_PORT_CMD_NCSISRC) & M_FW_PORT_CMD_NCSISRC)
+
+#define S_FW_PORT_CMD_CH0	20
+#define M_FW_PORT_CMD_CH0	0x7
+#define V_FW_PORT_CMD_CH0(x)	((x) << S_FW_PORT_CMD_CH0)
+#define G_FW_PORT_CMD_CH0(x)	\
+    (((x) >> S_FW_PORT_CMD_CH0) & M_FW_PORT_CMD_CH0)
+
+#define S_FW_PORT_CMD_CH1	16
+#define M_FW_PORT_CMD_CH1	0x7
+#define V_FW_PORT_CMD_CH1(x)	((x) << S_FW_PORT_CMD_CH1)
+#define G_FW_PORT_CMD_CH1(x)	\
+    (((x) >> S_FW_PORT_CMD_CH1) & M_FW_PORT_CMD_CH1)
+
+#define S_FW_PORT_CMD_CH2	12
+#define M_FW_PORT_CMD_CH2	0x7
+#define V_FW_PORT_CMD_CH2(x)	((x) << S_FW_PORT_CMD_CH2)
+#define G_FW_PORT_CMD_CH2(x)	\
+    (((x) >> S_FW_PORT_CMD_CH2) & M_FW_PORT_CMD_CH2)
+
+#define S_FW_PORT_CMD_CH3	8
+#define M_FW_PORT_CMD_CH3	0x7
+#define V_FW_PORT_CMD_CH3(x)	((x) << S_FW_PORT_CMD_CH3)
+#define G_FW_PORT_CMD_CH3(x)	\
+    (((x) >> S_FW_PORT_CMD_CH3) & M_FW_PORT_CMD_CH3)
+
+#define S_FW_PORT_CMD_NCSICH	4
+#define M_FW_PORT_CMD_NCSICH	0x7
+#define V_FW_PORT_CMD_NCSICH(x)	((x) << S_FW_PORT_CMD_NCSICH)
+#define G_FW_PORT_CMD_NCSICH(x)	\
+    (((x) >> S_FW_PORT_CMD_NCSICH) & M_FW_PORT_CMD_NCSICH)
+
+#define S_FW_PORT_CMD_SF	6
+#define M_FW_PORT_CMD_SF	0x3
+#define V_FW_PORT_CMD_SF(x)	((x) << S_FW_PORT_CMD_SF)
+#define G_FW_PORT_CMD_SF(x)	(((x) >> S_FW_PORT_CMD_SF) & M_FW_PORT_CMD_SF)
+
+#define S_FW_PORT_CMD_CFGRC	0
+#define M_FW_PORT_CMD_CFGRC	0x3f
+#define V_FW_PORT_CMD_CFGRC(x)	((x) << S_FW_PORT_CMD_CFGRC)
+#define G_FW_PORT_CMD_CFGRC(x)	\
+    (((x) >> S_FW_PORT_CMD_CFGRC) & M_FW_PORT_CMD_CFGRC)
+
+#define S_FW_PORT_CMD_PE7	7
+#define M_FW_PORT_CMD_PE7	0x1
+#define V_FW_PORT_CMD_PE7(x)	((x) << S_FW_PORT_CMD_PE7)
+#define G_FW_PORT_CMD_PE7(x)	\
+    (((x) >> S_FW_PORT_CMD_PE7) & M_FW_PORT_CMD_PE7)
+#define F_FW_PORT_CMD_PE7	V_FW_PORT_CMD_PE7(1U)
+
+#define S_FW_PORT_CMD_PE6	6
+#define M_FW_PORT_CMD_PE6	0x1
+#define V_FW_PORT_CMD_PE6(x)	((x) << S_FW_PORT_CMD_PE6)
+#define G_FW_PORT_CMD_PE6(x)	\
+    (((x) >> S_FW_PORT_CMD_PE6) & M_FW_PORT_CMD_PE6)
+#define F_FW_PORT_CMD_PE6	V_FW_PORT_CMD_PE6(1U)
+
+#define S_FW_PORT_CMD_PE5	5
+#define M_FW_PORT_CMD_PE5	0x1
+#define V_FW_PORT_CMD_PE5(x)	((x) << S_FW_PORT_CMD_PE5)
+#define G_FW_PORT_CMD_PE5(x)	\
+    (((x) >> S_FW_PORT_CMD_PE5) & M_FW_PORT_CMD_PE5)
+#define F_FW_PORT_CMD_PE5	V_FW_PORT_CMD_PE5(1U)
+
+#define S_FW_PORT_CMD_PE4	4
+#define M_FW_PORT_CMD_PE4	0x1
+#define V_FW_PORT_CMD_PE4(x)	((x) << S_FW_PORT_CMD_PE4)
+#define G_FW_PORT_CMD_PE4(x)	\
+    (((x) >> S_FW_PORT_CMD_PE4) & M_FW_PORT_CMD_PE4)
+#define F_FW_PORT_CMD_PE4	V_FW_PORT_CMD_PE4(1U)
+
+#define S_FW_PORT_CMD_PE3	3
+#define M_FW_PORT_CMD_PE3	0x1
+#define V_FW_PORT_CMD_PE3(x)	((x) << S_FW_PORT_CMD_PE3)
+#define G_FW_PORT_CMD_PE3(x)	\
+    (((x) >> S_FW_PORT_CMD_PE3) & M_FW_PORT_CMD_PE3)
+#define F_FW_PORT_CMD_PE3	V_FW_PORT_CMD_PE3(1U)
+
+#define S_FW_PORT_CMD_PE2	2
+#define M_FW_PORT_CMD_PE2	0x1
+#define V_FW_PORT_CMD_PE2(x)	((x) << S_FW_PORT_CMD_PE2)
+#define G_FW_PORT_CMD_PE2(x)	\
+    (((x) >> S_FW_PORT_CMD_PE2) & M_FW_PORT_CMD_PE2)
+#define F_FW_PORT_CMD_PE2	V_FW_PORT_CMD_PE2(1U)
+
+#define S_FW_PORT_CMD_PE1	1
+#define M_FW_PORT_CMD_PE1	0x1
+#define V_FW_PORT_CMD_PE1(x)	((x) << S_FW_PORT_CMD_PE1)
+#define G_FW_PORT_CMD_PE1(x)	\
+    (((x) >> S_FW_PORT_CMD_PE1) & M_FW_PORT_CMD_PE1)
+#define F_FW_PORT_CMD_PE1	V_FW_PORT_CMD_PE1(1U)
+
+#define S_FW_PORT_CMD_PE0	0
+#define M_FW_PORT_CMD_PE0	0x1
+#define V_FW_PORT_CMD_PE0(x)	((x) << S_FW_PORT_CMD_PE0)
+#define G_FW_PORT_CMD_PE0(x)	\
+    (((x) >> S_FW_PORT_CMD_PE0) & M_FW_PORT_CMD_PE0)
+#define F_FW_PORT_CMD_PE0	V_FW_PORT_CMD_PE0(1U)
+
+#define S_FW_PORT_CMD_PGID0	28
+#define M_FW_PORT_CMD_PGID0	0xf
+#define V_FW_PORT_CMD_PGID0(x)	((x) << S_FW_PORT_CMD_PGID0)
+#define G_FW_PORT_CMD_PGID0(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID0) & M_FW_PORT_CMD_PGID0)
+
+#define S_FW_PORT_CMD_PGID1	24
+#define M_FW_PORT_CMD_PGID1	0xf
+#define V_FW_PORT_CMD_PGID1(x)	((x) << S_FW_PORT_CMD_PGID1)
+#define G_FW_PORT_CMD_PGID1(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID1) & M_FW_PORT_CMD_PGID1)
+
+#define S_FW_PORT_CMD_PGID2	20
+#define M_FW_PORT_CMD_PGID2	0xf
+#define V_FW_PORT_CMD_PGID2(x)	((x) << S_FW_PORT_CMD_PGID2)
+#define G_FW_PORT_CMD_PGID2(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID2) & M_FW_PORT_CMD_PGID2)
+
+#define S_FW_PORT_CMD_PGID3	16
+#define M_FW_PORT_CMD_PGID3	0xf
+#define V_FW_PORT_CMD_PGID3(x)	((x) << S_FW_PORT_CMD_PGID3)
+#define G_FW_PORT_CMD_PGID3(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID3) & M_FW_PORT_CMD_PGID3)
+
+#define S_FW_PORT_CMD_PGID4	12
+#define M_FW_PORT_CMD_PGID4	0xf
+#define V_FW_PORT_CMD_PGID4(x)	((x) << S_FW_PORT_CMD_PGID4)
+#define G_FW_PORT_CMD_PGID4(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID4) & M_FW_PORT_CMD_PGID4)
+
+#define S_FW_PORT_CMD_PGID5	8
+#define M_FW_PORT_CMD_PGID5	0xf
+#define V_FW_PORT_CMD_PGID5(x)	((x) << S_FW_PORT_CMD_PGID5)
+#define G_FW_PORT_CMD_PGID5(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID5) & M_FW_PORT_CMD_PGID5)
+
+#define S_FW_PORT_CMD_PGID6	4
+#define M_FW_PORT_CMD_PGID6	0xf
+#define V_FW_PORT_CMD_PGID6(x)	((x) << S_FW_PORT_CMD_PGID6)
+#define G_FW_PORT_CMD_PGID6(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID6) & M_FW_PORT_CMD_PGID6)
+
+#define S_FW_PORT_CMD_PGID7	0
+#define M_FW_PORT_CMD_PGID7	0xf
+#define V_FW_PORT_CMD_PGID7(x)	((x) << S_FW_PORT_CMD_PGID7)
+#define G_FW_PORT_CMD_PGID7(x)	\
+    (((x) >> S_FW_PORT_CMD_PGID7) & M_FW_PORT_CMD_PGID7)
+
+#define S_FW_PORT_CMD_NUMTCS	24
+#define M_FW_PORT_CMD_NUMTCS	0xff
+#define V_FW_PORT_CMD_NUMTCS(x)	((x) << S_FW_PORT_CMD_NUMTCS)
+#define G_FW_PORT_CMD_NUMTCS(x)	\
+    (((x) >> S_FW_PORT_CMD_NUMTCS) & M_FW_PORT_CMD_NUMTCS)
+
+#define S_FW_PORT_CMD_OUI	0
+#define M_FW_PORT_CMD_OUI	0xffffff
+#define V_FW_PORT_CMD_OUI(x)	((x) << S_FW_PORT_CMD_OUI)
+#define G_FW_PORT_CMD_OUI(x)	\
+    (((x) >> S_FW_PORT_CMD_OUI) & M_FW_PORT_CMD_OUI)
+
+enum fw_port_type {
+	FW_PORT_TYPE_FIBER,
+	FW_PORT_TYPE_KX4,
+	FW_PORT_TYPE_BT_SGMII,
+	FW_PORT_TYPE_KX,
+	FW_PORT_TYPE_BT_XAUI,
+	FW_PORT_TYPE_KR,
+	FW_PORT_TYPE_CX4,
+	FW_PORT_TYPE_TWINAX,
+
+	FW_PORT_TYPE_NONE = M_FW_PORT_CMD_PTYPE
+};
+
+enum fw_port_module_type {
+	FW_PORT_MOD_TYPE_NA,
+	FW_PORT_MOD_TYPE_LR,
+	FW_PORT_MOD_TYPE_SR,
+	FW_PORT_MOD_TYPE_ER,
+
+	FW_PORT_MOD_TYPE_NONE = M_FW_PORT_CMD_MODTYPE
+};
+
+/* port stats */
+#define FW_NUM_PORT_STATS 50
+#define FW_NUM_PORT_TX_STATS 23
+#define FW_NUM_PORT_RX_STATS 27
+
+enum fw_port_stats_tx_index {
+	FW_STAT_TX_PORT_BYTES_IX,
+	FW_STAT_TX_PORT_FRAMES_IX,
+	FW_STAT_TX_PORT_BCAST_IX,
+	FW_STAT_TX_PORT_MCAST_IX,
+	FW_STAT_TX_PORT_UCAST_IX,
+	FW_STAT_TX_PORT_ERROR_IX,
+	FW_STAT_TX_PORT_64B_IX,
+	FW_STAT_TX_PORT_65B_127B_IX,
+	FW_STAT_TX_PORT_128B_255B_IX,
+	FW_STAT_TX_PORT_256B_511B_IX,
+	FW_STAT_TX_PORT_512B_1023B_IX,
+	FW_STAT_TX_PORT_1024B_1518B_IX,
+	FW_STAT_TX_PORT_1519B_MAX_IX,
+	FW_STAT_TX_PORT_DROP_IX,
+	FW_STAT_TX_PORT_PAUSE_IX,
+	FW_STAT_TX_PORT_PPP0_IX,
+	FW_STAT_TX_PORT_PPP1_IX,
+	FW_STAT_TX_PORT_PPP2_IX,
+	FW_STAT_TX_PORT_PPP3_IX,
+	FW_STAT_TX_PORT_PPP4_IX,
+	FW_STAT_TX_PORT_PPP5_IX,
+	FW_STAT_TX_PORT_PPP6_IX,
+	FW_STAT_TX_PORT_PPP7_IX
+};
+
+enum fw_port_stat_rx_index {
+	FW_STAT_RX_PORT_BYTES_IX,
+	FW_STAT_RX_PORT_FRAMES_IX,
+	FW_STAT_RX_PORT_BCAST_IX,
+	FW_STAT_RX_PORT_MCAST_IX,
+	FW_STAT_RX_PORT_UCAST_IX,
+	FW_STAT_RX_PORT_MTU_ERROR_IX,
+	FW_STAT_RX_PORT_MTU_CRC_ERROR_IX,
+	FW_STAT_RX_PORT_CRC_ERROR_IX,
+	FW_STAT_RX_PORT_LEN_ERROR_IX,
+	FW_STAT_RX_PORT_SYM_ERROR_IX,
+	FW_STAT_RX_PORT_64B_IX,
+	FW_STAT_RX_PORT_65B_127B_IX,
+	FW_STAT_RX_PORT_128B_255B_IX,
+	FW_STAT_RX_PORT_256B_511B_IX,
+	FW_STAT_RX_PORT_512B_1023B_IX,
+	FW_STAT_RX_PORT_1024B_1518B_IX,
+	FW_STAT_RX_PORT_1519B_MAX_IX,
+	FW_STAT_RX_PORT_PAUSE_IX,
+	FW_STAT_RX_PORT_PPP0_IX,
+	FW_STAT_RX_PORT_PPP1_IX,
+	FW_STAT_RX_PORT_PPP2_IX,
+	FW_STAT_RX_PORT_PPP3_IX,
+	FW_STAT_RX_PORT_PPP4_IX,
+	FW_STAT_RX_PORT_PPP5_IX,
+	FW_STAT_RX_PORT_PPP6_IX,
+	FW_STAT_RX_PORT_PPP7_IX,
+	FW_STAT_RX_PORT_LESS_64B_IX
+};
+
+struct fw_port_stats_cmd {
+	__be32 op_to_portid;
+	__be32 retval_len16;
+	union fw_port_stats {
+		struct fw_port_stats_ctl {
+			__u8   nstats_bg_bm;
+			__u8   tx_ix;
+			__be16 r6;
+			__be32 r7;
+			__be64 stat0;
+			__be64 stat1;
+			__be64 stat2;
+			__be64 stat3;
+			__be64 stat4;
+			__be64 stat5;
+		} ctl;
+		struct fw_port_stats_all {
+			__be64 tx_bytes;
+			__be64 tx_frames;
+			__be64 tx_bcast;
+			__be64 tx_mcast;
+			__be64 tx_ucast;
+			__be64 tx_error;
+			__be64 tx_64b;
+			__be64 tx_65b_127b;
+			__be64 tx_128b_255b;
+			__be64 tx_256b_511b;
+			__be64 tx_512b_1023b;
+			__be64 tx_1024b_1518b;
+			__be64 tx_1519b_max;
+			__be64 tx_drop;
+			__be64 tx_pause;
+			__be64 tx_ppp0;
+			__be64 tx_ppp1;
+			__be64 tx_ppp2;
+			__be64 tx_ppp3;
+			__be64 tx_ppp4;
+			__be64 tx_ppp5;
+			__be64 tx_ppp6;
+			__be64 tx_ppp7;
+			__be64 rx_bytes;
+			__be64 rx_frames;
+			__be64 rx_bcast;
+			__be64 rx_mcast;
+			__be64 rx_ucast;
+			__be64 rx_mtu_error;
+			__be64 rx_mtu_crc_error;
+			__be64 rx_crc_error;
+			__be64 rx_len_error;
+			__be64 rx_sym_error;
+			__be64 rx_64b;
+			__be64 rx_65b_127b;
+			__be64 rx_128b_255b;
+			__be64 rx_256b_511b;
+			__be64 rx_512b_1023b;
+			__be64 rx_1024b_1518b;
+			__be64 rx_1519b_max;
+			__be64 rx_pause;
+			__be64 rx_ppp0;
+			__be64 rx_ppp1;
+			__be64 rx_ppp2;
+			__be64 rx_ppp3;
+			__be64 rx_ppp4;
+			__be64 rx_ppp5;
+			__be64 rx_ppp6;
+			__be64 rx_ppp7;
+			__be64 rx_less_64b;
+			__be64 rx_bg_drop;
+			__be64 rx_bg_trunc;
+		} all;
+	} u;
+};
+
+#define S_FW_PORT_STATS_CMD_NSTATS	4
+#define M_FW_PORT_STATS_CMD_NSTATS	0x7
+#define V_FW_PORT_STATS_CMD_NSTATS(x)	((x) << S_FW_PORT_STATS_CMD_NSTATS)
+#define G_FW_PORT_STATS_CMD_NSTATS(x)	\
+    (((x) >> S_FW_PORT_STATS_CMD_NSTATS) & M_FW_PORT_STATS_CMD_NSTATS)
+
+#define S_FW_PORT_STATS_CMD_BG_BM	0
+#define M_FW_PORT_STATS_CMD_BG_BM	0x3
+#define V_FW_PORT_STATS_CMD_BG_BM(x)	((x) << S_FW_PORT_STATS_CMD_BG_BM)
+#define G_FW_PORT_STATS_CMD_BG_BM(x)	\
+    (((x) >> S_FW_PORT_STATS_CMD_BG_BM) & M_FW_PORT_STATS_CMD_BG_BM)
+
+#define S_FW_PORT_STATS_CMD_TX		7
+#define M_FW_PORT_STATS_CMD_TX		0x1
+#define V_FW_PORT_STATS_CMD_TX(x)	((x) << S_FW_PORT_STATS_CMD_TX)
+#define G_FW_PORT_STATS_CMD_TX(x)	\
+    (((x) >> S_FW_PORT_STATS_CMD_TX) & M_FW_PORT_STATS_CMD_TX)
+#define F_FW_PORT_STATS_CMD_TX	V_FW_PORT_STATS_CMD_TX(1U)
+
+#define S_FW_PORT_STATS_CMD_IX		0
+#define M_FW_PORT_STATS_CMD_IX		0x3f
+#define V_FW_PORT_STATS_CMD_IX(x)	((x) << S_FW_PORT_STATS_CMD_IX)
+#define G_FW_PORT_STATS_CMD_IX(x)	\
+    (((x) >> S_FW_PORT_STATS_CMD_IX) & M_FW_PORT_STATS_CMD_IX)
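A hypothetical helper (not part of the patch, interpretation inferred from the macros above): the ctl variant appears to return a window of up to six counters in stat0..stat5, with NSTATS giving the count, TX selecting the Tx group and IX the first counter index.  Requesting six Tx counters starting at the 64-byte-frame bucket might look like:

static void example_fill_port_stats_ctl(struct fw_port_stats_cmd *c)
{
	/* request 6 Tx counters starting at the 64B-frame counter */
	c->u.ctl.nstats_bg_bm = V_FW_PORT_STATS_CMD_NSTATS(6);
	c->u.ctl.tx_ix = F_FW_PORT_STATS_CMD_TX |
			 V_FW_PORT_STATS_CMD_IX(FW_STAT_TX_PORT_64B_IX);
	/* op_to_portid and retval_len16 still need the usual FW_CMD setup */
}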
+
+/* port loopback stats */
+#define FW_NUM_LB_STATS 16
+enum fw_port_lb_stats_index {
+	FW_STAT_LB_PORT_BYTES_IX,
+	FW_STAT_LB_PORT_FRAMES_IX,
+	FW_STAT_LB_PORT_BCAST_IX,
+	FW_STAT_LB_PORT_MCAST_IX,
+	FW_STAT_LB_PORT_UCAST_IX,
+	FW_STAT_LB_PORT_ERROR_IX,
+	FW_STAT_LB_PORT_64B_IX,
+	FW_STAT_LB_PORT_65B_127B_IX,
+	FW_STAT_LB_PORT_128B_255B_IX,
+	FW_STAT_LB_PORT_256B_511B_IX,
+	FW_STAT_LB_PORT_512B_1023B_IX,
+	FW_STAT_LB_PORT_1024B_1518B_IX,
+	FW_STAT_LB_PORT_1519B_MAX_IX,
+	FW_STAT_LB_PORT_DROP_FRAMES_IX
+};
+
+struct fw_port_lb_stats_cmd {
+	__be32 op_to_lbport;
+	__be32 retval_len16;
+	union fw_port_lb_stats {
+		struct fw_port_lb_stats_ctl {
+			__u8   nstats_bg_bm;
+			__u8   ix_pkd;
+			__be16 r6;
+			__be32 r7;
+			__be64 stat0;
+			__be64 stat1;
+			__be64 stat2;
+			__be64 stat3;
+			__be64 stat4;
+			__be64 stat5;
+		} ctl;
+		struct fw_port_lb_stats_all {
+			__be64 tx_bytes;
+			__be64 tx_frames;
+			__be64 tx_bcast;
+			__be64 tx_mcast;
+			__be64 tx_ucast;
+			__be64 tx_error;
+			__be64 tx_64b;
+			__be64 tx_65b_127b;
+			__be64 tx_128b_255b;
+			__be64 tx_256b_511b;
+			__be64 tx_512b_1023b;
+			__be64 tx_1024b_1518b;
+			__be64 tx_1519b_max;
+			__be64 rx_lb_drop;
+			__be64 rx_lb_trunc;
+		} all;
+	} u;
+};
+
+#define S_FW_PORT_LB_STATS_CMD_LBPORT		0
+#define M_FW_PORT_LB_STATS_CMD_LBPORT		0xf
+#define V_FW_PORT_LB_STATS_CMD_LBPORT(x)	\
+    ((x) << S_FW_PORT_LB_STATS_CMD_LBPORT)
+#define G_FW_PORT_LB_STATS_CMD_LBPORT(x)	\
+    (((x) >> S_FW_PORT_LB_STATS_CMD_LBPORT) & M_FW_PORT_LB_STATS_CMD_LBPORT)
+
+#define S_FW_PORT_LB_STATS_CMD_NSTATS		4
+#define M_FW_PORT_LB_STATS_CMD_NSTATS		0x7
+#define V_FW_PORT_LB_STATS_CMD_NSTATS(x)	\
+    ((x) << S_FW_PORT_LB_STATS_CMD_NSTATS)
+#define G_FW_PORT_LB_STATS_CMD_NSTATS(x)	\
+    (((x) >> S_FW_PORT_LB_STATS_CMD_NSTATS) & M_FW_PORT_LB_STATS_CMD_NSTATS)
+
+#define S_FW_PORT_LB_STATS_CMD_BG_BM	0
+#define M_FW_PORT_LB_STATS_CMD_BG_BM	0x3
+#define V_FW_PORT_LB_STATS_CMD_BG_BM(x)	((x) << S_FW_PORT_LB_STATS_CMD_BG_BM)
+#define G_FW_PORT_LB_STATS_CMD_BG_BM(x)	\
+    (((x) >> S_FW_PORT_LB_STATS_CMD_BG_BM) & M_FW_PORT_LB_STATS_CMD_BG_BM)
+
+#define S_FW_PORT_LB_STATS_CMD_IX	0
+#define M_FW_PORT_LB_STATS_CMD_IX	0xf
+#define V_FW_PORT_LB_STATS_CMD_IX(x)	((x) << S_FW_PORT_LB_STATS_CMD_IX)
+#define G_FW_PORT_LB_STATS_CMD_IX(x)	\
+    (((x) >> S_FW_PORT_LB_STATS_CMD_IX) & M_FW_PORT_LB_STATS_CMD_IX)
+
+struct fw_rss_ind_tbl_cmd {
+	__be32 op_to_viid;
+	__be32 retval_len16;
+	__be16 niqid;
+	__be16 startidx;
+	__be32 r3;
+	__be32 iq0_to_iq2;
+	__be32 iq3_to_iq5;
+	__be32 iq6_to_iq8;
+	__be32 iq9_to_iq11;
+	__be32 iq12_to_iq14;
+	__be32 iq15_to_iq17;
+	__be32 iq18_to_iq20;
+	__be32 iq21_to_iq23;
+	__be32 iq24_to_iq26;
+	__be32 iq27_to_iq29;
+	__be32 iq30_iq31;
+	__be32 r15_lo;
+};
+
+#define S_FW_RSS_IND_TBL_CMD_VIID	0
+#define M_FW_RSS_IND_TBL_CMD_VIID	0xfff
+#define V_FW_RSS_IND_TBL_CMD_VIID(x)	((x) << S_FW_RSS_IND_TBL_CMD_VIID)
+#define G_FW_RSS_IND_TBL_CMD_VIID(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_VIID) & M_FW_RSS_IND_TBL_CMD_VIID)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ0	20
+#define M_FW_RSS_IND_TBL_CMD_IQ0	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ0(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ0)
+#define G_FW_RSS_IND_TBL_CMD_IQ0(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ0) & M_FW_RSS_IND_TBL_CMD_IQ0)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ1	10
+#define M_FW_RSS_IND_TBL_CMD_IQ1	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ1(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ1)
+#define G_FW_RSS_IND_TBL_CMD_IQ1(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ1) & M_FW_RSS_IND_TBL_CMD_IQ1)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ2	0
+#define M_FW_RSS_IND_TBL_CMD_IQ2	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ2(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ2)
+#define G_FW_RSS_IND_TBL_CMD_IQ2(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ2) & M_FW_RSS_IND_TBL_CMD_IQ2)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ3	20
+#define M_FW_RSS_IND_TBL_CMD_IQ3	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ3(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ3)
+#define G_FW_RSS_IND_TBL_CMD_IQ3(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ3) & M_FW_RSS_IND_TBL_CMD_IQ3)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ4	10
+#define M_FW_RSS_IND_TBL_CMD_IQ4	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ4(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ4)
+#define G_FW_RSS_IND_TBL_CMD_IQ4(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ4) & M_FW_RSS_IND_TBL_CMD_IQ4)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ5	0
+#define M_FW_RSS_IND_TBL_CMD_IQ5	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ5(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ5)
+#define G_FW_RSS_IND_TBL_CMD_IQ5(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ5) & M_FW_RSS_IND_TBL_CMD_IQ5)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ6	20
+#define M_FW_RSS_IND_TBL_CMD_IQ6	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ6(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ6)
+#define G_FW_RSS_IND_TBL_CMD_IQ6(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ6) & M_FW_RSS_IND_TBL_CMD_IQ6)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ7	10
+#define M_FW_RSS_IND_TBL_CMD_IQ7	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ7(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ7)
+#define G_FW_RSS_IND_TBL_CMD_IQ7(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ7) & M_FW_RSS_IND_TBL_CMD_IQ7)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ8	0
+#define M_FW_RSS_IND_TBL_CMD_IQ8	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ8(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ8)
+#define G_FW_RSS_IND_TBL_CMD_IQ8(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ8) & M_FW_RSS_IND_TBL_CMD_IQ8)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ9	20
+#define M_FW_RSS_IND_TBL_CMD_IQ9	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ9(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ9)
+#define G_FW_RSS_IND_TBL_CMD_IQ9(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ9) & M_FW_RSS_IND_TBL_CMD_IQ9)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ10	10
+#define M_FW_RSS_IND_TBL_CMD_IQ10	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ10(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ10)
+#define G_FW_RSS_IND_TBL_CMD_IQ10(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ10) & M_FW_RSS_IND_TBL_CMD_IQ10)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ11	0
+#define M_FW_RSS_IND_TBL_CMD_IQ11	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ11(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ11)
+#define G_FW_RSS_IND_TBL_CMD_IQ11(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ11) & M_FW_RSS_IND_TBL_CMD_IQ11)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ12	20
+#define M_FW_RSS_IND_TBL_CMD_IQ12	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ12(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ12)
+#define G_FW_RSS_IND_TBL_CMD_IQ12(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ12) & M_FW_RSS_IND_TBL_CMD_IQ12)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ13	10
+#define M_FW_RSS_IND_TBL_CMD_IQ13	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ13(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ13)
+#define G_FW_RSS_IND_TBL_CMD_IQ13(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ13) & M_FW_RSS_IND_TBL_CMD_IQ13)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ14	0
+#define M_FW_RSS_IND_TBL_CMD_IQ14	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ14(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ14)
+#define G_FW_RSS_IND_TBL_CMD_IQ14(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ14) & M_FW_RSS_IND_TBL_CMD_IQ14)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ15	20
+#define M_FW_RSS_IND_TBL_CMD_IQ15	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ15(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ15)
+#define G_FW_RSS_IND_TBL_CMD_IQ15(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ15) & M_FW_RSS_IND_TBL_CMD_IQ15)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ16	10
+#define M_FW_RSS_IND_TBL_CMD_IQ16	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ16(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ16)
+#define G_FW_RSS_IND_TBL_CMD_IQ16(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ16) & M_FW_RSS_IND_TBL_CMD_IQ16)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ17	0
+#define M_FW_RSS_IND_TBL_CMD_IQ17	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ17(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ17)
+#define G_FW_RSS_IND_TBL_CMD_IQ17(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ17) & M_FW_RSS_IND_TBL_CMD_IQ17)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ18	20
+#define M_FW_RSS_IND_TBL_CMD_IQ18	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ18(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ18)
+#define G_FW_RSS_IND_TBL_CMD_IQ18(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ18) & M_FW_RSS_IND_TBL_CMD_IQ18)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ19	10
+#define M_FW_RSS_IND_TBL_CMD_IQ19	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ19(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ19)
+#define G_FW_RSS_IND_TBL_CMD_IQ19(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ19) & M_FW_RSS_IND_TBL_CMD_IQ19)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ20	0
+#define M_FW_RSS_IND_TBL_CMD_IQ20	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ20(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ20)
+#define G_FW_RSS_IND_TBL_CMD_IQ20(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ20) & M_FW_RSS_IND_TBL_CMD_IQ20)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ21	20
+#define M_FW_RSS_IND_TBL_CMD_IQ21	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ21(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ21)
+#define G_FW_RSS_IND_TBL_CMD_IQ21(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ21) & M_FW_RSS_IND_TBL_CMD_IQ21)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ22	10
+#define M_FW_RSS_IND_TBL_CMD_IQ22	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ22(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ22)
+#define G_FW_RSS_IND_TBL_CMD_IQ22(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ22) & M_FW_RSS_IND_TBL_CMD_IQ22)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ23	0
+#define M_FW_RSS_IND_TBL_CMD_IQ23	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ23(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ23)
+#define G_FW_RSS_IND_TBL_CMD_IQ23(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ23) & M_FW_RSS_IND_TBL_CMD_IQ23)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ24	20
+#define M_FW_RSS_IND_TBL_CMD_IQ24	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ24(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ24)
+#define G_FW_RSS_IND_TBL_CMD_IQ24(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ24) & M_FW_RSS_IND_TBL_CMD_IQ24)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ25	10
+#define M_FW_RSS_IND_TBL_CMD_IQ25	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ25(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ25)
+#define G_FW_RSS_IND_TBL_CMD_IQ25(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ25) & M_FW_RSS_IND_TBL_CMD_IQ25)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ26	0
+#define M_FW_RSS_IND_TBL_CMD_IQ26	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ26(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ26)
+#define G_FW_RSS_IND_TBL_CMD_IQ26(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ26) & M_FW_RSS_IND_TBL_CMD_IQ26)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ27	20
+#define M_FW_RSS_IND_TBL_CMD_IQ27	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ27(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ27)
+#define G_FW_RSS_IND_TBL_CMD_IQ27(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ27) & M_FW_RSS_IND_TBL_CMD_IQ27)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ28	10
+#define M_FW_RSS_IND_TBL_CMD_IQ28	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ28(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ28)
+#define G_FW_RSS_IND_TBL_CMD_IQ28(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ28) & M_FW_RSS_IND_TBL_CMD_IQ28)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ29	0
+#define M_FW_RSS_IND_TBL_CMD_IQ29	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ29(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ29)
+#define G_FW_RSS_IND_TBL_CMD_IQ29(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ29) & M_FW_RSS_IND_TBL_CMD_IQ29)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ30	20
+#define M_FW_RSS_IND_TBL_CMD_IQ30	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ30(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ30)
+#define G_FW_RSS_IND_TBL_CMD_IQ30(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ30) & M_FW_RSS_IND_TBL_CMD_IQ30)
+
+#define S_FW_RSS_IND_TBL_CMD_IQ31	10
+#define M_FW_RSS_IND_TBL_CMD_IQ31	0x3ff
+#define V_FW_RSS_IND_TBL_CMD_IQ31(x)	((x) << S_FW_RSS_IND_TBL_CMD_IQ31)
+#define G_FW_RSS_IND_TBL_CMD_IQ31(x)	\
+    (((x) >> S_FW_RSS_IND_TBL_CMD_IQ31) & M_FW_RSS_IND_TBL_CMD_IQ31)
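As the repeating 20/10/0 offsets above show, each 32-bit word of the indirection table carries three 10-bit ingress queue ids.  A hypothetical kernel-context helper (not part of the patch) packing one such word:

static __be32 example_pack_rss_slot(unsigned int iqa, unsigned int iqb,
				    unsigned int iqc)
{
	/* three 10-bit ingress queue ids per word: bits 29:20, 19:10, 9:0 */
	return cpu_to_be32(V_FW_RSS_IND_TBL_CMD_IQ0(iqa) |
			   V_FW_RSS_IND_TBL_CMD_IQ1(iqb) |
			   V_FW_RSS_IND_TBL_CMD_IQ2(iqc));
}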
+
+struct fw_rss_glb_config_cmd {
+	__be32 op_to_write;
+	__be32 retval_len16;
+	union fw_rss_glb_config {
+		struct fw_rss_glb_config_manual {
+			__be32 mode_pkd;
+			__be32 r3;
+			__be64 r4;
+			__be64 r5;
+		} manual;
+		struct fw_rss_glb_config_basicvirtual {
+			__be32 mode_pkd;
+			__be32 synmapen_to_hashtoeplitz;
+			__be64 r8;
+			__be64 r9;
+		} basicvirtual;
+	} u;
+};
+
+#define S_FW_RSS_GLB_CONFIG_CMD_MODE	28
+#define M_FW_RSS_GLB_CONFIG_CMD_MODE	0xf
+#define V_FW_RSS_GLB_CONFIG_CMD_MODE(x)	((x) << S_FW_RSS_GLB_CONFIG_CMD_MODE)
+#define G_FW_RSS_GLB_CONFIG_CMD_MODE(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_MODE) & M_FW_RSS_GLB_CONFIG_CMD_MODE)
+
+#define FW_RSS_GLB_CONFIG_CMD_MODE_MANUAL	0
+#define FW_RSS_GLB_CONFIG_CMD_MODE_BASICVIRTUAL	1
+#define FW_RSS_GLB_CONFIG_CMD_MODE_MAX		1
+
+#define S_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN	8
+#define M_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN	0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN)
+#define G_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN) & \
+     M_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN)
+#define F_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN	\
+    V_FW_RSS_GLB_CONFIG_CMD_SYNMAPEN(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6		7
+#define M_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6		0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6)
+#define G_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6) & \
+     M_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6)
+#define F_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6	\
+    V_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV6(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6		6
+#define M_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6		0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6)
+#define G_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6) & \
+     M_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6)
+#define F_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6	\
+    V_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV6(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4		5
+#define M_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4		0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4)
+#define G_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4) & \
+     M_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4)
+#define F_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4	\
+    V_FW_RSS_GLB_CONFIG_CMD_SYN4TUPENIPV4(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4		4
+#define M_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4		0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4)
+#define G_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4) & \
+     M_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4)
+#define F_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4	\
+    V_FW_RSS_GLB_CONFIG_CMD_SYN2TUPENIPV4(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN	3
+#define M_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN	0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN)
+#define G_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN) & \
+     M_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN)
+#define F_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN	\
+    V_FW_RSS_GLB_CONFIG_CMD_OFDMAPEN(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN	2
+#define M_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN	0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN)
+#define G_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN) & \
+     M_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN)
+#define F_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN	\
+    V_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP	1
+#define M_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP	0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP)
+#define G_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP) & \
+     M_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP)
+#define F_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP	\
+    V_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP(1U)
+
+#define S_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ	0
+#define M_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ	0x1
+#define V_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ(x)	\
+    ((x) << S_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ)
+#define G_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ(x)	\
+    (((x) >> S_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ) & \
+     M_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ)
+#define F_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ	\
+    V_FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ(1U)
+
+struct fw_rss_vi_config_cmd {
+	__be32 op_to_viid;
+	__be32 retval_len16;
+	union fw_rss_vi_config {
+		struct fw_rss_vi_config_manual {
+			__be64 r3;
+			__be64 r4;
+			__be64 r5;
+		} manual;
+		struct fw_rss_vi_config_basicvirtual {
+			__be32 r6;
+			__be32 defaultq_to_ip4udpen;
+			__be64 r9;
+			__be64 r10;
+		} basicvirtual;
+	} u;
+};
+
+#define S_FW_RSS_VI_CONFIG_CMD_VIID	0
+#define M_FW_RSS_VI_CONFIG_CMD_VIID	0xfff
+#define V_FW_RSS_VI_CONFIG_CMD_VIID(x)	((x) << S_FW_RSS_VI_CONFIG_CMD_VIID)
+#define G_FW_RSS_VI_CONFIG_CMD_VIID(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_VIID) & M_FW_RSS_VI_CONFIG_CMD_VIID)
+
+#define S_FW_RSS_VI_CONFIG_CMD_DEFAULTQ		16
+#define M_FW_RSS_VI_CONFIG_CMD_DEFAULTQ		0x3ff
+#define V_FW_RSS_VI_CONFIG_CMD_DEFAULTQ(x)	\
+    ((x) << S_FW_RSS_VI_CONFIG_CMD_DEFAULTQ)
+#define G_FW_RSS_VI_CONFIG_CMD_DEFAULTQ(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_DEFAULTQ) & \
+     M_FW_RSS_VI_CONFIG_CMD_DEFAULTQ)
+
+#define S_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN	4
+#define M_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN	0x1
+#define V_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN(x)	\
+    ((x) << S_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN)
+#define G_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN) & \
+     M_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN)
+#define F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN	\
+    V_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN(1U)
+
+#define S_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN	3
+#define M_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN	0x1
+#define V_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN(x)	\
+    ((x) << S_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN)
+#define G_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN) & \
+     M_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN)
+#define F_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN	\
+    V_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN(1U)
+
+#define S_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN	2
+#define M_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN	0x1
+#define V_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN(x)	\
+    ((x) << S_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN)
+#define G_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) & \
+     M_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN)
+#define F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN	\
+    V_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN(1U)
+
+#define S_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN	1
+#define M_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN	0x1
+#define V_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN(x)	\
+    ((x) << S_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
+#define G_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN) & \
+     M_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
+#define F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN	\
+    V_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN(1U)
+
+#define S_FW_RSS_VI_CONFIG_CMD_IP4UDPEN		0
+#define M_FW_RSS_VI_CONFIG_CMD_IP4UDPEN		0x1
+#define V_FW_RSS_VI_CONFIG_CMD_IP4UDPEN(x)	\
+    ((x) << S_FW_RSS_VI_CONFIG_CMD_IP4UDPEN)
+#define G_FW_RSS_VI_CONFIG_CMD_IP4UDPEN(x)	\
+    (((x) >> S_FW_RSS_VI_CONFIG_CMD_IP4UDPEN) & \
+     M_FW_RSS_VI_CONFIG_CMD_IP4UDPEN)
+#define F_FW_RSS_VI_CONFIG_CMD_IP4UDPEN	V_FW_RSS_VI_CONFIG_CMD_IP4UDPEN(1U)
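The per-VI RSS flags above all combine into the single defaultq_to_ip4udpen word.  A hypothetical kernel-context example (not part of the patch) enabling 4-tuple hashing for both IP versions with a given default queue:

static __be32 example_rss_vi_flags(unsigned int defq)
{
	/* hash on the full 4-tuple for IPv4 and IPv6; misses go to defq */
	return cpu_to_be32(V_FW_RSS_VI_CONFIG_CMD_DEFAULTQ(defq) |
			   F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN |
			   F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN);
}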
+
+struct fw_error_cmd {
+	__be32 op_pkd;
+	__be32 len16_pkd;
+	__be16 errstridx;
+	__be16 r2[3];
+	__be32 errstrparam0;
+	__be32 errstrparam1;
+	__be32 errstrparam2;
+	__be32 errstrparam3;
+};
+
+struct fw_debug_cmd {
+	__be32 op_type;
+	__be32 len16_pkd;
+	union fw_debug {
+		struct fw_debug_assert {
+			__be32 fcid;
+			__be32 line;
+			__be32 x;
+			__be32 y;
+			__u8   filename_0_7[8];
+			__u8   filename_8_15[8];
+			__be64 r3;
+		} assert;
+		struct fw_debug_prt {
+			__be16 dprtstridx;
+			__be16 r3[3];
+			__be32 dprtstrparam0;
+			__be32 dprtstrparam1;
+			__be32 dprtstrparam2;
+			__be32 dprtstrparam3;
+		} prt;
+	} u;
+};
+
+#define S_FW_DEBUG_CMD_TYPE	0
+#define M_FW_DEBUG_CMD_TYPE	0xff
+#define V_FW_DEBUG_CMD_TYPE(x)	((x) << S_FW_DEBUG_CMD_TYPE)
+#define G_FW_DEBUG_CMD_TYPE(x)	\
+    (((x) >> S_FW_DEBUG_CMD_TYPE) & M_FW_DEBUG_CMD_TYPE)
+
+/******************************************************************************
+ *   B I N A R Y   H E A D E R   F O R M A T
+ **********************************************/
+
+/*
+ *	firmware binary header format
+ */
+struct fw_hdr {
+	__u8	ver;
+	__u8	reserved1;
+	__be16	len512;			/* bin length in units of 512-bytes */
+	__be32	fw_ver;			/* firmware version */
+	__be32	tp_microcode_ver;	/* tcp processor microcode version */
+	__be32	reserved2[29];
+};
+
+#define S_FW_HDR_FW_VER_MAJOR	24
+#define M_FW_HDR_FW_VER_MAJOR	0xff
+#define V_FW_HDR_FW_VER_MAJOR(x) \
+    ((x) << S_FW_HDR_FW_VER_MAJOR)
+#define G_FW_HDR_FW_VER_MAJOR(x) \
+    (((x) >> S_FW_HDR_FW_VER_MAJOR) & M_FW_HDR_FW_VER_MAJOR)
+
+#define S_FW_HDR_FW_VER_MINOR	16
+#define M_FW_HDR_FW_VER_MINOR	0xff
+#define V_FW_HDR_FW_VER_MINOR(x) \
+    ((x) << S_FW_HDR_FW_VER_MINOR)
+#define G_FW_HDR_FW_VER_MINOR(x) \
+    (((x) >> S_FW_HDR_FW_VER_MINOR) & M_FW_HDR_FW_VER_MINOR)
+
+#define S_FW_HDR_FW_VER_MICRO	8
+#define M_FW_HDR_FW_VER_MICRO	0xff
+#define V_FW_HDR_FW_VER_MICRO(x) \
+    ((x) << S_FW_HDR_FW_VER_MICRO)
+#define G_FW_HDR_FW_VER_MICRO(x) \
+    (((x) >> S_FW_HDR_FW_VER_MICRO) & M_FW_HDR_FW_VER_MICRO)
+
+#define S_FW_HDR_FW_VER_BUILD	0
+#define M_FW_HDR_FW_VER_BUILD	0xff
+#define V_FW_HDR_FW_VER_BUILD(x) \
+    ((x) << S_FW_HDR_FW_VER_BUILD)
+#define G_FW_HDR_FW_VER_BUILD(x) \
+    (((x) >> S_FW_HDR_FW_VER_BUILD) & M_FW_HDR_FW_VER_BUILD)
+
+#endif /* _T4FW_INTERFACE_H_ */
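The 32-bit fw_ver word in struct fw_hdr decodes into major.minor.micro.build via the FW_HDR_FW_VER accessors defined above.  A stand-alone user-space sketch of the same layout (illustration only, not part of the patch):

#include <stdio.h>
#include <stdint.h>

#define G_BYTE(x, s)	(((x) >> (s)) & 0xff)

static void print_fw_version(uint32_t fw_ver)
{
	/* same fields as G_FW_HDR_FW_VER_{MAJOR,MINOR,MICRO,BUILD} */
	printf("firmware %u.%u.%u.%u\n", G_BYTE(fw_ver, 24),
	       G_BYTE(fw_ver, 16), G_BYTE(fw_ver, 8), G_BYTE(fw_ver, 0));
}

int main(void)
{
	print_fw_version(0x01020304);	/* prints "firmware 1.2.3.4" */
	return 0;
}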
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH net-next 3/7] cxgb4: Add HW and FW support code
  2010-02-18  1:37   ` [PATCH net-next 2/7] cxgb4: Add FW API definitions Dimitris Michailidis
@ 2010-02-18  1:37     ` Dimitris Michailidis
  2010-02-18  1:37       ` [PATCH net-next 4/7] cxgb4: Add packet queues and packet DMA code Dimitris Michailidis
  0 siblings, 1 reply; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/cxgb4/t4_hw.c | 3116 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/cxgb4/t4_hw.h |  129 ++
 2 files changed, 3245 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/t4_hw.c
 create mode 100644 drivers/net/cxgb4/t4_hw.h

diff --git a/drivers/net/cxgb4/t4_hw.c b/drivers/net/cxgb4/t4_hw.c
new file mode 100644
index 0000000..fee2f32
--- /dev/null
+++ b/drivers/net/cxgb4/t4_hw.c
@@ -0,0 +1,3116 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/init.h>
+#include <linux/delay.h>
+#include "cxgb4.h"
+#include "t4_regs.h"
+#include "t4fw_api.h"
+
+/**
+ *	t4_wait_op_done_val - wait until an operation is completed
+ *	@adapter: the adapter performing the operation
+ *	@reg: the register to check for completion
+ *	@mask: a single-bit field within @reg that indicates completion
+ *	@polarity: the value of the field when the operation is completed
+ *	@attempts: number of check iterations
+ *	@delay: delay in usecs between iterations
+ *	@valp: where to store the value of the register at completion time
+ *
+ *	Wait until an operation is completed by checking a bit in a register
+ *	up to @attempts times.  If @valp is not NULL the value of the register
+ *	at the time it indicated completion is stored there.  Returns 0 if the
+ *	operation completes and -EAGAIN otherwise.
+ */
+int t4_wait_op_done_val(struct adapter *adapter, int reg, u32 mask,
+			int polarity, int attempts, int delay, u32 *valp)
+{
+	while (1) {
+		u32 val = t4_read_reg(adapter, reg);
+
+		if (!!(val & mask) == polarity) {
+			if (valp)
+				*valp = val;
+			return 0;
+		}
+		if (--attempts == 0)
+			return -EAGAIN;
+		if (delay)
+			udelay(delay);
+	}
+}
+
+static inline int t4_wait_op_done(struct adapter *adapter, int reg, u32 mask,
+				  int polarity, int attempts, int delay)
+{
+	return t4_wait_op_done_val(adapter, reg, mask, polarity, attempts,
+				   delay, NULL);
+}
+
+/**
+ *	t4_set_reg_field - set a register field to a value
+ *	@adapter: the adapter to program
+ *	@addr: the register address
+ *	@mask: specifies the portion of the register to modify
+ *	@val: the new value for the register field
+ *
+ *	Sets a register field specified by the supplied mask to the
+ *	given value.
+ */
+void t4_set_reg_field(struct adapter *adapter, unsigned int addr, u32 mask,
+		      u32 val)
+{
+	u32 v = t4_read_reg(adapter, addr) & ~mask;
+
+	t4_write_reg(adapter, addr, v | val);
+	(void) t4_read_reg(adapter, addr);      /* flush */
+}
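A hypothetical wrapper (not part of the patch) showing how t4_set_reg_field() pairs with the S_/M_-style field constants: the mask selects the bits to change, val supplies their new already-shifted contents, and everything outside the mask is preserved.  t4_set_vlan_accel() later in this file uses the same pattern directly.

static void example_set_field(struct adapter *adap, unsigned int reg,
			      unsigned int shift, u32 mask, u32 val)
{
	/* update one shift/mask-described field, leave the rest intact */
	t4_set_reg_field(adap, reg, mask << shift, val << shift);
}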
+
+/**
+ *	t4_read_indirect - read indirectly addressed registers
+ *	@adap: the adapter
+ *	@addr_reg: register holding the indirect address
+ *	@data_reg: register holding the value of the indirect register
+ *	@vals: where the read register values are stored
+ *	@nregs: how many indirect registers to read
+ *	@start_idx: index of first indirect register to read
+ *
+ *	Reads registers that are accessed indirectly through an address/data
+ *	register pair.
+ */
+void t4_read_indirect(struct adapter *adap, unsigned int addr_reg,
+		      unsigned int data_reg, u32 *vals, unsigned int nregs,
+		      unsigned int start_idx)
+{
+	while (nregs--) {
+		t4_write_reg(adap, addr_reg, start_idx);
+		*vals++ = t4_read_reg(adap, data_reg);
+		start_idx++;
+	}
+}
+
+/**
+ *	t4_write_indirect - write indirectly addressed registers
+ *	@adap: the adapter
+ *	@addr_reg: register holding the indirect addresses
+ *	@data_reg: register holding the value for the indirect registers
+ *	@vals: values to write
+ *	@nregs: how many indirect registers to write
+ *	@start_idx: index of first indirect register to write
+ *
+ *	Writes a sequential block of registers that are accessed indirectly
+ *	through an address/data register pair.
+ */
+void t4_write_indirect(struct adapter *adap, unsigned int addr_reg,
+		       unsigned int data_reg, const u32 *vals,
+		       unsigned int nregs, unsigned int start_idx)
+{
+	while (nregs--) {
+		t4_write_reg(adap, addr_reg, start_idx++);
+		t4_write_reg(adap, data_reg, *vals++);
+	}
+}
+
+/*
+ * Get the reply to a mailbox command and store it in @rpl in big-endian order.
+ */
+static void get_mbox_rpl(struct adapter *adap, __be64 *rpl, int nflit,
+			 u32 mbox_addr)
+{
+	for ( ; nflit; nflit--, mbox_addr += 8)
+		*rpl++ = cpu_to_be64(t4_read_reg64(adap, mbox_addr));
+}
+
+/*
+ * Handle a FW assertion reported in a mailbox.
+ */
+static void fw_asrt(struct adapter *adap, u32 mbox_addr)
+{
+	struct fw_debug_cmd asrt;
+
+	get_mbox_rpl(adap, (__be64 *)&asrt, sizeof(asrt) / 8, mbox_addr);
+	CH_ALERT(adap, "FW assertion at %.16s:%u, val0 %#x, val1 %#x\n",
+		 asrt.u.assert.filename_0_7, ntohl(asrt.u.assert.line),
+		 ntohl(asrt.u.assert.x), ntohl(asrt.u.assert.y));
+}
+
+static void dump_mbox(struct adapter *adap, int mbox, u32 data_reg)
+{
+	CH_ERR(adap, "mbox %d: %llx %llx %llx %llx %llx %llx %llx %llx\n", mbox,
+	       (unsigned long long)t4_read_reg64(adap, data_reg),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 8),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 16),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 24),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 32),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 40),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 48),
+	       (unsigned long long)t4_read_reg64(adap, data_reg + 56));
+}
+
+/**
+ *	t4_wr_mbox_meat - send a command to FW through the given mailbox
+ *	@adap: the adapter
+ *	@mbox: index of the mailbox to use
+ *	@cmd: the command to write
+ *	@size: command length in bytes
+ *	@rpl: where to optionally store the reply
+ *	@sleep_ok: if true we may sleep while awaiting command completion
+ *
+ *	Sends the given command to FW through the selected mailbox and waits
+ *	for the FW to execute the command.  If @rpl is not %NULL it is used to
+ *	store the FW's reply to the command.  The command and its optional
+ *	reply are of the same length.  FW can take up to %FW_CMD_MAX_TIMEOUT ms
+ *	to respond.  @sleep_ok determines whether we may sleep while awaiting
+ *	the response.  If sleeping is allowed we use progressive backoff
+ *	otherwise we spin.
+ *
+ *	The return value is 0 on success or a negative errno on failure.  A
+ *	failure can happen either because we are not able to execute the
+ *	command or FW executes it but signals an error.  In the latter case
+ *	the return value is the error code indicated by FW (negated).
+ */
+int t4_wr_mbox_meat(struct adapter *adap, int mbox, const void *cmd, int size,
+		    void *rpl, bool sleep_ok)
+{
+	static int delay[] = {
+		1, 1, 3, 5, 10, 10, 20, 50, 100, 200
+	};
+
+	u32 v;
+	u64 res;
+	int i, ms, delay_idx;
+	const __be64 *p = cmd;
+	u32 data_reg = PF_REG(mbox, A_CIM_PF_MAILBOX_DATA);
+	u32 ctl_reg = PF_REG(mbox, A_CIM_PF_MAILBOX_CTRL);
+
+	if ((size & 15) || size > MBOX_LEN)
+		return -EINVAL;
+
+	v = G_MBOWNER(t4_read_reg(adap, ctl_reg));
+	for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
+		v = G_MBOWNER(t4_read_reg(adap, ctl_reg));
+
+	if (v != MBOX_OWNER_DRV)
+		return v ? -EBUSY : -ETIMEDOUT;
+
+	for (i = 0; i < size; i += 8)
+		t4_write_reg64(adap, data_reg + i, be64_to_cpu(*p++));
+
+	t4_write_reg(adap, ctl_reg, F_MBMSGVALID | V_MBOWNER(MBOX_OWNER_FW));
+	t4_read_reg(adap, ctl_reg);          /* flush write */
+
+	delay_idx = 0;
+	ms = delay[0];
+
+	for (i = 0; i < FW_CMD_MAX_TIMEOUT; i += ms) {
+		if (sleep_ok) {
+			ms = delay[delay_idx];  /* last element may repeat */
+			if (delay_idx < ARRAY_SIZE(delay) - 1)
+				delay_idx++;
+			msleep(ms);
+		} else
+			mdelay(ms);
+
+		v = t4_read_reg(adap, ctl_reg);
+		if (G_MBOWNER(v) == MBOX_OWNER_DRV) {
+			if (!(v & F_MBMSGVALID)) {
+				t4_write_reg(adap, ctl_reg, 0);
+				continue;
+			}
+
+			res = t4_read_reg64(adap, data_reg);
+			if (G_FW_CMD_OP(res >> 32) == FW_DEBUG_CMD) {
+				fw_asrt(adap, data_reg);
+				res = V_FW_CMD_RETVAL(EIO);
+			} else if (rpl)
+				get_mbox_rpl(adap, rpl, size / 8, data_reg);
+
+			if (G_FW_CMD_RETVAL((int)res))
+				dump_mbox(adap, mbox, data_reg);
+			t4_write_reg(adap, ctl_reg, 0);
+			return -G_FW_CMD_RETVAL((int)res);
+		}
+	}
+
+	dump_mbox(adap, mbox, data_reg);
+	CH_ERR(adap, "command %#x in mailbox %d timed out\n",
+	       *(const u8 *)cmd, mbox);
+	return -ETIMEDOUT;
+}
+
+/**
+ *	t4_mc_read - read from MC through backdoor accesses
+ *	@adap: the adapter
+ *	@addr: address of first byte requested
+ *	@data: 64 bytes of data containing the requested address
+ *	@ecc: where to store the corresponding 64-bit ECC word
+ *
+ *	Read 64 bytes of data from MC starting at a 64-byte-aligned address
+ *	that covers the requested address @addr.  If @parity is not %NULL it
+ *	is assigned the 64-bit ECC word for the read data.
+ */
+int t4_mc_read(struct adapter *adap, u32 addr, __be32 *data, u64 *ecc)
+{
+	int i;
+
+	if (t4_read_reg(adap, A_MC_BIST_CMD) & F_START_BIST)
+		return -EBUSY;
+	t4_write_reg(adap, A_MC_BIST_CMD_ADDR, addr & ~0x3fU);
+	t4_write_reg(adap, A_MC_BIST_CMD_LEN, 64);
+	t4_write_reg(adap, A_MC_BIST_DATA_PATTERN, 0xc);
+	t4_write_reg(adap, A_MC_BIST_CMD, V_BIST_OPCODE(1) | F_START_BIST |
+		     V_BIST_CMD_GAP(1));
+	i = t4_wait_op_done(adap, A_MC_BIST_CMD, F_START_BIST, 0, 10, 1);
+	if (i)
+		return i;
+
+#define MC_DATA(i) MC_BIST_STATUS_REG(A_MC_BIST_STATUS_RDATA, i)
+
+	for (i = 15; i >= 0; i--)
+		*data++ = htonl(t4_read_reg(adap, MC_DATA(i)));
+	if (ecc)
+		*ecc = t4_read_reg64(adap, MC_DATA(16));
+#undef MC_DATA
+	return 0;
+}
+
+/**
+ *	t4_edc_read - read from EDC through backdoor accesses
+ *	@adap: the adapter
+ *	@idx: which EDC to access
+ *	@addr: address of first byte requested
+ *	@data: 64 bytes of data containing the requested address
+ *	@ecc: where to store the corresponding 64-bit ECC word
+ *
+ *	Read 64 bytes of data from EDC starting at a 64-byte-aligned address
+ *	that covers the requested address @addr.  If @ecc is not %NULL it
+ *	is assigned the 64-bit ECC word for the read data.
+ */
+int t4_edc_read(struct adapter *adap, int idx, u32 addr, __be32 *data, u64 *ecc)
+{
+	int i;
+
+	idx *= EDC_STRIDE;
+	if (t4_read_reg(adap, A_EDC_BIST_CMD + idx) & F_START_BIST)
+		return -EBUSY;
+	t4_write_reg(adap, A_EDC_BIST_CMD_ADDR + idx, addr & ~0x3fU);
+	t4_write_reg(adap, A_EDC_BIST_CMD_LEN + idx, 64);
+	t4_write_reg(adap, A_EDC_BIST_DATA_PATTERN + idx, 0xc);
+	t4_write_reg(adap, A_EDC_BIST_CMD + idx,
+		     V_BIST_OPCODE(1) | V_BIST_CMD_GAP(1) | F_START_BIST);
+	i = t4_wait_op_done(adap, A_EDC_BIST_CMD + idx, F_START_BIST, 0, 10, 1);
+	if (i)
+		return i;
+
+#define EDC_DATA(i) (EDC_BIST_STATUS_REG(A_EDC_BIST_STATUS_RDATA, i) + idx)
+
+	for (i = 15; i >= 0; i--)
+		*data++ = htonl(t4_read_reg(adap, EDC_DATA(i)));
+	if (ecc)
+		*ecc = t4_read_reg64(adap, EDC_DATA(16));
+#undef EDC_DATA
+	return 0;
+}
+
+#define VPD_ENTRY(name, len) \
+	u8 name##_kword[2]; u8 name##_len; u8 name##_data[len]
+
+/*
+ * Partial EEPROM Vital Product Data structure.  Includes only the ID and
+ * VPD-R sections.
+ */
+struct t4_vpd {
+	u8  id_tag;
+	u8  id_len[2];
+	u8  id_data[ID_LEN];
+	u8  vpdr_tag;
+	u8  vpdr_len[2];
+	VPD_ENTRY(pn, 16);                     /* part number */
+	VPD_ENTRY(ec, EC_LEN);                 /* EC level */
+	VPD_ENTRY(sn, SERNUM_LEN);             /* serial number */
+	VPD_ENTRY(na, 12);                     /* MAC address base */
+	VPD_ENTRY(port_type, 8);               /* port types */
+	VPD_ENTRY(gpio, 14);                   /* GPIO usage */
+	VPD_ENTRY(cclk, 6);                    /* core clock */
+	VPD_ENTRY(port_addr, 8);               /* port MDIO addresses */
+	VPD_ENTRY(rv, 1);                      /* csum */
+	u32 pad;                  /* for multiple-of-4 sizing and alignment */
+};
+
+#define EEPROM_STAT_ADDR   0x7bfc
+#define VPD_BASE           0
+
+/**
+ *	t4_seeprom_wp - enable/disable EEPROM write protection
+ *	@adapter: the adapter
+ *	@enable: whether to enable or disable write protection
+ *
+ *	Enables or disables write protection on the serial EEPROM.
+ */
+int t4_seeprom_wp(struct adapter *adapter, bool enable)
+{
+	unsigned int v = enable ? 0xc : 0;
+	int ret = pci_write_vpd(adapter->pdev, EEPROM_STAT_ADDR, 4, &v);
+	return ret < 0 ? ret : 0;
+}
+
+/**
+ *	get_vpd_params - read VPD parameters from VPD EEPROM
+ *	@adapter: adapter to read
+ *	@p: where to store the parameters
+ *
+ *	Reads card parameters stored in VPD EEPROM.
+ */
+static int get_vpd_params(struct adapter *adapter, struct vpd_params *p)
+{
+	int ret;
+	struct t4_vpd vpd;
+	u8 *q = (u8 *)&vpd, csum;
+
+	ret = pci_read_vpd(adapter->pdev, VPD_BASE, sizeof(vpd), &vpd);
+	if (ret < 0)
+		return ret;
+
+	for (csum = 0; q <= vpd.rv_data; q++)
+		csum += *q;
+
+	if (csum) {
+		CH_ERR(adapter, "corrupted VPD EEPROM, actual csum %u\n", csum);
+		return -EINVAL;
+	}
+
+	p->cclk = simple_strtoul(vpd.cclk_data, NULL, 10);
+	memcpy(p->id, vpd.id_data, sizeof(vpd.id_data));
+	strim(p->id);
+	memcpy(p->ec, vpd.ec_data, sizeof(vpd.ec_data));
+	strim(p->ec);
+	memcpy(p->sn, vpd.sn_data, sizeof(vpd.sn_data));
+	strim(p->sn);
+	return 0;
+}
+
+/* serial flash and firmware constants */
+enum {
+	SF_ATTEMPTS = 10,             /* max retries for SF operations */
+
+	/* flash command opcodes */
+	SF_PROG_PAGE    = 2,          /* program page */
+	SF_WR_DISABLE   = 4,          /* disable writes */
+	SF_RD_STATUS    = 5,          /* read status register */
+	SF_WR_ENABLE    = 6,          /* enable writes */
+	SF_RD_DATA_FAST = 0xb,        /* read flash */
+	SF_ERASE_SECTOR = 0xd8,       /* erase sector */
+
+	FW_START_SEC = 8,             /* first flash sector for FW */
+	FW_END_SEC = 15,              /* last flash sector for FW */
+	FW_IMG_START = FW_START_SEC * SF_SEC_SIZE,
+	FW_MAX_SIZE = (FW_END_SEC - FW_START_SEC + 1) * SF_SEC_SIZE,
+};
+
+/**
+ *	sf1_read - read data from the serial flash
+ *	@adapter: the adapter
+ *	@byte_cnt: number of bytes to read
+ *	@cont: whether another operation will be chained
+ *	@lock: whether to lock SF for PL access only
+ *	@valp: where to store the read data
+ *
+ *	Reads up to 4 bytes of data from the serial flash.  The location of
+ *	the read needs to be specified prior to calling this by issuing the
+ *	appropriate commands to the serial flash.
+ */
+static int sf1_read(struct adapter *adapter, unsigned int byte_cnt, int cont,
+		    int lock, u32 *valp)
+{
+	int ret;
+
+	if (!byte_cnt || byte_cnt > 4)
+		return -EINVAL;
+	if (t4_read_reg(adapter, A_SF_OP) & F_BUSY)
+		return -EBUSY;
+	t4_write_reg(adapter, A_SF_OP,
+		     V_SF_LOCK(lock) | V_CONT(cont) | V_BYTECNT(byte_cnt - 1));
+	ret = t4_wait_op_done(adapter, A_SF_OP, F_BUSY, 0, SF_ATTEMPTS, 5);
+	if (!ret)
+		*valp = t4_read_reg(adapter, A_SF_DATA);
+	return ret;
+}
+
+/**
+ *	sf1_write - write data to the serial flash
+ *	@adapter: the adapter
+ *	@byte_cnt: number of bytes to write
+ *	@cont: whether another operation will be chained
+ *	@lock: whether to lock SF for PL access only
+ *	@val: value to write
+ *
+ *	Writes up to 4 bytes of data to the serial flash.  The location of
+ *	the write needs to be specified prior to calling this by issuing the
+ *	appropriate commands to the serial flash.
+ */
+static int sf1_write(struct adapter *adapter, unsigned int byte_cnt, int cont,
+		     int lock, u32 val)
+{
+	if (!byte_cnt || byte_cnt > 4)
+		return -EINVAL;
+	if (t4_read_reg(adapter, A_SF_OP) & F_BUSY)
+		return -EBUSY;
+	t4_write_reg(adapter, A_SF_DATA, val);
+	t4_write_reg(adapter, A_SF_OP, V_SF_LOCK(lock) |
+		     V_CONT(cont) | V_BYTECNT(byte_cnt - 1) | V_OP(1));
+	return t4_wait_op_done(adapter, A_SF_OP, F_BUSY, 0, SF_ATTEMPTS, 5);
+}
+
+/**
+ *	flash_wait_op - wait for a flash operation to complete
+ *	@adapter: the adapter
+ *	@attempts: max number of polls of the status register
+ *	@delay: delay between polls in ms
+ *
+ *	Wait for a flash operation to complete by polling the status register.
+ */
+static int flash_wait_op(struct adapter *adapter, int attempts, int delay)
+{
+	int ret;
+	u32 status;
+
+	while (1) {
+		if ((ret = sf1_write(adapter, 1, 1, 1, SF_RD_STATUS)) != 0 ||
+		    (ret = sf1_read(adapter, 1, 0, 1, &status)) != 0)
+			return ret;
+		if (!(status & 1))
+			return 0;
+		if (--attempts == 0)
+			return -EAGAIN;
+		if (delay)
+			msleep(delay);
+	}
+}
+
+/**
+ *	t4_read_flash - read words from serial flash
+ *	@adapter: the adapter
+ *	@addr: the start address for the read
+ *	@nwords: how many 32-bit words to read
+ *	@data: where to store the read data
+ *	@byte_oriented: whether to store data as bytes or as words
+ *
+ *	Read the specified number of 32-bit words from the serial flash.
+ *	If @byte_oriented is set the read data is stored as a byte array
+ *	(i.e., big-endian), otherwise as 32-bit words in the platform's
+ *	natural endianness.
+ */
+int t4_read_flash(struct adapter *adapter, unsigned int addr,
+		  unsigned int nwords, u32 *data, int byte_oriented)
+{
+	int ret;
+
+	if (addr + nwords * sizeof(u32) > SF_SIZE || (addr & 3))
+		return -EINVAL;
+
+	addr = swab32(addr) | SF_RD_DATA_FAST;
+
+	if ((ret = sf1_write(adapter, 4, 1, 0, addr)) != 0 ||
+	    (ret = sf1_read(adapter, 1, 1, 0, data)) != 0)
+		return ret;
+
+	for ( ; nwords; nwords--, data++) {
+		ret = sf1_read(adapter, 4, nwords > 1, nwords == 1, data);
+		if (nwords == 1)
+			t4_write_reg(adapter, A_SF_OP, 0);    /* unlock SF */
+		if (ret)
+			return ret;
+		if (byte_oriented)
+			*data = htonl(*data);
+	}
+	return 0;
+}
+
+/**
+ *	t4_write_flash - write up to a page of data to the serial flash
+ *	@adapter: the adapter
+ *	@addr: the start address to write
+ *	@n: length of data to write in bytes
+ *	@data: the data to write
+ *
+ *	Writes up to a page of data (256 bytes) to the serial flash starting
+ *	at the given address.  All the data must be written to the same page.
+ */
+static int t4_write_flash(struct adapter *adapter, unsigned int addr,
+			  unsigned int n, const u8 *data)
+{
+	int ret;
+	u32 buf[64];
+	unsigned int i, c, left, val, offset = addr & 0xff;
+
+	if (addr >= SF_SIZE || offset + n > SF_PAGE_SIZE)
+		return -EINVAL;
+
+	val = swab32(addr) | SF_PROG_PAGE;
+
+	if ((ret = sf1_write(adapter, 1, 0, 1, SF_WR_ENABLE)) != 0 ||
+	    (ret = sf1_write(adapter, 4, 1, 1, val)) != 0)
+		goto unlock;
+
+	for (left = n; left; left -= c) {
+		c = min(left, 4U);
+		for (val = 0, i = 0; i < c; ++i)
+			val = (val << 8) + *data++;
+
+		ret = sf1_write(adapter, c, c != left, 1, val);
+		if (ret)
+			goto unlock;
+	}
+	if ((ret = flash_wait_op(adapter, 5, 1)) != 0)
+		goto unlock;
+
+	t4_write_reg(adapter, A_SF_OP, 0);    /* unlock SF */
+
+	/* Read the page to verify the write succeeded */
+	ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf, 1);
+	if (ret)
+		return ret;
+
+	if (memcmp(data - n, (u8 *)buf + offset, n)) {
+		CH_ERR(adapter, "failed to correctly write the flash page "
+		       "at %#x\n", addr);
+		return -EIO;
+	}
+	return 0;
+
+unlock:
+	t4_write_reg(adapter, A_SF_OP, 0);    /* unlock SF */
+	return ret;
+}
+
+/**
+ *	t4_get_fw_version - read the firmware version
+ *	@adapter: the adapter
+ *	@vers: where to place the version
+ *
+ *	Reads the FW version from flash.
+ */
+int t4_get_fw_version(struct adapter *adapter, u32 *vers)
+{
+	return t4_read_flash(adapter,
+			     FW_IMG_START + offsetof(struct fw_hdr, fw_ver), 1,
+			     vers, 0);
+}
+
+/**
+ *	t4_get_tp_version - read the TP microcode version
+ *	@adapter: the adapter
+ *	@vers: where to place the version
+ *
+ *	Reads the TP microcode version from flash.
+ */
+int t4_get_tp_version(struct adapter *adapter, u32 *vers)
+{
+	return t4_read_flash(adapter, FW_IMG_START + offsetof(struct fw_hdr,
+							      tp_microcode_ver),
+			     1, vers, 0);
+}
+
+/**
+ *	t4_check_fw_version - check if the FW is compatible with this driver
+ *	@adapter: the adapter
+ *
+ *	Checks if an adapter's FW is compatible with the driver.  Returns 0
+ *	if there's exact match, a negative error if the version could not be
+ *	read or there's a major version mismatch, and a positive value if the
+ *	expected major version is found but there's a minor version mismatch.
+ */
+int t4_check_fw_version(struct adapter *adapter)
+{
+	int ret, major, minor;
+
+	ret = t4_get_fw_version(adapter, &adapter->params.fw_vers);
+	if (!ret)
+		ret = t4_get_tp_version(adapter, &adapter->params.tp_vers);
+	if (ret)
+		return ret;
+
+	major = G_FW_HDR_FW_VER_MAJOR(adapter->params.fw_vers);
+	minor = G_FW_HDR_FW_VER_MINOR(adapter->params.fw_vers);
+
+	if (major != FW_VERSION_MAJOR) {            /* major mismatch - fail */
+		CH_ERR(adapter, "card FW has major version %u, driver wants "
+		       "%u\n", major, FW_VERSION_MAJOR);
+		return -EINVAL;
+	}
+
+	if (minor == FW_VERSION_MINOR)              /* perfect match */
+		return 0;
+
+	/* Minor version mismatch.  Report it; it can usually be tolerated. */
+	CH_WARN(adapter, "FW minor version mismatch, found %u.%u, prefer "
+		"%u.%u\n", major, minor, FW_VERSION_MAJOR, FW_VERSION_MINOR);
+	return minor < FW_VERSION_MINOR ? 1 : 2;
+}
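A hypothetical caller sketch (not part of the patch) of how the three-way return value is meant to be consumed: negative values are fatal, positive values indicate a tolerable minor-version mismatch, and 0 is an exact match.

static int example_validate_fw(struct adapter *adap)
{
	int ret = t4_check_fw_version(adap);

	if (ret < 0)		/* read failure or major version mismatch */
		return ret;
	if (ret > 0)		/* minor mismatch: usable, warning printed */
		CH_WARN(adap, "continuing despite FW minor version skew\n");
	return 0;
}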
+
+/**
+ *	t4_flash_erase_sectors - erase a range of flash sectors
+ *	@adapter: the adapter
+ *	@start: the first sector to erase
+ *	@end: the last sector to erase
+ *
+ *	Erases the sectors in the given inclusive range.
+ */
+static int t4_flash_erase_sectors(struct adapter *adapter, int start, int end)
+{
+	int ret = 0;
+
+	while (start <= end) {
+		if ((ret = sf1_write(adapter, 1, 0, 1, SF_WR_ENABLE)) != 0 ||
+		    (ret = sf1_write(adapter, 4, 0, 1,
+				     SF_ERASE_SECTOR | (start << 8))) != 0 ||
+		    (ret = flash_wait_op(adapter, 5, 500)) != 0) {
+			CH_ERR(adapter, "erase of flash sector %d failed, "
+			       "error %d\n", start, ret);
+			break;
+		}
+		start++;
+	}
+	t4_write_reg(adapter, A_SF_OP, 0);    /* unlock SF */
+	return ret;
+}
+
+/**
+ *	t4_load_fw - download firmware
+ *	@adap: the adapter
+ *	@fw_data: the firmware image to write
+ *	@size: image size
+ *
+ *	Write the supplied firmware image to the card's serial flash.
+ */
+int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
+{
+	u32 csum;
+	__be32 vers;
+	int ret, addr;
+	unsigned int i;
+	u8 first_page[SF_PAGE_SIZE];
+	const u32 *p = (const u32 *)fw_data;
+	struct fw_hdr *hdr = (struct fw_hdr *)fw_data;
+
+	if (!size) {
+		CH_ERR(adap, "FW image has no data\n");
+		return -EINVAL;
+	}
+	if (size & 511) {
+		CH_ERR(adap, "FW image size not multiple of 512 bytes\n");
+		return -EINVAL;
+	}
+	if (ntohs(hdr->len512) * 512 != size) {
+		CH_ERR(adap, "FW image size differs from size in FW header\n");
+		return -EINVAL;
+	}
+	if (size > FW_MAX_SIZE) {
+		CH_ERR(adap, "FW image too large, max is %u bytes\n",
+		       FW_MAX_SIZE);
+		return -EFBIG;
+	}
+
+	for (csum = 0, i = 0; i < size / sizeof(csum); i++)
+		csum += ntohl(p[i]);
+
+	if (csum != 0xffffffff) {
+		CH_ERR(adap, "corrupted firmware image, checksum %#x\n",
+		       csum);
+		return -EINVAL;
+	}
+
+	i = DIV_ROUND_UP(size, SF_SEC_SIZE);        /* # of sectors spanned */
+	ret = t4_flash_erase_sectors(adap, FW_START_SEC, FW_START_SEC + i - 1);
+	if (ret)
+		goto out;
+
+	/*
+	 * We write the correct version at the end so the driver can see a bad
+	 * version if the FW write fails.  Start by writing a copy of the
+	 * first page with a bad version.
+	 */
+	vers = hdr->fw_ver;
+	memcpy(first_page, fw_data, SF_PAGE_SIZE);
+	((struct fw_hdr *)first_page)->fw_ver = htonl(0xffffffff);
+	ret = t4_write_flash(adap, FW_IMG_START, SF_PAGE_SIZE, first_page);
+	if (ret)
+		goto out;
+
+	addr = FW_IMG_START;
+	for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
+		addr += SF_PAGE_SIZE;
+		fw_data += SF_PAGE_SIZE;
+		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, fw_data);
+		if (ret)
+			goto out;
+	}
+
+	ret = t4_write_flash(adap,
+			     FW_IMG_START + offsetof(struct fw_hdr, fw_ver),
+			     sizeof(vers), (const u8 *)&vers);
+out:
+	if (ret)
+		CH_ERR(adap, "firmware download failed, error %d\n", ret);
+	return ret;
+}
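The integrity check above relies on the image being built so that the sum of its big-endian 32-bit words is 0xffffffff.  A stand-alone user-space sketch of the same check (illustration only, not part of the patch):

#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>

static int fw_image_csum_ok(const uint32_t *words, size_t nwords)
{
	uint32_t csum = 0;

	while (nwords--)
		csum += ntohl(*words++);	/* words stored big-endian */
	return csum == 0xffffffff;
}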
+
+#define ADVERT_MASK (FW_PORT_CAP_SPEED_100M | FW_PORT_CAP_SPEED_1G |\
+		     FW_PORT_CAP_SPEED_10G | FW_PORT_CAP_ANEG)
+
+/**
+ *	t4_link_start - apply link configuration to MAC/PHY
+ *	@adap: the adapter
+ *	@mbox: mbox to use for the FW command
+ *	@port: the port id
+ *	@lc: the requested link configuration
+ *
+ *	Set up a port's MAC and PHY according to a desired link configuration.
+ *	- If the PHY can auto-negotiate first decide what to advertise, then
+ *	  enable/disable auto-negotiation as desired, and reset.
+ *	- If the PHY does not auto-negotiate just reset it.
+ *	- If auto-negotiation is off set the MAC to the proper speed/duplex/FC,
+ *	  otherwise do it later based on the outcome of auto-negotiation.
+ */
+int t4_link_start(struct adapter *adap, unsigned int mbox, unsigned int port,
+		  struct link_config *lc)
+{
+	struct fw_port_cmd c;
+	unsigned int fc = 0, mdi = V_FW_PORT_MDI(FW_PORT_MDI_AUTO);
+
+	lc->link_ok = 0;
+	if (lc->requested_fc & PAUSE_RX)
+		fc |= FW_PORT_CAP_FC_RX;
+	if (lc->requested_fc & PAUSE_TX)
+		fc |= FW_PORT_CAP_FC_TX;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_portid = htonl(V_FW_CMD_OP(FW_PORT_CMD) | F_FW_CMD_REQUEST |
+			       F_FW_CMD_EXEC | V_FW_PORT_CMD_PORTID(port));
+	c.action_to_len16 = htonl(V_FW_PORT_CMD_ACTION(FW_PORT_ACTION_L1_CFG) |
+				  FW_LEN16(c));
+
+	if (!(lc->supported & FW_PORT_CAP_ANEG)) {
+		c.u.l1cfg.rcap = htonl((lc->supported & ADVERT_MASK) | fc);
+		lc->fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);
+	} else if (lc->autoneg == AUTONEG_DISABLE) {
+		c.u.l1cfg.rcap = htonl(lc->requested_speed | fc | mdi);
+		lc->fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);
+	} else
+		c.u.l1cfg.rcap = htonl(lc->advertising | fc | mdi);
+
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ *	t4_restart_aneg - restart autonegotiation
+ *	@adap: the adapter
+ *	@mbox: mbox to use for the FW command
+ *	@port: the port id
+ *
+ *	Restarts autonegotiation for the selected port.
+ */
+int t4_restart_aneg(struct adapter *adap, unsigned int mbox, unsigned int port)
+{
+	struct fw_port_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_portid = htonl(V_FW_CMD_OP(FW_PORT_CMD) | F_FW_CMD_REQUEST |
+			       F_FW_CMD_EXEC | V_FW_PORT_CMD_PORTID(port));
+	c.action_to_len16 = htonl(V_FW_PORT_CMD_ACTION(FW_PORT_ACTION_L1_CFG) |
+				  FW_LEN16(c));
+	c.u.l1cfg.rcap = htonl(FW_PORT_CAP_ANEG);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ *	t4_set_vlan_accel - configure HW VLAN extraction
+ *	@adap: the adapter
+ *	@ports: bitmap of adapter ports to operate on
+ *	@on: enable (1) or disable (0) HW VLAN extraction
+ *
+ *	Enables or disables HW extraction of VLAN tags for the ports specified
+ *	by @ports.  @ports is a bitmap with the ith bit designating the port
+ *	associated with the ith adapter channel.
+ */
+void t4_set_vlan_accel(struct adapter *adap, unsigned int ports, int on)
+{
+	ports <<= S_VLANEXTENABLEPORT0;
+	t4_set_reg_field(adap, A_TP_OUT_CONFIG, ports, on ? ports : 0);
+}
+
+struct intr_info {
+	unsigned int mask;       /* bits to check in interrupt status */
+	const char *msg;         /* message to print or NULL */
+	short stat_idx;          /* stat counter to increment or -1 */
+	unsigned short fatal;    /* whether the condition reported is fatal */
+};
+
+/**
+ *	t4_handle_intr_status - table driven interrupt handler
+ *	@adapter: the adapter that generated the interrupt
+ *	@reg: the interrupt status register to process
+ *	@acts: table of interrupt actions
+ *	@stats: statistics counters tracking interrupt occurrences
+ *
+ *	A table driven interrupt handler that applies a set of masks to an
+ *	interrupt status word and performs the corresponding actions if the
+ *	interrupts described by the mask have occurred.  The actions include
+ *	optionally emitting a warning or alert message, and optionally
+ *	incrementing a stat counter.  The table is terminated by an entry
+ *	specifying mask 0.  Returns the number of fatal interrupt conditions.
+ */
+static int t4_handle_intr_status(struct adapter *adapter, unsigned int reg,
+		const struct intr_info *acts, unsigned long *stats)
+{
+	int fatal = 0;
+	unsigned int mask = 0;
+	unsigned int status = t4_read_reg(adapter, reg);
+
+	for ( ; acts->mask; ++acts) {
+		if (!(status & acts->mask))
+			continue;
+		if (acts->fatal) {
+			fatal++;
+			CH_ALERT(adapter, "%s (0x%x)\n",
+				 acts->msg, status & acts->mask);
+		} else if (acts->msg)
+			CH_WARN_RATELIMIT(adapter, "%s (0x%x)\n",
+					  acts->msg, status & acts->mask);
+		if (acts->stat_idx >= 0)
+			stats[acts->stat_idx]++;
+		mask |= acts->mask;
+	}
+	status &= mask;
+	if (status)                           /* clear processed interrupts */
+		t4_write_reg(adapter, reg, status);
+	return fatal;
+}
+
+/*
+ * Interrupt handler for the PCIE module.
+ */
+static void pcie_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info sysbus_intr_info[] = {
+		{ F_RNPP, "RXNP array parity error", -1, 1 },
+		{ F_RPCP, "RXPC array parity error", -1, 1 },
+		{ F_RCIP, "RXCIF array parity error", -1, 1 },
+		{ F_RCCP, "Rx completions control array parity error", -1, 1 },
+		{ F_RFTP, "RXFT array parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info pcie_port_intr_info[] = {
+		{ F_TPCP, "TXPC array parity error", -1, 1 },
+		{ F_TNPP, "TXNP array parity error", -1, 1 },
+		{ F_TFTP, "TXFT array parity error", -1, 1 },
+		{ F_TCAP, "TXCA array parity error", -1, 1 },
+		{ F_TCIP, "TXCIF array parity error", -1, 1 },
+		{ F_RCAP, "RXCA array parity error", -1, 1 },
+		{ F_OTDD, "outbound request TLP discarded", -1, 1 },
+		{ F_RDPE, "Rx data parity error", -1, 1 },
+		{ F_TDUE, "Tx uncorrectable data error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info pcie_intr_info[] = {
+		{ F_MSIADDRLPERR, "MSI AddrL parity error", -1, 1 },
+		{ F_MSIADDRHPERR, "MSI AddrH parity error", -1, 1 },
+		{ F_MSIDATAPERR, "MSI data parity error", -1, 1 },
+		{ F_MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
+		{ F_MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
+		{ F_MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
+		{ F_MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
+		{ F_PIOCPLPERR, "PCI PIO completion FIFO parity error", -1, 1 },
+		{ F_PIOREQPERR, "PCI PIO request FIFO parity error", -1, 1 },
+		{ F_TARTAGPERR, "PCI target tag FIFO parity error", -1, 1 },
+		{ F_CCNTPERR, "PCI CMD channel count parity error", -1, 1 },
+		{ F_CREQPERR, "PCI CMD channel request parity error", -1, 1 },
+		{ F_CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
+		{ F_DCNTPERR, "PCI DMA channel count parity error", -1, 1 },
+		{ F_DREQPERR, "PCI DMA channel request parity error", -1, 1 },
+		{ F_DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
+		{ F_HCNTPERR, "PCI HMA channel count parity error", -1, 1 },
+		{ F_HREQPERR, "PCI HMA channel request parity error", -1, 1 },
+		{ F_HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
+		{ F_CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
+		{ F_FIDPERR, "PCI FID parity error", -1, 1 },
+		{ F_INTXCLRPERR, "PCI INTx clear parity error", -1, 1 },
+		{ F_MATAGPERR, "PCI MA tag parity error", -1, 1 },
+		{ F_PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
+		{ F_RXCPLPERR, "PCI Rx completion parity error", -1, 1 },
+		{ F_RXWRPERR, "PCI Rx write parity error", -1, 1 },
+		{ F_RPLPERR, "PCI replay buffer parity error", -1, 1 },
+		{ F_PCIESINT, "PCI core secondary fault", -1, 1 },
+		{ F_PCIEPINT, "PCI core primary fault", -1, 1 },
+		{ F_UNXSPLCPLERR, "PCI unexpected split completion error", -1,
+		  0 },
+		{ 0 }
+	};
+
+	int fat;
+
+	fat = t4_handle_intr_status(adapter,
+				    A_PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
+				    sysbus_intr_info, NULL) +
+	      t4_handle_intr_status(adapter,
+				    A_PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
+				    pcie_port_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_PCIE_INT_CAUSE, pcie_intr_info,
+				    NULL);
+	if (fat)
+		t4_fatal_err(adapter);
+}
+
+/*
+ * TP interrupt handler.
+ */
+static void tp_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info tp_intr_info[] = {
+		{ 0x3fffffff, "TP parity error", -1, 1 },
+		{ F_FLMTXFLSTEMPTY, "TP out of Tx pages", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adapter, A_TP_INT_CAUSE, tp_intr_info, NULL))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * SGE interrupt handler.
+ */
+static void sge_intr_handler(struct adapter *adapter)
+{
+	u64 v;
+
+	static struct intr_info sge_intr_info[] = {
+		{ F_ERR_CPL_EXCEED_IQE_SIZE,
+		  "SGE received CPL exceeding IQE size", -1, 1 },
+		{ F_ERR_INVALID_CIDX_INC,
+		  "SGE GTS CIDX increment too large", -1, 0 },
+		{ F_ERR_CPL_OPCODE_0, "SGE received 0-length CPL", -1, 0 },
+		{ F_ERR_DROPPED_DB, "SGE doorbell dropped", -1, 0 },
+		{ F_ERR_DATA_CPL_ON_HIGH_QID1 | F_ERR_DATA_CPL_ON_HIGH_QID0,
+		  "SGE IQID > 1023 received CPL for FL", -1, 0 },
+		{ F_ERR_BAD_DB_PIDX3, "SGE DBP 3 pidx increment too large", -1,
+		  0 },
+		{ F_ERR_BAD_DB_PIDX2, "SGE DBP 2 pidx increment too large", -1,
+		  0 },
+		{ F_ERR_BAD_DB_PIDX1, "SGE DBP 1 pidx increment too large", -1,
+		  0 },
+		{ F_ERR_BAD_DB_PIDX0, "SGE DBP 0 pidx increment too large", -1,
+		  0 },
+		{ F_ERR_ING_CTXT_PRIO,
+		  "SGE too many priority ingress contexts", -1, 0 },
+		{ F_ERR_EGR_CTXT_PRIO,
+		  "SGE too many priority egress contexts", -1, 0 },
+		{ F_INGRESS_SIZE_ERR, "SGE illegal ingress QID", -1, 0 },
+		{ F_EGRESS_SIZE_ERR, "SGE illegal egress QID", -1, 0 },
+		{ 0 }
+	};
+
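+	/* SGE_INT_CAUSE1/2 report parity errors; combine them into one 64-bit value */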
+	v = (u64)t4_read_reg(adapter, A_SGE_INT_CAUSE1) |
+	    ((u64)t4_read_reg(adapter, A_SGE_INT_CAUSE2) << 32);
+	if (v) {
+		CH_ALERT(adapter, "SGE parity error (%#llx)\n",
+			 (unsigned long long)v);
+		t4_write_reg(adapter, A_SGE_INT_CAUSE1, v);
+		t4_write_reg(adapter, A_SGE_INT_CAUSE2, v >> 32);
+	}
+
+	if (t4_handle_intr_status(adapter, A_SGE_INT_CAUSE3,
+				  sge_intr_info, NULL) || v)
+		t4_fatal_err(adapter);
+}
+
+#define CIM_OBQ_INTR (F_OBQULP0PARERR | F_OBQULP1PARERR | F_OBQULP2PARERR |\
+		      F_OBQULP3PARERR | F_OBQSGEPARERR | F_OBQNCSIPARERR)
+#define CIM_IBQ_INTR (F_IBQTP0PARERR | F_IBQTP1PARERR | F_IBQULPPARERR |\
+		      F_IBQSGEHIPARERR | F_IBQSGELOPARERR | F_IBQNCSIPARERR)
+
+/*
+ * CIM interrupt handler.
+ */
+static void cim_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info cim_intr_info[] = {
+		{ F_PREFDROPINT, "CIM control register prefetch drop", -1, 1 },
+		{ CIM_OBQ_INTR, "CIM OBQ parity error", -1, 1 },
+		{ CIM_IBQ_INTR, "CIM IBQ parity error", -1, 1 },
+		{ F_MBUPPARERR, "CIM mailbox uP parity error", -1, 1 },
+		{ F_MBHOSTPARERR, "CIM mailbox host parity error", -1, 1 },
+		{ F_TIEQINPARERRINT, "CIM TIEQ outgoing parity error", -1, 1 },
+		{ F_TIEQOUTPARERRINT, "CIM TIEQ incoming parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info cim_upintr_info[] = {
+		{ F_RSVDSPACEINT, "CIM reserved space access", -1, 1 },
+		{ F_ILLTRANSINT, "CIM illegal transaction", -1, 1 },
+		{ F_ILLWRINT, "CIM illegal write", -1, 1 },
+		{ F_ILLRDINT, "CIM illegal read", -1, 1 },
+		{ F_ILLRDBEINT, "CIM illegal read BE", -1, 1 },
+		{ F_ILLWRBEINT, "CIM illegal write BE", -1, 1 },
+		{ F_SGLRDBOOTINT, "CIM single read from boot space", -1, 1 },
+		{ F_SGLWRBOOTINT, "CIM single write to boot space", -1, 1 },
+		{ F_BLKWRBOOTINT, "CIM block write to boot space", -1, 1 },
+		{ F_SGLRDFLASHINT, "CIM single read from flash space", -1, 1 },
+		{ F_SGLWRFLASHINT, "CIM single write to flash space", -1, 1 },
+		{ F_BLKWRFLASHINT, "CIM block write to flash space", -1, 1 },
+		{ F_SGLRDEEPROMINT, "CIM single EEPROM read", -1, 1 },
+		{ F_SGLWREEPROMINT, "CIM single EEPROM write", -1, 1 },
+		{ F_BLKRDEEPROMINT, "CIM block EEPROM read", -1, 1 },
+		{ F_BLKWREEPROMINT, "CIM block EEPROM write", -1, 1 },
+		{ F_SGLRDCTLINT, "CIM single read from CTL space", -1, 1 },
+		{ F_SGLWRCTLINT, "CIM single write to CTL space", -1, 1 },
+		{ F_BLKRDCTLINT, "CIM block read from CTL space", -1, 1 },
+		{ F_BLKWRCTLINT, "CIM block write to CTL space", -1, 1 },
+		{ F_SGLRDPLINT, "CIM single read from PL space", -1, 1 },
+		{ F_SGLWRPLINT, "CIM single write to PL space", -1, 1 },
+		{ F_BLKRDPLINT, "CIM block read from PL space", -1, 1 },
+		{ F_BLKWRPLINT, "CIM block write to PL space", -1, 1 },
+		{ F_REQOVRLOOKUPINT, "CIM request FIFO overwrite", -1, 1 },
+		{ F_RSPOVRLOOKUPINT, "CIM response FIFO overwrite", -1, 1 },
+		{ F_TIMEOUTINT, "CIM PIF timeout", -1, 1 },
+		{ F_TIMEOUTMAINT, "CIM PIF MA timeout", -1, 1 },
+		{ 0 }
+	};
+
+	int fat;
+
+	fat = t4_handle_intr_status(adapter, A_CIM_HOST_INT_CAUSE,
+				    cim_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_CIM_HOST_UPACC_INT_CAUSE,
+				    cim_upintr_info, NULL);
+	if (fat)
+		t4_fatal_err(adapter);
+}
+
+/*
+ * ULP RX interrupt handler.
+ */
+static void ulprx_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info ulprx_intr_info[] = {
+		{ 0x7fffff, "ULPRX parity error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adapter, A_ULP_RX_INT_CAUSE, ulprx_intr_info,
+				  NULL))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * ULP TX interrupt handler.
+ */
+static void ulptx_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info ulptx_intr_info[] = {
+		{ F_PBL_BOUND_ERR_CH3, "ULPTX channel 3 PBL out of bounds", -1,
+		  0 },
+		{ F_PBL_BOUND_ERR_CH2, "ULPTX channel 2 PBL out of bounds", -1,
+		  0 },
+		{ F_PBL_BOUND_ERR_CH1, "ULPTX channel 1 PBL out of bounds", -1,
+		  0 },
+		{ F_PBL_BOUND_ERR_CH0, "ULPTX channel 0 PBL out of bounds", -1,
+		  0 },
+		{ 0xfffffff, "ULPTX parity error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adapter, A_ULP_TX_INT_CAUSE, ulptx_intr_info,
+				  NULL))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * PM TX interrupt handler.
+ */
+static void pmtx_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info pmtx_intr_info[] = {
+		{ F_PCMD_LEN_OVFL0, "PMTX channel 0 pcmd too large", -1, 1 },
+		{ F_PCMD_LEN_OVFL1, "PMTX channel 1 pcmd too large", -1, 1 },
+		{ F_PCMD_LEN_OVFL2, "PMTX channel 2 pcmd too large", -1, 1 },
+		{ F_ZERO_C_CMD_ERROR, "PMTX 0-length pcmd", -1, 1 },
+		{ 0xffffff0, "PMTX framing error", -1, 1 },
+		{ F_OESPI_PAR_ERROR, "PMTX oespi parity error", -1, 1 },
+		{ F_DB_OPTIONS_PAR_ERROR, "PMTX db_options parity error", -1,
+		  1 },
+		{ F_ICSPI_PAR_ERROR, "PMTX icspi parity error", -1, 1 },
+		{ F_C_PCMD_PAR_ERROR, "PMTX c_pcmd parity error", -1, 1},
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adapter, A_PM_TX_INT_CAUSE, pmtx_intr_info,
+				  NULL))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * PM RX interrupt handler.
+ */
+static void pmrx_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info pmrx_intr_info[] = {
+		{ F_ZERO_E_CMD_ERROR, "PMRX 0-length pcmd", -1, 1 },
+		{ 0x3ffff0, "PMRX framing error", -1, 1 },
+		{ F_OCSPI_PAR_ERROR, "PMRX ocspi parity error", -1, 1 },
+		{ F_DB_OPTIONS_PAR_ERROR, "PMRX db_options parity error", -1,
+		  1 },
+		{ F_IESPI_PAR_ERROR, "PMRX iespi parity error", -1, 1 },
+		{ F_E_PCMD_PAR_ERROR, "PMRX e_pcmd parity error", -1, 1},
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adapter, A_PM_RX_INT_CAUSE, pmrx_intr_info,
+				  NULL))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * CPL switch interrupt handler.
+ */
+static void cplsw_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info cplsw_intr_info[] = {
+		{ F_CIM_OP_MAP_PERR, "CPLSW CIM op_map parity error", -1, 1 },
+		{ F_CIM_OVFL_ERROR, "CPLSW CIM overflow", -1, 1 },
+		{ F_TP_FRAMING_ERROR, "CPLSW TP framing error", -1, 1 },
+		{ F_SGE_FRAMING_ERROR, "CPLSW SGE framing error", -1, 1 },
+		{ F_CIM_FRAMING_ERROR, "CPLSW CIM framing error", -1, 1 },
+		{ F_ZERO_SWITCH_ERROR, "CPLSW no-switch error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adapter, A_CPL_INTR_CAUSE, cplsw_intr_info,
+				  NULL))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * LE interrupt handler.
+ */
+static void le_intr_handler(struct adapter *adap)
+{
+	static struct intr_info le_intr_info[] = {
+		{ F_LIPMISS, "LE LIP miss", -1, 0 },
+		{ F_LIP0, "LE 0 LIP error", -1, 0 },
+		{ F_PARITYERR, "LE parity error", -1, 1 },
+		{ F_UNKNOWNCMD, "LE unknown command", -1, 1 },
+		{ F_REQQPARERR, "LE request queue parity error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adap, A_LE_DB_INT_CAUSE, le_intr_info, NULL))
+		t4_fatal_err(adap);
+}
+
+/*
+ * MPS interrupt handler.
+ */
+static void mps_intr_handler(struct adapter *adapter)
+{
+	static struct intr_info mps_rx_intr_info[] = {
+		{ 0xffffff, "MPS Rx parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info mps_tx_intr_info[] = {
+		{ V_TPFIFO(M_TPFIFO), "MPS Tx TP FIFO parity error", -1, 1 },
+		{ F_NCSIFIFO, "MPS Tx NC-SI FIFO parity error", -1, 1 },
+		{ V_TXDATAFIFO(M_TXDATAFIFO), "MPS Tx data FIFO parity error",
+		  -1, 1 },
+		{ V_TXDESCFIFO(M_TXDESCFIFO), "MPS Tx desc FIFO parity error",
+		  -1, 1 },
+		{ F_BUBBLE, "MPS Tx underflow", -1, 1 },
+		{ F_SECNTERR, "MPS Tx SOP/EOP error", -1, 1 },
+		{ F_FRMERR, "MPS Tx framing error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info mps_trc_intr_info[] = {
+		{ V_FILTMEM(M_FILTMEM), "MPS TRC filter parity error", -1, 1 },
+		{ V_PKTFIFO(M_PKTFIFO), "MPS TRC packet FIFO parity error", -1,
+		  1 },
+		{ F_MISCPERR, "MPS TRC misc parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info mps_stat_sram_intr_info[] = {
+		{ 0x1fffff, "MPS statistics SRAM parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info mps_stat_tx_intr_info[] = {
+		{ 0xfffff, "MPS statistics Tx FIFO parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info mps_stat_rx_intr_info[] = {
+		{ 0xffffff, "MPS statistics Rx FIFO parity error", -1, 1 },
+		{ 0 }
+	};
+	static struct intr_info mps_cls_intr_info[] = {
+		{ F_MATCHSRAM, "MPS match SRAM parity error", -1, 1 },
+		{ F_MATCHTCAM, "MPS match TCAM parity error", -1, 1 },
+		{ F_HASHSRAM, "MPS hash SRAM parity error", -1, 1 },
+		{ 0 }
+	};
+
+	int fat;
+
+	fat = t4_handle_intr_status(adapter, A_MPS_RX_PERR_INT_CAUSE,
+				    mps_rx_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_MPS_TX_INT_CAUSE,
+				    mps_tx_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_MPS_TRC_INT_CAUSE,
+				    mps_trc_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_MPS_STAT_PERR_INT_CAUSE_SRAM,
+				    mps_stat_sram_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_MPS_STAT_PERR_INT_CAUSE_TX_FIFO,
+				    mps_stat_tx_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_MPS_STAT_PERR_INT_CAUSE_RX_FIFO,
+				    mps_stat_rx_intr_info, NULL) +
+	      t4_handle_intr_status(adapter, A_MPS_CLS_INT_CAUSE,
+				    mps_cls_intr_info, NULL);
+
+	t4_write_reg(adapter, A_MPS_INT_CAUSE, F_CLSINT | F_TRCINT |
+		     F_RXINT | F_TXINT | F_STATINT);
+	t4_read_reg(adapter, A_MPS_INT_CAUSE);                    /* flush */
+	if (fat)
+		t4_fatal_err(adapter);
+}
+
+#define MEM_INT_MASK (F_PERR_INT_CAUSE | F_ECC_CE_INT_CAUSE | F_ECC_UE_INT_CAUSE)
+
+/*
+ * EDC/MC interrupt handler.
+ */
+static void mem_intr_handler(struct adapter *adapter, int idx)
+{
+	static const char name[3][5] = { "EDC0", "EDC1", "MC" };
+
+	unsigned int addr, cnt_addr, v;
+
+	if (idx <= MEM_EDC1) {
+		addr = EDC_REG(A_EDC_INT_CAUSE, idx);
+		cnt_addr = EDC_REG(A_EDC_ECC_STATUS, idx);
+	} else {
+		addr = A_MC_INT_CAUSE;
+		cnt_addr = A_MC_ECC_STATUS;
+	}
+
+	v = t4_read_reg(adapter, addr) & MEM_INT_MASK;
+	if (v & F_PERR_INT_CAUSE)
+		CH_ALERT(adapter, "%s FIFO parity error\n", name[idx]);
+	if (v & F_ECC_CE_INT_CAUSE) {
+		u32 cnt = G_ECC_CECNT(t4_read_reg(adapter, cnt_addr));
+
+		t4_write_reg(adapter, cnt_addr, V_ECC_CECNT(M_ECC_CECNT));
+		CH_WARN_RATELIMIT(adapter,
+				  "%u %s correctable ECC data error%s\n",
+				  cnt, name[idx], cnt > 1 ? "s" : "");
+	}
+	if (v & F_ECC_UE_INT_CAUSE)
+		CH_ALERT(adapter, "%s uncorrectable ECC data error\n",
+			 name[idx]);
+
+	t4_write_reg(adapter, addr, v);
+	if (v & (F_PERR_INT_CAUSE | F_ECC_UE_INT_CAUSE))
+		t4_fatal_err(adapter);
+}
+
+/*
+ * MA interrupt handler.
+ */
+static void ma_intr_handler(struct adapter *adapter)
+{
+	u32 v, status = t4_read_reg(adapter, A_MA_INT_CAUSE);
+
+	if (status & F_MEM_PERR_INT_CAUSE)
+		CH_ALERT(adapter, "MA parity error, parity status %#x\n",
+			 t4_read_reg(adapter, A_MA_PARITY_ERROR_STATUS));
+	if (status & F_MEM_WRAP_INT_CAUSE) {
+		v = t4_read_reg(adapter, A_MA_INT_WRAP_STATUS);
+		CH_ALERT(adapter, "MA address wrap-around error by client %u to"
+			 " address %#x\n", G_MEM_WRAP_CLIENT_NUM(v),
+			 G_MEM_WRAP_ADDRESS(v) << 4);
+	}
+	t4_write_reg(adapter, A_MA_INT_CAUSE, status);
+	t4_fatal_err(adapter);
+}
+
+/*
+ * SMB interrupt handler.
+ */
+static void smb_intr_handler(struct adapter *adap)
+{
+	static struct intr_info smb_intr_info[] = {
+		{ F_MSTTXFIFOPARINT, "SMB master Tx FIFO parity error", -1, 1 },
+		{ F_MSTRXFIFOPARINT, "SMB master Rx FIFO parity error", -1, 1 },
+		{ F_SLVFIFOPARINT, "SMB slave FIFO parity error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adap, A_SMB_INT_CAUSE, smb_intr_info, NULL))
+		t4_fatal_err(adap);
+}
+
+/*
+ * NC-SI interrupt handler.
+ */
+static void ncsi_intr_handler(struct adapter *adap)
+{
+	static struct intr_info ncsi_intr_info[] = {
+		{ F_CIM_DM_PRTY_ERR, "NC-SI CIM parity error", -1, 1 },
+		{ F_MPS_DM_PRTY_ERR, "NC-SI MPS parity error", -1, 1 },
+		{ F_TXFIFO_PRTY_ERR, "NC-SI Tx FIFO parity error", -1, 1 },
+		{ F_RXFIFO_PRTY_ERR, "NC-SI Rx FIFO parity error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adap, A_NCSI_INT_CAUSE, ncsi_intr_info, NULL))
+		t4_fatal_err(adap);
+}
+
+/*
+ * XGMAC interrupt handler.
+ */
+static void xgmac_intr_handler(struct adapter *adap, int port)
+{
+	u32 v = t4_read_reg(adap, PORT_REG(port, A_XGMAC_PORT_INT_CAUSE));
+
+	v &= F_TXFIFO_PRTY_ERR | F_RXFIFO_PRTY_ERR;
+	if (!v)
+		return;
+
+	if (v & F_TXFIFO_PRTY_ERR)
+		CH_ALERT(adap, "XGMAC %d Tx FIFO parity error\n", port);
+	if (v & F_RXFIFO_PRTY_ERR)
+		CH_ALERT(adap, "XGMAC %d Rx FIFO parity error\n", port);
+	t4_write_reg(adap, PORT_REG(port, A_XGMAC_PORT_INT_CAUSE), v);
+	t4_fatal_err(adap);
+}
+
+/*
+ * PL interrupt handler.
+ */
+static void pl_intr_handler(struct adapter *adap)
+{
+	static struct intr_info pl_intr_info[] = {
+		{ F_FATALPERR, "T4 fatal parity error", -1, 1 },
+		{ F_PERRVFID, "PL VFID_MAP parity error", -1, 1 },
+		{ 0 }
+	};
+
+	if (t4_handle_intr_status(adap, A_PL_PL_INT_CAUSE, pl_intr_info, NULL))
+		t4_fatal_err(adap);
+}
+
+#define PF_INTR_MASK (F_PFSW | F_PFCIM)
+#define GLBL_INTR_MASK (F_CIM | F_MPS | F_PL | F_PCIE | F_MC | F_EDC0 | \
+		F_EDC1 | F_LE | F_TP | F_MA | F_PM_TX | F_PM_RX | F_ULP_RX | \
+		F_CPL_SWITCH | F_SGE | F_ULP_TX)
+
+/**
+ *	t4_slow_intr_handler - control path interrupt handler
+ *	@adapter: the adapter
+ *
+ *	T4 interrupt handler for non-data global interrupt events, e.g., errors.
+ *	The designation 'slow' is because it involves register reads, while
+ *	data interrupts typically don't involve any MMIOs.
+ */
+int t4_slow_intr_handler(struct adapter *adapter)
+{
+	u32 cause = t4_read_reg(adapter, A_PL_INT_CAUSE);
+
+	if (!(cause & GLBL_INTR_MASK))
+		return 0;
+	if (cause & F_CIM)
+		cim_intr_handler(adapter);
+	if (cause & F_MPS)
+		mps_intr_handler(adapter);
+	if (cause & F_NCSI)
+		ncsi_intr_handler(adapter);
+	if (cause & F_PL)
+		pl_intr_handler(adapter);
+	if (cause & F_SMB)
+		smb_intr_handler(adapter);
+	if (cause & F_XGMAC0)
+		xgmac_intr_handler(adapter, 0);
+	if (cause & F_XGMAC1)
+		xgmac_intr_handler(adapter, 1);
+	if (cause & F_XGMAC_KR0)
+		xgmac_intr_handler(adapter, 2);
+	if (cause & F_XGMAC_KR1)
+		xgmac_intr_handler(adapter, 3);
+	if (cause & F_PCIE)
+		pcie_intr_handler(adapter);
+	if (cause & F_MC)
+		mem_intr_handler(adapter, MEM_MC);
+	if (cause & F_EDC0)
+		mem_intr_handler(adapter, MEM_EDC0);
+	if (cause & F_EDC1)
+		mem_intr_handler(adapter, MEM_EDC1);
+	if (cause & F_LE)
+		le_intr_handler(adapter);
+	if (cause & F_TP)
+		tp_intr_handler(adapter);
+	if (cause & F_MA)
+		ma_intr_handler(adapter);
+	if (cause & F_PM_TX)
+		pmtx_intr_handler(adapter);
+	if (cause & F_PM_RX)
+		pmrx_intr_handler(adapter);
+	if (cause & F_ULP_RX)
+		ulprx_intr_handler(adapter);
+	if (cause & F_CPL_SWITCH)
+		cplsw_intr_handler(adapter);
+	if (cause & F_SGE)
+		sge_intr_handler(adapter);
+	if (cause & F_ULP_TX)
+		ulptx_intr_handler(adapter);
+
+	/* Clear the interrupts just processed for which we are the master. */
+	t4_write_reg(adapter, A_PL_INT_CAUSE, cause & GLBL_INTR_MASK);
+	(void) t4_read_reg(adapter, A_PL_INT_CAUSE); /* flush */
+	return 1;
+}
+
+/**
+ *	t4_intr_enable - enable interrupts
+ *	@adapter: the adapter whose interrupts should be enabled
+ *
+ *	Enable PF-specific interrupts for the calling function and the top-level
+ *	interrupt concentrator for global interrupts.  Interrupts are already
+ *	enabled at each module; here we just enable the roots of the interrupt
+ *	hierarchies.
+ *
+ *	Note: this function should be called only when the driver manages
+ *	non PF-specific interrupts from the various HW modules.  Only one PCI
+ *	function at a time should be doing this.
+ */
+void t4_intr_enable(struct adapter *adapter)
+{
+	u32 pf = G_SOURCEPF(t4_read_reg(adapter, A_PL_WHOAMI));
+
+	t4_write_reg(adapter, A_SGE_INT_ENABLE3, F_ERR_CPL_EXCEED_IQE_SIZE |
+		     F_ERR_INVALID_CIDX_INC | F_ERR_CPL_OPCODE_0 |
+		     F_ERR_DROPPED_DB | F_ERR_DATA_CPL_ON_HIGH_QID1 |
+		     F_ERR_DATA_CPL_ON_HIGH_QID0 | F_ERR_BAD_DB_PIDX3 |
+		     F_ERR_BAD_DB_PIDX2 | F_ERR_BAD_DB_PIDX1 |
+		     F_ERR_BAD_DB_PIDX0 | F_ERR_ING_CTXT_PRIO |
+		     F_ERR_EGR_CTXT_PRIO | F_INGRESS_SIZE_ERR |
+		     F_EGRESS_SIZE_ERR);
+	t4_write_reg(adapter, MYPF_REG(A_PL_PF_INT_ENABLE), PF_INTR_MASK);
+	t4_set_reg_field(adapter, A_PL_INT_MAP0, 0, 1 << pf);
+}
+
+/**
+ *	t4_intr_disable - disable interrupts
+ *	@adapter: the adapter whose interrupts should be disabled
+ *
+ *	Disable interrupts.  We only disable the top-level interrupt
+ *	concentrators.  The caller must be a PCI function managing global
+ *	interrupts.
+ */
+void t4_intr_disable(struct adapter *adapter)
+{
+	u32 pf = G_SOURCEPF(t4_read_reg(adapter, A_PL_WHOAMI));
+
+	t4_write_reg(adapter, MYPF_REG(A_PL_PF_INT_ENABLE), 0);
+	t4_set_reg_field(adapter, A_PL_INT_MAP0, 1 << pf, 0);
+}
+
+/**
+ *	t4_intr_clear - clear all interrupts
+ *	@adapter: the adapter whose interrupts should be cleared
+ *
+ *	Clears all interrupts.  The caller must be a PCI function managing
+ *	global interrupts.
+ */
+void t4_intr_clear(struct adapter *adapter)
+{
+	static const unsigned int cause_reg[] = {
+		A_SGE_INT_CAUSE1, A_SGE_INT_CAUSE2, A_SGE_INT_CAUSE3,
+		A_PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
+		A_PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
+		A_PCIE_NONFAT_ERR, A_PCIE_INT_CAUSE,
+		A_MC_INT_CAUSE,
+		A_MA_INT_WRAP_STATUS, A_MA_PARITY_ERROR_STATUS, A_MA_INT_CAUSE,
+		A_EDC_INT_CAUSE, EDC_REG(A_EDC_INT_CAUSE, 1),
+		A_CIM_HOST_INT_CAUSE, A_CIM_HOST_UPACC_INT_CAUSE,
+		MYPF_REG(A_CIM_PF_HOST_INT_CAUSE),
+		A_TP_INT_CAUSE,
+		A_ULP_RX_INT_CAUSE, A_ULP_TX_INT_CAUSE,
+		A_PM_RX_INT_CAUSE, A_PM_TX_INT_CAUSE,
+		A_MPS_RX_PERR_INT_CAUSE,
+		A_CPL_INTR_CAUSE,
+		MYPF_REG(A_PL_PF_INT_CAUSE),
+		A_PL_PL_INT_CAUSE,
+		A_LE_DB_INT_CAUSE,
+	};
+
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(cause_reg); ++i)
+		t4_write_reg(adapter, cause_reg[i], 0xffffffff);
+
+	t4_write_reg(adapter, A_PL_INT_CAUSE, GLBL_INTR_MASK);
+	(void) t4_read_reg(adapter, A_PL_INT_CAUSE);          /* flush */
+}
+
+/**
+ *	t4_hash_mac_addr - return the hash value of a MAC address
+ *	@addr: the 48-bit Ethernet MAC address
+ *
+ *	Hashes a MAC address according to the hash function used by HW inexact
+ *	(hash) address matching.
+ */
+int t4_hash_mac_addr(const u8 *addr)
+{
+	u32 a = ((u32)addr[0] << 16) | ((u32)addr[1] << 8) | addr[2];
+	u32 b = ((u32)addr[3] << 16) | ((u32)addr[4] << 8) | addr[5];
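+	/* xor-fold the two 24-bit halves, then fold the result down to 6 bits */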
+	a ^= b;
+	a ^= (a >> 12);
+	a ^= (a >> 6);
+	return a & 0x3f;
+}
+
+/**
+ *	t4_config_rss_range - configure a portion of the RSS mapping table
+ *	@adapter: the adapter
+ *	@mbox: mbox to use for the FW command
+ *	@viid: virtual interface whose RSS subtable is to be written
+ *	@start: start entry in the table to write
+ *	@n: how many table entries to write
+ *	@rspq: values for the response queue lookup table
+ *	@nrspq: number of values in @rspq
+ *
+ *	Programs the selected part of the VI's RSS mapping table with the
+ *	provided values.  If @nrspq < @n the supplied values are used repeatedly
+ *	until the full table range is populated.
+ *
+ *	The caller must ensure the values in @rspq are in the range allowed for
+ *	@viid.
+ */
+int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
+			int start, int n, const u16 *rspq, unsigned int nrspq)
+{
+	int ret;
+	const u16 *rsp = rspq;
+	const u16 *rsp_end = rspq + nrspq;
+	struct fw_rss_ind_tbl_cmd cmd;
+
+	memset(&cmd, 0, sizeof(cmd));
+	cmd.op_to_viid = htonl(V_FW_CMD_OP(FW_RSS_IND_TBL_CMD) |
+			       F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
+			       V_FW_RSS_IND_TBL_CMD_VIID(viid));
+	cmd.retval_len16 = htonl(FW_LEN16(cmd));
+
+	/* each fw_rss_ind_tbl_cmd takes up to 32 entries */
+	while (n > 0) {
+		int nq = min(n, 32);
+		__be32 *qp = &cmd.iq0_to_iq2;
+
+		cmd.niqid = htons(nq);
+		cmd.startidx = htons(start);
+
+		start += nq;
+		n -= nq;
+
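+		/* each 32-bit word carries three queue IDs; wrap around rspq when all supplied values are used */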
+		while (nq > 0) {
+			unsigned int v;
+
+			v = V_FW_RSS_IND_TBL_CMD_IQ0(*rsp);
+			if (++rsp >= rsp_end)
+				rsp = rspq;
+			v |= V_FW_RSS_IND_TBL_CMD_IQ1(*rsp);
+			if (++rsp >= rsp_end)
+				rsp = rspq;
+			v |= V_FW_RSS_IND_TBL_CMD_IQ2(*rsp);
+			if (++rsp >= rsp_end)
+				rsp = rspq;
+
+			*qp++ = htonl(v);
+			nq -= 3;
+		}
+
+		ret = t4_wr_mbox(adapter, mbox, &cmd, sizeof(cmd), NULL);
+		if (ret != FW_SUCCESS)
+			return ret;
+	}
+	return 0;
+}
+
+/**
+ *	t4_config_glbl_rss - configure the global RSS mode
+ *	@adapter: the adapter
+ *	@mbox: mbox to use for the FW command
+ *	@mode: global RSS mode
+ *	@flags: mode-specific flags
+ *
+ *	Sets the global RSS mode.
+ */
+int t4_config_glbl_rss(struct adapter *adapter, int mbox, unsigned int mode,
+		       unsigned int flags)
+{
+	struct fw_rss_glb_config_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_write = htonl(V_FW_CMD_OP(FW_RSS_GLB_CONFIG_CMD) |
+			      F_FW_CMD_REQUEST | F_FW_CMD_WRITE);
+	c.retval_len16 = htonl(FW_LEN16(c));
+	if (mode == FW_RSS_GLB_CONFIG_CMD_MODE_MANUAL) {
+		c.u.manual.mode_pkd = htonl(V_FW_RSS_GLB_CONFIG_CMD_MODE(mode));
+	} else if (mode == FW_RSS_GLB_CONFIG_CMD_MODE_BASICVIRTUAL) {
+		c.u.basicvirtual.mode_pkd =
+			htonl(V_FW_RSS_GLB_CONFIG_CMD_MODE(mode));
+		c.u.basicvirtual.synmapen_to_hashtoeplitz = htonl(flags);
+	} else
+		return -EINVAL;
+	return t4_wr_mbox(adapter, mbox, &c, sizeof(c), NULL);
+}
+
+/* Read an RSS table row */
+static int rd_rss_row(struct adapter *adap, int row, u32 *val)
+{
+	t4_write_reg(adap, A_TP_RSS_LKP_TABLE, 0xfff00000 | row);
+	return t4_wait_op_done_val(adap, A_TP_RSS_LKP_TABLE, F_LKPTBLROWVLD, 1,
+				   5, 0, val);
+}
+
+/**
+ *	t4_read_rss - read the contents of the RSS mapping table
+ *	@adapter: the adapter
+ *	@map: holds the contents of the RSS mapping table
+ *
+ *	Reads the contents of the RSS hash->queue mapping table.
+ */
+int t4_read_rss(struct adapter *adapter, u16 *map)
+{
+	u32 val;
+	int i, ret;
+
+	for (i = 0; i < RSS_NENTRIES / 2; ++i) {
+		ret = rd_rss_row(adapter, i, &val);
+		if (ret)
+			return ret;
+		*map++ = G_LKPTBLQUEUE0(val);
+		*map++ = G_LKPTBLQUEUE1(val);
+	}
+	return 0;
+}
+
+/**
+ *	t4_tp_get_tcp_stats - read TP's TCP MIB counters
+ *	@adap: the adapter
+ *	@v4: holds the TCP/IP counter values
+ *	@v6: holds the TCP/IPv6 counter values
+ *
+ *	Returns the values of TP's TCP/IP and TCP/IPv6 MIB counters.
+ *	Either @v4 or @v6 may be %NULL to skip the corresponding stats.
+ */
+void t4_tp_get_tcp_stats(struct adapter *adap, struct tp_tcp_stats *v4,
+			 struct tp_tcp_stats *v6)
+{
+	u32 val[A_TP_MIB_TCP_RXT_SEG_LO - A_TP_MIB_TCP_OUT_RST + 1];
+
+#define STAT_IDX(x) ((A_TP_MIB_TCP_##x) - A_TP_MIB_TCP_OUT_RST)
+#define STAT(x)     val[STAT_IDX(x)]
+#define STAT64(x)   (((u64)STAT(x##_HI) << 32) | STAT(x##_LO))
+
+	if (v4) {
+		t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, val,
+				 ARRAY_SIZE(val), A_TP_MIB_TCP_OUT_RST);
+		v4->tcpOutRsts = STAT(OUT_RST);
+		v4->tcpInSegs  = STAT64(IN_SEG);
+		v4->tcpOutSegs = STAT64(OUT_SEG);
+		v4->tcpRetransSegs = STAT64(RXT_SEG);
+	}
+	if (v6) {
+		t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, val,
+				 ARRAY_SIZE(val), A_TP_MIB_TCP_V6OUT_RST);
+		v6->tcpOutRsts = STAT(OUT_RST);
+		v6->tcpInSegs  = STAT64(IN_SEG);
+		v6->tcpOutSegs = STAT64(OUT_SEG);
+		v6->tcpRetransSegs = STAT64(RXT_SEG);
+	}
+#undef STAT64
+#undef STAT
+#undef STAT_IDX
+}
+
+/**
+ *	t4_tp_get_err_stats - read TP's error MIB counters
+ *	@adap: the adapter
+ *	@st: holds the counter values
+ *
+ *	Returns the values of TP's error counters.
+ */
+void t4_tp_get_err_stats(struct adapter *adap, struct tp_err_stats *st)
+{
+	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->macInErrs,
+			 12, A_TP_MIB_MAC_IN_ERR_0);
+	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->tnlCongDrops,
+			 8, A_TP_MIB_TNL_CNG_DROP_0);
+	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->tnlTxDrops,
+			 4, A_TP_MIB_TNL_DROP_0);
+	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->ofldVlanDrops,
+			 4, A_TP_MIB_OFD_VLN_DROP_0);
+	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->tcp6InErrs,
+			 4, A_TP_MIB_TCP_V6IN_ERR_0);
+	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, &st->ofldNoNeigh,
+			 2, A_TP_MIB_OFD_ARP_DROP);
+}
+
+/**
+ *	t4_read_mtu_tbl - returns the values in the HW path MTU table
+ *	@adap: the adapter
+ *	@mtus: where to store the MTU values
+ *	@mtu_log: where to store the MTU base-2 log (may be %NULL)
+ *
+ *	Reads the HW path MTU table.
+ */
+void t4_read_mtu_tbl(struct adapter *adap, u16 *mtus, u8 *mtu_log)
+{
+	u32 v;
+	int i;
+
+	for (i = 0; i < NMTUS; ++i) {
+		t4_write_reg(adap, A_TP_MTU_TABLE,
+			     V_MTUINDEX(0xff) | V_MTUVALUE(i));
+		v = t4_read_reg(adap, A_TP_MTU_TABLE);
+		mtus[i] = G_MTUVALUE(v);
+		if (mtu_log)
+			mtu_log[i] = G_MTUWIDTH(v);
+	}
+}
+
+/**
+ *	init_cong_ctrl - initialize congestion control parameters
+ *	@a: the alpha values for congestion control
+ *	@b: the beta values for congestion control
+ *
+ *	Initialize the congestion control parameters.
+ */
+static void __devinit init_cong_ctrl(unsigned short *a, unsigned short *b)
+{
+	a[0] = a[1] = a[2] = a[3] = a[4] = a[5] = a[6] = a[7] = a[8] = 1;
+	a[9] = 2;
+	a[10] = 3;
+	a[11] = 4;
+	a[12] = 5;
+	a[13] = 6;
+	a[14] = 7;
+	a[15] = 8;
+	a[16] = 9;
+	a[17] = 10;
+	a[18] = 14;
+	a[19] = 17;
+	a[20] = 21;
+	a[21] = 25;
+	a[22] = 30;
+	a[23] = 35;
+	a[24] = 45;
+	a[25] = 60;
+	a[26] = 80;
+	a[27] = 100;
+	a[28] = 200;
+	a[29] = 300;
+	a[30] = 400;
+	a[31] = 500;
+
+	b[0] = b[1] = b[2] = b[3] = b[4] = b[5] = b[6] = b[7] = b[8] = 0;
+	b[9] = b[10] = 1;
+	b[11] = b[12] = 2;
+	b[13] = b[14] = b[15] = b[16] = 3;
+	b[17] = b[18] = b[19] = b[20] = b[21] = 4;
+	b[22] = b[23] = b[24] = b[25] = b[26] = b[27] = 5;
+	b[28] = b[29] = 6;
+	b[30] = b[31] = 7;
+}
+
+/* The minimum additive increment value for the congestion control table */
+#define CC_MIN_INCR 2U
+
+/**
+ *	t4_load_mtus - write the MTU and congestion control HW tables
+ *	@adap: the adapter
+ *	@mtus: the values for the MTU table
+ *	@alpha: the values for the congestion control alpha parameter
+ *	@beta: the values for the congestion control beta parameter
+ *
+ *	Write the HW MTU table with the supplied MTUs and the high-speed
+ *	congestion control table with the supplied alpha, beta, and MTUs.
+ *	We write the two tables together because the additive increments
+ *	depend on the MTUs.
+ */
+void t4_load_mtus(struct adapter *adap, const unsigned short *mtus,
+		  const unsigned short *alpha, const unsigned short *beta)
+{
+	static const unsigned int avg_pkts[NCCTRL_WIN] = {
+		2, 6, 10, 14, 20, 28, 40, 56, 80, 112, 160, 224, 320, 448, 640,
+		896, 1281, 1792, 2560, 3584, 5120, 7168, 10240, 14336, 20480,
+		28672, 40960, 57344, 81920, 114688, 163840, 229376
+	};
+
+	unsigned int i, w;
+
+	for (i = 0; i < NMTUS; ++i) {
+		unsigned int mtu = mtus[i];
+		unsigned int log2 = fls(mtu);
+
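+		/* fls() is 1-based; adjust log2 to the exponent of the power of 2 nearest to mtu */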
+		if (!(mtu & ((1 << log2) >> 2)))     /* round */
+			log2--;
+		t4_write_reg(adap, A_TP_MTU_TABLE, V_MTUINDEX(i) |
+			     V_MTUWIDTH(log2) | V_MTUVALUE(mtu));
+
+		for (w = 0; w < NCCTRL_WIN; ++w) {
+			unsigned int inc;
+
+			inc = max(((mtu - 40) * alpha[w]) / avg_pkts[w],
+				  CC_MIN_INCR);
+
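+			/* each entry packs the MTU index, window, beta and additive increment */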
+			t4_write_reg(adap, A_TP_CCTRL_TABLE, (i << 21) |
+				     (w << 16) | (beta[w] << 13) | inc);
+		}
+	}
+}
+
+/**
+ *	t4_set_trace_filter - configure one of the tracing filters
+ *	@adap: the adapter
+ *	@tp: the desired trace filter parameters
+ *	@idx: which filter to configure
+ *	@enable: whether to enable or disable the filter
+ *
+ *	Configures one of the tracing filters available in HW.  If @enable is
+ *	%0 @tp is not examined and may be %NULL.
+ */
+int t4_set_trace_filter(struct adapter *adap, const struct trace_params *tp,
+			int idx, int enable)
+{
+	int i, ofst = idx * 4;
+	u32 data_reg, mask_reg, cfg;
+	u32 multitrc = F_TRCMULTIFILTER;
+
+	if (!enable) {
+		t4_write_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst, 0);
+		goto out;
+	}
+
+	if (tp->port > 11 || tp->invert > 1 || tp->skip_len > M_TFLENGTH ||
+	    tp->skip_ofst > M_TFOFFSET || tp->min_len > M_TFMINPKTSIZE ||
+	    tp->snap_len > 9600 || (idx && tp->snap_len > 256))
+		return -EINVAL;
+
+	if (tp->snap_len > 256) {            /* must be tracer 0 */
+		if ((t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + 4) |
+		     t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + 8) |
+		     t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + 12)) &
+		    F_TFEN)
+			return -EINVAL;  /* other tracers are enabled */
+		multitrc = 0;
+	} else if (idx) {
+		i = t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_B);
+		if (G_TFCAPTUREMAX(i) > 256 &&
+		    (t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A) & F_TFEN))
+			return -EINVAL;
+	}
+
+	/* stop the tracer we'll be changing */
+	t4_write_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst, 0);
+
+	/* disable tracing globally if running in the wrong single/multi mode */
+	cfg = t4_read_reg(adap, A_MPS_TRC_CFG);
+	if ((cfg & F_TRCEN) && multitrc != (cfg & F_TRCMULTIFILTER)) {
+		t4_write_reg(adap, A_MPS_TRC_CFG, cfg ^ F_TRCEN);
+		t4_read_reg(adap, A_MPS_TRC_CFG);                  /* flush */
+		msleep(1);
+		if (!(t4_read_reg(adap, A_MPS_TRC_CFG) & F_TRCFIFOEMPTY))
+			return -ETIMEDOUT;
+	}
+	/*
+	 * At this point either the tracing is enabled and in the right mode or
+	 * disabled.
+	 */
+
+	idx *= (A_MPS_TRC_FILTER1_MATCH - A_MPS_TRC_FILTER0_MATCH);
+	data_reg = A_MPS_TRC_FILTER0_MATCH + idx;
+	mask_reg = A_MPS_TRC_FILTER0_DONT_CARE + idx;
+
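+	/* the DONT_CARE registers hold the complement of the byte mask */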
+	for (i = 0; i < TRACE_LEN / 4; i++, data_reg += 4, mask_reg += 4) {
+		t4_write_reg(adap, data_reg, tp->data[i]);
+		t4_write_reg(adap, mask_reg, ~tp->mask[i]);
+	}
+	t4_write_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_B + ofst,
+		     V_TFCAPTUREMAX(tp->snap_len) |
+		     V_TFMINPKTSIZE(tp->min_len));
+	t4_write_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst,
+		     V_TFOFFSET(tp->skip_ofst) | V_TFLENGTH(tp->skip_len) |
+		     V_TFPORT(tp->port) | F_TFEN | V_TFINVERTMATCH(tp->invert));
+
+	cfg &= ~F_TRCMULTIFILTER;
+	t4_write_reg(adap, A_MPS_TRC_CFG, cfg | F_TRCEN | multitrc);
+out:	t4_read_reg(adap, A_MPS_TRC_CFG);  /* flush */
+	return 0;
+}
+
+/**
+ *	t4_get_trace_filter - query one of the tracing filters
+ *	@adap: the adapter
+ *	@tp: the current trace filter parameters
+ *	@idx: which trace filter to query
+ *	@enabled: non-zero if the filter is enabled
+ *
+ *	Returns the current settings of one of the HW tracing filters.
+ */
+void t4_get_trace_filter(struct adapter *adap, struct trace_params *tp, int idx,
+			 int *enabled)
+{
+	u32 ctla, ctlb;
+	int i, ofst = idx * 4;
+	u32 data_reg, mask_reg;
+
+	ctla = t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst);
+	ctlb = t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_B + ofst);
+
+	*enabled = !!(ctla & F_TFEN);
+	tp->snap_len = G_TFCAPTUREMAX(ctlb);
+	tp->min_len = G_TFMINPKTSIZE(ctlb);
+	tp->skip_ofst = G_TFOFFSET(ctla);
+	tp->skip_len = G_TFLENGTH(ctla);
+	tp->invert = !!(ctla & F_TFINVERTMATCH);
+	tp->port = G_TFPORT(ctla);
+
+	ofst = (A_MPS_TRC_FILTER1_MATCH - A_MPS_TRC_FILTER0_MATCH) * idx;
+	data_reg = A_MPS_TRC_FILTER0_MATCH + ofst;
+	mask_reg = A_MPS_TRC_FILTER0_DONT_CARE + ofst;
+
+	for (i = 0; i < TRACE_LEN / 4; i++, data_reg += 4, mask_reg += 4) {
+		tp->mask[i] = ~t4_read_reg(adap, mask_reg);
+		tp->data[i] = t4_read_reg(adap, data_reg) & tp->mask[i];
+	}
+}
+
+/**
+ *	get_mps_bg_map - return the buffer groups associated with a port
+ *	@adap: the adapter
+ *	@idx: the port index
+ *
+ *	Returns a bitmap indicating which MPS buffer groups are associated
+ *	with the given port.  Bit i is set if buffer group i is used by the
+ *	port.
+ */
+static unsigned int get_mps_bg_map(struct adapter *adap, int idx)
+{
+	u32 n = G_NUMPORTS(t4_read_reg(adap, A_MPS_CMN_CTL));
+
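+	/*
+	 * NUMPORTS of 0 maps port 0 to all four buffer groups, 1 gives
+	 * ports 0 and 1 two groups each, anything else one group per port.
+	 */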
+	if (n == 0)
+		return idx == 0 ? 0xf : 0;
+	if (n == 1)
+		return idx < 2 ? (3 << (2 * idx)) : 0;
+	return 1 << idx;
+}
+
+/**
+ *	t4_get_port_stats - collect port statistics
+ *	@adap: the adapter
+ *	@idx: the port index
+ *	@p: the stats structure to fill
+ *
+ *	Collect statistics related to the given port from HW.
+ */
+void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
+{
+	u32 bgmap = get_mps_bg_map(adap, idx);
+
+#define GET_STAT(name) \
+	t4_read_reg64(adap, PORT_REG(idx, A_MPS_PORT_STAT_##name##_L))
+#define GET_STAT_COM(name) t4_read_reg64(adap, A_MPS_STAT_##name##_L)
+
+	p->tx_octets           = GET_STAT(TX_PORT_BYTES);
+	p->tx_frames           = GET_STAT(TX_PORT_FRAMES);
+	p->tx_bcast_frames     = GET_STAT(TX_PORT_BCAST);
+	p->tx_mcast_frames     = GET_STAT(TX_PORT_MCAST);
+	p->tx_ucast_frames     = GET_STAT(TX_PORT_UCAST);
+	p->tx_error_frames     = GET_STAT(TX_PORT_ERROR);
+	p->tx_frames_64        = GET_STAT(TX_PORT_64B);
+	p->tx_frames_65_127    = GET_STAT(TX_PORT_65B_127B);
+	p->tx_frames_128_255   = GET_STAT(TX_PORT_128B_255B);
+	p->tx_frames_256_511   = GET_STAT(TX_PORT_256B_511B);
+	p->tx_frames_512_1023  = GET_STAT(TX_PORT_512B_1023B);
+	p->tx_frames_1024_1518 = GET_STAT(TX_PORT_1024B_1518B);
+	p->tx_frames_1519_max  = GET_STAT(TX_PORT_1519B_MAX);
+	p->tx_drop             = GET_STAT(TX_PORT_DROP);
+	p->tx_pause            = GET_STAT(TX_PORT_PAUSE);
+	p->tx_ppp0             = GET_STAT(TX_PORT_PPP0);
+	p->tx_ppp1             = GET_STAT(TX_PORT_PPP1);
+	p->tx_ppp2             = GET_STAT(TX_PORT_PPP2);
+	p->tx_ppp3             = GET_STAT(TX_PORT_PPP3);
+	p->tx_ppp4             = GET_STAT(TX_PORT_PPP4);
+	p->tx_ppp5             = GET_STAT(TX_PORT_PPP5);
+	p->tx_ppp6             = GET_STAT(TX_PORT_PPP6);
+	p->tx_ppp7             = GET_STAT(TX_PORT_PPP7);
+
+	p->rx_octets           = GET_STAT(RX_PORT_BYTES);
+	p->rx_frames           = GET_STAT(RX_PORT_FRAMES);
+	p->rx_bcast_frames     = GET_STAT(RX_PORT_BCAST);
+	p->rx_mcast_frames     = GET_STAT(RX_PORT_MCAST);
+	p->rx_ucast_frames     = GET_STAT(RX_PORT_UCAST);
+	p->rx_too_long         = GET_STAT(RX_PORT_MTU_ERROR);
+	p->rx_jabber           = GET_STAT(RX_PORT_MTU_CRC_ERROR);
+	p->rx_fcs_err          = GET_STAT(RX_PORT_CRC_ERROR);
+	p->rx_len_err          = GET_STAT(RX_PORT_LEN_ERROR);
+	p->rx_symbol_err       = GET_STAT(RX_PORT_SYM_ERROR);
+	p->rx_runt             = GET_STAT(RX_PORT_LESS_64B);
+	p->rx_frames_64        = GET_STAT(RX_PORT_64B);
+	p->rx_frames_65_127    = GET_STAT(RX_PORT_65B_127B);
+	p->rx_frames_128_255   = GET_STAT(RX_PORT_128B_255B);
+	p->rx_frames_256_511   = GET_STAT(RX_PORT_256B_511B);
+	p->rx_frames_512_1023  = GET_STAT(RX_PORT_512B_1023B);
+	p->rx_frames_1024_1518 = GET_STAT(RX_PORT_1024B_1518B);
+	p->rx_frames_1519_max  = GET_STAT(RX_PORT_1519B_MAX);
+	p->rx_pause            = GET_STAT(RX_PORT_PAUSE);
+	p->rx_ppp0             = GET_STAT(RX_PORT_PPP0);
+	p->rx_ppp1             = GET_STAT(RX_PORT_PPP1);
+	p->rx_ppp2             = GET_STAT(RX_PORT_PPP2);
+	p->rx_ppp3             = GET_STAT(RX_PORT_PPP3);
+	p->rx_ppp4             = GET_STAT(RX_PORT_PPP4);
+	p->rx_ppp5             = GET_STAT(RX_PORT_PPP5);
+	p->rx_ppp6             = GET_STAT(RX_PORT_PPP6);
+	p->rx_ppp7             = GET_STAT(RX_PORT_PPP7);
+
+	p->rx_ovflow0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_MAC_DROP_FRAME) : 0;
+	p->rx_ovflow1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_MAC_DROP_FRAME) : 0;
+	p->rx_ovflow2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_MAC_DROP_FRAME) : 0;
+	p->rx_ovflow3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_MAC_DROP_FRAME) : 0;
+	p->rx_trunc0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_MAC_TRUNC_FRAME) : 0;
+	p->rx_trunc1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_MAC_TRUNC_FRAME) : 0;
+	p->rx_trunc2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_MAC_TRUNC_FRAME) : 0;
+	p->rx_trunc3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_MAC_TRUNC_FRAME) : 0;
+
+#undef GET_STAT
+#undef GET_STAT_COM
+}
+
+/**
+ *	t4_get_lb_stats - collect loopback port statistics
+ *	@adap: the adapter
+ *	@idx: the loopback port index
+ *	@p: the stats structure to fill
+ *
+ *	Return HW statistics for the given loopback port.
+ */
+void t4_get_lb_stats(struct adapter *adap, int idx, struct lb_port_stats *p)
+{
+	u32 bgmap = get_mps_bg_map(adap, idx);
+
+#define GET_STAT(name) \
+	t4_read_reg64(adap, PORT_REG(idx, A_MPS_PORT_STAT_LB_PORT_##name##_L))
+#define GET_STAT_COM(name) t4_read_reg64(adap, A_MPS_STAT_##name##_L)
+
+	p->octets           = GET_STAT(BYTES);
+	p->frames           = GET_STAT(FRAMES);
+	p->bcast_frames     = GET_STAT(BCAST);
+	p->mcast_frames     = GET_STAT(MCAST);
+	p->ucast_frames     = GET_STAT(UCAST);
+	p->error_frames     = GET_STAT(ERROR);
+
+	p->frames_64        = GET_STAT(64B);
+	p->frames_65_127    = GET_STAT(65B_127B);
+	p->frames_128_255   = GET_STAT(128B_255B);
+	p->frames_256_511   = GET_STAT(256B_511B);
+	p->frames_512_1023  = GET_STAT(512B_1023B);
+	p->frames_1024_1518 = GET_STAT(1024B_1518B);
+	p->frames_1519_max  = GET_STAT(1519B_MAX);
+	p->drop             = t4_read_reg(adap, PORT_REG(idx,
+					  A_MPS_PORT_STAT_LB_PORT_DROP_FRAMES));
+
+	p->ovflow0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_LB_DROP_FRAME) : 0;
+	p->ovflow1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_LB_DROP_FRAME) : 0;
+	p->ovflow2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_LB_DROP_FRAME) : 0;
+	p->ovflow3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_LB_DROP_FRAME) : 0;
+	p->trunc0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_LB_TRUNC_FRAME) : 0;
+	p->trunc1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_LB_TRUNC_FRAME) : 0;
+	p->trunc2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_LB_TRUNC_FRAME) : 0;
+	p->trunc3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_LB_TRUNC_FRAME) : 0;
+
+#undef GET_STAT
+#undef GET_STAT_COM
+}
+
+/**
+ * 	t4_wol_magic_enable - enable/disable magic packet WoL
+ * 	@adap: the adapter
+ * 	@port: the physical port index
+ * 	@addr: MAC address expected in magic packets, %NULL to disable
+ *
+ * 	Enables/disables magic packet wake-on-LAN for the selected port.
+ */
+void t4_wol_magic_enable(struct adapter *adap, unsigned int port,
+			 const u8 *addr)
+{
+	if (addr) {
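+		/* bytes 2-5 of the MAC go in MACID_LO, bytes 0-1 in MACID_HI */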
+		t4_write_reg(adap, PORT_REG(port, A_XGMAC_PORT_MAGIC_MACID_LO),
+			     (addr[2] << 24) | (addr[3] << 16) |
+			     (addr[4] << 8) | addr[5]);
+		t4_write_reg(adap, PORT_REG(port, A_XGMAC_PORT_MAGIC_MACID_HI),
+			     (addr[0] << 8) | addr[1]);
+	}
+	t4_set_reg_field(adap, PORT_REG(port, A_XGMAC_PORT_CFG2), F_MAGICEN,
+			 V_MAGICEN(addr != NULL));
+}
+
+/**
+ *	t4_wol_pat_enable - enable/disable pattern-based WoL
+ *	@adap: the adapter
+ *	@port: the physical port index
+ *	@map: bitmap of which HW pattern filters to set
+ *	@mask0: byte mask for bytes 0-63 of a packet
+ *	@mask1: byte mask for bytes 64-127 of a packet
+ *	@crc: Ethernet CRC for selected bytes
+ *	@enable: enable/disable switch
+ *
+ *	Sets the pattern filters indicated in @map to mask out the bytes
+ *	specified in @mask0/@mask1 in received packets and compare the CRC of
+ *	the resulting packet against @crc.  If @enable is %true pattern-based
+ *	WoL is enabled, otherwise disabled.
+ */
+int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
+		      u64 mask0, u64 mask1, unsigned int crc, bool enable)
+{
+	int i;
+
+	if (!enable) {
+		t4_set_reg_field(adap, PORT_REG(port, A_XGMAC_PORT_CFG2),
+				 F_PATEN, 0);
+		return 0;
+	}
+	if (map > 0xff)
+		return -EINVAL;
+
+#define EPIO_REG(name) PORT_REG(port, A_XGMAC_PORT_EPIO_##name)
+
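+	/*
+	 * DATA1-3 are written once with the upper mask words; DATA0 is
+	 * reloaded in the loop below, first with the low mask word and
+	 * then with the CRC.
+	 */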
+	t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32);
+	t4_write_reg(adap, EPIO_REG(DATA2), mask1);
+	t4_write_reg(adap, EPIO_REG(DATA3), mask1 >> 32);
+
+	for (i = 0; i < NWOL_PAT; i++, map >>= 1) {
+		if (!(map & 1))
+			continue;
+
+		/* write byte masks */
+		t4_write_reg(adap, EPIO_REG(DATA0), mask0);
+		t4_write_reg(adap, EPIO_REG(OP), V_ADDRESS(i) | F_EPIOWR);
+		t4_read_reg(adap, EPIO_REG(OP));                /* flush */
+		if (t4_read_reg(adap, EPIO_REG(OP)) & F_BUSY)
+			return -ETIMEDOUT;
+
+		/* write CRC */
+		t4_write_reg(adap, EPIO_REG(DATA0), crc);
+		t4_write_reg(adap, EPIO_REG(OP), V_ADDRESS(i + 32) | F_EPIOWR);
+		t4_read_reg(adap, EPIO_REG(OP));                /* flush */
+		if (t4_read_reg(adap, EPIO_REG(OP)) & F_BUSY)
+			return -ETIMEDOUT;
+	}
+#undef EPIO_REG
+
+	t4_set_reg_field(adap, PORT_REG(port, A_XGMAC_PORT_CFG2), 0, F_PATEN);
+	return 0;
+}
+
+#define INIT_CMD(var, cmd, rd_wr) \
+	(var).op_to_write = htonl(V_FW_CMD_OP(FW_##cmd##_CMD) | \
+				  F_FW_CMD_REQUEST | F_FW_CMD_##rd_wr); \
+	(var).retval_len16 = htonl(FW_LEN16(var))
+
+/**
+ * 	t4_mdio_rd - read a PHY register through MDIO
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@phy_addr: the PHY address
+ * 	@mmd: the PHY MMD to access (0 for clause 22 PHYs)
+ * 	@reg: the register to read
+ * 	@valp: where to store the value
+ *
+ * 	Issues a FW command through the given mailbox to read a PHY register.
+ */
+int t4_mdio_rd(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,
+	       unsigned int mmd, unsigned int reg, u16 *valp)
+{
+	int ret;
+	struct fw_ldst_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_addrspace = htonl(V_FW_CMD_OP(FW_LDST_CMD) | F_FW_CMD_REQUEST |
+		F_FW_CMD_READ | V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_MDIO));
+	c.cycles_to_len16 = htonl(FW_LEN16(c));
+	c.u.mdio.paddr_mmd = htons(V_FW_LDST_CMD_PADDR(phy_addr) |
+				   V_FW_LDST_CMD_MMD(mmd));
+	c.u.mdio.raddr = htons(reg);
+
+	ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+	if (ret == 0)
+		*valp = ntohs(c.u.mdio.rval);
+	return ret;
+}
+
+/**
+ * 	t4_mdio_wr - write a PHY register through MDIO
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@phy_addr: the PHY address
+ * 	@mmd: the PHY MMD to access (0 for clause 22 PHYs)
+ * 	@reg: the register to write
+ * 	@val: value to write
+ *
+ * 	Issues a FW command through the given mailbox to write a PHY register.
+ */
+int t4_mdio_wr(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,
+	       unsigned int mmd, unsigned int reg, u16 val)
+{
+	struct fw_ldst_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_addrspace = htonl(V_FW_CMD_OP(FW_LDST_CMD) | F_FW_CMD_REQUEST |
+		F_FW_CMD_WRITE | V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_MDIO));
+	c.cycles_to_len16 = htonl(FW_LEN16(c));
+	c.u.mdio.paddr_mmd = htons(V_FW_LDST_CMD_PADDR(phy_addr) |
+				   V_FW_LDST_CMD_MMD(mmd));
+	c.u.mdio.raddr = htons(reg);
+	c.u.mdio.rval = htons(val);
+
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_fw_hello - establish communication with FW
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@evt_mbox: mailbox to receive async FW events
+ * 	@master: specifies the caller's willingness to be the device master
+ * 	@state: returns the current device state
+ *
+ * 	Issues a command to establish communication with FW.
+ */
+int t4_fw_hello(struct adapter *adap, unsigned int mbox, unsigned int evt_mbox,
+		enum dev_master master, enum dev_state *state)
+{
+	int ret;
+	struct fw_hello_cmd c;
+
+	INIT_CMD(c, HELLO, WRITE);
+	c.err_to_mbasyncnot = htonl(
+		V_FW_HELLO_CMD_MASTERDIS(master == MASTER_CANT) |
+		V_FW_HELLO_CMD_MASTERFORCE(master == MASTER_MUST) |
+		V_FW_HELLO_CMD_MBMASTER(master == MASTER_MUST ? mbox : 0xff) |
+		V_FW_HELLO_CMD_MBASYNCNOT(evt_mbox));
+
+	ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+	if (ret == 0 && state) {
+		u32 v = ntohl(c.err_to_mbasyncnot);
+		if (v & F_FW_HELLO_CMD_INIT)
+			*state = DEV_STATE_INIT;
+		else if (v & F_FW_HELLO_CMD_ERR)
+			*state = DEV_STATE_ERR;
+		else
+			*state = DEV_STATE_UNINIT;
+	}
+	return ret;
+}
+
+/**
+ * 	t4_fw_bye - end communication with FW
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ *
+ * 	Issues a command to terminate communication with FW.
+ */
+int t4_fw_bye(struct adapter *adap, unsigned int mbox)
+{
+	struct fw_bye_cmd c;
+
+	INIT_CMD(c, BYE, WRITE);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_early_init - ask FW to initialize the device
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ *
+ * 	Issues a command to FW to partially initialize the device.  This
+ * 	performs initialization that generally doesn't depend on user input.
+ */
+int t4_early_init(struct adapter *adap, unsigned int mbox)
+{
+	struct fw_initialize_cmd c;
+
+	INIT_CMD(c, INITIALIZE, WRITE);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_fw_reset - issue a reset to FW
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@reset: specifies the type of reset to perform
+ *
+ * 	Issues a reset command of the specified type to FW.
+ */
+int t4_fw_reset(struct adapter *adap, unsigned int mbox, int reset)
+{
+	struct fw_reset_cmd c;
+
+	INIT_CMD(c, RESET, WRITE);
+	c.val = htonl(reset);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_query_params - query FW or device parameters
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@nparams: the number of parameters
+ * 	@params: the parameter names
+ * 	@val: the parameter values
+ *
+ * 	Reads the value of FW or device parameters.  Up to 7 parameters can be
+ * 	queried at once.
+ */
+int t4_query_params(struct adapter *adap, unsigned int mbox,
+		    unsigned int nparams, const u32 *params, u32 *val)
+{
+	int i, ret;
+	struct fw_params_cmd c;
+	__be32 *p = &c.param[0].mnem;
+
+	if (nparams > 7)
+		return -EINVAL;
+
+	memset(&c, 0, sizeof(c));
+	INIT_CMD(c, PARAMS, READ);
+	for (i = 0; i < nparams; i++, p += 2)
+		*p = htonl(*params++);
+
+	ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+	if (ret == 0)
+		for (i = 0, p = &c.param[0].val; i < nparams; i++, p += 2)
+			*val++ = ntohl(*p);
+	return ret;
+}
+
+/**
+ * 	t4_set_params - sets FW or device parameters
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@nparams: the number of parameters
+ * 	@params: the parameter names
+ * 	@val: the parameter values
+ *
+ * 	Sets the value of FW or device parameters.  Up to 7 parameters can be
+ * 	specified at once.
+ */
+int t4_set_params(struct adapter *adap, unsigned int mbox, unsigned int nparams,
+		  const u32 *params, const u32 *val)
+{
+	struct fw_params_cmd c;
+	__be32 *p = &c.param[0].mnem;
+
+	if (nparams > 7)
+		return -EINVAL;
+
+	memset(&c, 0, sizeof(c));
+	INIT_CMD(c, PARAMS, WRITE);
+	while (nparams--) {
+		*p++ = htonl(*params++);
+		*p++ = htonl(*val++);
+	}
+
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_cfg_pfvf - configure PF/VF resource limits
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@pf: the PF being configured
+ * 	@vf: the VF being configured
+ * 	@txq: the max number of egress queues
+ * 	@txq_eth_ctrl: the max number of egress Ethernet or control queues
+ * 	@rxqi: the max number of interrupt-capable ingress queues
+ * 	@rxq: the max number of interruptless ingress queues
+ * 	@tc: the PCI traffic class
+ * 	@vi: the max number of virtual interfaces
+ * 	@cmask: the channel access rights mask for the PF/VF
+ * 	@pmask: the port access rights mask for the PF/VF
+ * 	@nexact: the maximum number of exact MPS filters
+ * 	@rcaps: read capabilities
+ * 	@wxcaps: write/execute capabilities
+ *
+ * 	Configures resource limits and capabilities for a physical or virtual
+ * 	function.
+ */
+int t4_cfg_pfvf(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		unsigned int vf, unsigned int txq, unsigned int txq_eth_ctrl,
+		unsigned int rxqi, unsigned int rxq, unsigned int tc,
+		unsigned int vi, unsigned int cmask, unsigned int pmask,
+		unsigned int nexact, unsigned int rcaps, unsigned int wxcaps)
+{
+	struct fw_pfvf_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_PFVF_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_WRITE | V_FW_PFVF_CMD_PFN(pf) |
+			    V_FW_PFVF_CMD_VFN(vf));
+	c.retval_len16 = htonl(FW_LEN16(c));
+	c.niqflint_niq = htonl(V_FW_PFVF_CMD_NIQFLINT(rxqi) |
+			       V_FW_PFVF_CMD_NIQ(rxq));
+	c.cmask_to_neq = htonl(V_FW_PFVF_CMD_CMASK(cmask) |
+			       V_FW_PFVF_CMD_PMASK(pmask) |
+			       V_FW_PFVF_CMD_NEQ(txq));
+	c.tc_to_nexactf = htonl(V_FW_PFVF_CMD_TC(tc) | V_FW_PFVF_CMD_NVI(vi) |
+				V_FW_PFVF_CMD_NEXACTF(nexact));
+	c.r_caps_to_nethctrl = htonl(V_FW_PFVF_CMD_R_CAPS(rcaps) |
+				     V_FW_PFVF_CMD_WX_CAPS(wxcaps) |
+				     V_FW_PFVF_CMD_NETHCTRL(txq_eth_ctrl));
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_alloc_vi - allocate a virtual interface
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@port: physical port associated with the VI
+ * 	@pf: the PF owning the VI
+ * 	@vf: the VF owning the VI
+ * 	@nmac: number of MAC addresses needed (1 to 5)
+ * 	@mac: the MAC addresses of the VI
+ * 	@rss_size: size of RSS table slice associated with this VI
+ *
+ * 	Allocates a virtual interface for the given physical port.  If @mac is
+ * 	not %NULL it contains the MAC addresses of the VI as assigned by FW.
+ * 	@mac should be large enough to hold @nmac Ethernet addresses; they are
+ * 	stored consecutively so the space needed is @nmac * 6 bytes.
+ * 	Returns a negative error number or the non-negative VI id.
+ */
+int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port,
+		unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac,
+		unsigned int *rss_size)
+{
+	int ret;
+	struct fw_vi_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_VI_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_WRITE | F_FW_CMD_EXEC |
+			    V_FW_VI_CMD_PFN(pf) | V_FW_VI_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(F_FW_VI_CMD_ALLOC | FW_LEN16(c));
+	c.portid_pkd = V_FW_VI_CMD_PORTID(port);
+	c.nmac = nmac - 1;
+
+	ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+	if (ret)
+		return ret;
+
+	if (mac) {
+		memcpy(mac, c.mac, sizeof(c.mac));
+		switch (nmac) {
+		case 5: memcpy(mac + 24, c.nmac3, sizeof(c.nmac3));
+			/* fall through */
+		case 4: memcpy(mac + 18, c.nmac2, sizeof(c.nmac2));
+			/* fall through */
+		case 3: memcpy(mac + 12, c.nmac1, sizeof(c.nmac1));
+			/* fall through */
+		case 2: memcpy(mac + 6,  c.nmac0, sizeof(c.nmac0));
+		}
+	}
+	if (rss_size)
+		*rss_size = G_FW_VI_CMD_RSSSIZE(ntohs(c.rsssize_pkd));
+	return ntohs(c.viid_pkd);
+}
+
+/**
+ * 	t4_free_vi - free a virtual interface
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@pf: the PF owning the VI
+ * 	@vf: the VF owning the VI
+ * 	@viid: virtual interface identifier
+ *
+ * 	Free a previously allocated virtual interface.
+ */
+int t4_free_vi(struct adapter *adap, unsigned int mbox, unsigned int pf,
+	       unsigned int vf, unsigned int viid)
+{
+	struct fw_vi_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_VI_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_EXEC | V_FW_VI_CMD_PFN(pf) |
+			    V_FW_VI_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(F_FW_VI_CMD_FREE | FW_LEN16(c));
+	c.viid_pkd = htons(V_FW_VI_CMD_VIID(viid));
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+}
+
+/**
+ * 	t4_set_rxmode - set Rx properties of a virtual interface
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@viid: the VI id
+ * 	@mtu: the new MTU or -1
+ * 	@promisc: 1 to enable promiscuous mode, 0 to disable it, -1 no change
+ * 	@all_multi: 1 to enable all-multi mode, 0 to disable it, -1 no change
+ * 	@bcast: 1 to enable broadcast Rx, 0 to disable it, -1 no change
+ * 	@sleep_ok: if true we may sleep while awaiting command completion
+ *
+ * 	Sets Rx properties of a virtual interface.
+ */
+int t4_set_rxmode(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		  int mtu, int promisc, int all_multi, int bcast, bool sleep_ok)
+{
+	struct fw_vi_rxmode_cmd c;
+
+	/* convert to FW values */
+	if (mtu < 0)
+		mtu = M_FW_VI_RXMODE_CMD_MTU;
+	if (promisc < 0)
+		promisc = M_FW_VI_RXMODE_CMD_PROMISCEN;
+	if (all_multi < 0)
+		all_multi = M_FW_VI_RXMODE_CMD_ALLMULTIEN;
+	if (bcast < 0)
+		bcast = M_FW_VI_RXMODE_CMD_BROADCASTEN;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_viid = htonl(V_FW_CMD_OP(FW_VI_RXMODE_CMD) | F_FW_CMD_REQUEST |
+			     F_FW_CMD_WRITE | V_FW_VI_RXMODE_CMD_VIID(viid));
+	c.retval_len16 = htonl(FW_LEN16(c));
+	c.mtu_to_broadcasten = htonl(V_FW_VI_RXMODE_CMD_MTU(mtu) |
+				     V_FW_VI_RXMODE_CMD_PROMISCEN(promisc) |
+				     V_FW_VI_RXMODE_CMD_ALLMULTIEN(all_multi) |
+				     V_FW_VI_RXMODE_CMD_BROADCASTEN(bcast));
+	return t4_wr_mbox_meat(adap, mbox, &c, sizeof(c), NULL, sleep_ok);
+}
+
+/**
+ * 	t4_alloc_mac_filt - allocates exact-match filters for MAC addresses
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@viid: the VI id
+ * 	@free: if true any existing filters for this VI id are first removed
+ * 	@naddr: the number of MAC addresses to allocate filters for (up to 7)
+ * 	@addr: the MAC address(es)
+ * 	@idx: where to store the index of each allocated filter
+ * 	@hash: pointer to hash address filter bitmap
+ * 	@sleep_ok: call is allowed to sleep
+ *
+ * 	Allocates an exact-match filter for each of the supplied addresses and
+ * 	sets it to the corresponding address.  If @idx is not %NULL it should
+ * 	have at least @naddr entries, each of which will be set to the index of
+ * 	the filter allocated for the corresponding MAC address.  If a filter
+ * 	could not be allocated for an address its index is set to 0xffff.
+ * 	If @hash is not %NULL addresses that fail to allocate an exact filter
+ * 	are hashed and update the hash filter bitmap pointed at by @hash.
+ *
+ * 	Returns a negative error number or the number of filters allocated.
+ */
+int t4_alloc_mac_filt(struct adapter *adap, unsigned int mbox,
+		      unsigned int viid, bool free, unsigned int naddr,
+		      const u8 **addr, u16 *idx, u64 *hash, bool sleep_ok)
+{
+	int i, ret;
+	struct fw_vi_mac_cmd c;
+	struct fw_vi_mac_exact *p;
+
+	if (naddr > 7)
+		return -EINVAL;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_viid = htonl(V_FW_CMD_OP(FW_VI_MAC_CMD) | F_FW_CMD_REQUEST |
+			     F_FW_CMD_WRITE | V_FW_CMD_EXEC(free) |
+			     V_FW_VI_MAC_CMD_VIID(viid));
+	c.freemacs_to_len16 = htonl(V_FW_VI_MAC_CMD_FREEMACS(free) |
+				    V_FW_CMD_LEN16((naddr + 2) / 2));
+
+	for (i = 0, p = c.u.exact; i < naddr; i++, p++) {
+		p->valid_to_idx = htons(F_FW_VI_MAC_CMD_VALID |
+				      V_FW_VI_MAC_CMD_IDX(FW_VI_MAC_ADD_MAC));
+		memcpy(p->macaddr, addr[i], sizeof(p->macaddr));
+	}
+
+	ret = t4_wr_mbox_meat(adap, mbox, &c, sizeof(c), &c, sleep_ok);
+	if (ret)
+		return ret;
+
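+	/* walk the reply: record each assigned index, count exact-match hits, hash the rest */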
+	for (i = 0, p = c.u.exact; i < naddr; i++, p++) {
+		u16 index = G_FW_VI_MAC_CMD_IDX(ntohs(p->valid_to_idx));
+
+		if (idx)
+			idx[i] = index >= NEXACT_MAC ? 0xffff : index;
+		if (index < NEXACT_MAC)
+			ret++;
+		else if (hash)
+			*hash |= (1 << t4_hash_mac_addr(addr[i]));
+	}
+	return ret;
+}
+
+/**
+ * 	t4_change_mac - modifies the exact-match filter for a MAC address
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@viid: the VI id
+ * 	@idx: index of existing filter for old value of MAC address, or -1
+ * 	@addr: the new MAC address value
+ * 	@persist: whether a new MAC allocation should be persistent
+ *
+ *	Modifies an exact-match filter and sets it to the new MAC address.
+ *	Note that in general it is not possible to modify the value of a given
+ *	filter, so the generic way to modify an address filter is to free the one
+ *	being used by the old address value and allocate a new filter for the
+ *	new address value.  @idx can be -1 if the address is a new addition.
+ *
+ * 	Returns a negative error number or the index of the filter with the new
+ * 	MAC value.
+ */
+int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		  int idx, const u8 *addr, bool persist)
+{
+	int ret;
+	struct fw_vi_mac_cmd c;
+	struct fw_vi_mac_exact *p = c.u.exact;
+
+	if (idx < 0)                             /* new allocation */
+		idx = persist ? FW_VI_MAC_ADD_PERSIST_MAC : FW_VI_MAC_ADD_MAC;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_viid = htonl(V_FW_CMD_OP(FW_VI_MAC_CMD) | F_FW_CMD_REQUEST |
+			     F_FW_CMD_WRITE | V_FW_VI_MAC_CMD_VIID(viid));
+	c.freemacs_to_len16 = htonl(V_FW_CMD_LEN16(1));
+	p->valid_to_idx = htons(F_FW_VI_MAC_CMD_VALID |
+				V_FW_VI_MAC_CMD_IDX(idx));
+	memcpy(p->macaddr, addr, sizeof(p->macaddr));
+
+	ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+	if (ret == 0) {
+		ret = G_FW_VI_MAC_CMD_IDX(ntohs(p->valid_to_idx));
+		if (ret >= NEXACT_MAC)
+			ret = -ENOMEM;
+	}
+	return ret;
+}
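
For example (a hypothetical caller; example_set_primary_mac and mac_idx are
not part of the patch), a driver would hand back the index from the previous
call, or -1 the first time around:

	static int example_set_primary_mac(struct adapter *adap,
					   unsigned int mbox, unsigned int viid,
					   int *mac_idx, const u8 *new_addr)
	{
		/* *mac_idx is -1 before the first call, then the last index */
		int ret = t4_change_mac(adap, mbox, viid, *mac_idx, new_addr,
					true);

		if (ret >= 0)
			*mac_idx = ret;		/* remember for the next change */
		return ret;
	}
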
+
+/**
+ * 	t4_set_addr_hash - program the MAC inexact-match hash filter
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@viid: the VI id
+ * 	@ucast: whether the hash filter should also match unicast addresses
+ * 	@vec: the value to be written to the hash filter
+ * 	@sleep_ok: call is allowed to sleep
+ *
+ * 	Sets the 64-bit inexact-match hash filter for a virtual interface.
+ */
+int t4_set_addr_hash(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		     bool ucast, u64 vec, bool sleep_ok)
+{
+	struct fw_vi_mac_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_viid = htonl(V_FW_CMD_OP(FW_VI_MAC_CMD) | F_FW_CMD_REQUEST |
+			     F_FW_CMD_WRITE | V_FW_VI_ENABLE_CMD_VIID(viid));
+	c.freemacs_to_len16 = htonl(F_FW_VI_MAC_CMD_HASHVECEN |
+				    V_FW_VI_MAC_CMD_HASHUNIEN(ucast) |
+				    V_FW_CMD_LEN16(1));
+	c.u.hash.hashvec = cpu_to_be64(vec);
+	return t4_wr_mbox_meat(adap, mbox, &c, sizeof(c), NULL, sleep_ok);
+}
+
+/**
+ * 	t4_enable_vi - enable/disable a virtual interface
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@viid: the VI id
+ * 	@rx_en: 1=enable Rx, 0=disable Rx
+ * 	@tx_en: 1=enable Tx, 0=disable Tx
+ *
+ * 	Enables/disables a virtual interface.
+ */
+int t4_enable_vi(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		 bool rx_en, bool tx_en)
+{
+	struct fw_vi_enable_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_viid = htonl(V_FW_CMD_OP(FW_VI_ENABLE_CMD) | F_FW_CMD_REQUEST |
+			     F_FW_CMD_EXEC | V_FW_VI_ENABLE_CMD_VIID(viid));
+	c.ien_to_len16 = htonl(V_FW_VI_ENABLE_CMD_IEN(rx_en) |
+			       V_FW_VI_ENABLE_CMD_EEN(tx_en) | FW_LEN16(c));
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_identify_port - identify a VI's port by blinking its LED
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@viid: the VI id
+ * 	@nblinks: how many times to blink LED at 2.5 Hz
+ *
+ * 	Identifies a VI's port by blinking its LED.
+ */
+int t4_identify_port(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		     unsigned int nblinks)
+{
+	struct fw_vi_enable_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_viid = htonl(V_FW_CMD_OP(FW_VI_ENABLE_CMD) | F_FW_CMD_REQUEST |
+			     F_FW_CMD_EXEC | V_FW_VI_ENABLE_CMD_VIID(viid));
+	c.ien_to_len16 = htonl(F_FW_VI_ENABLE_CMD_LED | FW_LEN16(c));
+	c.blinkdur = htons(nblinks);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_iq_start_stop - enable/disable an ingress queue and its FLs
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@start: %true to enable the queues, %false to disable them
+ * 	@pf: the PF owning the queues
+ * 	@vf: the VF owning the queues
+ * 	@iqid: ingress queue id
+ * 	@fl0id: FL0 queue id or 0xffff if no attached FL0
+ * 	@fl1id: FL1 queue id or 0xffff if no attached FL1
+ *
+ * 	Starts or stops an ingress queue and its associated FLs, if any.
+ */
+int t4_iq_start_stop(struct adapter *adap, unsigned int mbox, bool start,
+		     unsigned int pf, unsigned int vf, unsigned int iqid,
+		     unsigned int fl0id, unsigned int fl1id)
+{
+	struct fw_iq_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_IQ_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_EXEC | V_FW_IQ_CMD_PFN(pf) |
+			    V_FW_IQ_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(V_FW_IQ_CMD_IQSTART(start) |
+				 V_FW_IQ_CMD_IQSTOP(!start) | FW_LEN16(c));
+	c.iqid = htons(iqid);
+	c.fl0id = htons(fl0id);
+	c.fl1id = htons(fl1id);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_iq_free - free an ingress queue and its FLs
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@pf: the PF owning the queues
+ * 	@vf: the VF owning the queues
+ * 	@iqtype: the ingress queue type
+ * 	@iqid: ingress queue id
+ * 	@fl0id: FL0 queue id or 0xffff if no attached FL0
+ * 	@fl1id: FL1 queue id or 0xffff if no attached FL1
+ *
+ * 	Frees an ingress queue and its associated FLs, if any.
+ */
+int t4_iq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+	       unsigned int vf, unsigned int iqtype, unsigned int iqid,
+	       unsigned int fl0id, unsigned int fl1id)
+{
+	struct fw_iq_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_IQ_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_EXEC | V_FW_IQ_CMD_PFN(pf) |
+			    V_FW_IQ_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(F_FW_IQ_CMD_FREE | FW_LEN16(c));
+	c.type_to_iqandstindex = htonl(V_FW_IQ_CMD_TYPE(iqtype));
+	c.iqid = htons(iqid);
+	c.fl0id = htons(fl0id);
+	c.fl1id = htons(fl1id);
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_eth_eq_free - free an Ethernet egress queue
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@pf: the PF owning the queue
+ * 	@vf: the VF owning the queue
+ * 	@eqid: egress queue id
+ *
+ * 	Frees an Ethernet egress queue.
+ */
+int t4_eth_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		   unsigned int vf, unsigned int eqid)
+{
+	struct fw_eq_eth_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_EQ_ETH_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_EXEC | V_FW_EQ_ETH_CMD_PFN(pf) |
+			    V_FW_EQ_ETH_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(F_FW_EQ_ETH_CMD_FREE | FW_LEN16(c));
+	c.eqid_pkd = htonl(V_FW_EQ_ETH_CMD_EQID(eqid));
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_ctrl_eq_free - free a control egress queue
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@pf: the PF owning the queue
+ * 	@vf: the VF owning the queue
+ * 	@eqid: egress queue id
+ *
+ * 	Frees a control egress queue.
+ */
+int t4_ctrl_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		    unsigned int vf, unsigned int eqid)
+{
+	struct fw_eq_ctrl_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_EQ_CTRL_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_EXEC | V_FW_EQ_CTRL_CMD_PFN(pf) |
+			    V_FW_EQ_CTRL_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(F_FW_EQ_CTRL_CMD_FREE | FW_LEN16(c));
+	c.cmpliqid_eqid = htonl(V_FW_EQ_CTRL_CMD_EQID(eqid));
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_ofld_eq_free - free an offload egress queue
+ * 	@adap: the adapter
+ * 	@mbox: mailbox to use for the FW command
+ * 	@pf: the PF owning the queue
+ * 	@vf: the VF owning the queue
+ * 	@eqid: egress queue id
+ *
+ * 	Frees an offload egress queue.
+ */
+int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		    unsigned int vf, unsigned int eqid)
+{
+	struct fw_eq_ofld_cmd c;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_EQ_OFLD_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_EXEC | V_FW_EQ_OFLD_CMD_PFN(pf) |
+			    V_FW_EQ_OFLD_CMD_VFN(vf));
+	c.alloc_to_len16 = htonl(F_FW_EQ_OFLD_CMD_FREE | FW_LEN16(c));
+	c.eqid_pkd = htonl(V_FW_EQ_OFLD_CMD_EQID(eqid));
+	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
+ * 	t4_handle_fw_rpl - process a FW reply message
+ * 	@adap: the adapter
+ * 	@rpl: start of the FW message
+ *
+ * 	Processes a FW message, such as link state change messages.
+ */
+int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl)
+{
+	u8 opcode = *(const u8 *)rpl;
+
+	if (opcode == FW_PORT_CMD) {    /* link/module state change message */
+		int speed = 0, fc = 0;
+		const struct fw_port_cmd *p = (void *)rpl;
+		int chan = G_FW_PORT_CMD_PORTID(ntohl(p->op_to_portid));
+		int port = adap->chan_map[chan];
+		struct port_info *pi = adap2pinfo(adap, port);
+		struct link_config *lc = &pi->link_cfg;
+		u32 stat = ntohl(p->u.info.lstatus_to_modtype);
+		int link_ok = (stat & F_FW_PORT_CMD_LSTATUS) != 0;
+		u32 mod = G_FW_PORT_CMD_MODTYPE(stat);
+
+		if (stat & F_FW_PORT_CMD_RXPAUSE)
+			fc |= PAUSE_RX;
+		if (stat & F_FW_PORT_CMD_TXPAUSE)
+			fc |= PAUSE_TX;
+		if (stat & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_100M))
+			speed = SPEED_100;
+		else if (stat & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_1G))
+			speed = SPEED_1000;
+		else if (stat & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_10G))
+			speed = SPEED_10000;
+
+		if (link_ok != lc->link_ok || speed != lc->speed ||
+		    fc != lc->fc) {                    /* something changed */
+			lc->link_ok = link_ok;
+			lc->speed = speed;
+			lc->fc = fc;
+			t4_os_link_changed(adap, port, link_ok);
+		}
+		if (mod != pi->mod_type) {
+			pi->mod_type = mod;
+			t4_os_portmod_changed(adap, port);
+		}
+	}
+	return 0;
+}
+
+/**
+ *	get_pci_mode - determine a card's PCI mode
+ *	@adapter: the adapter
+ *	@p: where to store the PCI settings
+ *
+ *	Determines a card's PCI mode and associated parameters, such as speed
+ *	and width.
+ */
+static void __devinit get_pci_mode(struct adapter *adapter,
+				   struct pci_params *p)
+{
+	u16 val;
+	u32 pcie_cap;
+
+	pcie_cap = pci_find_capability(adapter->pdev, PCI_CAP_ID_EXP);
+	if (pcie_cap) {
+		pci_read_config_word(adapter->pdev, pcie_cap + PCI_EXP_LNKSTA,
+				     &val);
+		p->speed = val & PCI_EXP_LNKSTA_CLS;
+		p->width = (val & PCI_EXP_LNKSTA_NLW) >> 4;
+	}
+}
+
+/**
+ *	init_link_config - initialize a link's SW state
+ *	@lc: structure holding the link state
+ *	@caps: link capabilities
+ *
+ *	Initializes the SW state maintained for each link, including the link's
+ *	capabilities and default speed/flow-control/autonegotiation settings.
+ */
+static void __devinit init_link_config(struct link_config *lc,
+				       unsigned int caps)
+{
+	lc->supported = caps;
+	lc->requested_speed = 0;
+	lc->speed = 0;
+	lc->requested_fc = lc->fc = PAUSE_RX | PAUSE_TX;
+	if (lc->supported & FW_PORT_CAP_ANEG) {
+		lc->advertising = lc->supported & ADVERT_MASK;
+		lc->autoneg = AUTONEG_ENABLE;
+		lc->requested_fc |= PAUSE_AUTONEG;
+	} else {
+		lc->advertising = 0;
+		lc->autoneg = AUTONEG_DISABLE;
+	}
+}
+
+static int __devinit wait_dev_ready(struct adapter *adap)
+{
+	if (t4_read_reg(adap, A_PL_WHOAMI) != 0xffffffff)
+		return 0;
+	msleep(500);
+	return t4_read_reg(adap, A_PL_WHOAMI) != 0xffffffff ? 0 : -EIO;
+}
+
+/**
+ *	t4_prep_adapter - prepare SW and HW for operation
+ *	@adapter: the adapter
+ *
+ *	Initialize adapter SW state for the various HW modules and set initial
+ *	values for some adapter tunables.
+ */
+int __devinit t4_prep_adapter(struct adapter *adapter)
+{
+	int ret;
+
+	ret = wait_dev_ready(adapter);
+	if (ret < 0)
+		return ret;
+
+	get_pci_mode(adapter, &adapter->params.pci);
+	adapter->params.rev = t4_read_reg(adapter, A_PL_REV);
+
+	ret = get_vpd_params(adapter, &adapter->params.vpd);
+	if (ret < 0)
+		return ret;
+
+	init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd);
+
+	/*
+	 * Default port for debugging in case we can't reach FW.
+	 */
+	adapter->params.nports = 1;
+	adapter->params.portvec = 1;
+	return 0;
+}
+
+int __devinit t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
+{
+	u8 addr[6];
+	int ret, i, j = 0;
+	struct fw_port_cmd c;
+
+	memset(&c, 0, sizeof(c));
+
+	for_each_port(adap, i) {
+		unsigned int rss_size;
+		struct port_info *p = adap2pinfo(adap, i);
+
+		while ((adap->params.portvec & (1 << j)) == 0)
+			j++;
+
+		c.op_to_portid = htonl(V_FW_CMD_OP(FW_PORT_CMD) |
+				       F_FW_CMD_REQUEST | F_FW_CMD_READ |
+				       V_FW_PORT_CMD_PORTID(j));
+		c.action_to_len16 = htonl(
+			V_FW_PORT_CMD_ACTION(FW_PORT_ACTION_GET_PORT_INFO) |
+			FW_LEN16(c));
+		ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+		if (ret)
+			return ret;
+
+		ret = t4_alloc_vi(adap, mbox, j, pf, vf, 1, addr, &rss_size);
+		if (ret < 0)
+			return ret;
+
+		p->viid = ret;
+		p->tx_chan = j;
+		p->lport = j;
+		p->rss_size = rss_size;
+		memcpy(adap->port[i]->dev_addr, addr, ETH_ALEN);
+		memcpy(adap->port[i]->perm_addr, addr, ETH_ALEN);
+
+		ret = ntohl(c.u.info.lstatus_to_modtype);
+		p->mdio_addr = (ret & F_FW_PORT_CMD_MDIOCAP) ?
+			G_FW_PORT_CMD_MDIOADDR(ret) : -1;
+		p->port_type = G_FW_PORT_CMD_PTYPE(ret);
+		p->mod_type = G_FW_PORT_CMD_MODTYPE(ret);
+
+		init_link_config(&p->link_cfg, ntohs(c.u.info.pcap));
+		j++;
+	}
+	return 0;
+}
diff --git a/drivers/net/cxgb4/t4_hw.h b/drivers/net/cxgb4/t4_hw.h
new file mode 100644
index 0000000..9e2228a
--- /dev/null
+++ b/drivers/net/cxgb4/t4_hw.h
@@ -0,0 +1,129 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __T4_HW_H
+#define __T4_HW_H
+
+#include <linux/types.h>
+
+enum {
+	NCHAN          = 4,     /* # of HW channels */
+	MAX_MTU        = 9600,  /* max MAC MTU, excluding header + FCS */
+	EEPROMSIZE     = 17408, /* Serial EEPROM physical size */
+	EEPROMVSIZE    = 32768, /* Serial EEPROM virtual address space size */
+	RSS_NENTRIES   = 2048,  /* # of entries in RSS mapping table */
+	TCB_SIZE       = 128,   /* TCB size */
+	NMTUS          = 16,    /* size of MTU table */
+	NCCTRL_WIN     = 32,    /* # of congestion control windows */
+	NEXACT_MAC     = 336,   /* # of exact MAC address filters */
+	L2T_SIZE       = 4096,  /* # of L2T entries */
+	MBOX_LEN       = 64,    /* mailbox size in bytes */
+	TRACE_LEN      = 112,   /* length of trace data and mask */
+	FILTER_OPT_LEN = 36,    /* filter tuple width for optional components */
+	NWOL_PAT       = 8,     /* # of WoL patterns */
+	WOL_PAT_LEN    = 128,   /* length of WoL patterns */
+};
+
+enum {
+	SF_PAGE_SIZE = 256,           /* serial flash page size */
+	SF_SEC_SIZE = 64 * 1024,      /* serial flash sector size */
+	SF_SIZE = SF_SEC_SIZE * 16,   /* serial flash size */
+};
+
+enum { RSP_TYPE_FLBUF, RSP_TYPE_CPL, RSP_TYPE_INTR }; /* response entry types */
+
+enum { MBOX_OWNER_NONE, MBOX_OWNER_FW, MBOX_OWNER_DRV };    /* mailbox owners */
+
+enum {
+	SGE_MAX_WR_LEN = 512,     /* max WR size in bytes */
+	SGE_CTXT_SIZE = 24,       /* size of SGE context */
+	SGE_NTIMERS = 6,          /* # of interrupt holdoff timer values */
+	SGE_NCOUNTERS = 4,        /* # of interrupt packet counter values */
+};
+
+struct sge_qstat {                /* data written to SGE queue status entries */
+	__be32 qid;
+	__be16 cidx;
+	__be16 pidx;
+};
+
+#define S_QSTAT_PIDX    0
+#define M_QSTAT_PIDX    0xffff
+#define G_QSTAT_PIDX(x) (((x) >> S_QSTAT_PIDX) & M_QSTAT_PIDX)
+
+#define S_QSTAT_CIDX    16
+#define M_QSTAT_CIDX    0xffff
+#define G_QSTAT_CIDX(x) (((x) >> S_QSTAT_CIDX) & M_QSTAT_CIDX)
+
+/*
+ * Structure for last 128 bits of response descriptors
+ */
+struct rsp_ctrl {
+	__be32 hdrbuflen_pidx;
+	__be32 pldbuflen_qid;
+	union {
+		u8 type_gen;
+		__be64 last_flit;
+	};
+};
+
+#define S_RSPD_NEWBUF    31
+#define V_RSPD_NEWBUF(x) ((x) << S_RSPD_NEWBUF)
+#define F_RSPD_NEWBUF    V_RSPD_NEWBUF(1U)
+
+#define S_RSPD_LEN    0
+#define M_RSPD_LEN    0x7fffffff
+#define V_RSPD_LEN(x) ((x) << S_RSPD_LEN)
+#define G_RSPD_LEN(x) (((x) >> S_RSPD_LEN) & M_RSPD_LEN)
+
+#define S_RSPD_GEN    7
+#define V_RSPD_GEN(x) ((x) << S_RSPD_GEN)
+#define F_RSPD_GEN    V_RSPD_GEN(1U)
+
+#define S_RSPD_TYPE    4
+#define M_RSPD_TYPE    0x3
+#define V_RSPD_TYPE(x) ((x) << S_RSPD_TYPE)
+#define G_RSPD_TYPE(x) (((x) >> S_RSPD_TYPE) & M_RSPD_TYPE)
+
+/* Rx queue interrupt deferral fields: counter enable and timer index */
+#define S_QINTR_CNT_EN    0
+#define V_QINTR_CNT_EN(x) ((x) << S_QINTR_CNT_EN)
+#define F_QINTR_CNT_EN    V_QINTR_CNT_EN(1U)
+
+#define S_QINTR_TIMER_IDX    1
+#define M_QINTR_TIMER_IDX    0x7
+#define V_QINTR_TIMER_IDX(x) ((x) << S_QINTR_TIMER_IDX)
+#define G_QINTR_TIMER_IDX(x) (((x) >> S_QINTR_TIMER_IDX) & M_QINTR_TIMER_IDX)
+
+#endif /* __T4_HW_H */
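
These accessors follow the convention used throughout the T4 headers: S_x is a
field's bit offset, M_x its mask, V_x(v) places a value into the field, F_x is
a single-bit flag, and G_x(w) extracts the field from a word.  A brief
illustration (example_qintr_params is hypothetical and the values are made up):

	static inline unsigned int example_qintr_params(void)
	{
		/* interrupt-deferral value: timer index 3, counter enabled */
		u32 v = V_QINTR_TIMER_IDX(3) | F_QINTR_CNT_EN;

		/* G_QINTR_TIMER_IDX() recovers the 3 programmed above */
		return G_QINTR_TIMER_IDX(v);
	}
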
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH net-next 4/7] cxgb4: Add packet queues and packet DMA code
  2010-02-18  1:37     ` [PATCH net-next 3/7] cxgb4: Add HW and FW support code Dimitris Michailidis
@ 2010-02-18  1:37       ` Dimitris Michailidis
  2010-02-18  1:37         ` [PATCH net-next 5/7] cxgb4: Add remaining driver headers and L2T management Dimitris Michailidis
  0 siblings, 1 reply; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/cxgb4/sge.c | 2463 +++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 2463 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/sge.c

diff --git a/drivers/net/cxgb4/sge.c b/drivers/net/cxgb4/sge.c
new file mode 100644
index 0000000..3a7bb1a
--- /dev/null
+++ b/drivers/net/cxgb4/sge.c
@@ -0,0 +1,2463 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/dma-mapping.h>
+#include <net/ipv6.h>
+#include <net/tcp.h>
+#include "cxgb4.h"
+#include "t4_regs.h"
+#include "t4_msg.h"
+#include "t4fw_api.h"
+
+/*
+ * Rx buffer size.  We use largish buffers if possible but settle for single
+ * pages under memory shortage.
+ */
+#if PAGE_SHIFT >= 16
+# define FL_PG_ORDER 0
+#else
+# define FL_PG_ORDER (16 - PAGE_SHIFT)
+#endif
+
+/* RX_PULL_LEN should be <= RX_COPY_THRES */
+#define RX_COPY_THRES    256
+#define RX_PULL_LEN      128
+
+/*
+ * Main body length for sk_buffs used for Rx Ethernet packets with fragments.
+ * Should be >= RX_PULL_LEN but possibly bigger to give pskb_may_pull some room.
+ */
+#define RX_PKT_SKB_LEN   512
+
+/* Ethernet header padding prepended to RX_PKTs */
+#define RX_PKT_PAD 2
+
+/*
+ * Max number of Tx descriptors we clean up at a time.  Should be modest as
+ * freeing skbs isn't cheap and it happens while holding locks.  As long as we
+ * free packets faster than they arrive we will eventually catch up and keep
+ * the amortized cost reasonable.  Must be >= 2 * TXQ_STOP_THRES.
+ */
+#define MAX_TX_RECLAIM 16
+
+/*
+ * Max number of Rx buffers we replenish at a time.  Again keep this modest,
+ * allocating buffers isn't cheap either.
+ */
+#define MAX_RX_REFILL 16U
+
+/*
+ * Period of the Rx queue check timer.  This timer is infrequent as it has
+ * something to do only when the system experiences severe memory shortage.
+ */
+#define RX_QCHECK_PERIOD (HZ / 2)
+
+/*
+ * Period of the Tx queue check timer.
+ */
+#define TX_QCHECK_PERIOD (HZ / 2)
+
+/*
+ * Timer index used when backing off due to memory shortage.
+ */
+#define NOMEM_TMR_IDX (SGE_NTIMERS - 1)
+
+/*
+ * An FL with <= FL_STARVE_THRES buffers is starving and a periodic timer will
+ * attempt to refill it.
+ */
+#define FL_STARVE_THRES 4
+
+/*
+ * Suspend an Ethernet Tx queue with fewer available descriptors than this.
+ * This is the same as calc_tx_descs() for a TSO packet with
+ * nr_frags == MAX_SKB_FRAGS.
+ */
+#define ETHTXQ_STOP_THRES \
+	(1 + DIV_ROUND_UP((3 * MAX_SKB_FRAGS) / 2 + (MAX_SKB_FRAGS & 1), 8))
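
A worked example of that equivalence, assuming 4 KB pages and therefore
MAX_SKB_FRAGS == 18 (the numbers below depend on that assumption):

	/*
	 *   calc_tx_flits() for a max-fragment TSO packet:
	 *     sgl_len(18 + 1)        = 3*18/2 + (18 & 1) + 2 = 29 flits
	 *     + WR and CPL headers   =  4 flits
	 *     + LSO header           =  2 flits
	 *     total                  = 35 flits -> DIV_ROUND_UP(35, 8) = 5
	 *   while the macro above gives 1 + DIV_ROUND_UP(27, 8) = 5 as well.
	 */
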
+
+/*
+ * Suspension threshold for non-Ethernet Tx queues.  We require enough room
+ * for a full sized WR.
+ */
+#define TXQ_STOP_THRES (SGE_MAX_WR_LEN / sizeof(struct tx_desc))
+
+/*
+ * Max Tx descriptor space we allow for an Ethernet packet to be inlined
+ * into a WR.
+ */
+#define MAX_IMM_TX_PKT_LEN 128
+
+/*
+ * Max size of a WR sent through a control Tx queue.
+ */
+#define MAX_CTRL_WR_LEN SGE_MAX_WR_LEN
+
+enum {
+	/* packet alignment in FL buffers */
+	FL_ALIGN = L1_CACHE_BYTES < 32 ? 32 : L1_CACHE_BYTES,
+	/* egress status entry size */
+	STAT_LEN = L1_CACHE_BYTES > 64 ? 128 : 64
+};
+
+struct tx_sw_desc {                /* SW state per Tx descriptor */
+	struct sk_buff *skb;
+	struct ulptx_sgl *sgl;
+};
+
+struct rx_sw_desc {                /* SW state per Rx descriptor */
+	struct page *page;
+	dma_addr_t dma_addr;
+};
+
+/*
+ * The low bits of rx_sw_desc.dma_addr have special meaning.
+ */
+enum {
+	RX_LARGE_BUF    = 1 << 0, /* buffer is larger than PAGE_SIZE */
+	RX_UNMAPPED_BUF = 1 << 1, /* buffer is not mapped */
+};
+
+static inline dma_addr_t get_buf_addr(const struct rx_sw_desc *d)
+{
+	return d->dma_addr & ~(dma_addr_t)(RX_LARGE_BUF | RX_UNMAPPED_BUF);
+}
+
+static inline bool is_buf_mapped(const struct rx_sw_desc *d)
+{
+	return !(d->dma_addr & RX_UNMAPPED_BUF);
+}
+
+/**
+ *	txq_avail - return the number of available slots in a Tx queue
+ *	@q: the Tx queue
+ *
+ *	Returns the number of descriptors in a Tx queue available to write new
+ *	packets.
+ */
+static inline unsigned int txq_avail(const struct sge_txq *q)
+{
+	return q->size - 1 - q->in_use;
+}
+
+/**
+ *	fl_cap - return the capacity of a free-buffer list
+ *	@fl: the FL
+ *
+ *	Returns the capacity of a free-buffer list.  The capacity is less than
+ *	the size because one descriptor needs to be left unpopulated; otherwise
+ *	HW will think the FL is empty.
+ */
+static inline unsigned int fl_cap(const struct sge_fl *fl)
+{
+	return fl->size - 8;   /* 1 descriptor = 8 buffers */
+}
+
+static inline bool fl_starving(const struct sge_fl *fl)
+{
+	return fl->avail - fl->pend_cred <= FL_STARVE_THRES;
+}
+
+static int map_skb(struct device *dev, const struct sk_buff *skb,
+		   dma_addr_t *addr)
+{
+	const skb_frag_t *fp, *end;
+	const struct skb_shared_info *si;
+
+	*addr = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, *addr))
+		goto out_err;
+
+	si = skb_shinfo(skb);
+	end = &si->frags[si->nr_frags];
+
+	for (fp = si->frags; fp < end; fp++) {
+		*++addr = dma_map_page(dev, fp->page, fp->page_offset, fp->size,
+				       DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, *addr))
+			goto unwind;
+	}
+	return 0;
+
+unwind:
+	while (fp-- > si->frags)
+		dma_unmap_page(dev, *--addr, fp->size, DMA_TO_DEVICE);
+
+	dma_unmap_single(dev, addr[-1], skb_headlen(skb), DMA_TO_DEVICE);
+out_err:
+	return -ENOMEM;
+}
+
+static void unmap_skb(struct device *dev, const struct sk_buff *skb,
+		      const dma_addr_t *addr)
+{
+	const skb_frag_t *fp, *end;
+	const struct skb_shared_info *si;
+
+	dma_unmap_single(dev, *addr++, skb_headlen(skb), DMA_TO_DEVICE);
+
+	si = skb_shinfo(skb);
+	end = &si->frags[si->nr_frags];
+	for (fp = si->frags; fp < end; fp++)
+		dma_unmap_page(dev, *addr++, fp->size, DMA_TO_DEVICE);
+}
+
+static void unmap_sgl(struct device *dev, const struct sk_buff *skb,
+		      const struct ulptx_sgl *sgl, const struct sge_txq *q)
+{
+	const struct ulptx_sge_pair *p;
+	unsigned int nfrags = skb_shinfo(skb)->nr_frags;
+
+	if (likely(skb_headlen(skb)))
+		dma_unmap_single(dev, be64_to_cpu(sgl->addr0), ntohl(sgl->len0),
+				 DMA_TO_DEVICE);
+	else {
+		dma_unmap_page(dev, be64_to_cpu(sgl->addr0), ntohl(sgl->len0),
+			       DMA_TO_DEVICE);
+		nfrags--;
+	}
+
+	/*
+	 * the complexity below is because of the possibility of a wrap-around
+	 * in the middle of an SGL
+	 */
+	for (p = sgl->sge; nfrags >= 2; nfrags -= 2) {
+		if (likely((u8 *)(p + 1) <= (u8 *)q->stat)) {
+unmap:			dma_unmap_page(dev, be64_to_cpu(p->addr[0]),
+				       ntohl(p->len[0]), DMA_TO_DEVICE);
+			dma_unmap_page(dev, be64_to_cpu(p->addr[1]),
+				       ntohl(p->len[1]), DMA_TO_DEVICE);
+			p++;
+		} else if ((u8 *)p == (u8 *)q->stat) {
+			p = (const struct ulptx_sge_pair *)q->desc;
+			goto unmap;
+		} else if ((u8 *)p + 8 == (u8 *)q->stat) {
+			const __be64 *addr = (const __be64 *)q->desc;
+
+			dma_unmap_page(dev, be64_to_cpu(addr[0]),
+				       ntohl(p->len[0]), DMA_TO_DEVICE);
+			dma_unmap_page(dev, be64_to_cpu(addr[1]),
+				       ntohl(p->len[1]), DMA_TO_DEVICE);
+			p = (const struct ulptx_sge_pair *)&addr[2];
+		} else {
+			const __be64 *addr = (const __be64 *)q->desc;
+
+			dma_unmap_page(dev, be64_to_cpu(p->addr[0]),
+				       ntohl(p->len[0]), DMA_TO_DEVICE);
+			dma_unmap_page(dev, be64_to_cpu(addr[0]),
+				       ntohl(p->len[1]), DMA_TO_DEVICE);
+			p = (const struct ulptx_sge_pair *)&addr[1];
+		}
+	}
+	if (nfrags) {
+		__be64 addr;
+
+		if ((u8 *)p == (u8 *)q->stat)
+			p = (const struct ulptx_sge_pair *)q->desc;
+		addr = (u8 *)p + 16 <= (u8 *)q->stat ? p->addr[0] :
+						       *(const __be64 *)q->desc;
+		dma_unmap_page(dev, be64_to_cpu(addr), ntohl(p->len[0]),
+			       DMA_TO_DEVICE);
+	}
+}
+
+/**
+ *	need_skb_unmap - does the platform need unmapping of sk_buffs?
+ *
+ *	Returns true if the platform needs sk_buff unmapping.  The result is a
+ *	compile-time constant, so the compiler can optimize away the unmapping
+ *	code on platforms that do not need it.
+ */
+static inline int need_skb_unmap(void)
+{
+	/*
+	 * This structure is used to tell if the platform needs buffer
+	 * unmapping by checking if DECLARE_PCI_UNMAP_ADDR defines anything.
+	 */
+	struct dummy {
+		DECLARE_PCI_UNMAP_ADDR(addr);
+	};
+
+	return sizeof(struct dummy) != 0;
+}
+
+/**
+ *	free_tx_desc - reclaims Tx descriptors and their buffers
+ *	@adapter: the adapter
+ *	@q: the Tx queue to reclaim descriptors from
+ *	@n: the number of descriptors to reclaim
+ *	@unmap: whether the buffers should be unmapped for DMA
+ *
+ *	Reclaims Tx descriptors from an SGE Tx queue and frees the associated
+ *	Tx buffers.  Called with the Tx queue lock held.
+ */
+static void free_tx_desc(struct adapter *adap, struct sge_txq *q,
+			 unsigned int n, bool unmap)
+{
+	struct tx_sw_desc *d;
+	unsigned int cidx = q->cidx;
+	struct device *dev = adap->pdev_dev;
+
+	const int need_unmap = need_skb_unmap() && unmap;
+
+	d = &q->sdesc[cidx];
+	while (n--) {
+		if (d->skb) {                       /* an SGL is present */
+			if (need_unmap)
+				unmap_sgl(dev, d->skb, d->sgl, q);
+			kfree_skb(d->skb);
+			d->skb = NULL;
+		}
+		++d;
+		if (++cidx == q->size) {
+			cidx = 0;
+			d = q->sdesc;
+		}
+	}
+	q->cidx = cidx;
+}
+
+/*
+ * Return the number of reclaimable descriptors in a Tx queue.
+ */
+static inline int reclaimable(const struct sge_txq *q)
+{
+	int hw_cidx = ntohs(q->stat->cidx);
+	hw_cidx -= q->cidx;
+	return hw_cidx < 0 ? hw_cidx + q->size : hw_cidx;
+}
+
+/**
+ *	reclaim_completed_tx - reclaims completed Tx descriptors
+ *	@adap: the adapter
+ *	@q: the Tx queue to reclaim completed descriptors from
+ *	@unmap: whether the buffers should be unmapped for DMA
+ *
+ *	Reclaims Tx descriptors that the SGE has indicated it has processed,
+ *	and frees the associated buffers if possible.  Called with the Tx
+ *	queue locked.
+ */
+static inline void reclaim_completed_tx(struct adapter *adap, struct sge_txq *q,
+					bool unmap)
+{
+	int avail = reclaimable(q);
+
+	if (avail) {
+		/*
+		 * Limit the amount of clean up work we do at a time to keep
+		 * the Tx lock hold time O(1).
+		 */
+		if (avail > MAX_TX_RECLAIM)
+			avail = MAX_TX_RECLAIM;
+
+		free_tx_desc(adap, q, avail, unmap);
+		q->in_use -= avail;
+	}
+}
+
+static inline int get_buf_size(const struct rx_sw_desc *d)
+{
+#if FL_PG_ORDER > 0
+	return (d->dma_addr & RX_LARGE_BUF) ? (PAGE_SIZE << FL_PG_ORDER) :
+					      PAGE_SIZE;
+#else
+	return PAGE_SIZE;
+#endif
+}
+
+/**
+ *	free_rx_bufs - free the Rx buffers on an SGE free list
+ *	@adap: the adapter
+ *	@q: the SGE free list to free buffers from
+ *	@n: how many buffers to free
+ *
+ *	Release the next @n buffers on an SGE free-buffer Rx queue.   The
+ *	buffers must be made inaccessible to HW before calling this function.
+ */
+static void free_rx_bufs(struct adapter *adap, struct sge_fl *q, int n)
+{
+	while (n--) {
+		struct rx_sw_desc *d = &q->sdesc[q->cidx];
+
+		if (is_buf_mapped(d))
+			dma_unmap_page(adap->pdev_dev, get_buf_addr(d),
+				       get_buf_size(d), PCI_DMA_FROMDEVICE);
+		put_page(d->page);
+		d->page = NULL;
+		if (++q->cidx == q->size)
+			q->cidx = 0;
+		q->avail--;
+	}
+}
+
+/**
+ *	unmap_rx_buf - unmap the current Rx buffer on an SGE free list
+ *	@adap: the adapter
+ *	@q: the SGE free list
+ *
+ *	Unmap the current buffer on an SGE free-buffer Rx queue.   The
+ *	buffer must be made inaccessible to HW before calling this function.
+ *
+ *	This is similar to @free_rx_bufs above but does not free the buffer.
+ *	Do note that the FL still loses any further access to the buffer.
+ */
+static void unmap_rx_buf(struct adapter *adap, struct sge_fl *q)
+{
+	struct rx_sw_desc *d = &q->sdesc[q->cidx];
+
+	if (is_buf_mapped(d))
+		dma_unmap_page(adap->pdev_dev, get_buf_addr(d),
+			       get_buf_size(d), PCI_DMA_FROMDEVICE);
+	d->page = NULL;
+	if (++q->cidx == q->size)
+		q->cidx = 0;
+	q->avail--;
+}
+
+static inline void ring_fl_db(struct adapter *adap, struct sge_fl *q)
+{
+	if (q->pend_cred >= 8) {
+		t4_write_reg(adap, MYPF_REG(A_SGE_PF_KDOORBELL), F_DBPRIO |
+			     V_QID(q->cntxt_id) | V_PIDX(q->pend_cred / 8));
+		q->pend_cred &= 7;
+	}
+}
+
+static inline void set_rx_sw_desc(struct rx_sw_desc *sd, struct page *pg,
+				  dma_addr_t mapping)
+{
+	sd->page = pg;
+	sd->dma_addr = mapping;      /* includes size low bits */
+}
+
+/**
+ *	refill_fl - refill an SGE Rx buffer ring
+ *	@adap: the adapter
+ *	@q: the ring to refill
+ *	@n: the number of new buffers to allocate
+ *	@gfp: the gfp flags for the allocations
+ *
+ *	(Re)populate an SGE free-buffer queue with up to @n new packet buffers,
+ *	allocated with the supplied gfp flags.  The caller must assure that
+ *	@n does not exceed the queue's capacity.  Returns the number of buffers
+ *	allocated.
+ */
+static unsigned int refill_fl(struct adapter *adap, struct sge_fl *q, int n,
+			      gfp_t gfp)
+{
+	struct page *pg;
+	dma_addr_t mapping;
+	unsigned int cred = q->avail;
+	__be64 *d = &q->desc[q->pidx];
+	struct rx_sw_desc *sd = &q->sdesc[q->pidx];
+
+	gfp |= __GFP_NOWARN;         /* failures are expected */
+
+#if FL_PG_ORDER > 0
+	/*
+	 * Prefer large buffers
+	 */
+	while (n) {
+		pg = alloc_pages(gfp | __GFP_COMP, FL_PG_ORDER);
+		if (unlikely(!pg)) {
+			q->large_alloc_failed++;
+			break;       /* fall back to single pages */
+		}
+
+		mapping = dma_map_page(adap->pdev_dev, pg, 0,
+				       PAGE_SIZE << FL_PG_ORDER,
+				       PCI_DMA_FROMDEVICE);
+		if (unlikely(dma_mapping_error(adap->pdev_dev, mapping))) {
+			__free_pages(pg, FL_PG_ORDER);
+			goto out;   /* do not try small pages for this error */
+		}
+		mapping |= RX_LARGE_BUF;
+		*d++ = cpu_to_be64(mapping);
+
+		set_rx_sw_desc(sd, pg, mapping);
+		sd++;
+
+		q->avail++;
+		if (++q->pidx == q->size) {
+			q->pidx = 0;
+			sd = q->sdesc;
+			d = q->desc;
+		}
+		n--;
+	}
+#endif
+
+	while (n--) {
+		pg = __netdev_alloc_page(adap->port[0], gfp);
+		if (unlikely(!pg)) {
+			q->alloc_failed++;
+			break;
+		}
+
+		mapping = dma_map_page(adap->pdev_dev, pg, 0, PAGE_SIZE,
+				       PCI_DMA_FROMDEVICE);
+		if (unlikely(dma_mapping_error(adap->pdev_dev, mapping))) {
+			netdev_free_page(adap->port[0], pg);
+			break;
+		}
+		*d++ = cpu_to_be64(mapping);
+
+		set_rx_sw_desc(sd, pg, mapping);
+		sd++;
+
+		q->avail++;
+		if (++q->pidx == q->size) {
+			q->pidx = 0;
+			sd = q->sdesc;
+			d = q->desc;
+		}
+	}
+
+out:	cred = q->avail - cred;
+	q->pend_cred += cred;
+	ring_fl_db(adap, q);
+	return cred;
+}
+
+static inline void __refill_fl(struct adapter *adap, struct sge_fl *fl)
+{
+	refill_fl(adap, fl, min(MAX_RX_REFILL, fl_cap(fl) - fl->avail),
+		  GFP_ATOMIC);
+}
+
+/**
+ *	replenish_fl - refill an SGE Rx buffer ring and check for starvation
+ *	@adap: the adapter
+ *	@fl: the ring to refill
+ *	@gfp: the gfp flags for the allocations
+ *
+ *	Attempt to refill an FL to capacity.  If afterwards the queue is found
+ *	critically low, mark it as starving in the bitmap of starving FLs.
+ */
+static void replenish_fl(struct adapter *adap, struct sge_fl *fl, gfp_t gfp)
+{
+	refill_fl(adap, fl, fl_cap(fl) - fl->avail, gfp);
+	if (unlikely(fl_starving(fl)))
+		set_bit(fl->cntxt_id, adap->sge.starving_fl);
+}
+
+/**
+ *	alloc_ring - allocate resources for an SGE descriptor ring
+ *	@dev: the PCI device's core device
+ *	@nelem: the number of descriptors
+ *	@elem_size: the size of each descriptor
+ *	@sw_size: the size of the SW state associated with each ring element
+ *	@phys: the physical address of the allocated ring
+ *	@metadata: address of the array holding the SW state for the ring
+ *	@stat_size: extra space in HW ring for status information
+ *
+ *	Allocates resources for an SGE descriptor ring, such as Tx queues,
+ *	free buffer lists, or response queues.  Each SGE ring requires
+ *	space for its HW descriptors plus, optionally, space for the SW state
+ *	associated with each HW entry (the metadata).  The function returns
+ *	three values: the virtual address for the HW ring (the return value
+ *	of the function), the bus address of the HW ring, and the address
+ *	of the SW ring.
+ */
+static void *alloc_ring(struct device *dev, size_t nelem, size_t elem_size,
+			size_t sw_size, dma_addr_t *phys, void *metadata,
+			size_t stat_size)
+{
+	size_t len = nelem * elem_size + stat_size;
+	void *s = NULL;
+	void *p = dma_alloc_coherent(dev, len, phys, GFP_KERNEL);
+
+	if (!p)
+		return NULL;
+	if (sw_size) {
+		s = kcalloc(nelem, sw_size, GFP_KERNEL);
+
+		if (!s) {
+			dma_free_coherent(dev, len, p, *phys);
+			return NULL;
+		}
+	}
+	if (metadata)
+		*(void **)metadata = s;
+	memset(p, 0, len);
+	return p;
+}
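
A sketch of how the three results come back to a caller (the element count and
the helper name example_alloc_fl_ring are illustrative only, not from the
patch):

	static void *example_alloc_fl_ring(struct adapter *adap,
					   dma_addr_t *phys,
					   struct rx_sw_desc **sdesc)
	{
		/* 1024 8-byte FL descriptors plus a status entry, with one
		 * rx_sw_desc of SW state shadowing each HW descriptor
		 */
		return alloc_ring(adap->pdev_dev, 1024, sizeof(__be64),
				  sizeof(struct rx_sw_desc), phys, sdesc,
				  STAT_LEN);
	}
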
+
+/**
+ *	sgl_len - calculates the size of an SGL of the given capacity
+ *	@n: the number of SGL entries
+ *
+ *	Calculates the number of flits needed for a scatter/gather list that
+ *	can hold the given number of entries.
+ */
+static inline unsigned int sgl_len(unsigned int n)
+{
+	n--;
+	return (3 * n) / 2 + (n & 1) + 2;
+}
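
For reference, a few small inputs (1 flit == 8 bytes):

	/*
	 *	sgl_len(1) == 2   just the ulptx_sgl header (cmd_nsge, len0, addr0)
	 *	sgl_len(2) == 4   header plus a half-used ulptx_sge_pair
	 *	sgl_len(3) == 5   header plus one fully used ulptx_sge_pair
	 */
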
+
+/**
+ *	flits_to_desc - returns the num of Tx descriptors for the given flits
+ *	@n: the number of flits
+ *
+ *	Returns the number of Tx descriptors needed for the supplied number
+ *	of flits.
+ */
+static inline unsigned int flits_to_desc(unsigned int n)
+{
+	BUG_ON(n > SGE_MAX_WR_LEN / 8);
+	return DIV_ROUND_UP(n, 8);
+}
+
+/**
+ *	is_eth_imm - can an Ethernet packet be sent as immediate data?
+ *	@skb: the packet
+ *
+ *	Returns whether an Ethernet packet is small enough to fit as
+ *	immediate data.
+ */
+static inline int is_eth_imm(const struct sk_buff *skb)
+{
+	return skb->len <= MAX_IMM_TX_PKT_LEN - sizeof(struct cpl_tx_pkt);
+}
+
+/**
+ *	calc_tx_flits - calculate the number of flits for a packet Tx WR
+ *	@skb: the packet
+ *
+ * 	Returns the number of flits needed for a Tx WR for the given Ethernet
+ * 	packet, including the needed WR and CPL headers.
+ */
+static inline unsigned int calc_tx_flits(const struct sk_buff *skb)
+{
+	unsigned int flits;
+
+	if (is_eth_imm(skb))
+		return DIV_ROUND_UP(skb->len + sizeof(struct cpl_tx_pkt), 8);
+
+	flits = sgl_len(skb_shinfo(skb)->nr_frags + 1) + 4;
+	if (skb_shinfo(skb)->gso_size)
+		flits += 2;
+	return flits;
+}
+
+/**
+ *	calc_tx_descs - calculate the number of Tx descriptors for a packet
+ *	@skb: the packet
+ *
+ * 	Returns the number of Tx descriptors needed for the given Ethernet
+ * 	packet, including the needed WR and CPL headers.
+ */
+static inline unsigned int calc_tx_descs(const struct sk_buff *skb)
+{
+	return flits_to_desc(calc_tx_flits(skb));
+}
+
+/**
+ *	write_sgl - populate a scatter/gather list for a packet
+ *	@skb: the packet
+ *	@q: the Tx queue we are writing into
+ *	@sgl: starting location for writing the SGL
+ *	@end: points right after the end of the SGL
+ *	@start: start offset into skb main-body data to include in the SGL
+ *	@addr: the list of bus addresses for the SGL elements
+ *
+ *	Generates a gather list for the buffers that make up a packet.
+ *	The caller must provide adequate space for the SGL that will be written.
+ *	The SGL includes all of the packet's page fragments and the data in its
+ *	main body except for the first @start bytes.  @sgl must be 16-byte
+ *	aligned and within a Tx descriptor with available space.  @end points
+ *	right after the end of the SGL but does not account for any potential
+ *	wrap around, i.e., @end > @sgl.
+ */
+static void write_sgl(const struct sk_buff *skb, struct sge_txq *q,
+		      struct ulptx_sgl *sgl, u64 *end, unsigned int start,
+		      const dma_addr_t *addr)
+{
+	unsigned int i, len;
+	struct ulptx_sge_pair *to;
+	const struct skb_shared_info *si = skb_shinfo(skb);
+	unsigned int nfrags = si->nr_frags;
+	struct ulptx_sge_pair buf[MAX_SKB_FRAGS / 2 + 1];
+
+	len = skb_headlen(skb) - start;
+	if (likely(len)) {
+		sgl->len0 = htonl(len);
+		sgl->addr0 = cpu_to_be64(addr[0] + start);
+		nfrags++;
+	} else {
+		sgl->len0 = htonl(si->frags[0].size);
+		sgl->addr0 = cpu_to_be64(addr[1]);
+	}
+
+	sgl->cmd_nsge = htonl(V_ULPTX_CMD(ULP_TX_SC_DSGL) |
+			      V_ULPTX_NSGE(nfrags));
+	if (likely(--nfrags == 0))
+		return;
+	/*
+	 * Most of the complexity below deals with the possibility we hit the
+	 * end of the queue in the middle of writing the SGL.  For this case
+	 * only we create the SGL in a temporary buffer and then copy it.
+	 */
+	to = (u8 *)end > (u8 *)q->stat ? buf : sgl->sge;
+
+	for (i = (nfrags != si->nr_frags); nfrags >= 2; nfrags -= 2, to++) {
+		to->len[0] = cpu_to_be32(si->frags[i].size);
+		to->len[1] = cpu_to_be32(si->frags[++i].size);
+		to->addr[0] = cpu_to_be64(addr[i]);
+		to->addr[1] = cpu_to_be64(addr[++i]);
+	}
+	if (nfrags) {
+		to->len[0] = cpu_to_be32(si->frags[i].size);
+		to->len[1] = cpu_to_be32(0);
+		to->addr[0] = cpu_to_be64(addr[i + 1]);
+	}
+	if (unlikely((u8 *)end > (u8 *)q->stat)) {
+		unsigned int part0 = (u8 *)q->stat - (u8 *)sgl->sge, part1;
+
+		if (likely(part0))
+			memcpy(sgl->sge, buf, part0);
+		part1 = (u8 *)end - (u8 *)q->stat;
+		memcpy(q->desc, (u8 *)buf + part0, part1);
+		end = (void *)q->desc + part1;
+	}
+	if ((uintptr_t)end & 8)           /* 0-pad to multiple of 16 */
+		*(u64 *)end = 0;
+}
+
+/**
+ *	ring_tx_db - check and potentially ring a Tx queue's doorbell
+ *	@adap: the adapter
+ *	@q: the Tx queue
+ *	@n: number of new descriptors to give to HW
+ *
+ *	Ring the doorbell for a Tx queue.
+ */
+static inline void ring_tx_db(struct adapter *adap, struct sge_txq *q, int n)
+{
+	wmb();            /* write descriptors before telling HW */
+	t4_write_reg(adap, MYPF_REG(A_SGE_PF_KDOORBELL),
+		     V_QID(q->cntxt_id) | V_PIDX(n));
+}
+
+/**
+ * 	inline_tx_skb - inline a packet's data into Tx descriptors
+ * 	@skb: the packet
+ * 	@q: the Tx queue where the packet will be inlined
+ * 	@pos: starting position in the Tx queue where to inline the packet
+ *
+ *	Inline a packet's contents directly into Tx descriptors, starting at
+ *	the given position within the Tx DMA ring.
+ *	Most of the complexity of this operation is dealing with wrap arounds
+ *	in the middle of the packet we want to inline.
+ */
+static void inline_tx_skb(const struct sk_buff *skb, const struct sge_txq *q,
+			  void *pos)
+{
+	u64 *p;
+	int left = (void *)q->stat - pos;
+
+	if (likely(skb->len <= left)) {
+		if (likely(!skb->data_len))
+			skb_copy_from_linear_data(skb, pos, skb->len);
+		else
+			skb_copy_bits(skb, 0, pos, skb->len);
+		pos += skb->len;
+	} else {
+		skb_copy_bits(skb, 0, pos, left);
+		skb_copy_bits(skb, left, q->desc, skb->len - left);
+		pos = (void *)q->desc + (skb->len - left);
+	}
+
+	/* 0-pad to multiple of 16 */
+	p = PTR_ALIGN(pos, 8);
+	if ((uintptr_t)p & 8)
+		*p = 0;
+}
+
+/*
+ * Figure out what HW csum a packet wants and return the appropriate control
+ * bits.
+ */
+static u64 hwcsum(const struct sk_buff *skb)
+{
+	int csum_type;
+	const struct iphdr *iph = ip_hdr(skb);
+
+	if (iph->version == 4) {
+		if (iph->protocol == IPPROTO_TCP)
+			csum_type = TX_CSUM_TCPIP;
+		else if (iph->protocol == IPPROTO_UDP)
+			csum_type = TX_CSUM_UDPIP;
+		else {
+nocsum:			/*
+			 * unknown protocol, disable HW csum
+			 * and hope a bad packet is detected
+			 */
+			return F_TXPKT_L4CSUM_DIS;
+		}
+	} else {
+		/*
+		 * this doesn't work with extension headers
+		 */
+		const struct ipv6hdr *ip6h = (const struct ipv6hdr *)iph;
+
+		if (ip6h->nexthdr == IPPROTO_TCP)
+			csum_type = TX_CSUM_TCPIP6;
+		else if (ip6h->nexthdr == IPPROTO_UDP)
+			csum_type = TX_CSUM_UDPIP6;
+		else
+			goto nocsum;
+	}
+
+	if (likely(csum_type >= TX_CSUM_TCPIP))
+		return V_TXPKT_CSUM_TYPE(csum_type) |
+			V_TXPKT_IPHDR_LEN(skb_network_header_len(skb)) |
+			V_TXPKT_ETHHDR_LEN(skb_network_offset(skb) - ETH_HLEN);
+	else {
+		int start = skb_transport_offset(skb);
+
+		return V_TXPKT_CSUM_TYPE(csum_type) |
+			V_TXPKT_CSUM_START(start) |
+			V_TXPKT_CSUM_LOC(start + skb->csum_offset);
+	}
+}
+
+static void eth_txq_stop(struct sge_eth_txq *q)
+{
+	netif_tx_stop_queue(q->txq);
+	q->q.stops++;
+}
+
+static inline void txq_advance(struct sge_txq *q, unsigned int n)
+{
+	q->in_use += n;
+	q->pidx += n;
+	if (q->pidx >= q->size)
+		q->pidx -= q->size;
+}
+
+/**
+ *	t4_eth_xmit - add a packet to an Ethernet Tx queue
+ *	@skb: the packet
+ *	@dev: the egress net device
+ *
+ *	Add a packet to an SGE Ethernet Tx queue.  Runs with softirqs disabled.
+ */
+netdev_tx_t t4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	u32 wr_mid;
+	u64 cntrl, *end;
+	int qidx, credits;
+	unsigned int flits, ndesc;
+	struct adapter *adap;
+	struct sge_eth_txq *q;
+	const struct port_info *pi;
+	struct fw_eth_tx_pkt_wr *wr;
+	struct cpl_tx_pkt_core *cpl;
+	const struct skb_shared_info *ssi;
+	dma_addr_t addr[MAX_SKB_FRAGS + 1];
+
+	/*
+	 * The chip's minimum packet length is 10 octets, but we play it safe
+	 * and reject anything shorter than an Ethernet header.
+	 */
+	if (unlikely(skb->len < ETH_HLEN)) {
+out_free:	dev_kfree_skb(skb);
+		return NETDEV_TX_OK;
+	}
+
+	pi = netdev_priv(dev);
+	adap = pi->adapter;
+	qidx = skb_get_queue_mapping(skb);
+	q = &adap->sge.ethtxq[qidx + pi->first_qset];
+
+	reclaim_completed_tx(adap, &q->q, true);
+
+	flits = calc_tx_flits(skb);
+	ndesc = flits_to_desc(flits);
+	credits = txq_avail(&q->q) - ndesc;
+
+	if (unlikely(credits < 0)) {
+		eth_txq_stop(q);
+		dev_err(adap->pdev_dev,
+			"%s: Tx ring %u full while queue awake!\n",
+			dev->name, qidx);
+		return NETDEV_TX_BUSY;
+	}
+
+	if (!is_eth_imm(skb) &&
+	    unlikely(map_skb(adap->pdev_dev, skb, addr) < 0)) {
+		q->mapping_err++;
+		goto out_free;
+	}
+
+	wr_mid = V_FW_WR_LEN16(DIV_ROUND_UP(flits, 2));
+	if (unlikely(credits < ETHTXQ_STOP_THRES)) {
+		eth_txq_stop(q);
+		wr_mid |= F_FW_WR_EQUEQ | F_FW_WR_EQUIQ;
+	}
+
+	wr = (void *)&q->q.desc[q->q.pidx];
+	wr->equiq_to_len16 = htonl(wr_mid);
+	wr->r3 = cpu_to_be64(0);
+	end = (u64 *)wr + flits;
+
+	ssi = skb_shinfo(skb);
+	if (ssi->gso_size) {
+		struct cpl_tx_pkt_lso *lso = (void *)wr;
+		bool v6 = (ssi->gso_type & SKB_GSO_TCPV6) != 0;
+		int l3hdr_len = skb_network_header_len(skb);
+		int eth_xtra_len = skb_network_offset(skb) - ETH_HLEN;
+
+		wr->op_immdlen = htonl(V_FW_WR_OP(FW_ETH_TX_PKT_WR) |
+				       V_FW_WR_IMMDLEN(sizeof(*lso)));
+		lso->lso_ctrl = htonl(V_LSO_OPCODE(CPL_TX_PKT_LSO) |
+				      F_LSO_FIRST_SLICE | F_LSO_LAST_SLICE |
+				      V_LSO_IPV6(v6) |
+				      V_LSO_ETHHDR_LEN(eth_xtra_len / 4) |
+				      V_LSO_IPHDR_LEN(l3hdr_len / 4) |
+				      V_LSO_TCPHDR_LEN(tcp_hdr(skb)->doff));
+		lso->ipid_ofst = htons(0);
+		lso->mss = htons(ssi->gso_size);
+		lso->seqno_offset = htonl(0);
+		lso->len = htonl(skb->len);
+		cpl = (void *)(lso + 1);
+		cntrl = V_TXPKT_CSUM_TYPE(v6 ? TX_CSUM_TCPIP6 : TX_CSUM_TCPIP) |
+			V_TXPKT_IPHDR_LEN(l3hdr_len) |
+			V_TXPKT_ETHHDR_LEN(eth_xtra_len);
+		q->tso++;
+		q->tx_cso += ssi->gso_segs;
+	} else {
+		int len;
+
+		len = is_eth_imm(skb) ? skb->len + sizeof(*cpl) : sizeof(*cpl);
+		wr->op_immdlen = htonl(V_FW_WR_OP(FW_ETH_TX_PKT_WR) |
+				       V_FW_WR_IMMDLEN(len));
+		cpl = (void *)(wr + 1);
+		if (skb->ip_summed == CHECKSUM_PARTIAL) {
+			cntrl = hwcsum(skb) | F_TXPKT_IPCSUM_DIS;
+			q->tx_cso++;
+		} else
+			cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
+	}
+
+	if (vlan_tx_tag_present(skb)) {
+		q->vlan_ins++;
+		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(vlan_tx_tag_get(skb));
+	}
+
+	cpl->ctrl0 = htonl(V_TXPKT_OPCODE(CPL_TX_PKT_XT) |
+			   V_TXPKT_INTF(pi->tx_chan) | V_TXPKT_PF(0));
+	cpl->pack = htons(0);
+	cpl->len = htons(skb->len);
+	cpl->ctrl1 = cpu_to_be64(cntrl);
+
+	if (is_eth_imm(skb)) {
+		inline_tx_skb(skb, &q->q, cpl + 1);
+		dev_kfree_skb(skb);
+	} else {
+		int last_desc;
+
+		write_sgl(skb, &q->q, (struct ulptx_sgl *)(cpl + 1), end, 0,
+			  addr);
+		skb_orphan(skb);
+
+		last_desc = q->q.pidx + ndesc - 1;
+		if (last_desc >= q->q.size)
+			last_desc -= q->q.size;
+		q->q.sdesc[last_desc].skb = skb;
+		q->q.sdesc[last_desc].sgl = (struct ulptx_sgl *)(cpl + 1);
+	}
+
+	txq_advance(&q->q, ndesc);
+
+	ring_tx_db(adap, &q->q, ndesc);
+	return NETDEV_TX_OK;
+}
+
+/**
+ *	reclaim_completed_tx_imm - reclaim completed control-queue Tx descs
+ *	@q: the SGE control Tx queue
+ *
+ *	This is a variant of reclaim_completed_tx() that is used for Tx queues
+ *	that send only immediate data (presently just the control queues) and
+ *	thus do not have any sk_buffs to release.
+ */
+static inline void reclaim_completed_tx_imm(struct sge_txq *q)
+{
+	int hw_cidx = ntohs(q->stat->cidx);
+	int reclaim = hw_cidx - q->cidx;
+
+	if (reclaim < 0)
+		reclaim += q->size;
+
+	q->in_use -= reclaim;
+	q->cidx = hw_cidx;
+}
+
+/**
+ *	is_imm - check whether a packet can be sent as immediate data
+ *	@skb: the packet
+ *
+ *	Returns true if a packet can be sent as a WR with immediate data.
+ */
+static inline int is_imm(const struct sk_buff *skb)
+{
+	return skb->len <= MAX_CTRL_WR_LEN;
+}
+
+/**
+ *	ctrlq_check_stop - check if a control queue is full and should stop
+ *	@q: the queue
+ *	@wr: most recent WR written to the queue
+ *
+ *	Check if a control queue has become full and should be stopped.
+ *	We clean up control queue descriptors very lazily, only when we run
+ *	out of space.
+ *	If the queue is still full after reclaiming any completed descriptors
+ *	we suspend it and have the last WR wake it up.
+ */
+static void ctrlq_check_stop(struct sge_ctrl_txq *q, struct fw_wr_hdr *wr)
+{
+	reclaim_completed_tx_imm(&q->q);
+	if (unlikely(txq_avail(&q->q) < TXQ_STOP_THRES)) {
+		wr->lo |= htonl(F_FW_WR_EQUEQ | F_FW_WR_EQUIQ);
+		q->q.stops++;
+		q->full = 1;
+	}
+}
+
+/**
+ *	ctrl_xmit - send a packet through an SGE control Tx queue
+ *	@q: the control queue
+ *	@skb: the packet
+ *
+ *	Send a packet through an SGE control Tx queue.  Packets sent through
+ *	a control queue must fit entirely as immediate data.
+ */
+static int ctrl_xmit(struct sge_ctrl_txq *q, struct sk_buff *skb)
+{
+	unsigned int ndesc;
+	struct fw_wr_hdr *wr;
+
+	if (unlikely(!is_imm(skb))) {
+		WARN_ON(1);
+		dev_kfree_skb(skb);
+		return NET_XMIT_DROP;
+	}
+
+	ndesc = DIV_ROUND_UP(skb->len, sizeof(struct tx_desc));
+	spin_lock(&q->sendq.lock);
+
+	if (unlikely(q->full)) {
+		skb->priority = ndesc;                  /* save for restart */
+		__skb_queue_tail(&q->sendq, skb);
+		spin_unlock(&q->sendq.lock);
+		return NET_XMIT_CN;
+	}
+
+	wr = (struct fw_wr_hdr *)&q->q.desc[q->q.pidx];
+	inline_tx_skb(skb, &q->q, wr);
+
+	txq_advance(&q->q, ndesc);
+	if (unlikely(txq_avail(&q->q) < TXQ_STOP_THRES))
+		ctrlq_check_stop(q, wr);
+
+	ring_tx_db(q->adap, &q->q, ndesc);
+	spin_unlock(&q->sendq.lock);
+
+	kfree_skb(skb);
+	return NET_XMIT_SUCCESS;
+}
+
+/**
+ *	restart_ctrlq - restart a suspended control queue
+ *	@data: the control queue to restart
+ *
+ *	Resumes transmission on a suspended Tx control queue.
+ */
+static void restart_ctrlq(unsigned long data)
+{
+	struct sk_buff *skb;
+	unsigned int written = 0;
+	struct sge_ctrl_txq *q = (struct sge_ctrl_txq *)data;
+
+	spin_lock(&q->sendq.lock);
+	reclaim_completed_tx_imm(&q->q);
+	BUG_ON(txq_avail(&q->q) < TXQ_STOP_THRES);  /* q should be empty */
+
+	while ((skb = __skb_dequeue(&q->sendq)) != NULL) {
+		struct fw_wr_hdr *wr;
+		unsigned int ndesc = skb->priority;     /* previously saved */
+
+		/*
+		 * Write descriptors and free skbs outside the lock to limit
+		 * wait times.  q->full is still set so new skbs will be queued.
+		 */
+		spin_unlock(&q->sendq.lock);
+
+		wr = (struct fw_wr_hdr *)&q->q.desc[q->q.pidx];
+		inline_tx_skb(skb, &q->q, wr);
+		kfree_skb(skb);
+
+		written += ndesc;
+		txq_advance(&q->q, ndesc);
+		if (unlikely(txq_avail(&q->q) < TXQ_STOP_THRES)) {
+			unsigned long old = q->q.stops;
+
+			ctrlq_check_stop(q, wr);
+			if (q->q.stops != old) {          /* suspended anew */
+				spin_lock(&q->sendq.lock);
+				goto ringdb;
+			}
+		}
+		if (written > 16) {
+			ring_tx_db(q->adap, &q->q, written);
+			written = 0;
+		}
+		spin_lock(&q->sendq.lock);
+	}
+	q->full = 0;
+ringdb: if (written)
+		ring_tx_db(q->adap, &q->q, written);
+	spin_unlock(&q->sendq.lock);
+}
+
+/**
+ *	t4_mgmt_tx - send a management message
+ *	@adap: the adapter
+ *	@skb: the packet containing the management message
+ *
+ *	Send a management message through control queue 0.
+ */
+int t4_mgmt_tx(struct adapter *adap, struct sk_buff *skb)
+{
+	int ret;
+
+	local_bh_disable();
+	ret = ctrl_xmit(&adap->sge.ctrlq[0], skb);
+	local_bh_enable();
+	return ret;
+}
+
+/**
+ *	deferred_unmap_destructor - unmap a packet when it is freed
+ *	@skb: the packet
+ *
+ *	This is the packet destructor used for Tx packets that need to remain
+ *	mapped until they are freed rather than until their Tx descriptors are
+ *	freed.
+ */
+static void deferred_unmap_destructor(struct sk_buff *skb)
+{
+	unmap_skb(skb->dev->dev.parent, skb, (dma_addr_t *)skb->head);
+}
+
+/**
+ *	is_ofld_imm - check whether a packet can be sent as immediate data
+ *	@skb: the packet
+ *
+ *	Returns true if a packet can be sent as an offload WR with immediate
+ *	data.  We currently use the same limit as for Ethernet packets.
+ */
+static inline int is_ofld_imm(const struct sk_buff *skb)
+{
+	return skb->len <= MAX_IMM_TX_PKT_LEN;
+}
+
+/**
+ *	calc_tx_flits_ofld - calculate # of flits for an offload packet
+ *	@skb: the packet
+ *
+ * 	Returns the number of flits needed for the given offload packet.
+ * 	These packets are already fully constructed and no additional headers
+ * 	will be added.
+ */
+static inline unsigned int calc_tx_flits_ofld(const struct sk_buff *skb)
+{
+	unsigned int flits, cnt;
+
+	if (is_ofld_imm(skb))
+		return DIV_ROUND_UP(skb->len, 8);
+
+	flits = skb_transport_offset(skb) / 8U;   /* headers */
+	cnt = skb_shinfo(skb)->nr_frags;
+	if (skb->tail != skb->transport_header)
+		cnt++;
+	return flits + sgl_len(cnt);
+}
+
+/**
+ *	txq_stop_maperr - stop a Tx queue due to I/O MMU exhaustion
+ *	@q: the queue to stop
+ *
+ *	Mark a Tx queue stopped due to I/O MMU exhaustion and resulting
+ *	inability to map packets.  A periodic timer attempts to restart
+ *	queues so marked.
+ */
+static void txq_stop_maperr(struct sge_ofld_txq *q)
+{
+	q->mapping_err++;
+	q->q.stops++;
+	set_bit(q->q.cntxt_id, q->adap->sge.txq_maperr);
+}
+
+/**
+ *	ofldtxq_stop - stop an offload Tx queue that has become full
+ *	@q: the queue to stop
+ *	@skb: the packet causing the queue to become full
+ *
+ *	Stops an offload Tx queue that has become full and modifies the packet
+ *	being written to request a wakeup.
+ */
+static void ofldtxq_stop(struct sge_ofld_txq *q, struct sk_buff *skb)
+{
+	struct fw_wr_hdr *wr = (struct fw_wr_hdr *)skb->data;
+
+	wr->lo |= htonl(F_FW_WR_EQUEQ | F_FW_WR_EQUIQ);
+	q->q.stops++;
+	q->full = 1;
+}
+
+/**
+ *	service_ofldq - restart a suspended offload queue
+ *	@q: the offload queue
+ *
+ *	Services an offload Tx queue by moving packets from its packet queue
+ *	to the HW Tx ring.  The function starts and ends with the queue locked.
+ */
+static void service_ofldq(struct sge_ofld_txq *q)
+{
+	u64 *pos;
+	int credits;
+	struct sk_buff *skb;
+	unsigned int written = 0;
+	unsigned int flits, ndesc;
+
+	while ((skb = skb_peek(&q->sendq)) != NULL && !q->full) {
+		/*
+		 * We drop the lock but leave skb on sendq, thus retaining
+		 * exclusive access to the state of the queue.
+		 */
+		spin_unlock(&q->sendq.lock);
+
+		reclaim_completed_tx(q->adap, &q->q, false);
+
+		flits = skb->priority;                /* previously saved */
+		ndesc = flits_to_desc(flits);
+		credits = txq_avail(&q->q) - ndesc;
+		BUG_ON(credits < 0);
+		if (unlikely(credits < TXQ_STOP_THRES))
+			ofldtxq_stop(q, skb);
+
+		pos = (u64 *)&q->q.desc[q->q.pidx];
+		if (is_ofld_imm(skb))
+			inline_tx_skb(skb, &q->q, pos);
+		else if (map_skb(q->adap->pdev_dev, skb,
+				 (dma_addr_t *)skb->head)) {
+			txq_stop_maperr(q);
+			spin_lock(&q->sendq.lock);
+			break;
+		} else {
+			int last_desc, hdr_len = skb_transport_offset(skb);
+
+			memcpy(pos, skb->data, hdr_len);
+			write_sgl(skb, &q->q, (void *)pos + hdr_len,
+				  pos + flits, hdr_len,
+				  (dma_addr_t *)skb->head);
+
+			if (need_skb_unmap()) {
+				skb->dev = q->adap->port[0];
+				skb->destructor = deferred_unmap_destructor;
+			}
+
+			last_desc = q->q.pidx + ndesc - 1;
+			if (last_desc >= q->q.size)
+				last_desc -= q->q.size;
+			q->q.sdesc[last_desc].skb = skb;
+		}
+
+		txq_advance(&q->q, ndesc);
+		written += ndesc;
+		if (unlikely(written > 32)) {
+			ring_tx_db(q->adap, &q->q, written);
+			written = 0;
+		}
+
+		spin_lock(&q->sendq.lock);
+		__skb_unlink(skb, &q->sendq);
+		if (is_ofld_imm(skb))
+			kfree_skb(skb);
+	}
+	if (likely(written))
+		ring_tx_db(q->adap, &q->q, written);
+}
+
+/**
+ *	ofld_xmit - send a packet through an offload queue
+ *	@q: the Tx offload queue
+ *	@skb: the packet
+ *
+ *	Send an offload packet through an SGE offload queue.
+ */
+static int ofld_xmit(struct sge_ofld_txq *q, struct sk_buff *skb)
+{
+	skb->priority = calc_tx_flits_ofld(skb);       /* save for restart */
+	spin_lock(&q->sendq.lock);
+	__skb_queue_tail(&q->sendq, skb);
+	if (q->sendq.qlen == 1)
+		service_ofldq(q);
+	spin_unlock(&q->sendq.lock);
+	return NET_XMIT_SUCCESS;
+}
+
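+/*
+ * Explanatory note (not part of the driver): ofld_xmit() above calls
+ * service_ofldq() only when the skb it just queued is the sole entry on
+ * sendq.  If sendq was already non-empty, servicing is either in progress
+ * on another CPU (with the lock temporarily dropped) or deferred until the
+ * queue-restart tasklet runs, so the caller simply appends and returns.
+ */
+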
+/**
+ *	restart_ofldq - restart a suspended offload queue
+ *	@data: the offload queue to restart
+ *
+ *	Resumes transmission on a suspended Tx offload queue.
+ */
+static void restart_ofldq(unsigned long data)
+{
+	struct sge_ofld_txq *q = (struct sge_ofld_txq *)data;
+
+	spin_lock(&q->sendq.lock);
+	q->full = 0;            /* the queue actually is completely empty now */
+	service_ofldq(q);
+	spin_unlock(&q->sendq.lock);
+}
+
+/**
+ *	skb_txq - return the Tx queue an offload packet should use
+ *	@skb: the packet
+ *
+ *	Returns the Tx queue an offload packet should use as indicated by bits
+ *	1-15 in the packet's queue_mapping.
+ */
+static inline unsigned int skb_txq(const struct sk_buff *skb)
+{
+	return skb->queue_mapping >> 1;
+}
+
+/**
+ *	is_ctrl_pkt - return whether an offload packet is a control packet
+ *	@skb: the packet
+ *
+ *	Returns whether an offload packet should use an OFLD or a CTRL
+ *	Tx queue as indicated by bit 0 in the packet's queue_mapping.
+ */
+static inline unsigned int is_ctrl_pkt(const struct sk_buff *skb)
+{
+	return skb->queue_mapping & 1;
+}
+
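+/*
+ * Illustrative sketch (not part of the driver): a ULD submitting a WR would
+ * encode the target queue and the control/offload choice into queue_mapping
+ * before handing the skb down, e.g.
+ *
+ *	skb->queue_mapping = (txq_idx << 1) | is_ctrl;
+ *
+ * where txq_idx and is_ctrl are hypothetical names for the values that
+ * skb_txq() and is_ctrl_pkt() above recover.
+ */
+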
+static inline int ofld_send(struct adapter *adap, struct sk_buff *skb)
+{
+	unsigned int idx = skb_txq(skb);
+
+	if (unlikely(is_ctrl_pkt(skb)))
+		return ctrl_xmit(&adap->sge.ctrlq[idx], skb);
+	return ofld_xmit(&adap->sge.ofldtxq[idx], skb);
+}
+
+/**
+ *	t4_ofld_send - send an offload packet
+ *	@adap: the adapter
+ *	@skb: the packet
+ *
+ *	Sends an offload packet.  We use the packet queue_mapping to select the
+ *	appropriate Tx queue as follows: bit 0 indicates whether the packet
+ *	should be sent as regular or control, bits 1-15 select the queue.
+ */
+int t4_ofld_send(struct adapter *adap, struct sk_buff *skb)
+{
+	int ret;
+
+	local_bh_disable();
+	ret = ofld_send(adap, skb);
+	local_bh_enable();
+	return ret;
+}
+
+/**
+ *	cxgb4_ofld_send - send an offload packet
+ *	@dev: the net device
+ *	@skb: the packet
+ *
+ *	Sends an offload packet.  This is an exported version of t4_ofld_send(),
+ *	intended for ULDs.
+ */
+int cxgb4_ofld_send(struct net_device *dev, struct sk_buff *skb)
+{
+	return t4_ofld_send(netdev2adap(dev), skb);
+}
+EXPORT_SYMBOL(cxgb4_ofld_send);
+
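+/*
+ * Usage sketch for ULDs (illustrative, not part of the driver): build a
+ * fully formed work request in an skb, encode the target queue in
+ * queue_mapping as shown above, and submit it:
+ *
+ *	skb = alloc_skb(len, GFP_ATOMIC);
+ *	if (skb) {
+ *		wr = (struct fw_wr_hdr *)__skb_put(skb, len);
+ *		... fill in the work request ...
+ *		skb->queue_mapping = txq_idx << 1;
+ *		cxgb4_ofld_send(dev, skb);
+ *	}
+ *
+ * len, txq_idx and wr are hypothetical; any skb holding a complete WR with
+ * a valid queue_mapping is handled the same way.
+ */
+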
+static inline void copy_frags(struct skb_shared_info *ssi,
+			      const struct pkt_gl *gl, unsigned int offset)
+{
+	unsigned int n;
+
+	/* usually there's just one frag */
+	ssi->frags[0].page = gl->frags[0].page;
+	ssi->frags[0].page_offset = gl->frags[0].page_offset + offset;
+	ssi->frags[0].size = gl->frags[0].size - offset;
+	ssi->nr_frags = gl->nfrags;
+	n = gl->nfrags - 1;
+	if (n)
+		memcpy(&ssi->frags[1], &gl->frags[1], n * sizeof(skb_frag_t));
+
+	/* get a reference to the last page; we don't own it */
+	get_page(gl->frags[n].page);
+}
+
+/**
+ *	cxgb4_pktgl_to_skb - build an sk_buff from a packet gather list
+ *	@gl: the gather list
+ *	@skb_len: size of sk_buff main body if it carries fragments
+ *	@pull_len: amount of data to move to the sk_buff's main body
+ *
+ *	Builds an sk_buff from the given packet gather list.  Returns the
+ *	sk_buff or %NULL if sk_buff allocation failed.
+ */
+struct sk_buff *cxgb4_pktgl_to_skb(const struct pkt_gl *gl,
+				   unsigned int skb_len, unsigned int pull_len)
+{
+	struct sk_buff *skb;
+
+	/*
+	 * Below we rely on RX_COPY_THRES being less than the smallest Rx
+	 * buffer size, which is expected since buffers are at least PAGE_SIZE
+	 * in size.  In this case packets up to RX_COPY_THRES have only one
+	 * fragment.
+	 */
+	if (gl->tot_len <= RX_COPY_THRES) {
+		skb = dev_alloc_skb(gl->tot_len);
+		if (unlikely(!skb))
+			goto out;
+		__skb_put(skb, gl->tot_len);
+		skb_copy_to_linear_data(skb, gl->va, gl->tot_len);
+	} else {
+		skb = dev_alloc_skb(skb_len);
+		if (unlikely(!skb))
+			goto out;
+		__skb_put(skb, pull_len);
+		skb_copy_to_linear_data(skb, gl->va, pull_len);
+
+		copy_frags(skb_shinfo(skb), gl, pull_len);
+		skb->len = gl->tot_len;
+		skb->data_len = skb->len - pull_len;
+		skb->truesize += skb->data_len;
+	}
+out:	return skb;
+}
+EXPORT_SYMBOL(cxgb4_pktgl_to_skb);
+
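+/*
+ * Usage sketch (illustrative, not part of the driver): a ULD Rx handler that
+ * needs a real sk_buff out of a gather list would typically do
+ *
+ *	skb = cxgb4_pktgl_to_skb(gl, 128, 128);
+ *	if (unlikely(!skb)) {
+ *		t4_pktgl_free(gl);
+ *		return 0;
+ *	}
+ *
+ * which is the pattern handle_trace_pkt() below uses with RX_PULL_LEN; the
+ * 128-byte sizes here are example values only.
+ */
+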
+/**
+ *	t4_pktgl_free - free a packet gather list
+ *	@gl: the gather list
+ *
+ *	Releases the pages of a packet gather list.  We do not own the last
+ *	page on the list and do not free it.
+ */
+void t4_pktgl_free(const struct pkt_gl *gl)
+{
+	int n;
+	const skb_frag_t *p;
+
+	for (p = gl->frags, n = gl->nfrags - 1; n--; p++)
+		put_page(p->page);
+}
+
+/*
+ * Process an MPS trace packet.  Give it an unused protocol number so it won't
+ * be delivered to anyone and send it to the stack for capture.
+ */
+static noinline int handle_trace_pkt(struct adapter *adap,
+				     const struct pkt_gl *gl)
+{
+	struct sk_buff *skb;
+	struct cpl_trace_pkt *p;
+
+	skb = cxgb4_pktgl_to_skb(gl, RX_PULL_LEN, RX_PULL_LEN);
+	if (unlikely(!skb)) {
+		t4_pktgl_free(gl);
+		return 0;
+	}
+
+	p = (struct cpl_trace_pkt *)skb->data;
+	__skb_pull(skb, sizeof(*p));
+	skb_reset_mac_header(skb);
+	skb->protocol = htons(0xffff);
+	skb->dev = adap->port[0];
+	netif_receive_skb(skb);
+	return 0;
+}
+
+static void do_gro(struct sge_eth_rxq *rxq, const struct pkt_gl *gl,
+		   const struct cpl_rx_pkt *pkt)
+{
+	int ret;
+	struct sk_buff *skb;
+
+	skb = napi_get_frags(&rxq->rspq.napi);
+	if (unlikely(!skb)) {
+		t4_pktgl_free(gl);
+		rxq->stats.rx_drops++;
+		return;
+	}
+
+	copy_frags(skb_shinfo(skb), gl, RX_PKT_PAD);
+	skb->len = gl->tot_len - RX_PKT_PAD;
+	skb->data_len = skb->len;
+	skb->truesize += skb->data_len;
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+	skb_record_rx_queue(skb, rxq->rspq.idx);
+
+	if (unlikely(pkt->vlan_ex)) {
+		struct port_info *pi = netdev_priv(rxq->rspq.netdev);
+		struct vlan_group *grp = pi->vlan_grp;
+
+		rxq->stats.vlan_ex++;
+		if (likely(grp)) {
+			ret = vlan_gro_frags(&rxq->rspq.napi, grp,
+					     ntohs(pkt->vlan));
+			goto stats;
+		}
+	}
+	ret = napi_gro_frags(&rxq->rspq.napi);
+stats:  if (ret == GRO_HELD)
+		rxq->stats.lro_pkts++;
+	else if (ret == GRO_MERGED || ret == GRO_MERGED_FREE)
+		rxq->stats.lro_merged++;
+	rxq->stats.pkts++;
+	rxq->stats.rx_cso++;
+}
+
+/**
+ *	t4_ethrx_handler - process an ingress ethernet packet
+ *	@q: the response queue that received the packet
+ *	@rsp: the response queue descriptor holding the RX_PKT message
+ *	@si: the gather list of packet fragments
+ *
+ *	Process an ingress ethernet packet and deliver it to the stack.
+ */
+int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
+		     const struct pkt_gl *si)
+{
+	bool csum_ok;
+	struct sk_buff *skb;
+	struct port_info *pi;
+	const struct cpl_rx_pkt *pkt;
+	struct sge_eth_rxq *rxq = container_of(q, struct sge_eth_rxq, rspq);
+
+	if (unlikely(*(u8 *)rsp == CPL_TRACE_PKT))
+		return handle_trace_pkt(q->adap, si);
+
+	pkt = (void *)&rsp[1];
+	csum_ok = pkt->csum_calc && !pkt->err_vec;
+	if ((pkt->l2info & htonl(F_RXF_TCP)) &&
+	    (q->netdev->features & NETIF_F_GRO) && csum_ok && !pkt->ip_frag) {
+		do_gro(rxq, si, pkt);
+		return 0;
+	}
+
+	if (si->tot_len <= RX_COPY_THRES) {
+		/* small packets have only one fragment */
+		skb = dev_alloc_skb(si->frags[0].size);
+		if (!skb)
+			goto nomem;
+		__skb_put(skb, si->frags[0].size);
+		skb_copy_to_linear_data(skb, si->va, si->frags[0].size);
+	} else {
+		skb = dev_alloc_skb(RX_PKT_SKB_LEN);
+		if (!skb)
+			goto nomem;
+		__skb_put(skb, RX_PULL_LEN);
+		skb_copy_to_linear_data(skb, si->va, RX_PULL_LEN);
+
+		copy_frags(skb_shinfo(skb), si, RX_PULL_LEN);
+		skb->len = si->tot_len;
+		skb->data_len = skb->len - RX_PULL_LEN;
+		skb->truesize += skb->data_len;
+	}
+
+	__skb_pull(skb, RX_PKT_PAD);      /* remove ethernet header padding */
+	skb->protocol = eth_type_trans(skb, q->netdev);
+	skb_record_rx_queue(skb, q->idx);
+	pi = netdev_priv(skb->dev);
+	rxq->stats.pkts++;
+
+	if (csum_ok && (pi->rx_offload & RX_CSO) &&
+	    (pkt->l2info & htonl(F_RXF_UDP | F_RXF_TCP))) {
+		if (!pkt->ip_frag)
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+		else {
+			__sum16 c = (__force __sum16)pkt->csum;
+			skb->csum = csum_unfold(c);
+			skb->ip_summed = CHECKSUM_COMPLETE;
+		}
+		rxq->stats.rx_cso++;
+	} else
+		skb->ip_summed = CHECKSUM_NONE;
+
+	if (unlikely(pkt->vlan_ex)) {
+		struct vlan_group *grp = pi->vlan_grp;
+
+		rxq->stats.vlan_ex++;
+		if (likely(grp))
+			vlan_hwaccel_receive_skb(skb, grp, ntohs(pkt->vlan));
+		else
+			dev_kfree_skb_any(skb);
+	} else
+		netif_receive_skb(skb);
+
+	return 0;
+
+nomem:	t4_pktgl_free(si);
+	rxq->stats.rx_drops++;
+	return 0;
+}
+
+/**
+ *	restore_rx_bufs - put back a packet's Rx buffers
+ *	@si: the packet gather list
+ *	@q: the SGE free list
+ *	@frags: number of FL buffers to restore
+ *
+ *	Puts back on an FL the Rx buffers associated with @si.  The buffers
+ *	have already been unmapped and are left unmapped; we mark them as such
+ *	to prevent further unmapping attempts.
+ *
+ *	This function undoes a series of unmap_rx_buf() calls when we find out
+ *	that the current packet can't be processed right away after all and we
+ *	need to come back to it later.  This is a very rare event and there's
+ *	no effort to make this particularly efficient.
+ */
+static void restore_rx_bufs(const struct pkt_gl *si, struct sge_fl *q,
+			    int frags)
+{
+	struct rx_sw_desc *d;
+
+	while (frags--) {
+		if (q->cidx == 0)
+			q->cidx = q->size - 1;
+		else
+			q->cidx--;
+		d = &q->sdesc[q->cidx];
+		d->page = si->frags[frags].page;
+		d->dma_addr |= RX_UNMAPPED_BUF;
+		q->avail++;
+	}
+}
+
+/**
+ *	is_new_response - check if a response is newly written
+ *	@r: the response descriptor
+ *	@q: the response queue
+ *
+ *	Returns true if a response descriptor contains a yet unprocessed
+ *	response.
+ */
+static inline bool is_new_response(const struct rsp_ctrl *r,
+				   const struct sge_rspq *q)
+{
+	return (r->type_gen >> S_RSPD_GEN) == q->gen;
+}
+
+/**
+ *	rspq_next - advance to the next entry in a response queue
+ *	@q: the queue
+ *
+ *	Updates the state of a response queue to advance it to the next entry.
+ */
+static inline void rspq_next(struct sge_rspq *q)
+{
+	q->cur_desc = (void *)q->cur_desc + q->iqe_len;
+	if (unlikely(++q->cidx == q->size)) {
+		q->cidx = 0;
+		q->gen ^= 1;
+		q->cur_desc = q->desc;
+	}
+}
+
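+/*
+ * Explanatory note (not part of the driver): the generation bit is what
+ * distinguishes new entries from stale ones.  A queue's gen starts at 1 and
+ * rspq_next() flips it each time the consumer index wraps, so a descriptor
+ * whose RSPD_GEN field matches q->gen was written since the last wrap and
+ * is therefore new; after the ring wraps again the same slot stops matching
+ * until hardware rewrites it.
+ */
+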
+/**
+ *	process_responses - process responses from an SGE response queue
+ *	@q: the ingress queue to process
+ *	@budget: how many responses can be processed in this round
+ *
+ *	Process responses from an SGE response queue up to the supplied budget.
+ *	Responses include received packets as well as control messages from FW
+ *	or HW.
+ *
+ *	Additionally choose the interrupt holdoff time for the next interrupt
+ *	on this queue.  If the system is under memory pressure, use a fairly
+ *	long delay to help recovery.
+ */
+int process_responses(struct sge_rspq *q, int budget)
+{
+	int ret, rsp_type;
+	int budget_left = budget;
+	const struct rsp_ctrl *rc;
+	struct sge_eth_rxq *rxq = container_of(q, struct sge_eth_rxq, rspq);
+
+	while (likely(budget_left)) {
+		rc = (void *)q->cur_desc + (q->iqe_len - sizeof(*rc));
+		if (!is_new_response(rc, q))
+			break;
+
+		rmb();
+		rsp_type = G_RSPD_TYPE(rc->type_gen);
+		if (likely(rsp_type == RSP_TYPE_FLBUF)) {
+			skb_frag_t *fp;
+			struct pkt_gl si;
+			const struct rx_sw_desc *rsd;
+			u32 len = ntohl(rc->pldbuflen_qid), bufsz, frags;
+
+			if (len & F_RSPD_NEWBUF) {
+				if (likely(q->offset > 0)) {
+					free_rx_bufs(q->adap, &rxq->fl, 1);
+					q->offset = 0;
+				}
+				len = G_RSPD_LEN(len);
+			}
+			si.tot_len = len;
+
+			/* gather packet fragments */
+			for (frags = 0, fp = si.frags; ; frags++, fp++) {
+				rsd = &rxq->fl.sdesc[rxq->fl.cidx];
+				bufsz = get_buf_size(rsd);
+				fp->page = rsd->page;
+				fp->page_offset = q->offset;
+				fp->size = min(bufsz, len);
+				len -= fp->size;
+				if (!len)
+					break;
+				unmap_rx_buf(q->adap, &rxq->fl);
+			}
+
+			/*
+			 * Last buffer remains mapped so explicitly make it
+			 * coherent for CPU access.
+			 */
+			dma_sync_single_for_cpu(q->adap->pdev_dev,
+						get_buf_addr(rsd),
+						fp->size, DMA_FROM_DEVICE);
+
+			si.va = page_address(si.frags[0].page) +
+				si.frags[0].page_offset;
+			prefetch(si.va);
+
+			si.nfrags = frags + 1;
+			ret = q->handler(q, q->cur_desc, &si);
+			if (likely(ret == 0))
+				q->offset += ALIGN(fp->size, FL_ALIGN);
+			else
+				restore_rx_bufs(&si, &rxq->fl, frags);
+		} else if (likely(rsp_type == RSP_TYPE_CPL)) {
+			ret = q->handler(q, q->cur_desc, NULL);
+		} else {
+			ret = q->handler(q, (const __be64 *)rc, CXGB4_MSG_AN);
+		}
+
+		if (unlikely(ret)) {
+			/* couldn't process descriptor, back off for recovery */
+			q->next_intr_params = V_QINTR_TIMER_IDX(NOMEM_TMR_IDX);
+			break;
+		}
+
+		rspq_next(q);
+		budget_left--;
+	}
+
+	if (q->offset >= 0 && rxq->fl.size - rxq->fl.avail >= 16)
+		replenish_fl(q->adap, &rxq->fl, GFP_ATOMIC);
+	return budget - budget_left;
+}
+
+/**
+ *	napi_rx_handler - the NAPI handler for Rx processing
+ *	@napi: the napi instance
+ *	@budget: how many packets we can process in this round
+ *
+ *	Handler for new data events when using NAPI.  This does not need any
+ *	locking or protection from interrupts as data interrupts are off at
+ *	this point and other adapter interrupts do not interfere (the latter
+ *	is not a concern at all with MSI-X as non-data interrupts then have
+ *	a separate handler).
+ */
+static int napi_rx_handler(struct napi_struct *napi, int budget)
+{
+	unsigned int params;
+	struct sge_rspq *q = container_of(napi, struct sge_rspq, napi);
+	int work_done = process_responses(q, budget);
+
+	if (likely(work_done < budget)) {
+		napi_complete(napi);
+		params = q->next_intr_params;
+		q->next_intr_params = q->intr_params;
+	} else
+		params = V_QINTR_TIMER_IDX(7);
+
+	t4_write_reg(q->adap, MYPF_REG(A_SGE_PF_GTS), V_CIDXINC(work_done) |
+		     V_INGRESSQID((u32)q->cntxt_id) | V_SEINTARM(params));
+	return work_done;
+}
+
+/*
+ * Returns true if a NAPI instance is already scheduled for polling.
+ */
+static inline int napi_is_scheduled(const struct napi_struct *napi)
+{
+	return test_bit(NAPI_STATE_SCHED, &napi->state);
+}
+
+/*
+ * The MSI-X interrupt handler for an SGE response queue.
+ */
+irqreturn_t t4_sge_intr_msix(int irq, void *cookie)
+{
+	struct sge_rspq *q = cookie;
+
+	spin_lock(&q->lock);
+	napi_schedule(&q->napi);
+	spin_unlock(&q->lock);
+	return IRQ_HANDLED;
+}
+
+/*
+ * Process the indirect interrupt entries in the interrupt queue and kick off
+ * NAPI for each queue that has generated an entry.
+ */
+static unsigned int process_intrq(struct adapter *adap)
+{
+	unsigned int credits;
+	const struct rsp_ctrl *rc;
+	struct sge_rspq *q = &adap->sge.intrq;
+
+	for (credits = 0; ; credits++) {
+		rc = (void *)q->cur_desc + (q->iqe_len - sizeof(*rc));
+		if (!is_new_response(rc, q))
+			break;
+
+		rmb();
+		if (G_RSPD_TYPE(rc->type_gen) == RSP_TYPE_INTR) {
+			unsigned int qid = ntohl(rc->pldbuflen_qid);
+			struct sge_rspq *srcq = adap->sge.ingr_map[qid];
+
+			spin_lock(&srcq->lock);
+			napi_schedule(&srcq->napi);
+			spin_unlock(&srcq->lock);
+		}
+
+		rspq_next(q);
+	}
+
+	t4_write_reg(adap, MYPF_REG(A_SGE_PF_GTS), V_CIDXINC(credits) |
+		     V_INGRESSQID(q->cntxt_id) | V_SEINTARM(q->intr_params));
+	return credits;
+}
+
+/*
+ * The MSI interrupt handler.  It handles data events from SGE response
+ * queues as well as error and other async events, as they all use the same
+ * MSI vector.
+ */
+static irqreturn_t t4_intr_msi(int irq, void *cookie)
+{
+	struct adapter *adap = cookie;
+
+	t4_slow_intr_handler(adap);
+	process_intrq(adap);
+	return IRQ_HANDLED;
+}
+
+/*
+ * Interrupt handler for legacy INTx interrupts.
+ * Handles data events from SGE response queues as well as error and other
+ * async events as they all use the same interrupt line.
+ */
+static irqreturn_t t4_intr_intx(int irq, void *cookie)
+{
+	struct adapter *adap = cookie;
+
+	t4_write_reg(adap, MYPF_REG(A_PCIE_PF_CLI), 0);
+	if (t4_slow_intr_handler(adap) | process_intrq(adap))
+		return IRQ_HANDLED;
+	return IRQ_NONE;             /* probably shared interrupt */
+}
+
+/**
+ *	t4_intr_handler - select the top-level interrupt handler
+ *	@adap: the adapter
+ *
+ *	Selects the top-level interrupt handler based on the type of interrupts
+ *	(MSI-X, MSI, or INTx).
+ */
+irq_handler_t t4_intr_handler(struct adapter *adap)
+{
+	if (adap->flags & USING_MSIX)
+		return t4_sge_intr_msix;
+	if (adap->flags & USING_MSI)
+		return t4_intr_msi;
+	return t4_intr_intx;
+}
+
+static void sge_rx_timer_cb(unsigned long data)
+{
+	unsigned long m;
+	unsigned int i, cnt[2];
+	struct adapter *adap = (struct adapter *)data;
+	struct sge *s = &adap->sge;
+
+	for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++)
+		for (m = s->starving_fl[i]; m; m &= m - 1) {
+			struct sge_eth_rxq *rxq;
+			unsigned int id = __ffs(m) + i * BITS_PER_LONG;
+			struct sge_fl *fl = s->egr_map[id];
+
+			rxq = container_of(fl, struct sge_eth_rxq, fl);
+			if (spin_trylock_irq(&rxq->rspq.lock)) {
+				if (!napi_is_scheduled(&rxq->rspq.napi)) {
+					if (fl_starving(fl)) {
+						fl->starving++;
+						__refill_fl(adap, fl);
+					}
+					if (!fl_starving(fl))
+						clear_bit(id, s->starving_fl);
+				}
+				spin_unlock(&rxq->rspq.lock);
+			}
+		}
+
+	t4_write_reg(adap, A_SGE_DEBUG_INDEX, 13);
+	cnt[0] = t4_read_reg(adap, A_SGE_DEBUG_DATA_HIGH);
+	cnt[1] = t4_read_reg(adap, A_SGE_DEBUG_DATA_LOW);
+
+	for (i = 0; i < 2; i++)
+		if (cnt[i] >= s->starve_thres) {
+			if (s->idma_state[i])
+				continue;
+			s->idma_state[i] = 1;
+			t4_write_reg(adap, A_SGE_DEBUG_INDEX, 11);
+			m = t4_read_reg(adap, A_SGE_DEBUG_DATA_LOW) >> (i * 16);
+			CH_WARN(adap, "SGE idma%u starvation detected for "
+				"queue %lu\n", i, m & 0xffff);
+		} else if (s->idma_state[i])
+			s->idma_state[i] = 0;
+
+	mod_timer(&s->rx_timer, jiffies + RX_QCHECK_PERIOD);
+}
+
+static void sge_tx_timer_cb(unsigned long data)
+{
+	unsigned long m, i;
+	struct adapter *adap = (struct adapter *)data;
+	struct sge *s = &adap->sge;
+
+	for (i = 0; i < ARRAY_SIZE(s->txq_maperr); i++)
+		for (m = s->txq_maperr[i]; m; m &= m - 1) {
+			unsigned long id = __ffs(m) + i * BITS_PER_LONG;
+			struct sge_ofld_txq *txq = s->egr_map[id];
+
+			clear_bit(id, s->txq_maperr);
+			tasklet_schedule(&txq->qresume_tsk);
+		}
+
+	mod_timer(&s->tx_timer, jiffies + TX_QCHECK_PERIOD);
+}
+
+int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
+		     struct net_device *dev, int intr_idx,
+		     struct sge_fl *fl, rspq_handler_t hnd)
+{
+	int ret, flsz = 0;
+	struct fw_iq_cmd c;
+	struct port_info *pi = netdev_priv(dev);
+
+	/* Size needs to be a multiple of 16, including the status entry. */
+	iq->size = roundup(iq->size, 16);
+
+	iq->desc = alloc_ring(adap->pdev_dev, iq->size, iq->iqe_len, 0,
+			      &iq->phys_addr, NULL, 0);
+	if (!iq->desc)
+		return -ENOMEM;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_IQ_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_WRITE | F_FW_CMD_EXEC |
+			    V_FW_IQ_CMD_PFN(0) | V_FW_IQ_CMD_VFN(0));
+	c.alloc_to_len16 = htonl(F_FW_IQ_CMD_ALLOC | F_FW_IQ_CMD_IQSTART |
+				 FW_LEN16(c));
+	c.type_to_iqandstindex = htonl(V_FW_IQ_CMD_TYPE(FW_IQ_TYPE_FL_INT_CAP) |
+		V_FW_IQ_CMD_IQASYNCH(fwevtq) | V_FW_IQ_CMD_VIID(pi->viid) |
+		V_FW_IQ_CMD_IQANDST(intr_idx < 0) | V_FW_IQ_CMD_IQANUD(1) |
+		V_FW_IQ_CMD_IQANDSTINDEX(intr_idx >= 0 ? intr_idx :
+							-intr_idx - 1));
+	c.iqdroprss_to_iqesize = htons(V_FW_IQ_CMD_IQPCIECH(pi->tx_chan) |
+		F_FW_IQ_CMD_IQGTSMODE |
+		V_FW_IQ_CMD_IQINTCNTTHRESH(iq->pktcnt_idx) |
+		V_FW_IQ_CMD_IQESIZE(ilog2(iq->iqe_len) - 4));
+	c.iqsize = htons(iq->size);
+	c.iqaddr = cpu_to_be64(iq->phys_addr);
+
+	if (fl) {
+		fl->size = roundup(fl->size, 8);
+		fl->desc = alloc_ring(adap->pdev_dev, fl->size, sizeof(__be64),
+				      sizeof(struct rx_sw_desc), &fl->addr,
+				      &fl->sdesc, STAT_LEN);
+		if (!fl->desc)
+			goto fl_nomem;
+
+		flsz = fl->size / 8 + STAT_LEN / sizeof(struct tx_desc);
+		c.iqns_to_fl0congen = htonl(F_FW_IQ_CMD_FL0PACKEN |
+					    F_FW_IQ_CMD_FL0PADEN);
+		c.fl0dcaen_to_fl0cidxfthresh = htons(V_FW_IQ_CMD_FL0FBMIN(2) |
+				V_FW_IQ_CMD_FL0FBMAX(3));
+		c.fl0size = htons(flsz);
+		c.fl0addr = cpu_to_be64(fl->addr);
+	}
+
+	ret = t4_wr_mbox(adap, 0, &c, sizeof(c), &c);
+	if (ret)
+		goto err;
+
+	spin_lock_init(&iq->lock);
+	netif_napi_add(dev, &iq->napi, napi_rx_handler, 64);
+	iq->cur_desc = iq->desc;
+	iq->cidx = 0;
+	iq->gen = 1;
+	iq->next_intr_params = iq->intr_params;
+	iq->cntxt_id = ntohs(c.iqid);
+	iq->abs_id = ntohs(c.physiqid);
+	iq->size--;                           /* subtract status entry */
+	iq->adap = adap;
+	iq->netdev = dev;
+	iq->handler = hnd;
+
+	/* set offset to -1 to distinguish ingress queues without FL */
+	iq->offset = fl ? 0 : -1;
+
+	adap->sge.ingr_map[iq->cntxt_id] = iq;
+
+	if (fl) {
+		fl->cntxt_id = htons(c.fl0id);
+		fl->avail = fl->pend_cred = 0;
+		fl->pidx = fl->cidx = 0;
+		fl->alloc_failed = fl->large_alloc_failed = fl->starving = 0;
+		adap->sge.egr_map[fl->cntxt_id] = fl;
+		replenish_fl(adap, fl, GFP_KERNEL);
+	}
+	return 0;
+
+fl_nomem:
+	ret = -ENOMEM;
+err:
+	if (iq->desc) {
+		dma_free_coherent(adap->pdev_dev, iq->size * iq->iqe_len,
+				  iq->desc, iq->phys_addr);
+		iq->desc = NULL;
+	}
+	if (fl && fl->desc) {
+		kfree(fl->sdesc);
+		fl->sdesc = NULL;
+		dma_free_coherent(adap->pdev_dev, flsz * sizeof(struct tx_desc),
+				  fl->desc, fl->addr);
+		fl->desc = NULL;
+	}
+	return ret;
+}
+
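+/*
+ * Usage sketch (illustrative, not part of the driver): callers fill in the
+ * sizing fields of the response queue (and optionally a free list) and then
+ * request the hardware context, e.g.
+ *
+ *	iq->size = 1024;
+ *	iq->iqe_len = 64;
+ *	fl->size = 512;
+ *	err = t4_sge_alloc_rxq(adap, iq, false, dev, msix_idx, fl,
+ *			       t4_ethrx_handler);
+ *
+ * The sizes and msix_idx are example values only.  A NULL fl allocates an
+ * ingress queue without a free list, and a negative intr_idx routes the
+ * queue's indications through another ingress queue (cf. process_intrq()
+ * above) instead of giving it its own vector.
+ */
+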
+static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id)
+{
+	q->in_use = 0;
+	q->cidx = q->pidx = 0;
+	q->stops = q->restarts = 0;
+	q->stat = (void *)&q->desc[q->size];
+	q->cntxt_id = id;
+	adap->sge.egr_map[id] = q;
+}
+
+int t4_sge_alloc_eth_txq(struct adapter *adap, struct sge_eth_txq *txq,
+			 struct net_device *dev, struct netdev_queue *netdevq,
+			 unsigned int iqid)
+{
+	int ret, nentries;
+	struct fw_eq_eth_cmd c;
+	struct port_info *pi = netdev_priv(dev);
+
+	/* Add status entries */
+	nentries = txq->q.size + STAT_LEN / sizeof(struct tx_desc);
+
+	txq->q.desc = alloc_ring(adap->pdev_dev, txq->q.size,
+			sizeof(struct tx_desc), sizeof(struct tx_sw_desc),
+			&txq->q.phys_addr, &txq->q.sdesc, STAT_LEN);
+	if (!txq->q.desc)
+		return -ENOMEM;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_EQ_ETH_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_WRITE | F_FW_CMD_EXEC |
+			    V_FW_EQ_ETH_CMD_PFN(0) | V_FW_EQ_ETH_CMD_VFN(0));
+	c.alloc_to_len16 = htonl(F_FW_EQ_ETH_CMD_ALLOC |
+				 F_FW_EQ_ETH_CMD_EQSTART | FW_LEN16(c));
+	c.viid_pkd = htonl(V_FW_EQ_ETH_CMD_VIID(pi->viid));
+	c.fetchszm_to_iqid = htonl(V_FW_EQ_ETH_CMD_HOSTFCMODE(2) |
+				   V_FW_EQ_ETH_CMD_PCIECHN(pi->tx_chan) |
+				   V_FW_EQ_ETH_CMD_IQID(iqid));
+	c.dcaen_to_eqsize = htonl(V_FW_EQ_ETH_CMD_FBMIN(2) |
+				  V_FW_EQ_ETH_CMD_FBMAX(3) |
+				  V_FW_EQ_ETH_CMD_CIDXFTHRESH(5) |
+				  V_FW_EQ_ETH_CMD_EQSIZE(nentries));
+	c.eqaddr = cpu_to_be64(txq->q.phys_addr);
+
+	ret = t4_wr_mbox(adap, 0, &c, sizeof(c), &c);
+	if (ret) {
+		kfree(txq->q.sdesc);
+		txq->q.sdesc = NULL;
+		dma_free_coherent(adap->pdev_dev,
+				  nentries * sizeof(struct tx_desc),
+				  txq->q.desc, txq->q.phys_addr);
+		txq->q.desc = NULL;
+		return ret;
+	}
+
+	init_txq(adap, &txq->q, G_FW_EQ_ETH_CMD_EQID(ntohl(c.eqid_pkd)));
+	txq->txq = netdevq;
+	txq->tso = txq->tx_cso = txq->vlan_ins = 0;
+	txq->mapping_err = 0;
+	return 0;
+}
+
+int t4_sge_alloc_ctrl_txq(struct adapter *adap, struct sge_ctrl_txq *txq,
+			  struct net_device *dev, unsigned int iqid,
+			  unsigned int cmplqid)
+{
+	int ret, nentries;
+	struct fw_eq_ctrl_cmd c;
+	struct port_info *pi = netdev_priv(dev);
+
+	/* Add status entries */
+	nentries = txq->q.size + STAT_LEN / sizeof(struct tx_desc);
+
+	txq->q.desc = alloc_ring(adap->pdev_dev, nentries,
+				 sizeof(struct tx_desc), 0, &txq->q.phys_addr,
+				 NULL, 0);
+	if (!txq->q.desc)
+		return -ENOMEM;
+
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_EQ_CTRL_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_WRITE | F_FW_CMD_EXEC |
+			    V_FW_EQ_CTRL_CMD_PFN(0) | V_FW_EQ_CTRL_CMD_VFN(0));
+	c.alloc_to_len16 = htonl(F_FW_EQ_CTRL_CMD_ALLOC |
+				 F_FW_EQ_CTRL_CMD_EQSTART | FW_LEN16(c));
+	c.cmpliqid_eqid = htonl(V_FW_EQ_CTRL_CMD_CMPLIQID(cmplqid));
+	c.physeqid_pkd = htonl(0);
+	c.fetchszm_to_iqid = htonl(V_FW_EQ_CTRL_CMD_HOSTFCMODE(2) |
+				   V_FW_EQ_CTRL_CMD_PCIECHN(pi->tx_chan) |
+				   V_FW_EQ_CTRL_CMD_IQID(iqid));
+	c.dcaen_to_eqsize = htonl(V_FW_EQ_CTRL_CMD_FBMIN(2) |
+				  V_FW_EQ_CTRL_CMD_FBMAX(3) |
+				  V_FW_EQ_CTRL_CMD_CIDXFTHRESH(5) |
+				  V_FW_EQ_CTRL_CMD_EQSIZE(nentries));
+	c.eqaddr = cpu_to_be64(txq->q.phys_addr);
+
+	ret = t4_wr_mbox(adap, 0, &c, sizeof(c), &c);
+	if (ret) {
+		dma_free_coherent(adap->pdev_dev,
+				  nentries * sizeof(struct tx_desc),
+				  txq->q.desc, txq->q.phys_addr);
+		txq->q.desc = NULL;
+		return ret;
+	}
+
+	init_txq(adap, &txq->q, G_FW_EQ_CTRL_CMD_EQID(ntohl(c.cmpliqid_eqid)));
+	txq->adap = adap;
+	skb_queue_head_init(&txq->sendq);
+	tasklet_init(&txq->qresume_tsk, restart_ctrlq, (unsigned long)txq);
+	txq->full = 0;
+	return 0;
+}
+
+int t4_sge_alloc_ofld_txq(struct adapter *adap, struct sge_ofld_txq *txq,
+			  struct net_device *dev, unsigned int iqid)
+{
+	int ret, nentries;
+	struct fw_eq_ofld_cmd c;
+	struct port_info *pi = netdev_priv(dev);
+
+	/* Add status entries */
+	nentries = txq->q.size + STAT_LEN / sizeof(struct tx_desc);
+
+	txq->q.desc = alloc_ring(adap->pdev_dev, txq->q.size,
+			sizeof(struct tx_desc), sizeof(struct tx_sw_desc),
+			&txq->q.phys_addr, &txq->q.sdesc, STAT_LEN);
+	if (!txq->q.desc)
+		return -ENOMEM;
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_vfn = htonl(V_FW_CMD_OP(FW_EQ_OFLD_CMD) | F_FW_CMD_REQUEST |
+			    F_FW_CMD_WRITE | F_FW_CMD_EXEC |
+			    V_FW_EQ_OFLD_CMD_PFN(0) | V_FW_EQ_OFLD_CMD_VFN(0));
+	c.alloc_to_len16 = htonl(F_FW_EQ_OFLD_CMD_ALLOC |
+				 F_FW_EQ_OFLD_CMD_EQSTART | FW_LEN16(c));
+	c.fetchszm_to_iqid = htonl(V_FW_EQ_OFLD_CMD_HOSTFCMODE(2) |
+				   V_FW_EQ_OFLD_CMD_PCIECHN(pi->tx_chan) |
+				   V_FW_EQ_OFLD_CMD_IQID(iqid));
+	c.dcaen_to_eqsize = htonl(V_FW_EQ_OFLD_CMD_FBMIN(2) |
+				  V_FW_EQ_OFLD_CMD_FBMAX(3) |
+				  V_FW_EQ_OFLD_CMD_CIDXFTHRESH(5) |
+				  V_FW_EQ_OFLD_CMD_EQSIZE(nentries));
+	c.eqaddr = cpu_to_be64(txq->q.phys_addr);
+
+	ret = t4_wr_mbox(adap, 0, &c, sizeof(c), &c);
+	if (ret) {
+		kfree(txq->q.sdesc);
+		txq->q.sdesc = NULL;
+		dma_free_coherent(adap->pdev_dev,
+				  nentries * sizeof(struct tx_desc),
+				  txq->q.desc, txq->q.phys_addr);
+		txq->q.desc = NULL;
+		return ret;
+	}
+
+	init_txq(adap, &txq->q, G_FW_EQ_OFLD_CMD_EQID(ntohl(c.eqid_pkd)));
+	txq->adap = adap;
+	skb_queue_head_init(&txq->sendq);
+	tasklet_init(&txq->qresume_tsk, restart_ofldq, (unsigned long)txq);
+	txq->full = 0;
+	txq->mapping_err = 0;
+	return 0;
+}
+
+static void free_txq(struct adapter *adap, struct sge_txq *q)
+{
+	dma_free_coherent(adap->pdev_dev,
+			  q->size * sizeof(struct tx_desc) + STAT_LEN,
+			  q->desc, q->phys_addr);
+	q->cntxt_id = 0;
+	q->sdesc = NULL;
+	q->desc = NULL;
+}
+
+static void free_rspq_fl(struct adapter *adap, struct sge_rspq *rq,
+			 struct sge_fl *fl)
+{
+	unsigned int fl_id = fl ? fl->cntxt_id : 0xffff;
+
+	t4_iq_free(adap, 0, 0, 0, FW_IQ_TYPE_FL_INT_CAP, rq->cntxt_id, fl_id,
+		   0xffff);
+	dma_free_coherent(adap->pdev_dev, (rq->size + 1) * rq->iqe_len,
+			  rq->desc, rq->phys_addr);
+	netif_napi_del(&rq->napi);
+	rq->netdev = NULL;
+	rq->cntxt_id = rq->abs_id = 0;
+	rq->desc = NULL;
+
+	if (fl) {
+		free_rx_bufs(adap, fl, fl->avail);
+		dma_free_coherent(adap->pdev_dev, fl->size * 8 + STAT_LEN,
+				  fl->desc, fl->addr);
+		kfree(fl->sdesc);
+		fl->sdesc = NULL;
+		fl->cntxt_id = 0;
+		fl->desc = NULL;
+	}
+}
+
+/**
+ *	t4_free_sge_resources - free SGE resources
+ *	@adap: the adapter
+ *
+ *	Frees resources used by the SGE queue sets.
+ */
+void t4_free_sge_resources(struct adapter *adap)
+{
+	int i;
+	struct sge_eth_rxq *eq = adap->sge.ethrxq;
+	struct sge_eth_txq *etq = adap->sge.ethtxq;
+	struct sge_ofld_rxq *oq = adap->sge.ofldrxq;
+	struct sge_rspq *intrq = &adap->sge.intrq;
+
+	/* clean up Ethernet Tx/Rx queues */
+	for (i = 0; i < adap->sge.ethqsets; i++, eq++, etq++) {
+		if (eq->rspq.desc)
+			free_rspq_fl(adap, &eq->rspq, &eq->fl);
+		if (etq->q.desc) {
+			t4_eth_eq_free(adap, 0, 0, 0, etq->q.cntxt_id);
+			free_tx_desc(adap, &etq->q, etq->q.in_use, true);
+			kfree(etq->q.sdesc);
+			free_txq(adap, &etq->q);
+		}
+	}
+
+	/* clean up RDMA and iSCSI Rx queues */
+	for (i = 0; i < adap->sge.ofldqsets; i++, oq++) {
+		if (oq->rspq.desc)
+			free_rspq_fl(adap, &oq->rspq, &oq->fl);
+	}
+	for (i = 0, oq = adap->sge.rdmarxq; i < adap->sge.rdmaqs; i++, oq++) {
+		if (oq->rspq.desc)
+			free_rspq_fl(adap, &oq->rspq, &oq->fl);
+	}
+
+	/* clean up offload Tx queues */
+	for (i = 0; i < ARRAY_SIZE(adap->sge.ofldtxq); i++) {
+		struct sge_ofld_txq *q = &adap->sge.ofldtxq[i];
+
+		if (q->q.desc) {
+			tasklet_kill(&q->qresume_tsk);
+			t4_ofld_eq_free(adap, 0, 0, 0, q->q.cntxt_id);
+			free_tx_desc(adap, &q->q, q->q.in_use, false);
+			kfree(q->q.sdesc);
+			__skb_queue_purge(&q->sendq);
+			free_txq(adap, &q->q);
+		}
+	}
+
+	/* clean up control Tx queues */
+	for (i = 0; i < ARRAY_SIZE(adap->sge.ctrlq); i++) {
+		struct sge_ctrl_txq *cq = &adap->sge.ctrlq[i];
+
+		if (cq->q.desc) {
+			tasklet_kill(&cq->qresume_tsk);
+			t4_ctrl_eq_free(adap, 0, 0, 0, cq->q.cntxt_id);
+			__skb_queue_purge(&cq->sendq);
+			free_txq(adap, &cq->q);
+		}
+	}
+
+	if (adap->sge.fw_evtq.desc)
+		free_rspq_fl(adap, &adap->sge.fw_evtq, NULL);
+
+	if (intrq->desc) {
+		t4_iq_free(adap, 0, 0, 0, FW_IQ_TYPE_FL_INT_CAP,
+			   intrq->cntxt_id, 0xffff, 0xffff);
+		dma_free_coherent(adap->pdev_dev,
+				  (intrq->size + 1) * intrq->iqe_len,
+				  intrq->desc, intrq->phys_addr);
+		intrq->cntxt_id = intrq->abs_id = 0;
+		intrq->desc = NULL;
+		/* this queue doesn't use NAPI */
+	}
+}
+
+void t4_sge_start(struct adapter *adap)
+{
+	mod_timer(&adap->sge.rx_timer, jiffies + RX_QCHECK_PERIOD);
+	mod_timer(&adap->sge.tx_timer, jiffies + TX_QCHECK_PERIOD);
+}
+
+/**
+ *	t4_sge_stop - disable SGE operation
+ *	@adap: the adapter
+ *
+ *	Stop tasklets and timers associated with the DMA engine.  Note that
+ *	this is effective only if measures have been taken to disable any HW
+ *	events that may restart them.
+ */
+void t4_sge_stop(struct adapter *adap)
+{
+	int i;
+	struct sge *s = &adap->sge;
+
+	if (in_interrupt())  /* actions below require waiting */
+		return;
+
+	if (s->rx_timer.function)
+		del_timer_sync(&s->rx_timer);
+	if (s->tx_timer.function)
+		del_timer_sync(&s->tx_timer);
+
+	for (i = 0; i < ARRAY_SIZE(s->ofldtxq); i++) {
+		struct sge_ofld_txq *q = &s->ofldtxq[i];
+
+		if (q->q.desc)
+			tasklet_kill(&q->qresume_tsk);
+	}
+	for (i = 0; i < ARRAY_SIZE(s->ctrlq); i++) {
+		struct sge_ctrl_txq *cq = &s->ctrlq[i];
+
+		if (cq->q.desc)
+			tasklet_kill(&cq->qresume_tsk);
+	}
+}
+
+/**
+ *	t4_sge_init - initialize SGE
+ *	@adap: the adapter
+ *
+ *	Performs SGE initialization needed every time after a chip reset.
+ *	We do not initialize any of the queues here; instead the driver's
+ *	top level must request them individually.
+ */
+void t4_sge_init(struct adapter *adap)
+{
+	struct sge *s = &adap->sge;
+	unsigned int fl_align_log = ilog2(FL_ALIGN);
+
+	t4_set_reg_field(adap, A_SGE_CONTROL, V_PKTSHIFT(M_PKTSHIFT) |
+			 V_INGPADBOUNDARY(M_INGPADBOUNDARY) |
+			 F_EGRSTATUSPAGESIZE,
+			 V_INGPADBOUNDARY(fl_align_log - 5) | V_PKTSHIFT(2) |
+			 F_RXPKTCPLMODE | V_EGRSTATUSPAGESIZE(STAT_LEN == 128));
+	t4_set_reg_field(adap, A_SGE_HOST_PAGE_SIZE,
+			 V_HOSTPAGESIZEPF0(M_HOSTPAGESIZEPF0),
+			 V_HOSTPAGESIZEPF0(PAGE_SHIFT - 10));
+	t4_write_reg(adap, A_SGE_FL_BUFFER_SIZE0, PAGE_SIZE);
+#if FL_PG_ORDER > 0
+	t4_write_reg(adap, A_SGE_FL_BUFFER_SIZE1, PAGE_SIZE << FL_PG_ORDER);
+#endif
+	t4_write_reg(adap, A_SGE_INGRESS_RX_THRESHOLD,
+		     V_THRESHOLD_0(s->counter_val[0]) |
+		     V_THRESHOLD_1(s->counter_val[1]) |
+		     V_THRESHOLD_2(s->counter_val[2]) |
+		     V_THRESHOLD_3(s->counter_val[3]));
+	t4_write_reg(adap, A_SGE_TIMER_VALUE_0_AND_1,
+		     V_TIMERVALUE0(us_to_core_ticks(adap, s->timer_val[0])) |
+		     V_TIMERVALUE1(us_to_core_ticks(adap, s->timer_val[1])));
+	t4_write_reg(adap, A_SGE_TIMER_VALUE_2_AND_3,
+		     V_TIMERVALUE2(us_to_core_ticks(adap, s->timer_val[2])) |
+		     V_TIMERVALUE3(us_to_core_ticks(adap, s->timer_val[3])));
+	t4_write_reg(adap, A_SGE_TIMER_VALUE_4_AND_5,
+		     V_TIMERVALUE4(us_to_core_ticks(adap, s->timer_val[4])) |
+		     V_TIMERVALUE5(us_to_core_ticks(adap, s->timer_val[5])));
+	setup_timer(&s->rx_timer, sge_rx_timer_cb, (unsigned long)adap);
+	setup_timer(&s->tx_timer, sge_tx_timer_cb, (unsigned long)adap);
+	s->starve_thres = core_ticks_per_usec(adap) * 1000000;  /* 1 s */
+	s->idma_state[0] = s->idma_state[1] = 0;
+}
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH net-next 5/7] cxgb4: Add remaining driver headers and L2T management
  2010-02-18  1:37       ` [PATCH net-next 4/7] cxgb4: Add packet queues and packet DMA code Dimitris Michailidis
@ 2010-02-18  1:37         ` Dimitris Michailidis
  2010-02-18  1:37           ` [PATCH net-next 6/7] cxgb4: Add main driver file and driver Makefile Dimitris Michailidis
  0 siblings, 1 reply; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/cxgb4/cxgb4.h     |  754 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/cxgb4/cxgb4_uld.h |  230 +++++++++++++
 drivers/net/cxgb4/l2t.c       |  627 ++++++++++++++++++++++++++++++++++
 drivers/net/cxgb4/l2t.h       |  110 ++++++
 4 files changed, 1721 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/cxgb4.h
 create mode 100644 drivers/net/cxgb4/cxgb4_uld.h
 create mode 100644 drivers/net/cxgb4/l2t.c
 create mode 100644 drivers/net/cxgb4/l2t.h

diff --git a/drivers/net/cxgb4/cxgb4.h b/drivers/net/cxgb4/cxgb4.h
new file mode 100644
index 0000000..5eb39ec
--- /dev/null
+++ b/drivers/net/cxgb4/cxgb4.h
@@ -0,0 +1,754 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __CXGB4_H__
+#define __CXGB4_H__
+
+#include <linux/bitops.h>
+#include <linux/cache.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <asm/io.h>
+#include "cxgb4_uld.h"
+#include "t4_hw.h"
+
+#define FW_VERSION_MAJOR 1
+#define FW_VERSION_MINOR 0
+#define FW_VERSION_MICRO 0
+
+enum {
+	MAX_NPORTS = 4,     /* max # of ports */
+	SERNUM_LEN = 16,    /* Serial # length */
+	EC_LEN     = 16,    /* E/C length */
+	ID_LEN     = 16,    /* ID length */
+};
+
+enum {
+	MEM_EDC0,
+	MEM_EDC1,
+	MEM_MC
+};
+
+enum dev_master {
+	MASTER_CANT,
+	MASTER_MAY,
+	MASTER_MUST
+};
+
+enum dev_state {
+	DEV_STATE_UNINIT,
+	DEV_STATE_INIT,
+	DEV_STATE_ERR
+};
+
+enum {
+	PAUSE_RX      = 1 << 0,
+	PAUSE_TX      = 1 << 1,
+	PAUSE_AUTONEG = 1 << 2
+};
+
+struct port_stats {
+	u64 tx_octets;            /* total # of octets in good frames */
+	u64 tx_frames;            /* all good frames */
+	u64 tx_bcast_frames;      /* all broadcast frames */
+	u64 tx_mcast_frames;      /* all multicast frames */
+	u64 tx_ucast_frames;      /* all unicast frames */
+	u64 tx_error_frames;      /* all error frames */
+
+	u64 tx_frames_64;         /* # of Tx frames in a particular range */
+	u64 tx_frames_65_127;
+	u64 tx_frames_128_255;
+	u64 tx_frames_256_511;
+	u64 tx_frames_512_1023;
+	u64 tx_frames_1024_1518;
+	u64 tx_frames_1519_max;
+
+	u64 tx_drop;              /* # of dropped Tx frames */
+	u64 tx_pause;             /* # of transmitted pause frames */
+	u64 tx_ppp0;              /* # of transmitted PPP prio 0 frames */
+	u64 tx_ppp1;              /* # of transmitted PPP prio 1 frames */
+	u64 tx_ppp2;              /* # of transmitted PPP prio 2 frames */
+	u64 tx_ppp3;              /* # of transmitted PPP prio 3 frames */
+	u64 tx_ppp4;              /* # of transmitted PPP prio 4 frames */
+	u64 tx_ppp5;              /* # of transmitted PPP prio 5 frames */
+	u64 tx_ppp6;              /* # of transmitted PPP prio 6 frames */
+	u64 tx_ppp7;              /* # of transmitted PPP prio 7 frames */
+
+	u64 rx_octets;            /* total # of octets in good frames */
+	u64 rx_frames;            /* all good frames */
+	u64 rx_bcast_frames;      /* all broadcast frames */
+	u64 rx_mcast_frames;      /* all multicast frames */
+	u64 rx_ucast_frames;      /* all unicast frames */
+	u64 rx_too_long;          /* # of frames exceeding MTU */
+	u64 rx_jabber;            /* # of jabber frames */
+	u64 rx_fcs_err;           /* # of received frames with bad FCS */
+	u64 rx_len_err;           /* # of received frames with length error */
+	u64 rx_symbol_err;        /* symbol errors */
+	u64 rx_runt;              /* # of short frames */
+
+	u64 rx_frames_64;         /* # of Rx frames in a particular range */
+	u64 rx_frames_65_127;
+	u64 rx_frames_128_255;
+	u64 rx_frames_256_511;
+	u64 rx_frames_512_1023;
+	u64 rx_frames_1024_1518;
+	u64 rx_frames_1519_max;
+
+	u64 rx_pause;             /* # of received pause frames */
+	u64 rx_ppp0;              /* # of received PPP prio 0 frames */
+	u64 rx_ppp1;              /* # of received PPP prio 1 frames */
+	u64 rx_ppp2;              /* # of received PPP prio 2 frames */
+	u64 rx_ppp3;              /* # of received PPP prio 3 frames */
+	u64 rx_ppp4;              /* # of received PPP prio 4 frames */
+	u64 rx_ppp5;              /* # of received PPP prio 5 frames */
+	u64 rx_ppp6;              /* # of received PPP prio 6 frames */
+	u64 rx_ppp7;              /* # of received PPP prio 7 frames */
+
+	u64 rx_ovflow0;           /* drops due to buffer-group 0 overflows */
+	u64 rx_ovflow1;           /* drops due to buffer-group 1 overflows */
+	u64 rx_ovflow2;           /* drops due to buffer-group 2 overflows */
+	u64 rx_ovflow3;           /* drops due to buffer-group 3 overflows */
+	u64 rx_trunc0;            /* buffer-group 0 truncated packets */
+	u64 rx_trunc1;            /* buffer-group 1 truncated packets */
+	u64 rx_trunc2;            /* buffer-group 2 truncated packets */
+	u64 rx_trunc3;            /* buffer-group 3 truncated packets */
+};
+
+struct lb_port_stats {
+	u64 octets;
+	u64 frames;
+	u64 bcast_frames;
+	u64 mcast_frames;
+	u64 ucast_frames;
+	u64 error_frames;
+
+	u64 frames_64;
+	u64 frames_65_127;
+	u64 frames_128_255;
+	u64 frames_256_511;
+	u64 frames_512_1023;
+	u64 frames_1024_1518;
+	u64 frames_1519_max;
+
+	u64 drop;
+
+	u64 ovflow0;
+	u64 ovflow1;
+	u64 ovflow2;
+	u64 ovflow3;
+	u64 trunc0;
+	u64 trunc1;
+	u64 trunc2;
+	u64 trunc3;
+};
+
+struct tp_tcp_stats {
+	u32 tcpOutRsts;
+	u64 tcpInSegs;
+	u64 tcpOutSegs;
+	u64 tcpRetransSegs;
+};
+
+struct tp_err_stats {
+	u32 macInErrs[4];
+	u32 hdrInErrs[4];
+	u32 tcpInErrs[4];
+	u32 tnlCongDrops[4];
+	u32 ofldChanDrops[4];
+	u32 tnlTxDrops[4];
+	u32 ofldVlanDrops[4];
+	u32 tcp6InErrs[4];
+	u32 ofldNoNeigh;
+	u32 ofldCongDefer;
+};
+
+struct tp_params {
+	unsigned int ntxchan;        /* # of Tx channels */
+	unsigned int tre;            /* log2 of core clocks per TP tick */
+};
+
+struct vpd_params {
+	unsigned int cclk;
+	u8 ec[EC_LEN + 1];
+	u8 sn[SERNUM_LEN + 1];
+	u8 id[ID_LEN + 1];
+};
+
+struct pci_params {
+	unsigned char speed;
+	unsigned char width;
+};
+
+struct adapter_params {
+	struct tp_params  tp;
+	struct vpd_params vpd;
+	struct pci_params pci;
+
+	unsigned int fw_vers;
+	unsigned int tp_vers;
+
+	unsigned short mtus[NMTUS];
+	unsigned short a_wnd[NCCTRL_WIN];
+	unsigned short b_wnd[NCCTRL_WIN];
+
+	unsigned char nports;             /* # of ethernet ports */
+	unsigned char portvec;
+	unsigned char rev;                /* chip revision */
+	unsigned char offload;
+
+	unsigned int ofldq_wr_cred;
+};
+
+struct trace_params {
+	u32 data[TRACE_LEN / 4];
+	u32 mask[TRACE_LEN / 4];
+	unsigned short snap_len;
+	unsigned short min_len;
+	unsigned char skip_ofst;
+	unsigned char skip_len;
+	unsigned char invert;
+	unsigned char port;
+};
+
+struct link_config {
+	unsigned short supported;        /* link capabilities */
+	unsigned short advertising;      /* advertised capabilities */
+	unsigned short requested_speed;  /* speed user has requested */
+	unsigned short speed;            /* actual link speed */
+	unsigned char  requested_fc;     /* flow control user has requested */
+	unsigned char  fc;               /* actual link flow control */
+	unsigned char  autoneg;          /* autonegotiating? */
+	unsigned char  link_ok;          /* link up? */
+};
+
+#define FW_LEN16(fw_struct) V_FW_CMD_LEN16(sizeof(fw_struct) / 16)
+
+#define CH_ERR(adap, fmt, ...)   dev_err(adap->pdev_dev, fmt, ## __VA_ARGS__)
+#define CH_WARN(adap, fmt, ...)  dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__)
+#define CH_ALERT(adap, fmt, ...) dev_alert(adap->pdev_dev, fmt, ## __VA_ARGS__)
+
+#define CH_WARN_RATELIMIT(adap, fmt, ...)  do {\
+	if (printk_ratelimit()) \
+		dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__); \
+} while (0)
+
+enum {
+	MAX_ETH_QSETS = 32,           /* # of Ethernet Tx/Rx queue sets */
+	MAX_OFLD_QSETS = 16,          /* # of offload Tx/Rx queue sets */
+	MAX_CTRL_QUEUES = NCHAN,      /* # of control Tx queues */
+	MAX_RDMA_QUEUES = NCHAN,      /* # of streaming RDMA Rx queues */
+};
+
+enum {
+	MAX_EGRQ = 128,         /* max # of egress queues, including FLs */
+	MAX_INGQ = 64           /* max # of interrupt-capable ingress queues */
+};
+
+struct adapter;
+struct vlan_group;
+struct sge_rspq;
+
+struct port_info {
+	struct adapter *adapter;
+	struct vlan_group *vlan_grp;
+	u16    viid;
+	s16    xact_addr_filt;        /* index of exact MAC address filter */
+	u16    rss_size;              /* size of VI's RSS table slice */
+	s8     mdio_addr;
+	u8     port_type;
+	u8     mod_type;
+	u8     port_id;
+	u8     tx_chan;
+	u8     lport;                 /* associated offload logical port */
+	u8     rx_offload;            /* CSO, etc */
+	u8     nqsets;                /* # of qsets */
+	u8     first_qset;            /* index of first qset */
+	struct link_config link_cfg;
+};
+
+/* port_info.rx_offload flags */
+enum {
+	RX_CSO = 1 << 0,
+};
+
+struct dentry;
+struct work_struct;
+struct proc_dir_entry;
+
+enum {                                 /* adapter flags */
+	FULL_INIT_DONE     = (1 << 0),
+	USING_MSI          = (1 << 1),
+	USING_MSIX         = (1 << 2),
+	QUEUES_BOUND       = (1 << 3),
+	FW_OK              = (1 << 4),
+};
+
+struct rx_sw_desc;
+
+struct sge_fl {                     /* SGE free-buffer queue state */
+	unsigned int avail;         /* # of available Rx buffers */
+	unsigned int pend_cred;     /* new buffers since last FL DB ring */
+	unsigned int cidx;          /* consumer index */
+	unsigned int pidx;          /* producer index */
+	unsigned long alloc_failed; /* # of times buffer allocation failed */
+	unsigned long large_alloc_failed;
+	unsigned long starving;
+	/* RO fields */
+	unsigned int cntxt_id;      /* SGE context id for the free list */
+	unsigned int size;          /* capacity of free list */
+	struct rx_sw_desc *sdesc;   /* address of SW Rx descriptor ring */
+	__be64 *desc;               /* address of HW Rx descriptor ring */
+	dma_addr_t addr;            /* bus address of HW ring start */
+};
+
+/* A packet gather list */
+struct pkt_gl {
+	skb_frag_t frags[MAX_SKB_FRAGS];
+	void *va;                         /* virtual address of first byte */
+	unsigned int nfrags;              /* # of fragments */
+	unsigned int tot_len;             /* total length of fragments */
+};
+
+typedef int (*rspq_handler_t)(struct sge_rspq *q, const __be64 *rsp,
+			      const struct pkt_gl *gl);
+
+struct sge_rspq {                   /* state for an SGE response queue */
+	spinlock_t lock;            /* guards response processing */
+	struct napi_struct napi;
+	const __be64 *cur_desc;     /* current descriptor in queue */
+	unsigned int cidx;          /* consumer index */
+	u8 gen;                     /* current generation bit */
+	u8 intr_params;             /* interrupt holdoff parameters */
+	u8 next_intr_params;        /* holdoff params for next interrupt */
+	u8 pktcnt_idx;              /* interrupt packet threshold */
+	u8 uld;                     /* ULD handling this queue */
+	u8 idx;                     /* queue index within its group */
+	int offset;                 /* offset into current Rx buffer */
+	u16 cntxt_id;               /* SGE context id for the response q */
+	u16 abs_id;                 /* absolute SGE id for the response q */
+	__be64 *desc;               /* address of HW response ring */
+	dma_addr_t phys_addr;       /* physical address of the ring */
+	unsigned int iqe_len;       /* entry size */
+	unsigned int size;          /* capacity of response queue */
+	struct adapter *adap;
+	struct net_device *netdev;  /* associated net device */
+	rspq_handler_t handler;
+};
+
+struct sge_eth_stats {              /* Ethernet queue statistics */
+	unsigned long pkts;         /* # of ethernet packets */
+	unsigned long lro_pkts;     /* # of LRO super packets */
+	unsigned long lro_merged;   /* # of wire packets merged by LRO */
+	unsigned long rx_cso;       /* # of Rx checksum offloads */
+	unsigned long vlan_ex;      /* # of Rx VLAN extractions */
+	unsigned long rx_drops;     /* # of packets dropped due to no mem */
+};
+
+struct sge_eth_rxq {                /* SW Ethernet Rx queue */
+	struct sge_rspq rspq;
+	struct sge_fl fl;
+	struct sge_eth_stats stats;
+} ____cacheline_aligned_in_smp;
+
+struct sge_ofld_stats {             /* offload queue statistics */
+	unsigned long pkts;         /* # of packets */
+	unsigned long imm;          /* # of immediate-data packets */
+	unsigned long an;           /* # of asynchronous notifications */
+	unsigned long nomem;        /* # of responses deferred due to no mem */
+};
+
+struct sge_ofld_rxq {               /* SW offload Rx queue */
+	struct sge_rspq rspq;
+	struct sge_fl fl;
+	struct sge_ofld_stats stats;
+} ____cacheline_aligned_in_smp;
+
+struct tx_desc {
+	__be64 flit[8];
+};
+
+struct tx_sw_desc;
+
+struct sge_txq {
+	unsigned int  in_use;       /* # of in-use Tx descriptors */
+	unsigned int  size;         /* # of descriptors */
+	unsigned int  cidx;         /* SW consumer index */
+	unsigned int  pidx;         /* producer index */
+	unsigned long stops;        /* # of times q has been stopped */
+	unsigned long restarts;     /* # of queue restarts */
+	unsigned int  cntxt_id;     /* SGE context id for the Tx q */
+	struct tx_desc *desc;       /* address of HW Tx descriptor ring */
+	struct tx_sw_desc *sdesc;   /* address of SW Tx descriptor ring */
+	struct sge_qstat *stat;     /* queue status entry */
+	dma_addr_t    phys_addr;    /* physical address of the ring */
+};
+
+struct sge_eth_txq {                /* state for an SGE Ethernet Tx queue */
+	struct sge_txq q;
+	struct netdev_queue *txq;   /* associated netdev TX queue */
+	unsigned long tso;          /* # of TSO requests */
+	unsigned long tx_cso;       /* # of Tx checksum offloads */
+	unsigned long vlan_ins;     /* # of Tx VLAN insertions */
+	unsigned long mapping_err;  /* # of I/O MMU packet mapping errors */
+} ____cacheline_aligned_in_smp;
+
+struct sge_ofld_txq {               /* state for an SGE offload Tx queue */
+	struct sge_txq q;
+	struct adapter *adap;
+	struct sk_buff_head sendq;  /* list of backpressured packets */
+	struct tasklet_struct qresume_tsk; /* restarts the queue */
+	u8 full;                    /* the Tx ring is full */
+	unsigned long mapping_err;  /* # of I/O MMU packet mapping errors */
+} ____cacheline_aligned_in_smp;
+
+struct sge_ctrl_txq {               /* state for an SGE control Tx queue */
+	struct sge_txq q;
+	struct adapter *adap;
+	struct sk_buff_head sendq;  /* list of backpressured packets */
+	struct tasklet_struct qresume_tsk; /* restarts the queue */
+	u8 full;                    /* the Tx ring is full */
+} ____cacheline_aligned_in_smp;
+
+struct sge {
+	struct sge_eth_txq ethtxq[MAX_ETH_QSETS];
+	struct sge_eth_rxq ethrxq[MAX_ETH_QSETS];
+	struct sge_ofld_txq ofldtxq[MAX_OFLD_QSETS];
+	struct sge_ofld_rxq ofldrxq[MAX_OFLD_QSETS];
+	struct sge_ofld_rxq rdmarxq[MAX_RDMA_QUEUES];
+	struct sge_ctrl_txq ctrlq[MAX_CTRL_QUEUES];
+	struct sge_rspq fw_evtq;
+	struct sge_rspq intrq;
+	u16 max_ethqsets;           /* # of available Ethernet queue sets */
+	u16 ethqsets;               /* # of active Ethernet queue sets */
+	u16 ofldqsets;              /* # of active offload queue sets */
+	u16 rdmaqs;                 /* # of available RDMA Rx queues */
+	u16 ofld_rxq[MAX_OFLD_QSETS];
+	u16 rdma_rxq[NCHAN];
+	u16 timer_val[SGE_NTIMERS];
+	u8 counter_val[SGE_NCOUNTERS];
+	unsigned int starve_thres;
+	u8 idma_state[2];
+	void *egr_map[MAX_EGRQ];    /* qid->queue egress queue map */
+	struct sge_rspq *ingr_map[MAX_INGQ]; /* qid->queue ingress queue map */
+	DECLARE_BITMAP(starving_fl, MAX_EGRQ);
+	DECLARE_BITMAP(txq_maperr, MAX_EGRQ);
+	struct timer_list rx_timer; /* refills starving FLs */
+	struct timer_list tx_timer; /* checks Tx queues */
+};
+
+#define for_each_ethrxq(sge, i) for (i = 0; i < (sge)->ethqsets; i++)
+#define for_each_ofldrxq(sge, i) for (i = 0; i < (sge)->ofldqsets; i++)
+#define for_each_rdmarxq(sge, i) for (i = 0; i < (sge)->rdmaqs; i++)
+
+struct l2t_data;
+
+struct adapter {
+	void __iomem *regs;
+	struct pci_dev *pdev;
+	struct device *pdev_dev;
+	unsigned long registered_device_map;
+	unsigned long open_device_map;
+	unsigned long flags;
+
+	const char *name;
+	int msg_enable;
+
+	struct adapter_params params;
+	struct cxgb4_virt_res vres;
+	unsigned int swintr;
+
+	unsigned int wol;
+
+	struct {
+		unsigned short vec;
+		char desc[14];
+	} msix_info[MAX_INGQ + 1];
+
+	struct sge sge;
+
+	struct net_device *port[MAX_NPORTS];
+	u8 chan_map[NCHAN];                   /* channel -> port map */
+
+	struct l2t_data *l2t;
+	void *uld_handle[CXGB4_ULD_MAX];
+	struct list_head list_node;
+
+	struct tid_info tids;
+	void **tid_release_head;
+	spinlock_t tid_release_lock;
+	struct work_struct tid_release_task;
+
+	struct dentry *debugfs_root;
+	struct proc_dir_entry *proc_root;
+
+	spinlock_t stats_lock;
+};
+
+static inline u32 t4_read_reg(struct adapter *adap, u32 reg_addr)
+{
+	return readl(adap->regs + reg_addr);
+}
+
+static inline void t4_write_reg(struct adapter *adap, u32 reg_addr, u32 val)
+{
+	writel(val, adap->regs + reg_addr);
+}
+
+#ifndef readq
+static inline u64 readq(const volatile void __iomem *addr)
+{
+	return readl(addr) + ((u64)readl(addr + 4) << 32);
+}
+
+static inline void writeq(u64 val, volatile void __iomem *addr)
+{
+	writel(val, addr);
+	writel(val >> 32, addr + 4);
+}
+#endif
+
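+/*
+ * Note (explanatory): the readq/writeq fallbacks above are for platforms
+ * without native 64-bit MMIO.  They access the register as two 32-bit
+ * halves, low word first, and are therefore not atomic; they are suitable
+ * only for registers where that is acceptable.
+ */
+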
+static inline u64 t4_read_reg64(struct adapter *adap, u32 reg_addr)
+{
+	return readq(adap->regs + reg_addr);
+}
+
+static inline void t4_write_reg64(struct adapter *adap, u32 reg_addr, u64 val)
+{
+	writeq(val, adap->regs + reg_addr);
+}
+
+/**
+ * netdev2pinfo - return the port_info structure associated with a net_device
+ * @dev: the netdev
+ *
+ * Return the struct port_info associated with a net_device
+ */
+static inline struct port_info *netdev2pinfo(const struct net_device *dev)
+{
+	return netdev_priv(dev);
+}
+
+/**
+ * adap2pinfo - return the port_info of a port
+ * @adap: the adapter
+ * @idx: the port index
+ *
+ * Return the port_info structure for the port of the given index.
+ */
+static inline struct port_info *adap2pinfo(struct adapter *adap, int idx)
+{
+	return netdev_priv(adap->port[idx]);
+}
+
+/**
+ * netdev2adap - return the adapter structure associated with a net_device
+ * @dev: the netdev
+ *
+ * Return the struct adapter associated with a net_device
+ */
+static inline struct adapter *netdev2adap(const struct net_device *dev)
+{
+	return netdev2pinfo(dev)->adapter;
+}
+
+void t4_os_portmod_changed(const struct adapter *adap, int port_id);
+void t4_os_link_changed(struct adapter *adap, int port_id, int link_stat);
+
+void *t4_alloc_mem(size_t size);
+void t4_free_mem(void *addr);
+
+void t4_free_sge_resources(struct adapter *adap);
+irq_handler_t t4_intr_handler(struct adapter *adap);
+netdev_tx_t t4_eth_xmit(struct sk_buff *skb, struct net_device *dev);
+int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
+		     const struct pkt_gl *gl);
+int t4_mgmt_tx(struct adapter *adap, struct sk_buff *skb);
+int t4_ofld_send(struct adapter *adap, struct sk_buff *skb);
+int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
+		     struct net_device *dev, int intr_idx,
+		     struct sge_fl *fl, rspq_handler_t hnd);
+int t4_sge_alloc_eth_txq(struct adapter *adap, struct sge_eth_txq *txq,
+			 struct net_device *dev, struct netdev_queue *netdevq,
+			 unsigned int iqid);
+int t4_sge_alloc_ctrl_txq(struct adapter *adap, struct sge_ctrl_txq *txq,
+			  struct net_device *dev, unsigned int iqid,
+			  unsigned int cmplqid);
+int t4_sge_alloc_ofld_txq(struct adapter *adap, struct sge_ofld_txq *txq,
+			  struct net_device *dev, unsigned int iqid);
+irqreturn_t t4_sge_intr_msix(int irq, void *cookie);
+void t4_sge_init(struct adapter *adap);
+void t4_sge_start(struct adapter *adap);
+void t4_sge_stop(struct adapter *adap);
+
+#define for_each_port(adapter, iter) \
+	for (iter = 0; iter < (adapter)->params.nports; ++iter)
+
+static inline int is_offload(const struct adapter *adap)
+{
+	return adap->params.offload;
+}
+
+static inline unsigned int core_ticks_per_usec(const struct adapter *adap)
+{
+	return adap->params.vpd.cclk / 1000;
+}
+
+static inline unsigned int us_to_core_ticks(const struct adapter *adap,
+					    unsigned int us)
+{
+	return (us * adap->params.vpd.cclk) / 1000;
+}
+
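+/*
+ * Example (illustrative): these helpers treat params.vpd.cclk as the core
+ * clock in kHz, so with a hypothetical 250 MHz core clock (cclk == 250000)
+ * core_ticks_per_usec() returns 250 and us_to_core_ticks(adap, 5) returns
+ * 1250.
+ */
+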
+void t4_set_reg_field(struct adapter *adap, unsigned int addr, u32 mask,
+		      u32 val);
+
+int t4_wr_mbox_meat(struct adapter *adap, int mbox, const void *cmd, int size,
+		    void *rpl, bool sleep_ok);
+
+static inline int t4_wr_mbox(struct adapter *adap, int mbox, const void *cmd,
+			     int size, void *rpl)
+{
+	return t4_wr_mbox_meat(adap, mbox, cmd, size, rpl, true);
+}
+
+static inline int t4_wr_mbox_ns(struct adapter *adap, int mbox, const void *cmd,
+				int size, void *rpl)
+{
+	return t4_wr_mbox_meat(adap, mbox, cmd, size, rpl, false);
+}
+
+void t4_intr_enable(struct adapter *adapter);
+void t4_intr_disable(struct adapter *adapter);
+void t4_intr_clear(struct adapter *adapter);
+int t4_slow_intr_handler(struct adapter *adapter);
+
+int t4_hash_mac_addr(const u8 *addr);
+int t4_link_start(struct adapter *adap, unsigned int mbox, unsigned int port,
+		  struct link_config *lc);
+int t4_restart_aneg(struct adapter *adap, unsigned int mbox, unsigned int port);
+int t4_seeprom_wp(struct adapter *adapter, bool enable);
+int t4_read_flash(struct adapter *adapter, unsigned int addr,
+		  unsigned int nwords, u32 *data, int byte_oriented);
+int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size);
+int t4_get_fw_version(struct adapter *adapter, u32 *vers);
+int t4_get_tp_version(struct adapter *adapter, u32 *vers);
+int t4_check_fw_version(struct adapter *adapter);
+int t4_init_hw(struct adapter *adapter, u32 fw_params);
+int t4_prep_adapter(struct adapter *adapter);
+int t4_port_init(struct adapter *adap, int mbox, int pf, int vf);
+void t4_fatal_err(struct adapter *adapter);
+void t4_set_vlan_accel(struct adapter *adapter, unsigned int ports, int on);
+int t4_set_trace_filter(struct adapter *adapter, const struct trace_params *tp,
+			int filter_index, int enable);
+void t4_get_trace_filter(struct adapter *adapter, struct trace_params *tp,
+			 int filter_index, int *enabled);
+int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
+			int start, int n, const u16 *rspq, unsigned int nrspq);
+int t4_config_glbl_rss(struct adapter *adapter, int mbox, unsigned int mode,
+		       unsigned int flags);
+int t4_read_rss(struct adapter *adapter, u16 *entries);
+int t4_mps_set_active_ports(struct adapter *adap, unsigned int port_mask);
+int t4_mc_read(struct adapter *adap, u32 addr, __be32 *data, u64 *parity);
+int t4_edc_read(struct adapter *adap, int idx, u32 addr, __be32 *data,
+		u64 *parity);
+
+void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p);
+void t4_get_lb_stats(struct adapter *adap, int idx, struct lb_port_stats *p);
+
+void t4_read_mtu_tbl(struct adapter *adap, u16 *mtus, u8 *mtu_log);
+void t4_tp_get_err_stats(struct adapter *adap, struct tp_err_stats *st);
+void t4_tp_get_tcp_stats(struct adapter *adap, struct tp_tcp_stats *v4,
+			 struct tp_tcp_stats *v6);
+void t4_load_mtus(struct adapter *adap, const unsigned short *mtus,
+		  const unsigned short *alpha, const unsigned short *beta);
+
+void t4_wol_magic_enable(struct adapter *adap, unsigned int port,
+			 const u8 *addr);
+int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
+		      u64 mask0, u64 mask1, unsigned int crc, bool enable);
+
+int t4_fw_hello(struct adapter *adap, unsigned int mbox, unsigned int evt_mbox,
+		enum dev_master master, enum dev_state *state);
+int t4_fw_bye(struct adapter *adap, unsigned int mbox);
+int t4_early_init(struct adapter *adap, unsigned int mbox);
+int t4_fw_reset(struct adapter *adap, unsigned int mbox, int reset);
+int t4_query_params(struct adapter *adap, unsigned int mbox,
+		    unsigned int nparams, const u32 *params, u32 *val);
+int t4_set_params(struct adapter *adap, unsigned int mbox, unsigned int nparams,
+		  const u32 *params, const u32 *val);
+int t4_cfg_pfvf(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		unsigned int vf, unsigned int txq, unsigned int txq_eth_ctrl,
+		unsigned int rxqi, unsigned int rxq, unsigned int tc,
+		unsigned int vi, unsigned int cmask, unsigned int pmask,
+		unsigned int nexact, unsigned int rcaps, unsigned int wxcaps);
+int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port,
+		unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac,
+		unsigned int *rss_size);
+int t4_free_vi(struct adapter *adap, unsigned int mbox, unsigned int pf,
+	       unsigned int vf, unsigned int viid);
+int t4_set_rxmode(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		int mtu, int promisc, int all_multi, int bcast, bool sleep_ok);
+int t4_alloc_mac_filt(struct adapter *adap, unsigned int mbox,
+		      unsigned int viid, bool free, unsigned int naddr,
+		      const u8 **addr, u16 *idx, u64 *hash, bool sleep_ok);
+int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		  int idx, const u8 *addr, bool persist);
+int t4_set_addr_hash(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		     bool ucast, u64 vec, bool sleep_ok);
+int t4_enable_vi(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		 bool rx_en, bool tx_en);
+int t4_identify_port(struct adapter *adap, unsigned int mbox, unsigned int viid,
+		     unsigned int nblinks);
+int t4_mdio_rd(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,
+	       unsigned int mmd, unsigned int reg, u16 *valp);
+int t4_mdio_wr(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,
+	       unsigned int mmd, unsigned int reg, u16 val);
+int t4_iq_start_stop(struct adapter *adap, unsigned int mbox, bool start,
+		     unsigned int pf, unsigned int vf, unsigned int iqid,
+		     unsigned int fl0id, unsigned int fl1id);
+int t4_iq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+	       unsigned int vf, unsigned int iqtype, unsigned int iqid,
+	       unsigned int fl0id, unsigned int fl1id);
+int t4_eth_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		   unsigned int vf, unsigned int eqid);
+int t4_ctrl_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		    unsigned int vf, unsigned int eqid);
+int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
+		    unsigned int vf, unsigned int eqid);
+int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl);
+#endif /* __CXGB4_H__ */
diff --git a/drivers/net/cxgb4/cxgb4_uld.h b/drivers/net/cxgb4/cxgb4_uld.h
new file mode 100644
index 0000000..1d5614f
--- /dev/null
+++ b/drivers/net/cxgb4/cxgb4_uld.h
@@ -0,0 +1,230 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __CXGB4_OFLD_H
+#define __CXGB4_OFLD_H
+
+#include <linux/cache.h>
+#include <linux/spinlock.h>
+#include <linux/skbuff.h>
+#include <asm/atomic.h>
+
+/* CPL message priority levels */
+enum {
+	CPL_PRIORITY_DATA     = 0,  /* data messages */
+	CPL_PRIORITY_SETUP    = 1,  /* connection setup messages */
+	CPL_PRIORITY_TEARDOWN = 0,  /* connection teardown messages */
+	CPL_PRIORITY_LISTEN   = 1,  /* listen start/stop messages */
+	CPL_PRIORITY_ACK      = 1,  /* RX ACK messages */
+	CPL_PRIORITY_CONTROL  = 1   /* control messages */
+};
+
+#define INIT_TP_WR(w, tid) do { \
+	(w)->wr.wr_hi = htonl(V_FW_WR_OP(FW_TP_WR) | \
+			      V_FW_WR_IMMDLEN(sizeof(*w) - sizeof(w->wr))); \
+	(w)->wr.wr_mid = htonl(V_FW_WR_LEN16(DIV_ROUND_UP(sizeof(*w), 16)) | \
+			       V_FW_WR_FLOWID(tid)); \
+	(w)->wr.wr_lo = cpu_to_be64(0); \
+} while (0)
+
+#define INIT_TP_WR_CPL(w, cpl, tid) do { \
+	INIT_TP_WR(w, tid); \
+	OPCODE_TID(w) = htonl(MK_OPCODE_TID(cpl, tid)); \
+} while (0)
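+
+/*
+ * Illustrative use of the macros above: a ULD building an immediate-data CPL
+ * in an skb it has already allocated might do the following (skb, tid and dev
+ * are the ULD's own; remaining fields and error handling omitted):
+ *
+ *	struct cpl_close_con_req *req;
+ *
+ *	req = (struct cpl_close_con_req *)__skb_put(skb, sizeof(*req));
+ *	INIT_TP_WR_CPL(req, CPL_CLOSE_CON_REQ, tid);
+ *	cxgb4_ofld_send(dev, skb);
+ */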
+
+/* Special asynchronous notification message */
+#define CXGB4_MSG_AN ((void *)1)
+
+struct serv_entry {
+	void *data;
+};
+
+union aopen_entry {
+	void *data;
+	union aopen_entry *next;
+};
+
+/*
+ * Holds the size, base address, free list start, etc. of the TID, server TID,
+ * and active-open TID tables.  The tables themselves are allocated dynamically.
+ */
+struct tid_info {
+	void **tid_tab;
+	unsigned int ntids;
+
+	struct serv_entry *stid_tab;
+	unsigned long *stid_bmap;
+	unsigned int nstids;
+	unsigned int stid_base;
+
+	union aopen_entry *atid_tab;
+	unsigned int natids;
+
+	unsigned int nftids;
+	unsigned int ftid_base;
+
+	spinlock_t atid_lock ____cacheline_aligned_in_smp;
+	union aopen_entry *afree;
+	unsigned int atids_in_use;
+
+	spinlock_t stid_lock;
+	unsigned int stids_in_use;
+
+	atomic_t tids_in_use;
+};
+
+static inline void *lookup_tid(const struct tid_info *t, unsigned int tid)
+{
+	return tid < t->ntids ? t->tid_tab[tid] : NULL;
+}
+
+static inline void *lookup_atid(const struct tid_info *t, unsigned int atid)
+{
+	return atid < t->natids ? t->atid_tab[atid].data : NULL;
+}
+
+static inline void *lookup_stid(const struct tid_info *t, unsigned int stid)
+{
+	stid -= t->stid_base;
+	return stid < t->nstids ? t->stid_tab[stid].data : NULL;
+}
+
+static inline void cxgb4_insert_tid(struct tid_info *t, void *data,
+				    unsigned int tid)
+{
+	t->tid_tab[tid] = data;
+	atomic_inc(&t->tids_in_use);
+}
+
+int cxgb4_alloc_atid(struct tid_info *t, void *data);
+int cxgb4_alloc_stid(struct tid_info *t, int family, void *data);
+void cxgb4_free_atid(struct tid_info *t, unsigned int atid);
+void cxgb4_free_stid(struct tid_info *t, unsigned int stid, int family);
+void cxgb4_remove_tid(struct tid_info *t, unsigned int qid, unsigned int tid);
+void cxgb4_queue_tid_release(struct tid_info *t, unsigned int chan,
+			     unsigned int tid);
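+
+/*
+ * Rough sketch of how a ULD uses the TID table API above for an actively
+ * opened connection (tids is the struct tid_info the LLD hands to the ULD,
+ * ep is a hypothetical per-connection structure, error handling omitted):
+ *
+ *	atid = cxgb4_alloc_atid(tids, ep);
+ *	... send the active open request tagged with atid ...
+ *	cxgb4_insert_tid(tids, ep, tid);	(once the HW assigns a TID)
+ *	cxgb4_free_atid(tids, atid);
+ *	...
+ *	cxgb4_remove_tid(tids, queue, tid);	(at connection teardown)
+ */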
+
+struct in6_addr;
+
+int cxgb4_create_server(const struct net_device *dev, unsigned int stid,
+			__be32 sip, __be16 sport, unsigned int queue);
+int cxgb4_create_server6(const struct net_device *dev, unsigned int stid,
+			 const struct in6_addr *sip, __be16 sport,
+			 unsigned int queue);
+
+static inline void set_wr_txq(struct sk_buff *skb, int prio, int queue)
+{
+	skb_set_queue_mapping(skb, (queue << 1) | prio);
+}
+
+enum cxgb4_uld {
+	CXGB4_ULD_RDMA,
+	CXGB4_ULD_ISCSI,
+	CXGB4_ULD_MAX
+};
+
+enum cxgb4_state {
+	CXGB4_STATE_UP,
+	CXGB4_STATE_START_RECOVERY,
+	CXGB4_STATE_DOWN,
+	CXGB4_STATE_DETACH
+};
+
+struct pci_dev;
+struct l2t_data;
+struct net_device;
+struct pkt_gl;
+
+struct cxgb4_range {
+	unsigned int start;
+	unsigned int size;
+};
+
+struct cxgb4_virt_res {                      /* virtualized HW resources */
+	struct cxgb4_range ddp;
+	struct cxgb4_range iscsi;
+	struct cxgb4_range stag;
+	struct cxgb4_range rq;
+	struct cxgb4_range pbl;
+};
+
+/*
+ * Block of information the LLD provides to ULDs attaching to a device.
+ */
+struct cxgb4_lld_info {
+	struct pci_dev *pdev;                /* associated PCI device */
+	struct l2t_data *l2t;                /* L2 table */
+	struct tid_info *tids;               /* TID table */
+	struct net_device **ports;           /* device ports */
+	const struct cxgb4_virt_res *vr;     /* assorted HW resources */
+	const unsigned short *mtus;          /* MTU table */
+	const unsigned short *rxq_ids;       /* the ULD's Rx queue ids */
+	unsigned short nrxq;                 /* # of Rx queues */
+	unsigned short ntxq;                 /* # of Tx queues */
+	unsigned char nchan;                 /* # of channels */
+	unsigned char nports;                /* # of ports */
+	unsigned char wr_cred;               /* WR 16-byte credits */
+	unsigned char adapter_type;          /* type of adapter */
+	unsigned int fw_vers;                /* FW version */
+	unsigned short udb_density;          /* # of user DB/page */
+	unsigned short ucq_density;          /* # of user CQs/page */
+	void __iomem *gts_reg;               /* address of GTS register */
+	void __iomem *db_reg;                /* address of kernel doorbell */
+};
+
+/* cxgb4_lld_info.adapter_type values */
+enum {
+	CXGB4_ADAP_T4 = 0,
+};
+
+struct cxgb4_uld_info {
+	const char *name;
+	void *(*add)(const struct cxgb4_lld_info *p);
+	int (*rx_handler)(void *handle, const __be64 *rsp,
+			  const struct pkt_gl *gl);
+	int (*state_change)(void *handle, enum cxgb4_state new_state);
+};
+
+int cxgb4_register_uld(enum cxgb4_uld type, const struct cxgb4_uld_info *p);
+int cxgb4_unregister_uld(enum cxgb4_uld type);
+int cxgb4_ofld_send(struct net_device *dev, struct sk_buff *skb);
+unsigned int cxgb4_port_chan(const struct net_device *dev);
+struct net_device *cxgb4_netdev_by_hwid(struct pci_dev *pdev, unsigned int id);
+unsigned int cxgb4_best_mtu(const unsigned short *mtus, unsigned short mtu,
+			    unsigned int *idx);
+void cxgb4_iscsi_init(struct net_device *dev, unsigned int tag_mask,
+		      const unsigned int *pgsz_order);
+struct sk_buff *cxgb4_pktgl_to_skb(const struct pkt_gl *gl,
+				   unsigned int skb_len, unsigned int pull_len);
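+
+/*
+ * Minimal registration sketch for a hypothetical ULD (the "my_*" names are
+ * placeholders): add() is called once per adapter with the LLD information
+ * block above, rx_handler() receives ingress messages for the ULD's queues,
+ * and state_change() reports adapter state transitions.
+ *
+ *	static struct cxgb4_uld_info my_uld_info = {
+ *		.name         = "my_uld",
+ *		.add          = my_uld_add,
+ *		.rx_handler   = my_uld_rx,
+ *		.state_change = my_uld_state_change,
+ *	};
+ *
+ *	cxgb4_register_uld(CXGB4_ULD_ISCSI, &my_uld_info);
+ */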
+#endif  /* !__CXGB4_OFLD_H */
diff --git a/drivers/net/cxgb4/l2t.c b/drivers/net/cxgb4/l2t.c
new file mode 100644
index 0000000..b647a2a
--- /dev/null
+++ b/drivers/net/cxgb4/l2t.c
@@ -0,0 +1,627 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/if.h>
+#include <linux/if_vlan.h>
+#include <linux/jhash.h>
+#include <net/neighbour.h>
+#include "cxgb4.h"
+#include "l2t.h"
+#include "t4_msg.h"
+#include "t4fw_api.h"
+
+#define VLAN_NONE 0xfff
+
+/* identifies sync vs async L2T_WRITE_REQs */
+#define S_SYNC_WR    12
+#define V_SYNC_WR(x) ((x) << S_SYNC_WR)
+#define F_SYNC_WR    V_SYNC_WR(1)
+
+enum {
+	L2T_STATE_VALID,      /* entry is up to date */
+	L2T_STATE_STALE,      /* entry may be used but needs revalidation */
+	L2T_STATE_RESOLVING,  /* entry needs address resolution */
+	L2T_STATE_SYNC_WRITE, /* synchronous write of entry underway */
+
+	/* when state is one of the below the entry is not hashed */
+	L2T_STATE_SWITCHING,  /* entry is being used by a switching filter */
+	L2T_STATE_UNUSED      /* entry not in use */
+};
+
+struct l2t_data {
+	rwlock_t lock;
+	atomic_t nfree;             /* number of free entries */
+	struct l2t_entry *rover;    /* starting point for next allocation */
+	struct l2t_entry l2tab[L2T_SIZE];
+};
+
+static inline unsigned int vlan_prio(const struct l2t_entry *e)
+{
+	return e->vlan >> 13;
+}
+
+static inline void l2t_hold(struct l2t_data *d, struct l2t_entry *e)
+{
+	if (atomic_add_return(1, &e->refcnt) == 1)  /* 0 -> 1 transition */
+		atomic_dec(&d->nfree);
+}
+
+/*
+ * To avoid having to check address families we do not allow v4 and v6
+ * neighbors to be on the same hash chain.  We keep v4 entries in the first
+ * half of available hash buckets and v6 in the second.
+ */
+enum {
+	L2T_SZ_HALF = L2T_SIZE / 2,
+	L2T_HASH_MASK = L2T_SZ_HALF - 1
+};
+
+static inline unsigned int arp_hash(const u32 *key, int ifindex)
+{
+	return jhash_2words(*key, ifindex, 0) & L2T_HASH_MASK;
+}
+
+static inline unsigned int ipv6_hash(const u32 *key, int ifindex)
+{
+	u32 xor = key[0] ^ key[1] ^ key[2] ^ key[3];
+
+	return L2T_SZ_HALF + (jhash_2words(xor, ifindex, 0) & L2T_HASH_MASK);
+}
+
+static unsigned int addr_hash(const u32 *addr, int addr_len, int ifindex)
+{
+	return addr_len == 4 ? arp_hash(addr, ifindex) :
+			       ipv6_hash(addr, ifindex);
+}
+
+/*
+ * Checks if an L2T entry is for the given IP/IPv6 address.  It does not check
+ * whether the L2T entry and the address are of the same address family.
+ * Callers ensure an address is only checked against L2T entries of the same
+ * family, something made trivial by the separation of IP and IPv6 hash chains
+ * mentioned above.  Returns 0 if there's a match, non-zero otherwise.
+ */
+static int addreq(const struct l2t_entry *e, const u32 *addr)
+{
+	if (e->v6)
+		return (e->addr[0] ^ addr[0]) | (e->addr[1] ^ addr[1]) |
+		       (e->addr[2] ^ addr[2]) | (e->addr[3] ^ addr[3]);
+	return e->addr[0] ^ addr[0];
+}
+
+static void neigh_replace(struct l2t_entry *e, struct neighbour *n)
+{
+	neigh_hold(n);
+	if (e->neigh)
+		neigh_release(e->neigh);
+	e->neigh = n;
+}
+
+/*
+ * Write an L2T entry.  Must be called with the entry locked.
+ * The write may be synchronous or asynchronous.
+ */
+static int write_l2e(struct adapter *adap, struct l2t_entry *e, int sync)
+{
+	struct sk_buff *skb;
+	struct cpl_l2t_write_req *req;
+
+	skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
+	if (!skb)
+		return -ENOMEM;
+
+	req = (struct cpl_l2t_write_req *)__skb_put(skb, sizeof(*req));
+	INIT_TP_WR(req, 0);
+
+	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ,
+					e->idx | V_SYNC_WR(sync) |
+					V_TID_QID(adap->sge.fw_evtq.abs_id)));
+	req->params = htons(V_L2T_W_PORT(e->lport) | V_L2T_W_NOREPLY(!sync));
+	req->l2t_idx = htons(e->idx);
+	req->vlan = htons(e->vlan);
+	if (e->neigh)
+		memcpy(e->dmac, e->neigh->ha, sizeof(e->dmac));
+	memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
+
+	set_wr_txq(skb, CPL_PRIORITY_CONTROL, 0);
+	t4_ofld_send(adap, skb);
+
+	if (sync && e->state != L2T_STATE_SWITCHING)
+		e->state = L2T_STATE_SYNC_WRITE;
+	return 0;
+}
+
+/*
+ * Send packets waiting in an L2T entry's ARP queue.  Must be called with the
+ * entry locked.
+ */
+static void send_pending(struct adapter *adap, struct l2t_entry *e)
+{
+	while (e->arpq_head) {
+		struct sk_buff *skb = e->arpq_head;
+
+		e->arpq_head = skb->next;
+		skb->next = NULL;
+		t4_ofld_send(adap, skb);
+	}
+	e->arpq_tail = NULL;
+}
+
+/*
+ * Process a CPL_L2T_WRITE_RPL.  Wake up the ARP queue if it completes a
+ * synchronous L2T_WRITE.  Note that the TID in the reply is really the L2T
+ * index it refers to.
+ */
+void do_l2t_write_rpl(struct adapter *adap, const struct cpl_l2t_write_rpl *rpl)
+{
+	unsigned int tid = GET_TID(rpl);
+	unsigned int idx = tid & (L2T_SIZE - 1);
+
+	if (unlikely(rpl->status != CPL_ERR_NONE)) {
+		CH_ERR(adap,
+		       "Unexpected L2T_WRITE_RPL status %u for entry %u\n",
+		       rpl->status, idx);
+		return;
+	}
+
+	if (tid & F_SYNC_WR) {
+		struct l2t_entry *e = &adap->l2t->l2tab[idx];
+
+		spin_lock(&e->lock);
+		if (e->state != L2T_STATE_SWITCHING) {
+			send_pending(adap, e);
+			e->state = (e->neigh->nud_state & NUD_STALE) ?
+					L2T_STATE_STALE : L2T_STATE_VALID;
+		}
+		spin_unlock(&e->lock);
+	}
+}
+
+/*
+ * Add a packet to an L2T entry's queue of packets awaiting resolution.
+ * Must be called with the entry's lock held.
+ */
+static inline void arpq_enqueue(struct l2t_entry *e, struct sk_buff *skb)
+{
+	skb->next = NULL;
+	if (e->arpq_head)
+		e->arpq_tail->next = skb;
+	else
+		e->arpq_head = skb;
+	e->arpq_tail = skb;
+}
+
+int cxgb4_l2t_send(struct net_device *dev, struct sk_buff *skb,
+		   struct l2t_entry *e)
+{
+	struct adapter *adap = netdev2adap(dev);
+
+again:
+	switch (e->state) {
+	case L2T_STATE_STALE:     /* entry is stale, kick off revalidation */
+		neigh_event_send(e->neigh, NULL);
+		spin_lock_bh(&e->lock);
+		if (e->state == L2T_STATE_STALE)
+			e->state = L2T_STATE_VALID;
+		spin_unlock_bh(&e->lock);
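+		/* fall through */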
+	case L2T_STATE_VALID:     /* fast-path, send the packet on */
+		return t4_ofld_send(adap, skb);
+	case L2T_STATE_RESOLVING:
+	case L2T_STATE_SYNC_WRITE:
+		spin_lock_bh(&e->lock);
+		if (e->state != L2T_STATE_SYNC_WRITE &&
+		    e->state != L2T_STATE_RESOLVING) {
+			spin_unlock_bh(&e->lock);
+			goto again;
+		}
+		arpq_enqueue(e, skb);
+		spin_unlock_bh(&e->lock);
+
+		if (e->state == L2T_STATE_RESOLVING &&
+		    !neigh_event_send(e->neigh, NULL)) {
+			spin_lock_bh(&e->lock);
+			if (e->state == L2T_STATE_RESOLVING && e->arpq_head)
+				write_l2e(adap, e, 1);
+			spin_unlock_bh(&e->lock);
+		}
+	}
+	return 0;
+}
+EXPORT_SYMBOL(cxgb4_l2t_send);
+
+/*
+ * Allocate a free L2T entry.  Must be called with l2t_data.lock held.
+ */
+static struct l2t_entry *alloc_l2e(struct l2t_data *d)
+{
+	struct l2t_entry *end, *e, **p;
+
+	if (!atomic_read(&d->nfree))
+		return NULL;
+
+	/* there's definitely a free entry */
+	for (e = d->rover, end = &d->l2tab[L2T_SIZE]; e != end; ++e)
+		if (atomic_read(&e->refcnt) == 0)
+			goto found;
+
+	for (e = d->l2tab; atomic_read(&e->refcnt); ++e)
+		;
+found:
+	d->rover = e + 1;
+	atomic_dec(&d->nfree);
+
+	/*
+	 * The entry we found may be an inactive entry that is
+	 * presently in the hash table.  We need to remove it.
+	 */
+	if (e->state < L2T_STATE_SWITCHING)
+		for (p = &d->l2tab[e->hash].first; *p; p = &(*p)->next)
+			if (*p == e) {
+				*p = e->next;
+				e->next = NULL;
+				break;
+			}
+
+	e->state = L2T_STATE_UNUSED;
+	return e;
+}
+
+/*
+ * Called when an L2T entry has no more users.
+ */
+static void t4_l2e_free(struct l2t_entry *e)
+{
+	struct l2t_data *d;
+
+	spin_lock_bh(&e->lock);
+	if (atomic_read(&e->refcnt) == 0) {  /* hasn't been recycled */
+		if (e->neigh) {
+			neigh_release(e->neigh);
+			e->neigh = NULL;
+		}
+	}
+	spin_unlock_bh(&e->lock);
+
+	d = container_of(e, struct l2t_data, l2tab[e->idx]);
+	atomic_inc(&d->nfree);
+}
+
+void cxgb4_l2t_release(struct l2t_entry *e)
+{
+	if (atomic_dec_and_test(&e->refcnt))
+		t4_l2e_free(e);
+}
+EXPORT_SYMBOL(cxgb4_l2t_release);
+
+/*
+ * Update an L2T entry that was previously used for the same next hop as neigh.
+ * Must be called with softirqs disabled.
+ */
+static void reuse_entry(struct l2t_entry *e, struct neighbour *neigh)
+{
+	unsigned int nud_state;
+
+	spin_lock_bh(&e->lock);
+	if (neigh != e->neigh)
+		neigh_replace(e, neigh);
+	nud_state = neigh->nud_state;
+	if (memcmp(e->dmac, neigh->ha, sizeof(e->dmac)) ||
+	    !(nud_state & NUD_VALID))
+		e->state = L2T_STATE_RESOLVING;
+	else if (nud_state & NUD_CONNECTED)
+		e->state = L2T_STATE_VALID;
+	else
+		e->state = L2T_STATE_STALE;
+	spin_unlock(&e->lock);
+}
+
+struct l2t_entry *cxgb4_l2t_get(struct l2t_data *d, struct neighbour *neigh,
+				const struct net_device *physdev,
+				unsigned int priority)
+{
+	u8 lport;
+	u16 vlan;
+	struct l2t_entry *e;
+	int addr_len = neigh->tbl->key_len;
+	u32 *addr = (u32 *)neigh->primary_key;
+	int ifidx = neigh->dev->ifindex;
+	int hash = addr_hash(addr, addr_len, ifidx);
+
+	if (neigh->dev->flags & IFF_LOOPBACK)
+		lport = netdev2pinfo(physdev)->tx_chan + 4;
+	else
+		lport = netdev2pinfo(physdev)->lport;
+
+	if (neigh->dev->priv_flags & IFF_802_1Q_VLAN)
+		vlan = vlan_dev_vlan_id(neigh->dev);
+	else
+		vlan = VLAN_NONE;
+
+	write_lock_bh(&d->lock);
+	for (e = d->l2tab[hash].first; e; e = e->next)
+		if (!addreq(e, addr) && e->ifindex == ifidx &&
+		    e->vlan == vlan && e->lport == lport) {
+			l2t_hold(d, e);
+			if (atomic_read(&e->refcnt) == 1)
+				reuse_entry(e, neigh);
+			goto done;
+		}
+
+	/* Need to allocate a new entry */
+	e = alloc_l2e(d);
+	if (e) {
+		spin_lock(&e->lock);          /* avoid race with t4_l2e_free */
+		e->state = L2T_STATE_RESOLVING;
+		memcpy(e->addr, addr, addr_len);
+		e->ifindex = ifidx;
+		e->hash = hash;
+		e->lport = lport;
+		e->v6 = addr_len == 16;
+		atomic_set(&e->refcnt, 1);
+		neigh_replace(e, neigh);
+		e->vlan = vlan;
+		e->next = d->l2tab[hash].first;
+		d->l2tab[hash].first = e;
+		spin_unlock(&e->lock);
+	}
+done:
+	write_unlock_bh(&d->lock);
+	return e;
+}
+EXPORT_SYMBOL(cxgb4_l2t_get);
+
+/*
+ * Called when address resolution fails for an L2T entry to handle packets
+ * on the arpq head.  If a packet specifies a failure handler it is invoked,
+ * otherwise the packet is sent to the device.
+ */
+static void handle_failed_resolution(struct adapter *adap, struct sk_buff *arpq)
+{
+	while (arpq) {
+		struct sk_buff *skb = arpq;
+		const struct l2t_skb_cb *cb = L2T_SKB_CB(skb);
+
+		arpq = skb->next;
+		skb->next = NULL;
+		if (cb->arp_err_handler)
+			cb->arp_err_handler(cb->handle, skb);
+		else
+			t4_ofld_send(adap, skb);
+	}
+}
+
+/*
+ * Called when the host's neighbor layer makes a change to some entry that is
+ * loaded into the HW L2 table.
+ */
+void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
+{
+	struct l2t_entry *e;
+	struct sk_buff *arpq = NULL;
+	struct l2t_data *d = adap->l2t;
+	int addr_len = neigh->tbl->key_len;
+	u32 *addr = (u32 *) neigh->primary_key;
+	int ifidx = neigh->dev->ifindex;
+	int hash = addr_hash(addr, addr_len, ifidx);
+
+	read_lock_bh(&d->lock);
+	for (e = d->l2tab[hash].first; e; e = e->next)
+		if (!addreq(e, addr) && e->ifindex == ifidx) {
+			spin_lock(&e->lock);
+			if (atomic_read(&e->refcnt))
+				goto found;
+			spin_unlock(&e->lock);
+			break;
+		}
+	read_unlock_bh(&d->lock);
+	return;
+
+ found:
+	read_unlock(&d->lock);
+
+	if (neigh != e->neigh)
+		neigh_replace(e, neigh);
+
+	if (e->state == L2T_STATE_RESOLVING) {
+		if (neigh->nud_state & NUD_FAILED) {
+			arpq = e->arpq_head;
+			e->arpq_head = e->arpq_tail = NULL;
+		} else if ((neigh->nud_state & (NUD_CONNECTED | NUD_STALE)) &&
+			   e->arpq_head) {
+			write_l2e(adap, e, 1);
+		}
+	} else {
+		e->state = neigh->nud_state & NUD_CONNECTED ?
+			L2T_STATE_VALID : L2T_STATE_STALE;
+		if (memcmp(e->dmac, neigh->ha, sizeof(e->dmac)))
+			write_l2e(adap, e, 0);
+	}
+
+	spin_unlock_bh(&e->lock);
+
+	if (arpq)
+		handle_failed_resolution(adap, arpq);
+}
+
+/*
+ * Allocate an L2T entry for use by a switching rule.  Such entries need to be
+ * explicitly freed and while busy they are not on any hash chain, so normal
+ * address resolution updates do not see them.
+ */
+struct l2t_entry *t4_l2t_alloc_switching(struct l2t_data *d)
+{
+	struct l2t_entry *e;
+
+	write_lock_bh(&d->lock);
+	e = alloc_l2e(d);
+	if (e) {
+		spin_lock(&e->lock);          /* avoid race with t4_l2e_free */
+		e->state = L2T_STATE_SWITCHING;
+		atomic_set(&e->refcnt, 1);
+		spin_unlock(&e->lock);
+	}
+	write_unlock_bh(&d->lock);
+	return e;
+}
+
+/*
+ * Sets/updates the contents of a switching L2T entry that has been allocated
+ * with an earlier call to @t4_l2t_alloc_switching.
+ */
+int t4_l2t_set_switching(struct adapter *adap, struct l2t_entry *e, u16 vlan,
+			 u8 port, u8 *eth_addr)
+{
+	e->vlan = vlan;
+	e->lport = port;
+	memcpy(e->dmac, eth_addr, ETH_ALEN);
+	return write_l2e(adap, e, 0);
+}
+
+struct l2t_data *t4_init_l2t(void)
+{
+	int i;
+	struct l2t_data *d;
+
+	d = t4_alloc_mem(sizeof(*d));
+	if (!d)
+		return NULL;
+
+	d->rover = d->l2tab;
+	atomic_set(&d->nfree, L2T_SIZE);
+	rwlock_init(&d->lock);
+
+	for (i = 0; i < L2T_SIZE; ++i) {
+		d->l2tab[i].idx = i;
+		d->l2tab[i].state = L2T_STATE_UNUSED;
+		spin_lock_init(&d->l2tab[i].lock);
+		atomic_set(&d->l2tab[i].refcnt, 0);
+	}
+	return d;
+}
+
+#ifdef CONFIG_PROC_FS
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+static inline void *l2t_get_idx(struct seq_file *seq, loff_t pos)
+{
+	struct l2t_entry *l2tab = seq->private;
+
+	return pos >= L2T_SIZE ? NULL : &l2tab[pos];
+}
+
+static void *l2t_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	return *pos ? l2t_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
+}
+
+static void *l2t_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	v = l2t_get_idx(seq, *pos);
+	if (v)
+		++*pos;
+	return v;
+}
+
+static void l2t_seq_stop(struct seq_file *seq, void *v)
+{
+}
+
+static char l2e_state(const struct l2t_entry *e)
+{
+	switch (e->state) {
+	case L2T_STATE_VALID: return 'V';
+	case L2T_STATE_STALE: return 'S';
+	case L2T_STATE_SYNC_WRITE: return 'W';
+	case L2T_STATE_RESOLVING: return e->arpq_head ? 'A' : 'R';
+	case L2T_STATE_SWITCHING: return 'X';
+	default:
+		return 'U';
+	}
+}
+
+static int l2t_seq_show(struct seq_file *seq, void *v)
+{
+	if (v == SEQ_START_TOKEN)
+		seq_puts(seq, " Idx IP address      Ethernet address  VLAN/P "
+			 "LP State Users Port\n");
+	else {
+		char ip[60];
+		struct l2t_entry *e = v;
+
+		spin_lock_bh(&e->lock);
+		if (e->state == L2T_STATE_SWITCHING)
+			ip[0] = '\0';
+		else
+			sprintf(ip, e->v6 ? "%pI6c" : "%pI4", e->addr);
+		seq_printf(seq, "%4u %-15s %17pM %4d %u %2u   %c   %5u %s\n",
+			   e->idx, ip, e->dmac,
+			   e->vlan & VLAN_VID_MASK, vlan_prio(e), e->lport,
+			   l2e_state(e), atomic_read(&e->refcnt),
+			   e->neigh ? e->neigh->dev->name : "");
+		spin_unlock_bh(&e->lock);
+	}
+	return 0;
+}
+
+static const struct seq_operations l2t_seq_ops = {
+	.start = l2t_seq_start,
+	.next = l2t_seq_next,
+	.stop = l2t_seq_stop,
+	.show = l2t_seq_show
+};
+
+static int l2t_seq_open(struct inode *inode, struct file *file)
+{
+	int rc = seq_open(file, &l2t_seq_ops);
+
+	if (!rc) {
+		struct adapter *adap = PDE(inode)->data;
+		struct seq_file *seq = file->private_data;
+
+		seq->private = adap->l2t->l2tab;
+	}
+	return rc;
+}
+
+const struct file_operations t4_l2t_proc_fops = {
+	.owner = THIS_MODULE,
+	.open = l2t_seq_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+#endif
diff --git a/drivers/net/cxgb4/l2t.h b/drivers/net/cxgb4/l2t.h
new file mode 100644
index 0000000..15e9eee
--- /dev/null
+++ b/drivers/net/cxgb4/l2t.h
@@ -0,0 +1,110 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __CXGB4_L2T_H
+#define __CXGB4_L2T_H
+
+#include <linux/spinlock.h>
+#include <linux/if_ether.h>
+#include <asm/atomic.h>
+
+struct adapter;
+struct l2t_data;
+struct neighbour;
+struct net_device;
+struct file_operations;
+struct cpl_l2t_write_rpl;
+
+/*
+ * Each L2T entry plays multiple roles.  First of all, it keeps state for the
+ * corresponding entry of the HW L2 table and maintains a queue of offload
+ * packets awaiting address resolution.  Second, it is a node of a hash table
+ * chain, where the nodes of the chain are linked together through their next
+ * pointer.  Finally, each node is a bucket of a hash table, pointing to the
+ * first element in its chain through its first pointer.
+ */
+struct l2t_entry {
+	u16 state;                  /* entry state */
+	u16 idx;                    /* entry index */
+	u32 addr[4];                /* next hop IP or IPv6 address */
+	int ifindex;                /* neighbor's net_device's ifindex */
+	struct neighbour *neigh;    /* associated neighbour */
+	struct l2t_entry *first;    /* start of hash chain */
+	struct l2t_entry *next;     /* next l2t_entry on chain */
+	struct sk_buff *arpq_head;  /* queue of packets awaiting resolution */
+	struct sk_buff *arpq_tail;
+	spinlock_t lock;
+	atomic_t refcnt;            /* entry reference count */
+	u16 hash;                   /* hash bucket the entry is on */
+	u16 vlan;                   /* VLAN TCI (id: bits 0-11, prio: bits 13-15) */
+	u8 v6;                      /* whether entry is for IPv6 */
+	u8 lport;                   /* associated offload logical interface */
+	u8 dmac[ETH_ALEN];          /* neighbour's MAC address */
+};
+
+typedef void (*arp_err_handler_t)(void *handle, struct sk_buff *skb);
+
+/*
+ * Callback stored in an skb to handle address resolution failure.
+ */
+struct l2t_skb_cb {
+	void *handle;
+	arp_err_handler_t arp_err_handler;
+};
+
+#define L2T_SKB_CB(skb) ((struct l2t_skb_cb *)(skb)->cb)
+
+static inline void t4_set_arp_err_handler(struct sk_buff *skb, void *handle,
+					  arp_err_handler_t handler)
+{
+	L2T_SKB_CB(skb)->handle = handle;
+	L2T_SKB_CB(skb)->arp_err_handler = handler;
+}
+
+void cxgb4_l2t_release(struct l2t_entry *e);
+int cxgb4_l2t_send(struct net_device *dev, struct sk_buff *skb,
+		   struct l2t_entry *e);
+struct l2t_entry *cxgb4_l2t_get(struct l2t_data *d, struct neighbour *neigh,
+				const struct net_device *physdev,
+				unsigned int priority);
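+
+/*
+ * Illustrative transmit path for a ULD sending offload packets through the
+ * L2 table (lldi is the cxgb4_lld_info handed to the ULD; neigh, physdev,
+ * skb, my_handle and my_arp_failure are the ULD's own; error handling
+ * omitted):
+ *
+ *	e = cxgb4_l2t_get(lldi->l2t, neigh, physdev, 0);
+ *	t4_set_arp_err_handler(skb, my_handle, my_arp_failure);
+ *	cxgb4_l2t_send(dev, skb, e);
+ *	...
+ *	cxgb4_l2t_release(e);		(when the connection goes away)
+ */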
+
+void t4_l2t_update(struct adapter *adap, struct neighbour *neigh);
+struct l2t_entry *t4_l2t_alloc_switching(struct l2t_data *d);
+int t4_l2t_set_switching(struct adapter *adap, struct l2t_entry *e, u16 vlan,
+			 u8 port, u8 *eth_addr);
+struct l2t_data *t4_init_l2t(void);
+void do_l2t_write_rpl(struct adapter *p, const struct cpl_l2t_write_rpl *rpl);
+
+extern const struct file_operations t4_l2t_proc_fops;
+#endif  /* __CXGB4_L2T_H */
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH net-next 6/7] cxgb4: Add main driver file and driver Makefile
  2010-02-18  1:37         ` [PATCH net-next 5/7] cxgb4: Add remaining driver headers and L2T management Dimitris Michailidis
@ 2010-02-18  1:37           ` Dimitris Michailidis
  2010-02-18  1:37             ` [PATCH net-next 7/7] net: Hook up cxgb4 to Kconfig and Makefile Dimitris Michailidis
  0 siblings, 1 reply; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/cxgb4/Makefile     |    7 +
 drivers/net/cxgb4/cxgb4_main.c | 4203 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4210 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/cxgb4/Makefile
 create mode 100644 drivers/net/cxgb4/cxgb4_main.c

diff --git a/drivers/net/cxgb4/Makefile b/drivers/net/cxgb4/Makefile
new file mode 100644
index 0000000..4986674
--- /dev/null
+++ b/drivers/net/cxgb4/Makefile
@@ -0,0 +1,7 @@
+#
+# Chelsio T4 driver
+#
+
+obj-$(CONFIG_CHELSIO_T4) += cxgb4.o
+
+cxgb4-objs := cxgb4_main.o l2t.o t4_hw.o sge.o
diff --git a/drivers/net/cxgb4/cxgb4_main.c b/drivers/net/cxgb4/cxgb4_main.c
new file mode 100644
index 0000000..6de7561
--- /dev/null
+++ b/drivers/net/cxgb4/cxgb4_main.c
@@ -0,0 +1,4203 @@
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2010 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/bitmap.h>
+#include <linux/crc32.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
+#include <linux/etherdevice.h>
+#include <linux/firmware.h>
+#include <linux/if_vlan.h>
+#include <linux/init.h>
+#include <linux/log2.h>
+#include <linux/mdio.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/mutex.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/proc_fs.h>
+#include <linux/rtnetlink.h>
+#include <linux/seq_file.h>
+#include <linux/sockios.h>
+#include <linux/vmalloc.h>
+#include <linux/workqueue.h>
+#include <net/neighbour.h>
+#include <net/netevent.h>
+#include <asm/uaccess.h>
+
+#include "cxgb4.h"
+#include "t4_regs.h"
+#include "t4_msg.h"
+#include "t4fw_api.h"
+#include "l2t.h"
+
+#define DRV_VERSION "1.0.0-ko"
+#define DRV_DESC "Chelsio T4 Network Driver"
+
+/*
+ * Max interrupt hold-off timer value in us.  Queues fall back to this value
+ * under extreme memory pressure so it's largish to give the system time to
+ * recover.
+ */
+#define MAX_SGE_TIMERVAL 200U
+
+enum {
+	MEMWIN0_APERTURE = 65536,
+	MEMWIN0_BASE     = 0x30000,
+	MEMWIN1_APERTURE = 32768,
+	MEMWIN1_BASE     = 0x28000,
+	MEMWIN2_APERTURE = 2048,
+	MEMWIN2_BASE     = 0x1b800,
+};
+
+enum {
+	MAX_TXQ_ENTRIES      = 16384,
+	MAX_CTRL_TXQ_ENTRIES = 1024,
+	MAX_RSPQ_ENTRIES     = 16384,
+	MAX_RX_BUFFERS       = 16384,
+	MIN_TXQ_ENTRIES      = 32,
+	MIN_CTRL_TXQ_ENTRIES = 32,
+	MIN_RSPQ_ENTRIES     = 128,
+	MIN_FL_ENTRIES       = 16
+};
+
+#define DFLT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK | \
+			 NETIF_MSG_TIMER | NETIF_MSG_IFDOWN | NETIF_MSG_IFUP |\
+			 NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)
+
+#define CH_DEVICE(devid) { PCI_VDEVICE(CHELSIO, devid), 0 }
+
+static DEFINE_PCI_DEVICE_TABLE(cxgb4_pci_tbl) = {
+	CH_DEVICE(0xa000),  /* PE10K */
+	{ 0, }
+};
+
+#define FW_FNAME "cxgb4/t4fw.bin"
+
+MODULE_DESCRIPTION(DRV_DESC);
+MODULE_AUTHOR("Chelsio Communications");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION(DRV_VERSION);
+MODULE_DEVICE_TABLE(pci, cxgb4_pci_tbl);
+MODULE_FIRMWARE(FW_FNAME);
+
+static int dflt_msg_enable = DFLT_MSG_ENABLE;
+
+module_param(dflt_msg_enable, int, 0644);
+MODULE_PARM_DESC(dflt_msg_enable, "Chelsio T4 default message enable bitmap");
+
+/*
+ * The driver uses the best interrupt scheme available on a platform in the
+ * order MSI-X, MSI, legacy INTx interrupts.  This parameter determines which
+ * of these schemes the driver may consider as follows:
+ *
+ * msi = 2: choose from among all three options
+ * msi = 1: only consider MSI and INTx interrupts
+ * msi = 0: force INTx interrupts
+ */
+static int msi = 2;
+
+module_param(msi, int, 0644);
+MODULE_PARM_DESC(msi, "whether to use INTx (0), MSI (1) or MSI-X (2)");
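+
+/*
+ * For example, loading the module with "msi=1" (e.g. "modprobe cxgb4 msi=1")
+ * restricts the driver to MSI or INTx even when MSI-X is available.
+ */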
+
+/*
+ * Queue interrupt hold-off timer values.  Queues default to the first of these
+ * upon creation.
+ */
+static unsigned int intr_holdoff[SGE_NTIMERS - 1] = { 5, 10, 20, 50, 100 };
+
+module_param_array(intr_holdoff, uint, NULL, 0644);
+MODULE_PARM_DESC(intr_holdoff, "values for queue interrupt hold-off timers "
+		 "0..4 in microseconds");
+
+static unsigned int intr_cnt[SGE_NCOUNTERS - 1] = { 4, 8, 16 };
+
+module_param_array(intr_cnt, uint, NULL, 0644);
+MODULE_PARM_DESC(intr_cnt,
+		 "thresholds 1..3 for queue interrupt packet counters");
+
+static int vf_acls;
+
+#ifdef CONFIG_PCI_IOV
+module_param(vf_acls, bool, 0644);
+MODULE_PARM_DESC(vf_acls, "if set enable virtualization L2 ACL enforcement");
+
+static unsigned int num_vf[4];
+
+module_param_array(num_vf, uint, NULL, 0644);
+MODULE_PARM_DESC(num_vf, "number of VFs for each of PFs 0-3");
+#endif
+
+static struct dentry *cxgb4_debugfs_root;
+static struct proc_dir_entry *cxgb4_proc_root;
+
+static LIST_HEAD(adapter_list);
+static DEFINE_MUTEX(uld_mutex);
+static struct cxgb4_uld_info ulds[CXGB4_ULD_MAX];
+static const char *uld_str[] = { "RDMA", "iSCSI" };
+
+static void link_report(struct net_device *dev)
+{
+	if (!netif_carrier_ok(dev))
+		printk(KERN_INFO "%s: link down\n", dev->name);
+	else {
+		static const char *fc[] = { "no", "Rx", "Tx", "Tx/Rx" };
+
+		const char *s = "10Mbps";
+		const struct port_info *p = netdev_priv(dev);
+
+		switch (p->link_cfg.speed) {
+		case SPEED_10000: s = "10Gbps"; break;
+		case SPEED_1000:  s = "1000Mbps"; break;
+		case SPEED_100:   s = "100Mbps"; break;
+		}
+
+		printk(KERN_INFO "%s: link up, %s, full-duplex, %s PAUSE\n",
+		       dev->name, s, fc[p->link_cfg.fc]);
+	}
+}
+
+void t4_os_link_changed(struct adapter *adapter, int port_id, int link_stat)
+{
+	struct net_device *dev = adapter->port[port_id];
+
+	/* Skip changes from disabled ports. */
+	if (netif_running(dev) && link_stat != netif_carrier_ok(dev)) {
+		if (link_stat)
+			netif_carrier_on(dev);
+		else
+			netif_carrier_off(dev);
+
+		link_report(dev);
+	}
+}
+
+void t4_os_portmod_changed(const struct adapter *adap, int port_id)
+{
+	static const char *mod_str[] = {
+		NULL, "LR", "SR", "ER", "passive DA", "active DA"
+	};
+
+	const struct net_device *dev = adap->port[port_id];
+	const struct port_info *pi = netdev_priv(dev);
+
+	if (pi->mod_type == FW_PORT_MOD_TYPE_NONE)
+		printk(KERN_INFO "%s: port module unplugged\n", dev->name);
+	else
+		printk(KERN_INFO "%s: %s port module inserted\n", dev->name,
+		       mod_str[pi->mod_type]);
+}
+
+/*
+ * Configure the exact and hash address filters to handle a port's multicast
+ * and secondary unicast MAC addresses.
+ */
+static int set_addr_filters(const struct net_device *dev, bool sleep)
+{
+	u64 mhash = 0;
+	u64 uhash = 0;
+	bool free = true;
+	u16 filt_idx[7];
+	const u8 *addr[7];
+	int ret, naddr = 0;
+	const struct dev_addr_list *d;
+	const struct netdev_hw_addr *ha;
+	int uc_cnt = netdev_uc_count(dev);
+	const struct port_info *pi = netdev_priv(dev);
+
+	/* first do the secondary unicast addresses */
+	netdev_for_each_uc_addr(ha, dev) {
+		addr[naddr++] = ha->addr;
+		if (--uc_cnt == 0 || naddr >= ARRAY_SIZE(addr)) {
+			ret = t4_alloc_mac_filt(pi->adapter, 0, pi->viid, free,
+					naddr, addr, filt_idx, &uhash, sleep);
+			if (ret < 0)
+				return ret;
+
+			free = false;
+			naddr = 0;
+		}
+	}
+
+	/* next set up the multicast addresses */
+	netdev_for_each_mc_addr(d, dev) {
+		addr[naddr++] = d->dmi_addr;
+		if (naddr >= ARRAY_SIZE(addr) || d->next == NULL) {
+			ret = t4_alloc_mac_filt(pi->adapter, 0, pi->viid, free,
+					naddr, addr, filt_idx, &mhash, sleep);
+			if (ret < 0)
+				return ret;
+
+			free = false;
+			naddr = 0;
+		}
+	}
+
+	return t4_set_addr_hash(pi->adapter, 0, pi->viid, uhash != 0,
+				uhash | mhash, sleep);
+}
+
+/*
+ * Set Rx properties of a port, such as promiscuity, address filters, and MTU.
+ * If @mtu is -1 it is left unchanged.
+ */
+static int set_rxmode(struct net_device *dev, int mtu, bool sleep_ok)
+{
+	int ret;
+	struct port_info *pi = netdev_priv(dev);
+
+	ret = set_addr_filters(dev, sleep_ok);
+	if (ret == 0)
+		ret = t4_set_rxmode(pi->adapter, 0, pi->viid, mtu,
+				    (dev->flags & IFF_PROMISC) ? 1 : 0,
+				    (dev->flags & IFF_ALLMULTI) ? 1 : 0, 1,
+				    sleep_ok);
+	return ret;
+}
+
+/**
+ *	link_start - enable a port
+ *	@dev: the port to enable
+ *
+ *	Performs the MAC and PHY actions needed to enable a port.
+ */
+static int link_start(struct net_device *dev)
+{
+	int ret;
+	struct port_info *pi = netdev_priv(dev);
+
+	/*
+	 * We do not set address filters and promiscuity here; the stack does
+	 * that step explicitly.
+	 */
+	ret = t4_set_rxmode(pi->adapter, 0, pi->viid, dev->mtu, -1, -1, -1,
+			    true);
+	if (ret == 0) {
+		ret = t4_change_mac(pi->adapter, 0, pi->viid,
+				    pi->xact_addr_filt, dev->dev_addr, true);
+		if (ret >= 0) {
+			pi->xact_addr_filt = ret;
+			ret = 0;
+		}
+	}
+	if (ret == 0)
+		ret = t4_link_start(pi->adapter, 0, pi->tx_chan, &pi->link_cfg);
+	if (ret == 0)
+		ret = t4_enable_vi(pi->adapter, 0, pi->viid, true, true);
+	return ret;
+}
+
+/*
+ * Response queue handler for the FW event queue.
+ */
+static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
+			  const struct pkt_gl *gl)
+{
+	u8 opcode = ((const struct rss_header *)rsp)->opcode;
+
+	rsp++;                                          /* skip RSS header */
+	if (likely(opcode == CPL_SGE_EGR_UPDATE)) {
+		const struct cpl_sge_egr_update *p = (void *)rsp;
+		unsigned int qid = G_EGR_QID(ntohl(p->opcode_qid));
+		struct sge_txq *txq = q->adap->sge.egr_map[qid];
+
+		txq->restarts++;
+		if ((u8 *)txq < (u8 *)q->adap->sge.ethrxq) {
+			struct sge_eth_txq *eq;
+
+			eq = container_of(txq, struct sge_eth_txq, q);
+			netif_tx_wake_queue(eq->txq);
+		} else {
+			struct sge_ofld_txq *oq;
+
+			oq = container_of(txq, struct sge_ofld_txq, q);
+			tasklet_schedule(&oq->qresume_tsk);
+		}
+	} else if (opcode == CPL_FW6_MSG || opcode == CPL_FW4_MSG) {
+		const struct cpl_fw6_msg *p = (void *)rsp;
+
+		if (p->type == 0)
+			t4_handle_fw_rpl(q->adap, p->data);
+	} else if (opcode == CPL_L2T_WRITE_RPL) {
+		const struct cpl_l2t_write_rpl *p = (void *)rsp;
+
+		do_l2t_write_rpl(q->adap, p);
+	} else {
+		CH_ERR(q->adap, "unexpected CPL %#x on FW event queue\n",
+		       opcode);
+	}
+	return 0;
+}
+
+/**
+ *	uldrx_handler - response queue handler for ULD queues
+ *	@q: the response queue that received the packet
+ *	@rsp: the response queue descriptor holding the offload message
+ *	@gl: the gather list of packet fragments
+ *
+ *	Deliver an ingress offload packet to a ULD.  All processing is done by
+ *	the ULD, we just maintain statistics.
+ */
+static int uldrx_handler(struct sge_rspq *q, const __be64 *rsp,
+			 const struct pkt_gl *gl)
+{
+	struct sge_ofld_rxq *rxq = container_of(q, struct sge_ofld_rxq, rspq);
+
+	if (ulds[q->uld].rx_handler(q->adap->uld_handle[q->uld], rsp, gl)) {
+		rxq->stats.nomem++;
+		return -1;
+	}
+	if (gl == NULL)
+		rxq->stats.imm++;
+	else if (gl == CXGB4_MSG_AN)
+		rxq->stats.an++;
+	else
+		rxq->stats.pkts++;
+	return 0;
+}
+
+static void disable_msi(struct adapter *adapter)
+{
+	if (adapter->flags & USING_MSIX) {
+		pci_disable_msix(adapter->pdev);
+		adapter->flags &= ~USING_MSIX;
+	} else if (adapter->flags & USING_MSI) {
+		pci_disable_msi(adapter->pdev);
+		adapter->flags &= ~USING_MSI;
+	}
+}
+
+/*
+ * Interrupt handler for non-data events used with MSI-X.
+ */
+static irqreturn_t t4_nondata_intr(int irq, void *cookie)
+{
+	struct adapter *adap = cookie;
+	u32 v = t4_read_reg(adap, MYPF_REG(A_PL_PF_INT_CAUSE));
+
+	if (v & F_PFSW) {
+		adap->swintr = 1;
+		t4_write_reg(adap, MYPF_REG(A_PL_PF_INT_CAUSE), v);
+	}
+	t4_slow_intr_handler(adap);
+	return IRQ_HANDLED;
+}
+
+/*
+ * Name the MSI-X interrupts.
+ */
+static void name_msix_vecs(struct adapter *adap)
+{
+	int i, j, msi_idx = 2, n = sizeof(adap->msix_info[0].desc) - 1;
+
+	/* non-data interrupts */
+	snprintf(adap->msix_info[0].desc, n, "%s", adap->name);
+	adap->msix_info[0].desc[n] = 0;
+
+	/* FW events */
+	snprintf(adap->msix_info[1].desc, n, "%s-FWeventq", adap->name);
+	adap->msix_info[1].desc[n] = 0;
+
+	/* Ethernet queues */
+	for_each_port(adap, j) {
+		struct net_device *d = adap->port[j];
+		const struct port_info *pi = netdev_priv(d);
+
+		for (i = 0; i < pi->nqsets; i++, msi_idx++) {
+			snprintf(adap->msix_info[msi_idx].desc, n, "%s-Rx%d",
+				 d->name, i);
+			adap->msix_info[msi_idx].desc[n] = 0;
+		}
+	}
+
+	/* offload queues */
+	for_each_ofldrxq(&adap->sge, i) {
+		snprintf(adap->msix_info[msi_idx].desc, n, "%s-ofld%d",
+			 adap->name, i);
+		adap->msix_info[msi_idx++].desc[n] = 0;
+	}
+	for_each_rdmarxq(&adap->sge, i) {
+		snprintf(adap->msix_info[msi_idx].desc, n, "%s-rdma%d",
+			 adap->name, i);
+		adap->msix_info[msi_idx++].desc[n] = 0;
+	}
+}
+
+static int request_msix_queue_irqs(struct adapter *adap)
+{
+	struct sge *s = &adap->sge;
+	int err, ethqidx, ofldqidx = 0, rdmaqidx = 0, msi = 2;
+
+	err = request_irq(adap->msix_info[1].vec, t4_sge_intr_msix, 0,
+			  adap->msix_info[1].desc, &s->fw_evtq);
+	if (err)
+		return err;
+
+	for_each_ethrxq(s, ethqidx) {
+		err = request_irq(adap->msix_info[msi].vec, t4_sge_intr_msix, 0,
+				  adap->msix_info[msi].desc,
+				  &s->ethrxq[ethqidx].rspq);
+		if (err)
+			goto unwind;
+		msi++;
+	}
+	for_each_ofldrxq(s, ofldqidx) {
+		err = request_irq(adap->msix_info[msi].vec, t4_sge_intr_msix, 0,
+				  adap->msix_info[msi].desc,
+				  &s->ofldrxq[ofldqidx].rspq);
+		if (err)
+			goto unwind;
+		msi++;
+	}
+	for_each_rdmarxq(s, rdmaqidx) {
+		err = request_irq(adap->msix_info[msi].vec, t4_sge_intr_msix, 0,
+				  adap->msix_info[msi].desc,
+				  &s->rdmarxq[rdmaqidx].rspq);
+		if (err)
+			goto unwind;
+		msi++;
+	}
+	return 0;
+
+unwind:
+	while (--rdmaqidx >= 0)
+		free_irq(adap->msix_info[--msi].vec,
+			 &s->rdmarxq[rdmaqidx].rspq);
+	while (--ofldqidx >= 0)
+		free_irq(adap->msix_info[--msi].vec,
+			 &s->ofldrxq[ofldqidx].rspq);
+	while (--ethqidx >= 0)
+		free_irq(adap->msix_info[--msi].vec, &s->ethrxq[ethqidx].rspq);
+	free_irq(adap->msix_info[1].vec, &s->fw_evtq);
+	return err;
+}
+
+static void free_msix_queue_irqs(struct adapter *adap)
+{
+	int i, msi = 2;
+	struct sge *s = &adap->sge;
+
+	free_irq(adap->msix_info[1].vec, &s->fw_evtq);
+	for_each_ethrxq(s, i)
+		free_irq(adap->msix_info[msi++].vec, &s->ethrxq[i].rspq);
+	for_each_ofldrxq(s, i)
+		free_irq(adap->msix_info[msi++].vec, &s->ofldrxq[i].rspq);
+	for_each_rdmarxq(s, i)
+		free_irq(adap->msix_info[msi++].vec, &s->rdmarxq[i].rspq);
+}
+
+/**
+ *	setup_rss - configure RSS
+ *	@adap: the adapter
+ *
+ *	Sets up RSS to distribute packets to multiple receive queues.  We
+ *	configure the RSS CPU lookup table to distribute to the number of HW
+ *	receive queues, and the response queue lookup table to narrow that
+ *	down to the response queues actually configured for each port.
+ *	We always configure the RSS mapping for all ports since the mapping
+ *	table has plenty of entries.
+ */
+static int setup_rss(struct adapter *adap)
+{
+	int i, j, err;
+	u16 rss[MAX_ETH_QSETS];
+
+	for_each_port(adap, i) {
+		const struct port_info *pi = adap2pinfo(adap, i);
+		const struct sge_eth_rxq *q = &adap->sge.ethrxq[pi->first_qset];
+
+		for (j = 0; j < pi->nqsets; j++)
+			rss[j] = q[j].rspq.abs_id;
+
+		err = t4_config_rss_range(adap, 0, pi->viid, 0, pi->rss_size,
+					  rss, pi->nqsets);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
+/*
+ * Wait until all NAPI handlers are descheduled.
+ */
+static void quiesce_rx(struct adapter *adap)
+{
+	int i;
+	struct sge *s = &adap->sge;
+
+	for_each_ethrxq(s, i)
+		napi_disable(&s->ethrxq[i].rspq.napi);
+
+	for_each_ofldrxq(s, i)
+		napi_disable(&s->ofldrxq[i].rspq.napi);
+
+	for_each_rdmarxq(s, i)
+		napi_disable(&s->rdmarxq[i].rspq.napi);
+
+	napi_disable(&s->fw_evtq.napi);
+}
+
+static void qenable(struct sge_rspq *q)
+{
+	napi_enable(&q->napi);
+	/* 0-increment GTS to start the timer and enable interrupts */
+	t4_write_reg(q->adap, MYPF_REG(A_SGE_PF_GTS),
+		     V_SEINTARM(q->intr_params) | V_INGRESSQID(q->cntxt_id));
+}
+
+/*
+ * Enable NAPI scheduling and interrupt generation for all Rx queues.
+ */
+static void enable_rx(struct adapter *adap)
+{
+	int i;
+	struct sge *s = &adap->sge;
+
+	for_each_ethrxq(s, i)
+		qenable(&s->ethrxq[i].rspq);
+
+	for_each_ofldrxq(s, i)
+		qenable(&s->ofldrxq[i].rspq);
+
+	for_each_rdmarxq(s, i)
+		qenable(&s->rdmarxq[i].rspq);
+
+	qenable(&s->fw_evtq);
+
+	/* the interrupt queue doesn't use NAPI */
+	if (!(adap->flags & USING_MSIX))
+		t4_write_reg(adap, MYPF_REG(A_SGE_PF_GTS),
+			     V_SEINTARM(s->intrq.intr_params) |
+			     V_INGRESSQID(s->intrq.cntxt_id));
+}
+
+/**
+ *	setup_sge_queues - configure SGE Tx/Rx/response queues
+ *	@adap: the adapter
+ *
+ *	Determines how many sets of SGE queues to use and initializes them.
+ *	We support multiple queue sets per port if we have MSI-X, otherwise
+ *	just one queue set per port.
+ */
+static int setup_sge_queues(struct adapter *adap)
+{
+	int err, msi_idx, i, j;
+	struct sge *s = &adap->sge;
+
+	bitmap_zero(s->starving_fl, MAX_EGRQ);
+	bitmap_zero(s->txq_maperr, MAX_EGRQ);
+
+	if (adap->flags & USING_MSIX)
+		msi_idx = 1;         /* vector 0 is for non-queue interrupts */
+	else {
+		err = t4_sge_alloc_rxq(adap, &s->intrq, false, adap->port[0], 0,
+				       NULL, NULL);
+		if (err)
+			return err;
+		msi_idx = -((int)s->intrq.abs_id + 1);
+	}
+
+	err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0],
+			       msi_idx, NULL, fwevtq_handler);
+	if (err) {
+freeout:	t4_free_sge_resources(adap);
+		return err;
+	}
+
+	for_each_port(adap, i) {
+		struct net_device *dev = adap->port[i];
+		struct port_info *pi = netdev_priv(dev);
+		struct sge_eth_rxq *q = &s->ethrxq[pi->first_qset];
+		struct sge_eth_txq *t = &s->ethtxq[pi->first_qset];
+
+		for (j = 0; j < pi->nqsets; j++, q++) {
+			if (msi_idx > 0)
+				msi_idx++;
+			err = t4_sge_alloc_rxq(adap, &q->rspq, false, dev,
+					       msi_idx, &q->fl,
+					       t4_ethrx_handler);
+			if (err)
+				goto freeout;
+			q->rspq.idx = j;
+			memset(&q->stats, 0, sizeof(q->stats));
+		}
+		for (j = 0; j < pi->nqsets; j++, t++) {
+			err = t4_sge_alloc_eth_txq(adap, t, dev,
+					netdev_get_tx_queue(dev, j),
+					s->fw_evtq.cntxt_id);
+			if (err)
+				goto freeout;
+		}
+	}
+
+	j = s->ofldqsets / adap->params.nports; /* ofld queues per channel */
+	for_each_ofldrxq(s, i) {
+		struct sge_ofld_rxq *q = &s->ofldrxq[i];
+		struct net_device *dev = adap->port[i / j];
+
+		if (msi_idx > 0)
+			msi_idx++;
+		err = t4_sge_alloc_rxq(adap, &q->rspq, false, dev, msi_idx,
+				       &q->fl, uldrx_handler);
+		if (err)
+			goto freeout;
+		memset(&q->stats, 0, sizeof(q->stats));
+		s->ofld_rxq[i] = q->rspq.abs_id;
+		err = t4_sge_alloc_ofld_txq(adap, &s->ofldtxq[i], dev,
+					    s->fw_evtq.cntxt_id);
+		if (err)
+			goto freeout;
+	}
+
+	for_each_rdmarxq(s, i) {
+		struct sge_ofld_rxq *q = &s->rdmarxq[i];
+
+		if (msi_idx > 0)
+			msi_idx++;
+		err = t4_sge_alloc_rxq(adap, &q->rspq, false, adap->port[i],
+				       msi_idx, &q->fl, uldrx_handler);
+		if (err)
+			goto freeout;
+		memset(&q->stats, 0, sizeof(q->stats));
+		s->rdma_rxq[i] = q->rspq.abs_id;
+	}
+
+	for_each_port(adap, i) {
+		/*
+		 * Note that ->rdmarxq[i].rspq.cntxt_id below is 0 if we don't
+		 * have RDMA queues, and that's the right value.
+		 */
+		err = t4_sge_alloc_ctrl_txq(adap, &s->ctrlq[i], adap->port[i],
+					    s->fw_evtq.cntxt_id,
+					    s->rdmarxq[i].rspq.cntxt_id);
+		if (err)
+			goto freeout;
+	}
+
+	t4_write_reg(adap, A_MPS_TRC_RSS_CONTROL,
+		     V_RSSCONTROL(netdev2pinfo(adap->port[0])->tx_chan) |
+		     V_QUEUENUMBER(s->ethrxq[0].rspq.abs_id));
+	return 0;
+}
+
+static int upgrade_fw(struct adapter *adap)
+{
+	int ret;
+	const struct firmware *fw;
+	struct device *dev = adap->pdev_dev;
+
+	ret = request_firmware(&fw, FW_FNAME, dev);
+	if (ret < 0) {
+		dev_err(dev, "unable to load firmware image " FW_FNAME
+			", error %d\n", ret);
+		return ret;
+	}
+	ret = t4_load_fw(adap, fw->data, fw->size);
+	release_firmware(fw);
+	if (!ret)
+		dev_info(dev, "firmware upgraded to " FW_FNAME "\n");
+	return ret;
+}
+
+/*
+ * Allocate a chunk of memory using kmalloc or, if that fails, vmalloc.
+ * The allocated memory is cleared.
+ */
+void *t4_alloc_mem(size_t size)
+{
+	void *p = kmalloc(size, GFP_KERNEL);
+
+	if (!p)
+		p = vmalloc(size);
+	if (p)
+		memset(p, 0, size);
+	return p;
+}
+
+/*
+ * Free memory allocated through t4_alloc_mem().
+ */
+void t4_free_mem(void *addr)
+{
+	if (is_vmalloc_addr(addr))
+		vfree(addr);
+	else
+		kfree(addr);
+}
+
+/*
+ * Implementation of ethtool operations.
+ */
+
+static u32 get_msglevel(struct net_device *dev)
+{
+	return netdev2adap(dev)->msg_enable;
+}
+
+static void set_msglevel(struct net_device *dev, u32 val)
+{
+	netdev2adap(dev)->msg_enable = val;
+}
+
+static char stats_strings[][ETH_GSTRING_LEN] = {
+	"TxOctetsOK         ",
+	"TxFramesOK         ",
+	"TxBroadcastFrames  ",
+	"TxMulticastFrames  ",
+	"TxUnicastFrames    ",
+	"TxErrorFrames      ",
+
+	"TxFrames64         ",
+	"TxFrames65To127    ",
+	"TxFrames128To255   ",
+	"TxFrames256To511   ",
+	"TxFrames512To1023  ",
+	"TxFrames1024To1518 ",
+	"TxFrames1519ToMax  ",
+
+	"TxFramesDropped    ",
+	"TxPauseFrames      ",
+	"TxPPP0Frames       ",
+	"TxPPP1Frames       ",
+	"TxPPP2Frames       ",
+	"TxPPP3Frames       ",
+	"TxPPP4Frames       ",
+	"TxPPP5Frames       ",
+	"TxPPP6Frames       ",
+	"TxPPP7Frames       ",
+
+	"RxOctetsOK         ",
+	"RxFramesOK         ",
+	"RxBroadcastFrames  ",
+	"RxMulticastFrames  ",
+	"RxUnicastFrames    ",
+
+	"RxFramesTooLong    ",
+	"RxJabberErrors     ",
+	"RxFCSErrors        ",
+	"RxLengthErrors     ",
+	"RxSymbolErrors     ",
+	"RxRuntFrames       ",
+
+	"RxFrames64         ",
+	"RxFrames65To127    ",
+	"RxFrames128To255   ",
+	"RxFrames256To511   ",
+	"RxFrames512To1023  ",
+	"RxFrames1024To1518 ",
+	"RxFrames1519ToMax  ",
+
+	"RxPauseFrames      ",
+	"RxPPP0Frames       ",
+	"RxPPP1Frames       ",
+	"RxPPP2Frames       ",
+	"RxPPP3Frames       ",
+	"RxPPP4Frames       ",
+	"RxPPP5Frames       ",
+	"RxPPP6Frames       ",
+	"RxPPP7Frames       ",
+
+	"RxBG0FramesDropped ",
+	"RxBG1FramesDropped ",
+	"RxBG2FramesDropped ",
+	"RxBG3FramesDropped ",
+	"RxBG0FramesTrunc   ",
+	"RxBG1FramesTrunc   ",
+	"RxBG2FramesTrunc   ",
+	"RxBG3FramesTrunc   ",
+
+	"TSO                ",
+	"TxCsumOffload      ",
+	"RxCsumGood         ",
+	"VLANextractions    ",
+	"VLANinsertions     ",
+};
+
+static int get_sset_count(struct net_device *dev, int sset)
+{
+	switch (sset) {
+	case ETH_SS_STATS:
+		return ARRAY_SIZE(stats_strings);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+#define T4_REGMAP_SIZE (160 * 1024)
+
+static int get_regs_len(struct net_device *dev)
+{
+	return T4_REGMAP_SIZE;
+}
+
+static int get_eeprom_len(struct net_device *dev)
+{
+	return EEPROMSIZE;
+}
+
+static void get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+	struct adapter *adapter = netdev2adap(dev);
+
+	strcpy(info->driver, KBUILD_MODNAME);
+	strcpy(info->version, DRV_VERSION);
+	strcpy(info->bus_info, pci_name(adapter->pdev));
+
+	if (!adapter->params.fw_vers)
+		strcpy(info->fw_version, "N/A");
+	else
+		snprintf(info->fw_version, sizeof(info->fw_version),
+			"%u.%u.%u.%u, TP %u.%u.%u.%u",
+			G_FW_HDR_FW_VER_MAJOR(adapter->params.fw_vers),
+			G_FW_HDR_FW_VER_MINOR(adapter->params.fw_vers),
+			G_FW_HDR_FW_VER_MICRO(adapter->params.fw_vers),
+			G_FW_HDR_FW_VER_BUILD(adapter->params.fw_vers),
+			G_FW_HDR_FW_VER_MAJOR(adapter->params.tp_vers),
+			G_FW_HDR_FW_VER_MINOR(adapter->params.tp_vers),
+			G_FW_HDR_FW_VER_MICRO(adapter->params.tp_vers),
+			G_FW_HDR_FW_VER_BUILD(adapter->params.tp_vers));
+}
+
+static void get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+	if (stringset == ETH_SS_STATS)
+		memcpy(data, stats_strings, sizeof(stats_strings));
+}
+
+/*
+ * port stats maintained per queue of the port.  They should be in the same
+ * order as in stats_strings above.
+ */
+struct queue_port_stats {
+	u64 tso;
+	u64 tx_csum;
+	u64 rx_csum;
+	u64 vlan_ex;
+	u64 vlan_ins;
+};
+
+static void collect_sge_port_stats(const struct adapter *adap,
+		const struct port_info *p, struct queue_port_stats *s)
+{
+	int i;
+	const struct sge_eth_txq *tx = &adap->sge.ethtxq[p->first_qset];
+	const struct sge_eth_rxq *rx = &adap->sge.ethrxq[p->first_qset];
+
+	memset(s, 0, sizeof(*s));
+	for (i = 0; i < p->nqsets; i++, rx++, tx++) {
+		s->tso += tx->tso;
+		s->tx_csum += tx->tx_cso;
+		s->rx_csum += rx->stats.rx_cso;
+		s->vlan_ex += rx->stats.vlan_ex;
+		s->vlan_ins += tx->vlan_ins;
+	}
+}
+
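+/*
+ * ethtool stats: report the HW port statistics followed by the per-queue SW
+ * counters, in the order given by stats_strings.
+ */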
+static void get_stats(struct net_device *dev, struct ethtool_stats *stats,
+		      u64 *data)
+{
+	struct port_info *pi = netdev_priv(dev);
+	struct adapter *adapter = pi->adapter;
+
+	t4_get_port_stats(adapter, pi->tx_chan, (struct port_stats *)data);
+
+	data += sizeof(struct port_stats) / sizeof(u64);
+	collect_sge_port_stats(adapter, pi, (struct queue_port_stats *)data);
+}
+
+/*
+ * Return a version number to identify the type of adapter.  The scheme is:
+ * - bits 0..9: chip version
+ * - bits 10..15: chip revision
+ */
+static inline unsigned int mk_adap_vers(const struct adapter *ap)
+{
+	return 4 | (ap->params.rev << 10);
+}
+
+static void reg_block_dump(struct adapter *ap, void *buf, unsigned int start,
+			   unsigned int end)
+{
+	u32 *p = buf + start;
+
+	for ( ; start <= end; start += sizeof(u32))
+		*p++ = t4_read_reg(ap, start);
+}
+
+static void get_regs(struct net_device *dev, struct ethtool_regs *regs,
+		     void *buf)
+{
+	static const unsigned int reg_ranges[] = {
+		0x1008, 0x1108,
+		0x1180, 0x11b4,
+		0x11fc, 0x123c,
+		0x1300, 0x173c,
+		0x1800, 0x18fc,
+		0x3000, 0x30d8,
+		0x30e0, 0x5924,
+		0x5960, 0x59d4,
+		0x5a00, 0x5af8,
+		0x6000, 0x6098,
+		0x6100, 0x6150,
+		0x6200, 0x6208,
+		0x6240, 0x6248,
+		0x6280, 0x6338,
+		0x6370, 0x638c,
+		0x6400, 0x643c,
+		0x6500, 0x6524,
+		0x6a00, 0x6a38,
+		0x6a60, 0x6a78,
+		0x6b00, 0x6b84,
+		0x6bf0, 0x6c84,
+		0x6cf0, 0x6d84,
+		0x6df0, 0x6e84,
+		0x6ef0, 0x6f84,
+		0x6ff0, 0x7084,
+		0x70f0, 0x7184,
+		0x71f0, 0x7284,
+		0x72f0, 0x7384,
+		0x73f0, 0x7450,
+		0x7500, 0x7530,
+		0x7600, 0x761c,
+		0x7680, 0x76cc,
+		0x7700, 0x7798,
+		0x77c0, 0x77fc,
+		0x7900, 0x79fc,
+		0x7b00, 0x7c38,
+		0x7d00, 0x7efc,
+		0x8dc0, 0x8e1c,
+		0x8e30, 0x8e78,
+		0x8ea0, 0x8f6c,
+		0x8fc0, 0x9074,
+		0x90fc, 0x90fc,
+		0x9400, 0x9458,
+		0x9600, 0x96bc,
+		0x9800, 0x9808,
+		0x9820, 0x983c,
+		0x9850, 0x9864,
+		0x9c00, 0x9c6c,
+		0x9c80, 0x9cec,
+		0x9d00, 0x9d6c,
+		0x9d80, 0x9dec,
+		0x9e00, 0x9e6c,
+		0x9e80, 0x9eec,
+		0x9f00, 0x9f6c,
+		0x9f80, 0x9fec,
+		0xd004, 0xd03c,
+		0xdfc0, 0xdfe0,
+		0xe000, 0xea7c,
+		0xf000, 0x11190,
+		0x19040, 0x19124,
+		0x19150, 0x191b0,
+		0x191d0, 0x191e8,
+		0x19238, 0x1924c,
+		0x193f8, 0x19474,
+		0x19490, 0x194f8,
+		0x19800, 0x19f30,
+		0x1a000, 0x1a06c,
+		0x1a0b0, 0x1a120,
+		0x1a128, 0x1a138,
+		0x1a190, 0x1a1c4,
+		0x1a1fc, 0x1a1fc,
+		0x1e040, 0x1e04c,
+		0x1e240, 0x1e28c,
+		0x1e2c0, 0x1e2c0,
+		0x1e2e0, 0x1e2e0,
+		0x1e300, 0x1e384,
+		0x1e3c0, 0x1e3c8,
+		0x1e440, 0x1e44c,
+		0x1e640, 0x1e68c,
+		0x1e6c0, 0x1e6c0,
+		0x1e6e0, 0x1e6e0,
+		0x1e700, 0x1e784,
+		0x1e7c0, 0x1e7c8,
+		0x1e840, 0x1e84c,
+		0x1ea40, 0x1ea8c,
+		0x1eac0, 0x1eac0,
+		0x1eae0, 0x1eae0,
+		0x1eb00, 0x1eb84,
+		0x1ebc0, 0x1ebc8,
+		0x1ec40, 0x1ec4c,
+		0x1ee40, 0x1ee8c,
+		0x1eec0, 0x1eec0,
+		0x1eee0, 0x1eee0,
+		0x1ef00, 0x1ef84,
+		0x1efc0, 0x1efc8,
+		0x1f040, 0x1f04c,
+		0x1f240, 0x1f28c,
+		0x1f2c0, 0x1f2c0,
+		0x1f2e0, 0x1f2e0,
+		0x1f300, 0x1f384,
+		0x1f3c0, 0x1f3c8,
+		0x1f440, 0x1f44c,
+		0x1f640, 0x1f68c,
+		0x1f6c0, 0x1f6c0,
+		0x1f6e0, 0x1f6e0,
+		0x1f700, 0x1f784,
+		0x1f7c0, 0x1f7c8,
+		0x1f840, 0x1f84c,
+		0x1fa40, 0x1fa8c,
+		0x1fac0, 0x1fac0,
+		0x1fae0, 0x1fae0,
+		0x1fb00, 0x1fb84,
+		0x1fbc0, 0x1fbc8,
+		0x1fc40, 0x1fc4c,
+		0x1fe40, 0x1fe8c,
+		0x1fec0, 0x1fec0,
+		0x1fee0, 0x1fee0,
+		0x1ff00, 0x1ff84,
+		0x1ffc0, 0x1ffc8,
+		0x20000, 0x2002c,
+		0x20100, 0x2013c,
+		0x20190, 0x201c4,
+		0x20200, 0x20318,
+		0x20400, 0x20528,
+		0x20540, 0x20614,
+		0x21000, 0x21040,
+		0x2104c, 0x21060,
+		0x210c0, 0x210ec,
+		0x21200, 0x21268,
+		0x21270, 0x21284,
+		0x212fc, 0x21388,
+		0x21400, 0x21404,
+		0x21500, 0x21518,
+		0x2152c, 0x2153c,
+		0x21550, 0x21554,
+		0x21600, 0x21600,
+		0x21608, 0x21628,
+		0x21630, 0x2163c,
+		0x21700, 0x2171c,
+		0x21780, 0x2178c,
+		0x21800, 0x21c38,
+		0x21c80, 0x21d7c,
+		0x21e00, 0x21e04,
+		0x22000, 0x2202c,
+		0x22100, 0x2213c,
+		0x22190, 0x221c4,
+		0x22200, 0x22318,
+		0x22400, 0x22528,
+		0x22540, 0x22614,
+		0x23000, 0x23040,
+		0x2304c, 0x23060,
+		0x230c0, 0x230ec,
+		0x23200, 0x23268,
+		0x23270, 0x23284,
+		0x232fc, 0x23388,
+		0x23400, 0x23404,
+		0x23500, 0x23518,
+		0x2352c, 0x2353c,
+		0x23550, 0x23554,
+		0x23600, 0x23600,
+		0x23608, 0x23628,
+		0x23630, 0x2363c,
+		0x23700, 0x2371c,
+		0x23780, 0x2378c,
+		0x23800, 0x23c38,
+		0x23c80, 0x23d7c,
+		0x23e00, 0x23e04,
+		0x24000, 0x2402c,
+		0x24100, 0x2413c,
+		0x24190, 0x241c4,
+		0x24200, 0x24318,
+		0x24400, 0x24528,
+		0x24540, 0x24614,
+		0x25000, 0x25040,
+		0x2504c, 0x25060,
+		0x250c0, 0x250ec,
+		0x25200, 0x25268,
+		0x25270, 0x25284,
+		0x252fc, 0x25388,
+		0x25400, 0x25404,
+		0x25500, 0x25518,
+		0x2552c, 0x2553c,
+		0x25550, 0x25554,
+		0x25600, 0x25600,
+		0x25608, 0x25628,
+		0x25630, 0x2563c,
+		0x25700, 0x2571c,
+		0x25780, 0x2578c,
+		0x25800, 0x25c38,
+		0x25c80, 0x25d7c,
+		0x25e00, 0x25e04,
+		0x26000, 0x2602c,
+		0x26100, 0x2613c,
+		0x26190, 0x261c4,
+		0x26200, 0x26318,
+		0x26400, 0x26528,
+		0x26540, 0x26614,
+		0x27000, 0x27040,
+		0x2704c, 0x27060,
+		0x270c0, 0x270ec,
+		0x27200, 0x27268,
+		0x27270, 0x27284,
+		0x272fc, 0x27388,
+		0x27400, 0x27404,
+		0x27500, 0x27518,
+		0x2752c, 0x2753c,
+		0x27550, 0x27554,
+		0x27600, 0x27600,
+		0x27608, 0x27628,
+		0x27630, 0x2763c,
+		0x27700, 0x2771c,
+		0x27780, 0x2778c,
+		0x27800, 0x27c38,
+		0x27c80, 0x27d7c,
+		0x27e00, 0x27e04
+	};
+
+	int i;
+	struct adapter *ap = netdev2adap(dev);
+
+	regs->version = mk_adap_vers(ap);
+
+	memset(buf, 0, T4_REGMAP_SIZE);
+	for (i = 0; i < ARRAY_SIZE(reg_ranges); i += 2)
+		reg_block_dump(ap, buf, reg_ranges[i], reg_ranges[i + 1]);
+}
+
+static int restart_autoneg(struct net_device *dev)
+{
+	struct port_info *p = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EAGAIN;
+	if (p->link_cfg.autoneg != AUTONEG_ENABLE)
+		return -EINVAL;
+	t4_restart_aneg(p->adapter, 0, p->tx_chan);
+	return 0;
+}
+
+static int identify_port(struct net_device *dev, u32 data)
+{
+	if (data == 0)
+		data = 2;     /* default to 2 seconds */
+
+	return t4_identify_port(netdev2adap(dev), 0, netdev2pinfo(dev)->viid,
+				data * 5);
+}
+
+static unsigned int from_fw_linkcaps(unsigned int type, unsigned int caps)
+{
+	unsigned int v = 0;
+
+	if (type == FW_PORT_TYPE_BT_SGMII || type == FW_PORT_TYPE_BT_XAUI) {
+		v |= SUPPORTED_TP;
+		if (caps & FW_PORT_CAP_SPEED_100M)
+			v |= SUPPORTED_100baseT_Full;
+		if (caps & FW_PORT_CAP_SPEED_1G)
+			v |= SUPPORTED_1000baseT_Full;
+		if (caps & FW_PORT_CAP_SPEED_10G)
+			v |= SUPPORTED_10000baseT_Full;
+	} else if (type == FW_PORT_TYPE_KX4 || type == FW_PORT_TYPE_KX) {
+		v |= SUPPORTED_Backplane;
+		if (caps & FW_PORT_CAP_SPEED_1G)
+			v |= SUPPORTED_1000baseKX_Full;
+		if (caps & FW_PORT_CAP_SPEED_10G)
+			v |= SUPPORTED_10000baseKX4_Full;
+	} else if (type == FW_PORT_TYPE_KR)
+		v |= SUPPORTED_Backplane | SUPPORTED_10000baseKR_Full;
+	else if (type == FW_PORT_TYPE_FIBER)
+		v |= SUPPORTED_FIBRE;
+
+	if (caps & FW_PORT_CAP_ANEG)
+		v |= SUPPORTED_Autoneg;
+	return v;
+}
+
+static unsigned int to_fw_linkcaps(unsigned int caps)
+{
+	unsigned int v = 0;
+
+	if (caps & ADVERTISED_100baseT_Full)
+		v |= FW_PORT_CAP_SPEED_100M;
+	if (caps & ADVERTISED_1000baseT_Full)
+		v |= FW_PORT_CAP_SPEED_1G;
+	if (caps & ADVERTISED_10000baseT_Full)
+		v |= FW_PORT_CAP_SPEED_10G;
+	return v;
+}
+
+static int get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	const struct port_info *p = netdev_priv(dev);
+
+	if (p->port_type == FW_PORT_TYPE_BT_SGMII ||
+	    p->port_type == FW_PORT_TYPE_BT_XAUI)
+		cmd->port = PORT_TP;
+	else if (p->port_type == FW_PORT_TYPE_FIBER)
+		cmd->port = PORT_FIBRE;
+	else if (p->port_type == FW_PORT_TYPE_TWINAX)
+		cmd->port = PORT_DA;
+	else
+		cmd->port = PORT_OTHER;
+
+	if (p->mdio_addr >= 0) {
+		cmd->phy_address = p->mdio_addr;
+		cmd->transceiver = XCVR_EXTERNAL;
+		cmd->mdio_support = p->port_type == FW_PORT_TYPE_BT_SGMII ?
+			MDIO_SUPPORTS_C22 : MDIO_SUPPORTS_C45;
+	} else {
+		cmd->phy_address = 0;  /* not really, but no better option */
+		cmd->transceiver = XCVR_INTERNAL;
+		cmd->mdio_support = 0;
+	}
+
+	cmd->supported = from_fw_linkcaps(p->port_type, p->link_cfg.supported);
+	cmd->advertising = from_fw_linkcaps(p->port_type,
+					    p->link_cfg.advertising);
+	cmd->speed = netif_carrier_ok(dev) ? p->link_cfg.speed : 0;
+	cmd->duplex = DUPLEX_FULL;
+	cmd->autoneg = p->link_cfg.autoneg;
+	cmd->maxtxpkt = 0;
+	cmd->maxrxpkt = 0;
+	return 0;
+}
+
+static unsigned int speed_to_caps(int speed)
+{
+	if (speed == SPEED_100)
+		return FW_PORT_CAP_SPEED_100M;
+	if (speed == SPEED_1000)
+		return FW_PORT_CAP_SPEED_1G;
+	if (speed == SPEED_10000)
+		return FW_PORT_CAP_SPEED_10G;
+	return 0;
+}
+
+static int set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	unsigned int cap;
+	struct port_info *p = netdev_priv(dev);
+	struct link_config *lc = &p->link_cfg;
+
+	if (cmd->duplex != DUPLEX_FULL)     /* only full-duplex supported */
+		return -EINVAL;
+
+	if (!(lc->supported & FW_PORT_CAP_ANEG)) {
+		/*
+		 * PHY offers a single speed.  See if that's what's
+		 * being requested.
+		 */
+		if (cmd->autoneg == AUTONEG_DISABLE &&
+		    (lc->supported & speed_to_caps(cmd->speed)))
+				return 0;
+		return -EINVAL;
+	}
+
+	if (cmd->autoneg == AUTONEG_DISABLE) {
+		cap = speed_to_caps(cmd->speed);
+
+		if (!(lc->supported & cap) || cmd->speed == SPEED_1000 ||
+		    cmd->speed == SPEED_10000)
+			return -EINVAL;
+		lc->requested_speed = cap;
+		lc->advertising = 0;
+	} else {
+		cap = to_fw_linkcaps(cmd->advertising);
+		if (!(lc->supported & cap))
+			return -EINVAL;
+		lc->requested_speed = 0;
+		lc->advertising = cap | FW_PORT_CAP_ANEG;
+	}
+	lc->autoneg = cmd->autoneg;
+
+	if (netif_running(dev))
+		return t4_link_start(p->adapter, 0, p->tx_chan, lc);
+	return 0;
+}
+
+static void get_pauseparam(struct net_device *dev,
+			   struct ethtool_pauseparam *epause)
+{
+	struct port_info *p = netdev_priv(dev);
+
+	epause->autoneg = (p->link_cfg.requested_fc & PAUSE_AUTONEG) != 0;
+	epause->rx_pause = (p->link_cfg.fc & PAUSE_RX) != 0;
+	epause->tx_pause = (p->link_cfg.fc & PAUSE_TX) != 0;
+}
+
+static int set_pauseparam(struct net_device *dev,
+			  struct ethtool_pauseparam *epause)
+{
+	struct port_info *p = netdev_priv(dev);
+	struct link_config *lc = &p->link_cfg;
+
+	if (epause->autoneg == AUTONEG_DISABLE)
+		lc->requested_fc = 0;
+	else if (lc->supported & FW_PORT_CAP_ANEG)
+		lc->requested_fc = PAUSE_AUTONEG;
+	else
+		return -EINVAL;
+
+	if (epause->rx_pause)
+		lc->requested_fc |= PAUSE_RX;
+	if (epause->tx_pause)
+		lc->requested_fc |= PAUSE_TX;
+	if (netif_running(dev))
+		return t4_link_start(p->adapter, 0, p->tx_chan, lc);
+	return 0;
+}
+
+static u32 get_rx_csum(struct net_device *dev)
+{
+	struct port_info *p = netdev_priv(dev);
+
+	return p->rx_offload & RX_CSO;
+}
+
+static int set_rx_csum(struct net_device *dev, u32 data)
+{
+	struct port_info *p = netdev_priv(dev);
+
+	if (data)
+		p->rx_offload |= RX_CSO;
+	else
+		p->rx_offload &= ~RX_CSO;
+	return 0;
+}
+
+static void get_sge_param(struct net_device *dev, struct ethtool_ringparam *e)
+{
+	const struct port_info *pi = netdev_priv(dev);
+	const struct sge *s = &pi->adapter->sge;
+
+	e->rx_max_pending = MAX_RX_BUFFERS;
+	e->rx_mini_max_pending = MAX_RSPQ_ENTRIES;
+	e->rx_jumbo_max_pending = 0;
+	e->tx_max_pending = MAX_TXQ_ENTRIES;
+
+	e->rx_pending = s->ethrxq[pi->first_qset].fl.size - 8;
+	e->rx_mini_pending = s->ethrxq[pi->first_qset].rspq.size;
+	e->rx_jumbo_pending = 0;
+	e->tx_pending = s->ethtxq[pi->first_qset].q.size;
+}
+
+static int set_sge_param(struct net_device *dev, struct ethtool_ringparam *e)
+{
+	int i;
+	const struct port_info *pi = netdev_priv(dev);
+	struct adapter *adapter = pi->adapter;
+	struct sge *s = &adapter->sge;
+
+	if (e->rx_pending > MAX_RX_BUFFERS || e->rx_jumbo_pending ||
+	    e->tx_pending > MAX_TXQ_ENTRIES ||
+	    e->rx_mini_pending > MAX_RSPQ_ENTRIES ||
+	    e->rx_mini_pending < MIN_RSPQ_ENTRIES ||
+	    e->rx_pending < MIN_FL_ENTRIES || e->tx_pending < MIN_TXQ_ENTRIES)
+		return -EINVAL;
+
+	if (adapter->flags & FULL_INIT_DONE)
+		return -EBUSY;
+
+	for (i = 0; i < pi->nqsets; ++i) {
+		s->ethtxq[pi->first_qset + i].q.size = e->tx_pending;
+		s->ethrxq[pi->first_qset + i].fl.size = e->rx_pending + 8;
+		s->ethrxq[pi->first_qset + i].rspq.size = e->rx_mini_pending;
+	}
+	return 0;
+}
+
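+/*
+ * Return the index of the SGE interrupt holdoff timer value closest to the
+ * requested time (in us).
+ */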
+static int closest_timer(const struct sge *s, int time)
+{
+	int i, delta, match = 0, min_delta = INT_MAX;
+
+	for (i = 0; i < ARRAY_SIZE(s->timer_val); i++) {
+		delta = time - s->timer_val[i];
+		if (delta < 0)
+			delta = -delta;
+		if (delta < min_delta) {
+			min_delta = delta;
+			match = i;
+		}
+	}
+	return match;
+}
+
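+/*
+ * Return the index of the interrupt holdoff packet-count threshold closest
+ * to the requested count.
+ */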
+static int closest_thres(const struct sge *s, int thres)
+{
+	int i, delta, match = 0, min_delta = INT_MAX;
+
+	for (i = 0; i < ARRAY_SIZE(s->counter_val); i++) {
+		delta = thres - s->counter_val[i];
+		if (delta < 0)
+			delta = -delta;
+		if (delta < min_delta) {
+			min_delta = delta;
+			match = i;
+		}
+	}
+	return match;
+}
+
+/*
+ * Return a queue's interrupt hold-off time in us.  0 means no timer.
+ */
+static unsigned int qtimer_val(const struct adapter *adap,
+			       const struct sge_rspq *q)
+{
+	unsigned int idx = G_QINTR_TIMER_IDX(q->intr_params);
+
+	return idx < SGE_NTIMERS ? adap->sge.timer_val[idx] : 0;
+}
+
+/**
+ *	set_rxq_intr_params - set a queue's interrupt holdoff parameters
+ *	@adap: the adapter
+ *	@q: the Rx queue
+ *	@us: the hold-off time in us, or 0 to disable timer
+ *	@cnt: the hold-off packet count, or 0 to disable counter
+ *
+ *	Sets an Rx queue's interrupt hold-off time and packet count.  At least
+ *	one of the two needs to be enabled for the queue to generate interrupts.
+ */
+static int set_rxq_intr_params(struct adapter *adap, struct sge_rspq *q,
+			       unsigned int us, unsigned int cnt)
+{
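+	/*
+	 * If both the timer and the counter are 0 the queue would never
+	 * interrupt; default to a packet count of 1.
+	 */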
+	if ((us | cnt) == 0)
+		cnt = 1;
+
+	if (cnt) {
+		int err;
+		u32 v, new_idx;
+
+		new_idx = closest_thres(&adap->sge, cnt);
+		if (q->desc && q->pktcnt_idx != new_idx) {
+			/* the queue has already been created, update it */
+			v = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DMAQ) |
+			    V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DMAQ_IQ_INTCNTTHRESH) |
+			    V_FW_PARAMS_PARAM_YZ(q->cntxt_id);
+			err = t4_set_params(adap, 0, 1, &v, &new_idx);
+			if (err)
+				return err;
+		}
+		q->pktcnt_idx = new_idx;
+	}
+
+	us = us == 0 ? E_TIMERIDX_NONE : closest_timer(&adap->sge, us);
+	q->intr_params = V_QINTR_TIMER_IDX(us) | V_QINTR_CNT_EN(cnt > 0);
+	return 0;
+}
+
+static int set_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
+{
+	const struct port_info *pi = netdev_priv(dev);
+	struct adapter *adap = pi->adapter;
+
+	return set_rxq_intr_params(adap, &adap->sge.ethrxq[pi->first_qset].rspq,
+			c->rx_coalesce_usecs, c->rx_max_coalesced_frames);
+}
+
+static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
+{
+	const struct port_info *pi = netdev_priv(dev);
+	const struct adapter *adap = pi->adapter;
+	const struct sge_rspq *rq = &adap->sge.ethrxq[pi->first_qset].rspq;
+
+	c->rx_coalesce_usecs = qtimer_val(adap, rq);
+	c->rx_max_coalesced_frames = (rq->intr_params & F_QINTR_CNT_EN) ?
+		adap->sge.counter_val[rq->pktcnt_idx] : 0;
+	return 0;
+}
+
+/*
+ * Translate a physical EEPROM address to virtual.  The first 1K is accessed
+ * through virtual addresses starting at 31K, the rest is accessed through
+ * virtual addresses starting at 0.  This mapping is correct only for PF0.
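+ * For example, physical address 0 maps to virtual address 31K and physical
+ * address 1K maps to virtual address 0.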
+ */
+static int eeprom_ptov(unsigned int phys_addr)
+{
+	if (phys_addr < 1024)
+		return phys_addr + (31 << 10);
+	if (phys_addr < EEPROMSIZE)
+		return phys_addr - 1024;
+	return -EINVAL;
+}
+
+/*
+ * The next two routines implement eeprom read/write from physical addresses.
+ * The physical->virtual translation is correct only for PF0.
+ */
+static int eeprom_rd_phys(struct adapter *adap, unsigned int phys_addr, u32 *v)
+{
+	int vaddr = eeprom_ptov(phys_addr);
+
+	if (vaddr >= 0)
+		vaddr = pci_read_vpd(adap->pdev, vaddr, sizeof(u32), v);
+	return vaddr < 0 ? vaddr : 0;
+}
+
+static int eeprom_wr_phys(struct adapter *adap, unsigned int phys_addr, u32 v)
+{
+	int vaddr = eeprom_ptov(phys_addr);
+
+	if (vaddr >= 0)
+		vaddr = pci_write_vpd(adap->pdev, vaddr, sizeof(u32), &v);
+	return vaddr < 0 ? vaddr : 0;
+}
+
+#define EEPROM_MAGIC 0x38E2F10C
+
+static int get_eeprom(struct net_device *dev, struct ethtool_eeprom *e,
+		      u8 *data)
+{
+	int i, err = 0;
+	struct adapter *adapter = netdev2adap(dev);
+
+	u8 *buf = kmalloc(EEPROMSIZE, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	e->magic = EEPROM_MAGIC;
+	for (i = e->offset & ~3; !err && i < e->offset + e->len; i += 4)
+		err = eeprom_rd_phys(adapter, i, (u32 *)&buf[i]);
+
+	if (!err)
+		memcpy(data, buf + e->offset, e->len);
+	kfree(buf);
+	return err;
+}
+
+static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
+		      u8 *data)
+{
+	u8 *buf;
+	int err = 0;
+	u32 aligned_offset, aligned_len, *p;
+	struct adapter *adapter = netdev2adap(dev);
+
+	if (eeprom->magic != EEPROM_MAGIC)
+		return -EINVAL;
+
+	aligned_offset = eeprom->offset & ~3;
+	aligned_len = (eeprom->len + (eeprom->offset & 3) + 3) & ~3;
+
+	if (aligned_offset != eeprom->offset || aligned_len != eeprom->len) {
+		/*
+		 * RMW possibly needed for first or last words.
+		 */
+		buf = kmalloc(aligned_len, GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+		err = eeprom_rd_phys(adapter, aligned_offset, (u32 *)buf);
+		if (!err && aligned_len > 4)
+			err = eeprom_rd_phys(adapter,
+					     aligned_offset + aligned_len - 4,
+					     (u32 *)&buf[aligned_len - 4]);
+		if (err)
+			goto out;
+		memcpy(buf + (eeprom->offset & 3), data, eeprom->len);
+	} else
+		buf = data;
+
+	err = t4_seeprom_wp(adapter, false);
+	if (err)
+		goto out;
+
+	for (p = (u32 *)buf; !err && aligned_len; aligned_len -= 4, p++) {
+		err = eeprom_wr_phys(adapter, aligned_offset, *p);
+		aligned_offset += 4;
+	}
+
+	if (!err)
+		err = t4_seeprom_wp(adapter, true);
+out:
+	if (buf != data)
+		kfree(buf);
+	return err;
+}
+
+static int set_flash(struct net_device *netdev, struct ethtool_flash *ef)
+{
+	int ret;
+	const struct firmware *fw;
+	struct adapter *adap = netdev2adap(netdev);
+
+	ef->data[sizeof(ef->data) - 1] = '\0';
+	ret = request_firmware(&fw, ef->data, adap->pdev_dev);
+	if (ret < 0)
+		return ret;
+
+	ret = t4_load_fw(adap, fw->data, fw->size);
+	release_firmware(fw);
+	if (!ret)
+		dev_info(adap->pdev_dev, "loaded firmware %s\n", ef->data);
+	return ret;
+}
+
+#define WOL_SUPPORTED (WAKE_BCAST | WAKE_MAGIC)
+#define BCAST_CRC 0xa0ccc1a6
+
+static void get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+	wol->supported = WAKE_BCAST | WAKE_MAGIC;
+	wol->wolopts = netdev2adap(dev)->wol;
+	memset(&wol->sopass, 0, sizeof(wol->sopass));
+}
+
+static int set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+	int err = 0;
+	struct port_info *pi = netdev_priv(dev);
+
+	if (wol->wolopts & ~WOL_SUPPORTED)
+		return -EINVAL;
+	t4_wol_magic_enable(pi->adapter, pi->tx_chan,
+			    (wol->wolopts & WAKE_MAGIC) ? dev->dev_addr : NULL);
+	if (wol->wolopts & WAKE_BCAST) {
+		err = t4_wol_pat_enable(pi->adapter, pi->tx_chan, 0xfe, ~0ULL,
+					~0ULL, 0, false);
+		if (!err)
+			err = t4_wol_pat_enable(pi->adapter, pi->tx_chan, 1,
+						~6ULL, ~0ULL, BCAST_CRC, true);
+	} else
+		t4_wol_pat_enable(pi->adapter, pi->tx_chan, 0, 0, 0, 0, false);
+	return err;
+}
+
+static int set_tso(struct net_device *dev, u32 value)
+{
+	if (value)
+		dev->features |= NETIF_F_TSO | NETIF_F_TSO6;
+	else
+		dev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+	return 0;
+}
+
+static struct ethtool_ops cxgb_ethtool_ops = {
+	.get_settings      = get_settings,
+	.set_settings      = set_settings,
+	.get_drvinfo       = get_drvinfo,
+	.get_msglevel      = get_msglevel,
+	.set_msglevel      = set_msglevel,
+	.get_ringparam     = get_sge_param,
+	.set_ringparam     = set_sge_param,
+	.get_coalesce      = get_coalesce,
+	.set_coalesce      = set_coalesce,
+	.get_eeprom_len    = get_eeprom_len,
+	.get_eeprom        = get_eeprom,
+	.set_eeprom        = set_eeprom,
+	.get_pauseparam    = get_pauseparam,
+	.set_pauseparam    = set_pauseparam,
+	.get_rx_csum       = get_rx_csum,
+	.set_rx_csum       = set_rx_csum,
+	.set_tx_csum       = ethtool_op_set_tx_ipv6_csum,
+	.set_sg            = ethtool_op_set_sg,
+	.get_link          = ethtool_op_get_link,
+	.get_strings       = get_strings,
+	.phys_id           = identify_port,
+	.nway_reset        = restart_autoneg,
+	.get_sset_count    = get_sset_count,
+	.get_ethtool_stats = get_stats,
+	.get_regs_len      = get_regs_len,
+	.get_regs          = get_regs,
+	.get_wol           = get_wol,
+	.set_wol           = set_wol,
+	.set_tso           = set_tso,
+	.flash_device      = set_flash,
+};
+
+/*
+ * Read a vector of numbers from a user space buffer.  Each number must be
+ * between min and max inclusive and in the given base.
+ */
+static int rd_usr_int_vec(const char __user *buf, size_t usr_len, int vec_len,
+			  unsigned long *vals, unsigned long min,
+			  unsigned long max, int base)
+{
+	size_t l;
+	unsigned long v;
+	char c, word[68], *end;
+
+	while (usr_len) {
+		/* skip whitespace to beginning of next word */
+		while (usr_len) {
+			if (get_user(c, buf))
+				return -EFAULT;
+			if (!isspace(c))
+				break;
+			usr_len--;
+			buf++;
+		}
+
+		if (!usr_len)
+			break;
+		if (!vec_len)
+			return -EINVAL;              /* too many numbers */
+
+		/* get next word (possibly going beyond its end) */
+		l = min(usr_len, sizeof(word) - 1);
+		if (copy_from_user(word, buf, l))
+			return -EFAULT;
+		word[l] = '\0';
+
+		v = simple_strtoul(word, &end, base);
+		l = end - word;
+		if (!l)
+			return -EINVAL;              /* catch embedded '\0's */
+		if (*end && !isspace(*end))
+			return -EINVAL;
+		/*
+		 * Complain if we encountered too long a sequence of digits.
+		 * The most we can consume in one iteration is for a 64-bit
+		 * number in binary.  Too bad simple_strtoul doesn't catch
+		 * overflows.
+		 */
+		if (l > 64)
+			return -EINVAL;
+		if (v < min || v > max)
+			return -ERANGE;
+		*vals++ = v;
+		vec_len--;
+		usr_len -= l;
+		buf += l;
+	}
+	if (vec_len)
+		return -EINVAL;                      /* not enough numbers */
+	return 0;
+}
+
+static void seq_noop_stop(struct seq_file *seq, void *v)
+{
+}
+
+/*
+ * generic seq_file support for showing a table of size rows x width.
+ */
+struct seq_tab {
+	int (*show)(struct seq_file *seq, void *v, int idx);
+	unsigned int rows;        /* # of entries */
+	unsigned char width;      /* size in bytes of each entry */
+	unsigned char skip_first; /* whether the first line is a header */
+	char data[0];             /* the table data */
+};
+
+static void *seq_tab_get_idx(struct seq_tab *tb, loff_t pos)
+{
+	pos -= tb->skip_first;
+	return pos >= tb->rows ? NULL : &tb->data[pos * tb->width];
+}
+
+static void *seq_tab_start(struct seq_file *seq, loff_t *pos)
+{
+	struct seq_tab *tb = seq->private;
+
+	if (tb->skip_first && *pos == 0)
+		return SEQ_START_TOKEN;
+
+	return seq_tab_get_idx(tb, *pos);
+}
+
+static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	v = seq_tab_get_idx(seq->private, *pos + 1);
+	if (v)
+		++*pos;
+	return v;
+}
+
+static int seq_tab_show(struct seq_file *seq, void *v)
+{
+	const struct seq_tab *tb = seq->private;
+
+	/*
+	 * index is bogus when v isn't within data, e.g., when it's
+	 * SEQ_START_TOKEN, but that's OK
+	 */
+	return tb->show(seq, v, ((char *)v - tb->data) / tb->width);
+}
+
+static struct seq_operations seq_tab_ops = {
+	.start = seq_tab_start,
+	.next  = seq_tab_next,
+	.stop  = seq_noop_stop,
+	.show  = seq_tab_show
+};
+
+static struct seq_tab *seq_open_tab(struct file *f, unsigned int rows,
+		unsigned int width, unsigned int have_header,
+		int (*show)(struct seq_file *seq, void *v, int i))
+{
+	struct seq_tab *p;
+
+	p = __seq_open_private(f, &seq_tab_ops, sizeof(*p) + rows * width);
+	if (p) {
+		p->show = show;
+		p->rows = rows;
+		p->width = width;
+		p->skip_first = have_header != 0;
+	}
+	return p;
+}
+
+/*
+ * debugfs support
+ */
+
+static int mem_open(struct inode *inode, struct file *file)
+{
+	file->private_data = inode->i_private;
+	return 0;
+}
+
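+/*
+ * debugfs read handler for one of the adapter's memories (EDC0, EDC1, or MC,
+ * selected by the low bits of the file's private data).  Data is read in
+ * 64-byte chunks and copied to user space.
+ */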
+static ssize_t mem_read(struct file *file, char __user *buf, size_t count,
+			loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	loff_t avail = file->f_path.dentry->d_inode->i_size;
+	unsigned int mem = (uintptr_t)file->private_data & 3;
+	struct adapter *adap = file->private_data - mem;
+
+	if (pos < 0)
+		return -EINVAL;
+	if (pos >= avail)
+		return 0;
+	if (count > avail - pos)
+		count = avail - pos;
+
+	while (count) {
+		size_t len;
+		int ret, ofst;
+		__be32 data[16];
+
+		if (mem == MEM_MC)
+			ret = t4_mc_read(adap, pos, data, NULL);
+		else
+			ret = t4_edc_read(adap, mem, pos, data, NULL);
+		if (ret)
+			return ret;
+
+		ofst = pos % sizeof(data);
+		len = min(count, sizeof(data) - ofst);
+		if (copy_to_user(buf, (u8 *)data + ofst, len))
+			return -EFAULT;
+
+		buf += len;
+		pos += len;
+		count -= len;
+	}
+	count = pos - *ppos;
+	*ppos = pos;
+	return count;
+}
+
+static const struct file_operations mem_debugfs_fops = {
+	.owner   = THIS_MODULE,
+	.open    = mem_open,
+	.read    = mem_read,
+};
+
+static void __devinit add_debugfs_mem(struct adapter *adap, const char *name,
+				      unsigned int idx, unsigned int size_mb)
+{
+	struct dentry *de;
+
+	de = debugfs_create_file(name, S_IRUSR, adap->debugfs_root,
+				 (void *)adap + idx, &mem_debugfs_fops);
+	if (de && de->d_inode)
+		de->d_inode->i_size = size_mb << 20;
+}
+
+static int __devinit setup_debugfs(struct adapter *adap)
+{
+	int i;
+
+	if (!adap->debugfs_root)
+		return -1;
+
+	i = t4_read_reg(adap, A_MA_TARGET_MEM_ENABLE);
+	if (i & F_EDRAM0_ENABLE)
+		add_debugfs_mem(adap, "edc0", MEM_EDC0, 5);
+	if (i & F_EDRAM1_ENABLE)
+		add_debugfs_mem(adap, "edc1", MEM_EDC1, 5);
+	if (i & F_EXT_MEM_ENABLE)
+		add_debugfs_mem(adap, "mc", MEM_MC,
+			G_EXT_MEM_SIZE(t4_read_reg(adap, A_MA_EXT_MEMORY_BAR)));
+	return 0;
+}
+
+/*
+ * /proc support
+ */
+
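+/*
+ * Define the open method and file_operations for a read-only proc file whose
+ * contents come from a single seq_file show function.
+ */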
+#define DEFINE_SIMPLE_PROC_FILE(name) \
+static int name##_open(struct inode *inode, struct file *file) \
+{ \
+	return single_open(file, name##_show, PDE(inode)->data); \
+} \
+static const struct file_operations name##_proc_fops = { \
+	.owner   = THIS_MODULE, \
+	.open    = name##_open, \
+	.read    = seq_read, \
+	.llseek  = seq_lseek, \
+	.release = single_release \
+}
+
+static int sge_stats_show(struct seq_file *seq, void *v)
+{
+	struct adapter *adap = seq->private;
+	int eth_entries = DIV_ROUND_UP(adap->sge.ethqsets, 4);
+	int ofld_entries = DIV_ROUND_UP(adap->sge.ofldqsets, 4);
+	int rdma_entries = DIV_ROUND_UP(adap->sge.rdmaqs, 4);
+	int ctrl_entries = DIV_ROUND_UP(MAX_CTRL_QUEUES, 4);
+	int i, r = (uintptr_t)v - 1;
+
+	if (r)
+		seq_putc(seq, '\n');
+
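+	/*
+	 * Each macro below emits one row of the table: a 12-character label
+	 * followed by a 16-character column per queue in this group of up to
+	 * 4.  S prints an arbitrary value, T a Tx queue field, R an Rx queue
+	 * field.
+	 */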
+#define S3(fmt_spec, s, v) \
+	seq_printf(seq, "%-12s", s); \
+	for (i = 0; i < n; ++i) \
+		seq_printf(seq, " %16" fmt_spec, v); \
+		seq_putc(seq, '\n');
+#define S(s, v) S3("s", s, v)
+#define T3(fmt_spec, s, v) S3(fmt_spec, s, tx[i].v)
+#define T(s, v) T3("lu", s, v)
+#define R3(fmt_spec, s, v) S3(fmt_spec, s, qs[i].v)
+#define R(s, v) R3("lu", s, v)
+
+	if (r < eth_entries) {
+		const struct sge_eth_rxq *qs = &adap->sge.ethrxq[r * 4];
+		const struct sge_eth_txq *tx = &adap->sge.ethtxq[r * 4];
+		int n = min(4, adap->sge.ethqsets - 4 * r);
+
+		S("QType:", "Ethernet");
+		S("Interface:",
+		  qs[i].rspq.netdev ? qs[i].rspq.netdev->name : "N/A");
+		R("RxPackets:", stats.pkts);
+		R("RxCSO:", stats.rx_cso);
+		R("VLANxtract:", stats.vlan_ex);
+		R("LROmerged:", stats.lro_merged);
+		R("LROpackets:", stats.lro_pkts);
+		R("RxDrops:", stats.rx_drops);
+		T("TSO:", tso);
+		T("TxCSO:", tx_cso);
+		T("VLANins:", vlan_ins);
+		T("TxQFull:", q.stops);
+		T("TxQRestarts:", q.restarts);
+		T("TxMapErr:", mapping_err);
+		R("FLAllocErr:", fl.alloc_failed);
+		R("FLLrgAlcErr:", fl.large_alloc_failed);
+		R("FLstarving:", fl.starving);
+	} else if ((r -= eth_entries) < ofld_entries) {
+		const struct sge_ofld_rxq *qs = &adap->sge.ofldrxq[r * 4];
+		const struct sge_ofld_txq *tx = &adap->sge.ofldtxq[r * 4];
+		int n = min(4, adap->sge.ofldqsets - 4 * r);
+
+		S("QType:", "OFLD");
+		R("RxPackets:", stats.pkts);
+		R("RxImmPkts:", stats.imm);
+		R("RxNoMem:", stats.nomem);
+		T("TxQFull:", q.stops);
+		T("TxQRestarts:", q.restarts);
+		T("TxMapErr:", mapping_err);
+		R("FLAllocErr:", fl.alloc_failed);
+		R("FLLrgAlcErr:", fl.large_alloc_failed);
+		R("FLstarving:", fl.starving);
+	} else if ((r -= ofld_entries) < rdma_entries) {
+		const struct sge_ofld_rxq *qs = &adap->sge.rdmarxq[r * 4];
+		int n = min(4, adap->sge.rdmaqs - 4 * r);
+
+		S("QType:", "RDMA");
+		R("RxPackets:", stats.pkts);
+		R("RxImmPkts:", stats.imm);
+		R("RxAN:", stats.an);
+		R("RxNoMem:", stats.nomem);
+		R("FLAllocErr:", fl.alloc_failed);
+		R("FLstarving:", fl.starving);
+	} else if ((r -= rdma_entries) < ctrl_entries) {
+		const struct sge_ctrl_txq *tx = &adap->sge.ctrlq[r * 4];
+		int n = min(4, MAX_CTRL_QUEUES - 4 * r);
+
+		S("QType:", "Control");
+		T("TxQFull:", q.stops);
+		T("TxQRestarts:", q.restarts);
+	} else if ((r -= ctrl_entries) == 0) {
+		seq_printf(seq, "%-12s %16s\n", "QType:", "FW event queue");
+	}
+#undef R
+#undef T
+#undef S
+#undef R3
+#undef T3
+#undef S3
+	return 0;
+}
+
+static int sge_queue_entries(const struct adapter *adap)
+{
+	return DIV_ROUND_UP(adap->sge.ethqsets, 4) +
+	       DIV_ROUND_UP(adap->sge.ofldqsets, 4) +
+	       DIV_ROUND_UP(adap->sge.rdmaqs, 4) +
+	       DIV_ROUND_UP(MAX_CTRL_QUEUES, 4) + 1;
+}
+
+static void *sge_queue_start(struct seq_file *seq, loff_t *pos)
+{
+	int entries = sge_queue_entries(seq->private);
+
+	return *pos < entries ? (void *)((uintptr_t)*pos + 1) : NULL;
+}
+
+static void *sge_queue_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	int entries = sge_queue_entries(seq->private);
+
+	++*pos;
+	return *pos < entries ? (void *)((uintptr_t)*pos + 1) : NULL;
+}
+
+static struct seq_operations sge_stats_seq_ops = {
+	.start = sge_queue_start,
+	.next  = sge_queue_next,
+	.stop  = seq_noop_stop,
+	.show  = sge_stats_show
+};
+
+static int sge_stats_open(struct inode *inode, struct file *file)
+{
+	int res = seq_open(file, &sge_stats_seq_ops);
+
+	if (!res) {
+		struct seq_file *seq = file->private_data;
+		seq->private = PDE(inode)->data;
+	}
+	return res;
+}
+
+static const struct file_operations sge_stats_proc_fops = {
+	.owner   = THIS_MODULE,
+	.open    = sge_stats_open,
+	.read    = seq_read,
+	.llseek  = seq_lseek,
+	.release = seq_release,
+};
+
+static int lb_stats_show(struct seq_file *seq, void *v)
+{
+	static const char *stat_name[] = {
+		"OctetsOK:", "FramesOK:", "BcastFrames:", "McastFrames:",
+		"UcastFrames:", "ErrorFrames:", "Frames64:", "Frames65To127:",
+		"Frames128To255:", "Frames256To511:", "Frames512To1023:",
+		"Frames1024To1518:", "Frames1519ToMax:", "FramesDropped:",
+		"BG0FramesDropped:", "BG1FramesDropped:", "BG2FramesDropped:",
+		"BG3FramesDropped:", "BG0FramesTrunc:", "BG1FramesTrunc:",
+		"BG2FramesTrunc:", "BG3FramesTrunc:"
+	};
+
+	int i, j;
+	u64 *p0, *p1;
+	struct lb_port_stats s[2];
+
+	memset(s, 0, sizeof(s));
+
+	for (i = 0; i < 4; i += 2) {
+		t4_get_lb_stats(seq->private, i, &s[0]);
+		t4_get_lb_stats(seq->private, i + 1, &s[1]);
+
+		p0 = &s[0].octets;
+		p1 = &s[1].octets;
+		seq_printf(seq, "%s                       Loopback %u          "
+			   " Loopback %u\n", i == 0 ? "" : "\n", i, i + 1);
+
+		for (j = 0; j < ARRAY_SIZE(stat_name); j++)
+			seq_printf(seq, "%-17s %20llu %20llu\n", stat_name[j],
+				   (unsigned long long)*p0++,
+				   (unsigned long long)*p1++);
+	}
+	return 0;
+}
+
+DEFINE_SIMPLE_PROC_FILE(lb_stats);
+
+static int tcp_stats_show(struct seq_file *seq, void *v)
+{
+	struct tp_tcp_stats v4, v6;
+	struct adapter *adap = seq->private;
+
+	spin_lock(&adap->stats_lock);
+	t4_tp_get_tcp_stats(adap, &v4, &v6);
+	spin_unlock(&adap->stats_lock);
+
+	seq_puts(seq,
+		 "                                IP                 IPv6\n");
+	seq_printf(seq, "OutRsts:      %20u %20u\n",
+		   v4.tcpOutRsts, v6.tcpOutRsts);
+	seq_printf(seq, "InSegs:       %20llu %20llu\n",
+		   (unsigned long long)v4.tcpInSegs,
+		   (unsigned long long)v6.tcpInSegs);
+	seq_printf(seq, "OutSegs:      %20llu %20llu\n",
+		   (unsigned long long)v4.tcpOutSegs,
+		   (unsigned long long)v6.tcpOutSegs);
+	seq_printf(seq, "RetransSegs:  %20llu %20llu\n",
+		   (unsigned long long)v4.tcpRetransSegs,
+		   (unsigned long long)v6.tcpRetransSegs);
+	return 0;
+}
+
+DEFINE_SIMPLE_PROC_FILE(tcp_stats);
+
+static int tp_err_stats_show(struct seq_file *seq, void *v)
+{
+	struct tp_err_stats stats;
+	struct adapter *adap = seq->private;
+
+	spin_lock(&adap->stats_lock);
+	t4_tp_get_err_stats(adap, &stats);
+	spin_unlock(&adap->stats_lock);
+
+	seq_puts(seq, "                 channel 0  channel 1  channel 2  "
+		      "channel 3\n");
+	seq_printf(seq, "macInErrs:      %10u %10u %10u %10u\n",
+		   stats.macInErrs[0], stats.macInErrs[1], stats.macInErrs[2],
+		   stats.macInErrs[3]);
+	seq_printf(seq, "hdrInErrs:      %10u %10u %10u %10u\n",
+		   stats.hdrInErrs[0], stats.hdrInErrs[1], stats.hdrInErrs[2],
+		   stats.hdrInErrs[3]);
+	seq_printf(seq, "tcpInErrs:      %10u %10u %10u %10u\n",
+		   stats.tcpInErrs[0], stats.tcpInErrs[1], stats.tcpInErrs[2],
+		   stats.tcpInErrs[3]);
+	seq_printf(seq, "tcp6InErrs:     %10u %10u %10u %10u\n",
+		   stats.tcp6InErrs[0], stats.tcp6InErrs[1],
+		   stats.tcp6InErrs[2], stats.tcp6InErrs[3]);
+	seq_printf(seq, "tnlCongDrops:   %10u %10u %10u %10u\n",
+		   stats.tnlCongDrops[0], stats.tnlCongDrops[1],
+		   stats.tnlCongDrops[2], stats.tnlCongDrops[3]);
+	seq_printf(seq, "tnlTxDrops:     %10u %10u %10u %10u\n",
+		   stats.tnlTxDrops[0], stats.tnlTxDrops[1],
+		   stats.tnlTxDrops[2], stats.tnlTxDrops[3]);
+	seq_printf(seq, "ofldVlanDrops:  %10u %10u %10u %10u\n",
+		   stats.ofldVlanDrops[0], stats.ofldVlanDrops[1],
+		   stats.ofldVlanDrops[2], stats.ofldVlanDrops[3]);
+	seq_printf(seq, "ofldChanDrops:  %10u %10u %10u %10u\n\n",
+		   stats.ofldChanDrops[0], stats.ofldChanDrops[1],
+		   stats.ofldChanDrops[2], stats.ofldChanDrops[3]);
+	seq_printf(seq, "ofldNoNeigh:    %u\nofldCongDefer:  %u\n",
+		   stats.ofldNoNeigh, stats.ofldCongDefer);
+	return 0;
+}
+
+DEFINE_SIMPLE_PROC_FILE(tp_err_stats);
+
+static int rss_show(struct seq_file *seq, void *v, int idx)
+{
+	u16 *entry = v;
+
+	seq_printf(seq, "%4d:  %4u  %4u  %4u  %4u  %4u  %4u  %4u  %4u\n",
+		   idx * 8, entry[0], entry[1], entry[2], entry[3], entry[4],
+		   entry[5], entry[6], entry[7]);
+	return 0;
+}
+
+static int rss_open(struct inode *inode, struct file *file)
+{
+	int ret;
+	struct seq_tab *p;
+
+	p = seq_open_tab(file, RSS_NENTRIES / 8, 8 * sizeof(u16), 0, rss_show);
+	if (!p)
+		return -ENOMEM;
+	ret = t4_read_rss(PDE(inode)->data, (u16 *)p->data);
+	if (ret)
+		seq_release_private(inode, file);
+	return ret;
+}
+
+static const struct file_operations rss_proc_fops = {
+	.owner   = THIS_MODULE,
+	.open    = rss_open,
+	.read    = seq_read,
+	.llseek  = seq_lseek,
+	.release = seq_release_private
+};
+
+static int mtutab_show(struct seq_file *seq, void *v)
+{
+	u16 mtus[NMTUS];
+	struct adapter *adap = seq->private;
+
+	spin_lock(&adap->stats_lock);
+	t4_read_mtu_tbl(adap, mtus, NULL);
+	spin_unlock(&adap->stats_lock);
+
+	seq_printf(seq, "%u %u %u %u %u %u %u %u %u %u %u %u %u %u %u %u\n",
+		   mtus[0], mtus[1], mtus[2], mtus[3], mtus[4], mtus[5],
+		   mtus[6], mtus[7], mtus[8], mtus[9], mtus[10], mtus[11],
+		   mtus[12], mtus[13], mtus[14], mtus[15]);
+	return 0;
+}
+
+static int mtutab_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, mtutab_show, PDE(inode)->data);
+}
+
+static ssize_t mtutab_write(struct file *file, const char __user *buf,
+			    size_t count, loff_t *pos)
+{
+	int i;
+	unsigned long mtus[NMTUS];
+	struct adapter *adap = PDE(file->f_path.dentry->d_inode)->data;
+
+	if (adap->flags & FULL_INIT_DONE)
+		return -EBUSY;
+
+	/* Require min MTU of 81 to accommodate SACK */
+	i = rd_usr_int_vec(buf, count, NMTUS, mtus, 81, MAX_MTU, 10);
+	if (i)
+		return i;
+
+	/* MTUs must be in ascending order */
+	for (i = 1; i < NMTUS; ++i)
+		if (mtus[i] < mtus[i - 1])
+			return -EINVAL;
+
+	for (i = 0; i < NMTUS; ++i)
+		adap->params.mtus[i] = mtus[i];
+	t4_load_mtus(adap, adap->params.mtus, adap->params.a_wnd,
+		     adap->params.b_wnd);
+	return count;
+}
+
+static const struct file_operations mtutab_proc_fops = {
+	.owner   = THIS_MODULE,
+	.open    = mtutab_open,
+	.read    = seq_read,
+	.llseek  = seq_lseek,
+	.release = single_release,
+	.write   = mtutab_write
+};
+
+static int mps_trc_show(struct seq_file *seq, void *v)
+{
+	int enabled, i;
+	struct trace_params tp;
+	struct adapter *adap = seq->private;
+	unsigned int trcidx = (uintptr_t)adap & 3;
+
+	adap = (void *)adap - trcidx;
+	t4_get_trace_filter(adap, &tp, trcidx, &enabled);
+	if (!enabled) {
+		seq_puts(seq, "tracer is disabled\n");
+		return 0;
+	}
+
+	if (tp.skip_ofst * 8 >= TRACE_LEN) {
+		dev_err(adap->pdev_dev, "illegal trace pattern skip offset\n");
+		return -EINVAL;
+	}
+	if (tp.port < 8) {
+		i = adap->chan_map[tp.port & 3];
+		if (i >= MAX_NPORTS) {
+			dev_err(adap->pdev_dev, "tracer %u is assigned "
+				"to a nonexistent port\n", trcidx);
+			return -EINVAL;
+		}
+		seq_printf(seq, "tracer is capturing %s %s, ",
+			   adap->port[i]->name, tp.port < 4 ? "Rx" : "Tx");
+	} else
+		seq_printf(seq, "tracer is capturing loopback %d, ",
+			   tp.port - 8);
+	seq_printf(seq, "snap length: %u, min length: %u\n", tp.snap_len,
+		   tp.min_len);
+	seq_printf(seq, "packets captured %smatch filter\n",
+		   tp.invert ? "do not " : "");
+
+	if (tp.skip_ofst) {
+		seq_puts(seq, "filter pattern: ");
+		for (i = 0; i < tp.skip_ofst * 2; i += 2)
+			seq_printf(seq, "%08x%08x", tp.data[i], tp.data[i + 1]);
+		seq_putc(seq, '/');
+		for (i = 0; i < tp.skip_ofst * 2; i += 2)
+			seq_printf(seq, "%08x%08x", tp.mask[i], tp.mask[i + 1]);
+		seq_puts(seq, "@0\n");
+	}
+
+	seq_puts(seq, "filter pattern: ");
+	for (i = tp.skip_ofst * 2; i < TRACE_LEN / 4; i += 2)
+		seq_printf(seq, "%08x%08x", tp.data[i], tp.data[i + 1]);
+	seq_putc(seq, '/');
+	for (i = tp.skip_ofst * 2; i < TRACE_LEN / 4; i += 2)
+		seq_printf(seq, "%08x%08x", tp.mask[i], tp.mask[i + 1]);
+	seq_printf(seq, "@%u\n", (tp.skip_ofst + tp.skip_len) * 8);
+	return 0;
+}
+
+static int mps_trc_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, mps_trc_show, PDE(inode)->data);
+}
+
+static unsigned int xdigit2int(unsigned char c)
+{
+	return isdigit(c) ? c - '0' : tolower(c) - 'a' + 10;
+}
+
+#define TRC_PORT_NONE 0xff
+
+/*
+ * Set an MPS trace filter.  Syntax is:
+ *
+ * disable
+ *
+ * to disable tracing, or
+ *
+ * interface [snaplen=<val>] [minlen=<val>] [not] [<pattern>]...
+ *
+ * where interface is one of rxN, txN, or loopbackN, N = 0..3, and pattern
+ * has the form
+ *
+ * <pattern data>[/<pattern mask>][@<anchor>]
+ *
+ * Up to 2 filter patterns can be specified.  If 2 are supplied the first one
+ * must be anchored at 0.  An omitted mask is taken as a mask of 1s, an omitted
+ * anchor is taken as 0.
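+ *
+ * For example, writing "rx0 snaplen=96 0800/ffff" captures Rx traffic on
+ * channel 0, snaps the first 96 bytes of each packet, and matches packets
+ * whose first two bytes are 0x08 0x00.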
+ */
+static ssize_t mps_trc_write(struct file *file, const char __user *buf,
+			     size_t count, loff_t *pos)
+{
+	int i, j, enable;
+	u32 *data, *mask;
+	struct trace_params tp;
+	char *s, *p, *word, *end;
+	struct adapter *adap = PDE(file->f_path.dentry->d_inode)->data;
+	unsigned int trcidx = (uintptr_t)adap & 3;
+
+	adap = (void *)adap - trcidx;
+
+	/*
+	 * Don't accept more than 1K of input; nothing valid needs that much
+	 * unless it is mostly whitespace, so cap it there.
+	 */
+	if (count > 1024)
+		return -EFBIG;
+	p = s = kzalloc(count + 1, GFP_USER);
+	if (!s)
+		return -ENOMEM;
+	if (copy_from_user(s, buf, count)) {
+		count = -EFAULT;
+		goto out;
+	}
+
+	if (s[count - 1] == '\n')
+		s[count - 1] = '\0';
+
+	enable = strcmp("disable", s) != 0;
+	if (!enable)
+		goto apply;
+
+	memset(&tp, 0, sizeof(tp));
+	tp.port = TRC_PORT_NONE;
+	i = 0;                                      /* counts pattern nibbles */
+
+	while (p) {
+		while (isspace(*p))
+			p++;
+		word = strsep(&p, " ");
+		if (!*word)
+			break;
+
+		if (!strncmp(word, "snaplen=", 8)) {
+			j = simple_strtoul(word + 8, &end, 10);
+			if (*end || j > 9600) {
+inval:				count = -EINVAL;
+				goto out;
+			}
+			tp.snap_len = j;
+			continue;
+		}
+		if (!strncmp(word, "minlen=", 7)) {
+			j = simple_strtoul(word + 7, &end, 10);
+			if (*end || j > M_TFMINPKTSIZE)
+				goto inval;
+			tp.min_len = j;
+			continue;
+		}
+		if (!strcmp(word, "not")) {
+			tp.invert = !tp.invert;
+			continue;
+		}
+		if (!strncmp(word, "loopback", 8) && tp.port == TRC_PORT_NONE) {
+			if (word[8] < '0' || word[8] > '3' || word[9])
+				goto inval;
+			tp.port = word[8] - '0' + 8;
+			continue;
+		}
+		if (!strncmp(word, "tx", 2) && tp.port == TRC_PORT_NONE) {
+			if (word[2] < '0' || word[2] > '3' || word[3])
+				goto inval;
+			tp.port = word[2] - '0' + 4;
+			if (adap->chan_map[tp.port & 3] >= MAX_NPORTS)
+				goto inval;
+			continue;
+		}
+		if (!strncmp(word, "rx", 2) && tp.port == TRC_PORT_NONE) {
+			if (word[2] < '0' || word[2] > '3' || word[3])
+				goto inval;
+			tp.port = word[2] - '0';
+			if (adap->chan_map[tp.port] >= MAX_NPORTS)
+				goto inval;
+			continue;
+		}
+		if (!isxdigit(*word))
+			goto inval;
+
+		/* we have found a trace pattern */
+		if (i) {                            /* split pattern */
+			if (tp.skip_len)            /* too many splits */
+				goto inval;
+			tp.skip_ofst = i / 16;
+		}
+
+		data = &tp.data[i / 8];
+		mask = &tp.mask[i / 8];
+		j = i;
+
+		while (isxdigit(*word)) {
+			if (i >= TRACE_LEN * 2) {
+				count = -EFBIG;
+				goto out;
+			}
+			*data = (*data << 4) + xdigit2int(*word++);
+			if (++i % 8 == 0)
+				data++;
+		}
+		if (*word == '/') {
+			word++;
+			while (isxdigit(*word)) {
+				if (j >= i)         /* mask longer than data */
+					goto inval;
+				*mask = (*mask << 4) + xdigit2int(*word++);
+				if (++j % 8 == 0)
+					mask++;
+			}
+			if (i != j)                 /* mask shorter than data */
+				goto inval;
+		} else {                            /* no mask, use all 1s */
+			for ( ; i - j >= 8; j += 8)
+				*mask++ = 0xffffffff;
+			if (i % 8)
+				*mask = (1 << (i % 8) * 4) - 1;
+		}
+		if (*word == '@') {
+			j = simple_strtoul(word + 1, &end, 10);
+			if (*end && *end != '\n')
+				goto inval;
+			if (j & 7)          /* doesn't start at multiple of 8 */
+				goto inval;
+			j /= 8;
+			if (j < tp.skip_ofst)     /* overlaps earlier pattern */
+				goto inval;
+			if (j - tp.skip_ofst > 31)            /* skip too big */
+				goto inval;
+			tp.skip_len = j - tp.skip_ofst;
+		}
+		if (i % 8) {
+			*data <<= (8 - i % 8) * 4;
+			*mask <<= (8 - i % 8) * 4;
+			i = (i + 15) & ~15;         /* 8-byte align */
+		}
+	}
+
+	if (tp.port == TRC_PORT_NONE)
+		goto inval;
+apply:
+	i = t4_set_trace_filter(adap, &tp, trcidx, enable);
+	if (i)
+		count = i;
+out:
+	kfree(s);
+	return count;
+}
+
+static const struct file_operations mps_trc_proc_fops = {
+	.owner   = THIS_MODULE,
+	.open    = mps_trc_open,
+	.read    = seq_read,
+	.llseek  = seq_lseek,
+	.release = single_release,
+	.write   = mps_trc_write
+};
+
+static int tid_info_show(struct seq_file *seq, void *v)
+{
+	struct adapter *adap = seq->private;
+	const struct tid_info *t = &adap->tids;
+
+	if (t4_read_reg(adap, A_LE_DB_CONFIG) & F_HASHEN) {
+		unsigned int sb = t4_read_reg(adap, A_LE_DB_SERVER_INDEX) / 4;
+
+		if (sb)
+			seq_printf(seq, "TID range: 0..%u/%u..%u", sb - 1,
+				   t4_read_reg(adap, A_LE_DB_TID_HASHBASE) / 4,
+				   t->ntids - 1);
+		else
+			seq_printf(seq, "TID range: %u..%u",
+				   t4_read_reg(adap, A_LE_DB_TID_HASHBASE) / 4,
+				   t->ntids - 1);
+	} else
+		seq_printf(seq, "TID range: 0..%u", t->ntids - 1);
+	seq_printf(seq, ", in use: %u\n", atomic_read(&t->tids_in_use));
+	seq_printf(seq, "STID range: %u..%u, in use: %u\n", t->stid_base,
+		   t->stid_base + t->nstids - 1, t->stids_in_use);
+	seq_printf(seq, "ATID range: 0..%u, in use: %u\n", t->natids - 1,
+		   t->atids_in_use);
+	seq_printf(seq, "FTID range: %u..%u\n", t->ftid_base,
+		   t->ftid_base + t->nftids - 1);
+	seq_printf(seq, "HW TID usage: %u IP users, %u IPv6 users\n",
+		   t4_read_reg(adap, A_LE_DB_ACT_CNT_IPV4),
+		   t4_read_reg(adap, A_LE_DB_ACT_CNT_IPV6));
+	return 0;
+}
+
+DEFINE_SIMPLE_PROC_FILE(tid_info);
+
+static int uld_show(struct seq_file *seq, void *v)
+{
+	int i;
+	struct adapter *adap = seq->private;
+
+	for (i = 0; i < CXGB4_ULD_MAX; i++)
+		if (adap->uld_handle[i])
+			seq_printf(seq, "%s: %s\n", uld_str[i], ulds[i].name);
+	return 0;
+}
+
+DEFINE_SIMPLE_PROC_FILE(uld);
+
+enum {
+	ADAP_NEED_L2T  = 1 << 0,
+	ADAP_NEED_OFLD = 1 << 1,
+};
+
+struct cxgb4_proc_entry {
+	const char *name;
+	mode_t mode;
+	unsigned int req;      /* adapter requirements to create this file */
+	unsigned char data;    /* data passed in low bits of adapter pointer */
+	const struct file_operations *fops;
+};
+
+static struct cxgb4_proc_entry proc_files[] = {
+	{ "l2t", 0444, ADAP_NEED_L2T, 0, &t4_l2t_proc_fops },
+	{ "lb_stats", 0444, 0, 0, &lb_stats_proc_fops },
+	{ "path_mtus", 0644, 0, 0, &mtutab_proc_fops },
+	{ "qstats", 0444, 0, 0, &sge_stats_proc_fops },
+	{ "rss", 0444, 0, 0, &rss_proc_fops },
+	{ "tcp_stats", 0444, 0, 0, &tcp_stats_proc_fops },
+	{ "tids", 0444, ADAP_NEED_OFLD, 0, &tid_info_proc_fops },
+	{ "tp_err_stats", 0444, 0, 0, &tp_err_stats_proc_fops },
+	{ "trace0", 0644, 0, 0, &mps_trc_proc_fops },
+	{ "trace1", 0644, 0, 1, &mps_trc_proc_fops },
+	{ "trace2", 0644, 0, 2, &mps_trc_proc_fops },
+	{ "trace3", 0644, 0, 3, &mps_trc_proc_fops },
+	{ "uld", 0444, 0, 0, &uld_proc_fops },
+};
+
+static int __devinit setup_proc(struct adapter *adap,
+				struct proc_dir_entry *dir)
+{
+	int i, created;
+	struct proc_dir_entry *p;
+	int ofld = is_offload(adap);
+
+	if (!dir)
+		return -EINVAL;
+
+	/* Create whichever of the entries this adapter supports. */
+	for (created = i = 0; i < ARRAY_SIZE(proc_files); ++i) {
+		unsigned int req = proc_files[i].req;
+
+		if ((req & ADAP_NEED_OFLD) && !ofld)
+			continue;
+		if ((req & ADAP_NEED_L2T) && !adap->l2t)
+			continue;
+
+		p = proc_create_data(proc_files[i].name, proc_files[i].mode,
+				     dir, proc_files[i].fops,
+				     (void *)adap + proc_files[i].data);
+		if (p)
+			created++;
+	}
+
+	return created;
+}
+
+static void cleanup_proc(struct proc_dir_entry *dir)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(proc_files); ++i)
+		remove_proc_entry(proc_files[i].name, dir);
+}
+
+/*
+ * upper-layer driver support
+ */
+
+/*
+ * Allocate an active-open TID and set it to the supplied value.
+ */
+int cxgb4_alloc_atid(struct tid_info *t, void *data)
+{
+	int atid = -1;
+
+	spin_lock_bh(&t->atid_lock);
+	if (t->afree) {
+		union aopen_entry *p = t->afree;
+
+		atid = p - t->atid_tab;
+		t->afree = p->next;
+		p->data = data;
+		t->atids_in_use++;
+	}
+	spin_unlock_bh(&t->atid_lock);
+	return atid;
+}
+EXPORT_SYMBOL(cxgb4_alloc_atid);
+
+/*
+ * Release an active-open TID.
+ */
+void cxgb4_free_atid(struct tid_info *t, unsigned int atid)
+{
+	union aopen_entry *p = &t->atid_tab[atid];
+
+	spin_lock_bh(&t->atid_lock);
+	p->next = t->afree;
+	t->afree = p;
+	t->atids_in_use--;
+	spin_unlock_bh(&t->atid_lock);
+}
+EXPORT_SYMBOL(cxgb4_free_atid);
+
+/*
+ * Allocate a server TID and set it to the supplied value.
+ */
+int cxgb4_alloc_stid(struct tid_info *t, int family, void *data)
+{
+	int stid;
+
+	spin_lock_bh(&t->stid_lock);
+	if (family == PF_INET) {
+		stid = find_first_zero_bit(t->stid_bmap, t->nstids);
+		if (stid < t->nstids)
+			__set_bit(stid, t->stid_bmap);
+		else
+			stid = -1;
+	} else {
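+		/* an IPv6 server needs a block of 4 consecutive STIDs */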
+		stid = bitmap_find_free_region(t->stid_bmap, t->nstids, 2);
+		if (stid < 0)
+			stid = -1;
+	}
+	if (stid >= 0) {
+		t->stid_tab[stid].data = data;
+		stid += t->stid_base;
+		t->stids_in_use++;
+	}
+	spin_unlock_bh(&t->stid_lock);
+	return stid;
+}
+EXPORT_SYMBOL(cxgb4_alloc_stid);
+
+/*
+ * Release a server TID.
+ */
+void cxgb4_free_stid(struct tid_info *t, unsigned int stid, int family)
+{
+	stid -= t->stid_base;
+	spin_lock_bh(&t->stid_lock);
+	if (family == PF_INET)
+		__clear_bit(stid, t->stid_bmap);
+	else
+		bitmap_release_region(t->stid_bmap, stid, 2);
+	t->stid_tab[stid].data = NULL;
+	t->stids_in_use--;
+	spin_unlock_bh(&t->stid_lock);
+}
+EXPORT_SYMBOL(cxgb4_free_stid);
+
+/*
+ * Populate a TID_RELEASE WR.  Caller must properly size the skb.
+ */
+static void mk_tid_release(struct sk_buff *skb, unsigned int chan,
+			   unsigned int tid)
+{
+	struct cpl_tid_release *req;
+
+	set_wr_txq(skb, CPL_PRIORITY_SETUP, chan);
+	req = (struct cpl_tid_release *)__skb_put(skb, sizeof(*req));
+	INIT_TP_WR(req, tid);
+	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_TID_RELEASE, tid));
+}
+
+/*
+ * Queue a TID release request and if necessary schedule a work queue to
+ * process it.
+ */
+void cxgb4_queue_tid_release(struct tid_info *t, unsigned int chan,
+			     unsigned int tid)
+{
+	void **p = &t->tid_tab[tid];
+	struct adapter *adap = container_of(t, struct adapter, tids);
+
+	spin_lock_bh(&adap->tid_release_lock);
+	*p = adap->tid_release_head;
+	/* Low 2 bits encode the Tx channel number */
+	adap->tid_release_head = (void **)((uintptr_t)p | chan);
+	if (!*p)
+		schedule_work(&adap->tid_release_task);
+	spin_unlock_bh(&adap->tid_release_lock);
+}
+EXPORT_SYMBOL(cxgb4_queue_tid_release);
+
+/*
+ * Process the list of pending TID release requests.
+ */
+static void process_tid_release_list(struct work_struct *work)
+{
+	struct sk_buff *skb;
+	struct adapter *adap;
+
+	adap = container_of(work, struct adapter, tid_release_task);
+
+	spin_lock_bh(&adap->tid_release_lock);
+	while (adap->tid_release_head) {
+		void **p = adap->tid_release_head;
+		unsigned int chan = (uintptr_t)p & 3;
+		p = (void *)p - chan;
+
+		adap->tid_release_head = *p;
+		*p = NULL;
+		spin_unlock_bh(&adap->tid_release_lock);
+
+		while (!(skb = alloc_skb(sizeof(struct cpl_tid_release),
+					 GFP_KERNEL)))
+			yield();
+
+		mk_tid_release(skb, chan, p - adap->tids.tid_tab);
+		t4_ofld_send(adap, skb);
+		spin_lock_bh(&adap->tid_release_lock);
+	}
+	spin_unlock_bh(&adap->tid_release_lock);
+}
+
+/*
+ * Release a TID and inform HW.  If we are unable to allocate the release
+ * message we defer to a work queue.
+ */
+void cxgb4_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid)
+{
+	struct sk_buff *skb;
+	struct adapter *adap = container_of(t, struct adapter, tids);
+
+	WARN_ON(tid >= t->ntids);
+
+	skb = alloc_skb(sizeof(struct cpl_tid_release), GFP_ATOMIC);
+	if (likely(skb)) {
+		t->tid_tab[tid] = NULL;
+		mk_tid_release(skb, chan, tid);
+		t4_ofld_send(adap, skb);
+	} else
+		cxgb4_queue_tid_release(t, chan, tid);
+	atomic_dec(&t->tids_in_use);
+}
+EXPORT_SYMBOL(cxgb4_remove_tid);
+
+/*
+ * Allocate and initialize the TID tables.  Returns 0 on success.
+ */
+static int tid_init(struct tid_info *t)
+{
+	size_t size;
+	unsigned int natids = t->natids;
+
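+	/*
+	 * The TID, ATID, and STID tables and the STID bitmap are carved out
+	 * of a single allocation.
+	 */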
+	size = t->ntids * sizeof(*t->tid_tab) + natids * sizeof(*t->atid_tab) +
+	       t->nstids * sizeof(*t->stid_tab) +
+	       BITS_TO_LONGS(t->nstids) * sizeof(long);
+	t->tid_tab = t4_alloc_mem(size);
+	if (!t->tid_tab)
+		return -ENOMEM;
+
+	t->atid_tab = (union aopen_entry *)&t->tid_tab[t->ntids];
+	t->stid_tab = (struct serv_entry *)&t->atid_tab[natids];
+	t->stid_bmap = (unsigned long *)&t->stid_tab[t->nstids];
+	spin_lock_init(&t->stid_lock);
+	spin_lock_init(&t->atid_lock);
+
+	t->stids_in_use = 0;
+	t->afree = NULL;
+	t->atids_in_use = 0;
+	atomic_set(&t->tids_in_use, 0);
+
+	/* Set up the free list for atid_tab and clear the stid bitmap. */
+	if (natids) {
+		while (--natids)
+			t->atid_tab[natids - 1].next = &t->atid_tab[natids];
+		t->afree = t->atid_tab;
+	}
+	bitmap_zero(t->stid_bmap, t->nstids);
+	return 0;
+}
+
+/**
+ *	cxgb4_create_server - create an IP server
+ *	@dev: the device
+ *	@stid: the server TID
+ *	@sip: local IP address to bind server to
+ *	@sport: the server's TCP port
+ *	@queue: queue to direct messages from this server to
+ *
+ *	Create an IP server for the given port and address.
+ *	Returns <0 on error and one of the %NET_XMIT_* values on success.
+ */
+int cxgb4_create_server(const struct net_device *dev, unsigned int stid,
+			__be32 sip, __be16 sport, unsigned int queue)
+{
+	unsigned int chan;
+	struct sk_buff *skb;
+	struct adapter *adap;
+	struct cpl_pass_open_req *req;
+
+	skb = alloc_skb(sizeof(*req), GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	adap = netdev2adap(dev);
+	req = (struct cpl_pass_open_req *)__skb_put(skb, sizeof(*req));
+	INIT_TP_WR(req, 0);
+	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_PASS_OPEN_REQ, stid));
+	req->local_port = sport;
+	req->peer_port = htons(0);
+	req->local_ip = sip;
+	req->peer_ip = htonl(0);
+	chan = netdev2pinfo(adap->sge.ingr_map[queue]->netdev)->tx_chan;
+	req->opt0 = cpu_to_be64(V_TX_CHAN(chan));
+	req->opt1 = cpu_to_be64(V_CONN_POLICY(CPL_CONN_POLICY_ASK) |
+				F_SYN_RSS_ENABLE | V_SYN_RSS_QUEUE(queue));
+	return t4_mgmt_tx(adap, skb);
+}
+EXPORT_SYMBOL(cxgb4_create_server);
+
+/**
+ *	cxgb4_create_server6 - create an IPv6 server
+ *	@dev: the device
+ *	@stid: the server TID
+ *	@sip: local IPv6 address to bind server to
+ *	@sport: the server's TCP port
+ *	@queue: queue to direct messages from this server to
+ *
+ *	Create an IPv6 server for the given port and address.
+ *	Returns <0 on error and one of the %NET_XMIT_* values on success.
+ */
+int cxgb4_create_server6(const struct net_device *dev, unsigned int stid,
+			 const struct in6_addr *sip, __be16 sport,
+			 unsigned int queue)
+{
+	unsigned int chan;
+	struct sk_buff *skb;
+	struct adapter *adap;
+	struct cpl_pass_open_req6 *req;
+
+	skb = alloc_skb(sizeof(*req), GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	adap = netdev2adap(dev);
+	req = (struct cpl_pass_open_req6 *)__skb_put(skb, sizeof(*req));
+	INIT_TP_WR(req, 0);
+	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_PASS_OPEN_REQ6, stid));
+	req->local_port = sport;
+	req->peer_port = htons(0);
+	req->local_ip_hi = *(__be64 *)(sip->s6_addr);
+	req->local_ip_lo = *(__be64 *)(sip->s6_addr + 8);
+	req->peer_ip_hi = cpu_to_be64(0);
+	req->peer_ip_lo = cpu_to_be64(0);
+	chan = netdev2pinfo(adap->sge.ingr_map[queue]->netdev)->tx_chan;
+	req->opt0 = cpu_to_be64(V_TX_CHAN(chan));
+	req->opt1 = cpu_to_be64(V_CONN_POLICY(CPL_CONN_POLICY_ASK) |
+				F_SYN_RSS_ENABLE | V_SYN_RSS_QUEUE(queue));
+	return t4_mgmt_tx(adap, skb);
+}
+EXPORT_SYMBOL(cxgb4_create_server6);
+
+/**
+ *	cxgb4_best_mtu - find the entry in the MTU table closest to an MTU
+ *	@mtus: the HW MTU table
+ *	@mtu: the target MTU
+ *	@idx: index of selected entry in the MTU table
+ *
+ *	Returns the index and the value in the HW MTU table that is closest to
+ *	but does not exceed @mtu, unless @mtu is smaller than any value in the
+ *	table, in which case that smallest available value is selected.
+ */
+unsigned int cxgb4_best_mtu(const unsigned short *mtus, unsigned short mtu,
+			    unsigned int *idx)
+{
+	unsigned int i = 0;
+
+	while (i < NMTUS - 1 && mtus[i + 1] <= mtu)
+		++i;
+	if (idx)
+		*idx = i;
+	return mtus[i];
+}
+EXPORT_SYMBOL(cxgb4_best_mtu);
+
+/**
+ *	cxgb4_port_chan - get the HW channel of a port
+ *	@dev: the net device for the port
+ *
+ *	Return the HW Tx channel of the given port.
+ */
+unsigned int cxgb4_port_chan(const struct net_device *dev)
+{
+	return netdev2pinfo(dev)->tx_chan;
+}
+EXPORT_SYMBOL(cxgb4_port_chan);
+
+/**
+ *	cxgb4_netdev_by_hwid - return the net device of a HW port
+ *	@pdev: identifies the adapter
+ *	@id: the HW port id
+ *
+ *	Return the net device associated with the interface with the given HW
+ *	id.
+ */
+struct net_device *cxgb4_netdev_by_hwid(struct pci_dev *pdev, unsigned int id)
+{
+	const struct adapter *adap = pci_get_drvdata(pdev);
+
+	if (!adap || id >= NCHAN)
+		return NULL;
+	id = adap->chan_map[id];
+	return id < MAX_NPORTS ? adap->port[id] : NULL;
+}
+EXPORT_SYMBOL(cxgb4_netdev_by_hwid);
+
+void cxgb4_iscsi_init(struct net_device *dev, unsigned int tag_mask,
+		      const unsigned int *pgsz_order)
+{
+	struct adapter *adap = netdev2adap(dev);
+
+	t4_write_reg(adap, A_ULP_RX_ISCSI_TAGMASK, tag_mask);
+	t4_write_reg(adap, A_ULP_RX_ISCSI_PSZ, V_HPZ0(pgsz_order[0]) |
+		     V_HPZ1(pgsz_order[1]) | V_HPZ2(pgsz_order[2]) |
+		     V_HPZ3(pgsz_order[3]));
+}
+EXPORT_SYMBOL(cxgb4_iscsi_init);
+
+static struct pci_driver cxgb4_driver;
+
+static void check_neigh_update(struct neighbour *neigh)
+{
+	const struct device *parent;
+	const struct net_device *netdev = neigh->dev;
+
+	if (netdev->priv_flags & IFF_802_1Q_VLAN)
+		netdev = vlan_dev_real_dev(netdev);
+	parent = netdev->dev.parent;
+	if (parent && parent->driver == &cxgb4_driver.driver)
+		t4_l2t_update(dev_get_drvdata(parent), neigh);
+}
+
+static int netevent_cb(struct notifier_block *nb, unsigned long event,
+		       void *data)
+{
+	switch (event) {
+	case NETEVENT_NEIGH_UPDATE:
+		check_neigh_update(data);
+		break;
+	case NETEVENT_PMTU_UPDATE:
+	case NETEVENT_REDIRECT:
+	default:
+		break;
+	}
+	return 0;
+}
+
+static bool netevent_registered;
+static struct notifier_block cxgb4_netevent_nb = {
+	.notifier_call = netevent_cb
+};
+
+static void uld_attach(struct adapter *adap, unsigned int uld)
+{
+	void *handle;
+	struct cxgb4_lld_info lli;
+
+	lli.pdev = adap->pdev;
+	lli.l2t = adap->l2t;
+	lli.tids = &adap->tids;
+	lli.ports = adap->port;
+	lli.vr = &adap->vres;
+	lli.mtus = adap->params.mtus;
+	if (uld == CXGB4_ULD_RDMA) {
+		lli.rxq_ids = adap->sge.rdma_rxq;
+		lli.nrxq = adap->sge.rdmaqs;
+	} else if (uld == CXGB4_ULD_ISCSI) {
+		lli.rxq_ids = adap->sge.ofld_rxq;
+		lli.nrxq = adap->sge.ofldqsets;
+	}
+	lli.ntxq = adap->sge.ofldqsets;
+	lli.nchan = adap->params.nports;
+	lli.nports = adap->params.nports;
+	lli.wr_cred = adap->params.ofldq_wr_cred;
+	lli.adapter_type = CXGB4_ADAP_T4;
+	lli.udb_density = 1 << G_QUEUESPERPAGEPF0(
+			t4_read_reg(adap, A_SGE_EGRESS_QUEUES_PER_PAGE_PF));
+	lli.ucq_density = 1 << G_QUEUESPERPAGEPF0(
+			t4_read_reg(adap, A_SGE_INGRESS_QUEUES_PER_PAGE_PF));
+	lli.gts_reg = adap->regs + MYPF_REG(A_SGE_PF_GTS);
+	lli.db_reg = adap->regs + MYPF_REG(A_SGE_PF_KDOORBELL);
+	lli.fw_vers = adap->params.fw_vers;
+
+	handle = ulds[uld].add(&lli);
+	if (IS_ERR(handle)) {
+		CH_WARN(adap, "could not attach to the %s driver, error %ld\n",
+			uld_str[uld], PTR_ERR(handle));
+		return;
+	}
+
+	adap->uld_handle[uld] = handle;
+
+	if (!netevent_registered) {
+		register_netevent_notifier(&cxgb4_netevent_nb);
+		netevent_registered = true;
+	}
+}
+
+static void attach_ulds(struct adapter *adap)
+{
+	unsigned int i;
+
+	mutex_lock(&uld_mutex);
+	list_add_tail(&adap->list_node, &adapter_list);
+	for (i = 0; i < CXGB4_ULD_MAX; i++)
+		if (ulds[i].add)
+			uld_attach(adap, i);
+	mutex_unlock(&uld_mutex);
+}
+
+static void detach_ulds(struct adapter *adap)
+{
+	unsigned int i;
+
+	mutex_lock(&uld_mutex);
+	list_del(&adap->list_node);
+	for (i = 0; i < CXGB4_ULD_MAX; i++)
+		if (adap->uld_handle[i]) {
+			ulds[i].state_change(adap->uld_handle[i],
+					     CXGB4_STATE_DETACH);
+			adap->uld_handle[i] = NULL;
+		}
+	if (netevent_registered && list_empty(&adapter_list)) {
+		unregister_netevent_notifier(&cxgb4_netevent_nb);
+		netevent_registered = false;
+	}
+	mutex_unlock(&uld_mutex);
+}
+
+static void notify_ulds(struct adapter *adap, enum cxgb4_state new_state)
+{
+	unsigned int i;
+
+	mutex_lock(&uld_mutex);
+	for (i = 0; i < CXGB4_ULD_MAX; i++)
+		if (adap->uld_handle[i])
+			ulds[i].state_change(adap->uld_handle[i], new_state);
+	mutex_unlock(&uld_mutex);
+}
+
+/**
+ *	cxgb4_register_uld - register an upper-layer driver
+ *	@type: the ULD type
+ *	@p: the ULD methods
+ *
+ *	Registers an upper-layer driver with this driver and notifies the ULD
+ *	about any presently available devices that support its type.  Returns
+ *	%-EBUSY if a ULD of the same type is already registered.
+ */
+int cxgb4_register_uld(enum cxgb4_uld type, const struct cxgb4_uld_info *p)
+{
+	int ret = 0;
+	struct adapter *adap;
+
+	if (type >= CXGB4_ULD_MAX)
+		return -EINVAL;
+	mutex_lock(&uld_mutex);
+	if (ulds[type].add) {
+		ret = -EBUSY;
+		goto out;
+	}
+	ulds[type] = *p;
+	list_for_each_entry(adap, &adapter_list, list_node)
+		uld_attach(adap, type);
+out:	mutex_unlock(&uld_mutex);
+	return ret;
+}
+EXPORT_SYMBOL(cxgb4_register_uld);
+
+/**
+ *	cxgb4_unregister_uld - unregister an upper-layer driver
+ *	@type: the ULD type
+ *
+ *	Unregisters an existing upper-layer driver.
+ */
+int cxgb4_unregister_uld(enum cxgb4_uld type)
+{
+	struct adapter *adap;
+
+	if (type >= CXGB4_ULD_MAX)
+		return -EINVAL;
+	mutex_lock(&uld_mutex);
+	list_for_each_entry(adap, &adapter_list, list_node)
+		adap->uld_handle[type] = NULL;
+	ulds[type].add = NULL;
+	mutex_unlock(&uld_mutex);
+	return 0;
+}
+EXPORT_SYMBOL(cxgb4_unregister_uld);
+
+/**
+ *	cxgb_up - enable the adapter
+ *	@adap: adapter being enabled
+ *
+ *	Called when the first port is enabled, this function performs the
+ *	actions necessary to make an adapter operational, such as completing
+ *	the initialization of HW modules, and enabling interrupts.
+ *
+ *	Must be called with the rtnl lock held.
+ */
+static int cxgb_up(struct adapter *adap)
+{
+	int err = 0;
+
+	if (!(adap->flags & FULL_INIT_DONE)) {
+		err = setup_sge_queues(adap);
+		if (err)
+			goto out;
+		err = setup_rss(adap);
+		if (err) {
+			t4_free_sge_resources(adap);
+			goto out;
+		}
+		if (adap->flags & USING_MSIX)
+			name_msix_vecs(adap);
+		adap->flags |= FULL_INIT_DONE;
+	}
+
+	if (adap->flags & USING_MSIX) {
+		err = request_irq(adap->msix_info[0].vec, t4_nondata_intr, 0,
+				  adap->msix_info[0].desc, adap);
+		if (err)
+			goto irq_err;
+
+		err = request_msix_queue_irqs(adap);
+		if (err) {
+			free_irq(adap->msix_info[0].vec, adap);
+			goto irq_err;
+		}
+	} else if ((err = request_irq(adap->pdev->irq, t4_intr_handler(adap),
+				(adap->flags & USING_MSI) ? 0 : IRQF_SHARED,
+				adap->name, adap)))
+		goto irq_err;
+
+	enable_rx(adap);
+	t4_sge_start(adap);
+	t4_intr_enable(adap);
+	notify_ulds(adap, CXGB4_STATE_UP);
+ out:
+	return err;
+ irq_err:
+	CH_ERR(adap, "request_irq failed, err %d\n", err);
+	goto out;
+}
+
+static void cxgb_down(struct adapter *adapter)
+{
+	t4_intr_disable(adapter);
+	cancel_work_sync(&adapter->tid_release_task);
+
+	if (adapter->flags & USING_MSIX) {
+		free_msix_queue_irqs(adapter);
+		free_irq(adapter->msix_info[0].vec, adapter);
+	} else
+		free_irq(adapter->pdev->irq, adapter);
+	quiesce_rx(adapter);
+}
+
+/*
+ * net_device operations
+ */
+static int cxgb_open(struct net_device *dev)
+{
+	int err;
+	struct port_info *pi = netdev_priv(dev);
+	struct adapter *adapter = pi->adapter;
+
+	if (!adapter->open_device_map && (err = cxgb_up(adapter)) < 0)
+		return err;
+
+	dev->real_num_tx_queues = pi->nqsets;
+	set_bit(pi->tx_chan, &adapter->open_device_map);
+	link_start(dev);
+	netif_tx_start_all_queues(dev);
+	return 0;
+}
+
+static int cxgb_close(struct net_device *dev)
+{
+	int ret;
+	struct port_info *pi = netdev_priv(dev);
+	struct adapter *adapter = pi->adapter;
+
+	netif_tx_stop_all_queues(dev);
+	netif_carrier_off(dev);
+	ret = t4_enable_vi(adapter, 0, pi->viid, false, false);
+
+	clear_bit(pi->tx_chan, &adapter->open_device_map);
+
+	if (!adapter->open_device_map)
+		cxgb_down(adapter);
+	return 0;
+}
+
+static struct net_device_stats *cxgb_get_stats(struct net_device *dev)
+{
+	struct port_stats stats;
+	struct port_info *p = netdev_priv(dev);
+	struct adapter *adapter = p->adapter;
+	struct net_device_stats *ns = &dev->stats;
+
+	spin_lock(&adapter->stats_lock);
+	t4_get_port_stats(adapter, p->tx_chan, &stats);
+	spin_unlock(&adapter->stats_lock);
+
+	ns->tx_bytes   = stats.tx_octets;
+	ns->tx_packets = stats.tx_frames;
+	ns->rx_bytes   = stats.rx_octets;
+	ns->rx_packets = stats.rx_frames;
+	ns->multicast  = stats.rx_mcast_frames;
+
+	/* detailed rx_errors */
+	ns->rx_length_errors = stats.rx_jabber + stats.rx_too_long +
+			       stats.rx_runt;
+	ns->rx_over_errors   = 0;
+	ns->rx_crc_errors    = stats.rx_fcs_err;
+	ns->rx_frame_errors  = stats.rx_symbol_err;
+	ns->rx_fifo_errors   = stats.rx_ovflow0 + stats.rx_ovflow1 +
+			       stats.rx_ovflow2 + stats.rx_ovflow3 +
+			       stats.rx_trunc0 + stats.rx_trunc1 +
+			       stats.rx_trunc2 + stats.rx_trunc3;
+	ns->rx_missed_errors = 0;
+
+	/* detailed tx_errors */
+	ns->tx_aborted_errors   = 0;
+	ns->tx_carrier_errors   = 0;
+	ns->tx_fifo_errors      = 0;
+	ns->tx_heartbeat_errors = 0;
+	ns->tx_window_errors    = 0;
+
+	ns->tx_errors = stats.tx_error_frames;
+	ns->rx_errors = stats.rx_symbol_err + stats.rx_fcs_err +
+		ns->rx_length_errors + stats.rx_len_err + ns->rx_fifo_errors;
+	return ns;
+}
+
+static int cxgb_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
+{
+	int ret = 0, prtad, devad;
+	struct port_info *pi = netdev_priv(dev);
+	struct mii_ioctl_data *data = (struct mii_ioctl_data *)&req->ifr_data;
+
+	switch (cmd) {
+	case SIOCGMIIPHY:
+		if (pi->mdio_addr < 0)
+			return -EOPNOTSUPP;
+		data->phy_id = pi->mdio_addr;
+		break;
+	case SIOCGMIIREG:
+	case SIOCSMIIREG:
+		if (mdio_phy_id_is_c45(data->phy_id)) {
+			prtad = mdio_phy_id_prtad(data->phy_id);
+			devad = mdio_phy_id_devad(data->phy_id);
+		} else if (data->phy_id < 32) {
+			prtad = data->phy_id;
+			devad = 0;
+			data->reg_num &= 0x1f;
+		} else
+			return -EINVAL;
+
+		if (cmd == SIOCGMIIREG)
+			ret = t4_mdio_rd(pi->adapter, 0, prtad, devad,
+					 data->reg_num, &data->val_out);
+		else
+			ret = t4_mdio_wr(pi->adapter, 0, prtad, devad,
+					 data->reg_num, data->val_in);
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+	return ret;
+}
+
+static void cxgb_set_rxmode(struct net_device *dev)
+{
+	/* unfortunately we can't return errors to the stack */
+	set_rxmode(dev, -1, false);
+}
+
+static int cxgb_change_mtu(struct net_device *dev, int new_mtu)
+{
+	int ret;
+	struct port_info *pi = netdev_priv(dev);
+
+	if (new_mtu < 81 || new_mtu > MAX_MTU)         /* accommodate SACK */
+		return -EINVAL;
+	ret = t4_set_rxmode(pi->adapter, 0, pi->viid, new_mtu, -1, -1, -1,
+			    true);
+	if (!ret)
+		dev->mtu = new_mtu;
+	return ret;
+}
+
+static int cxgb_set_mac_addr(struct net_device *dev, void *p)
+{
+	int ret;
+	struct sockaddr *addr = p;
+	struct port_info *pi = netdev_priv(dev);
+
+	if (!is_valid_ether_addr(addr->sa_data))
+		return -EINVAL;
+
+	ret = t4_change_mac(pi->adapter, 0, pi->viid, pi->xact_addr_filt,
+			    addr->sa_data, true);
+	if (ret < 0)
+		return ret;
+
+	memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+	pi->xact_addr_filt = ret;
+	return 0;
+}
+
+static void vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
+{
+	struct port_info *pi = netdev_priv(dev);
+
+	pi->vlan_grp = grp;
+	t4_set_vlan_accel(pi->adapter, 1 << pi->tx_chan, grp != NULL);
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void cxgb_netpoll(struct net_device *dev)
+{
+	int i;
+	struct port_info *pi = netdev_priv(dev);
+	struct adapter *adap = pi->adapter;
+	struct sge_eth_rxq *rx = &adap->sge.ethrxq[pi->first_qset];
+
+	for (i = 0; i < pi->nqsets; i++, rx++)
+		t4_intr_handler(adap)((adap->flags & USING_MSIX) ? &rx->rspq :
+								   adap);
+}
+#endif
+
+static const struct net_device_ops cxgb4_netdev_ops = {
+	.ndo_open             = cxgb_open,
+	.ndo_stop             = cxgb_close,
+	.ndo_start_xmit       = t4_eth_xmit,
+	.ndo_get_stats        = cxgb_get_stats,
+	.ndo_set_rx_mode      = cxgb_set_rxmode,
+	.ndo_set_mac_address  = cxgb_set_mac_addr,
+	.ndo_validate_addr    = eth_validate_addr,
+	.ndo_do_ioctl         = cxgb_ioctl,
+	.ndo_change_mtu       = cxgb_change_mtu,
+	.ndo_vlan_rx_register = vlan_rx_register,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+	.ndo_poll_controller  = cxgb_netpoll,
+#endif
+};
+
+void t4_fatal_err(struct adapter *adapter)
+{
+	t4_set_reg_field(adapter, A_SGE_CONTROL, F_GLOBALENABLE, 0);
+	CH_ALERT(adapter, "encountered fatal error, operation suspended\n");
+}
+
+static void setup_memwin(struct adapter *adap)
+{
+	u32 bar0;
+	
+	bar0 = pci_resource_start(adap->pdev, 0);  /* truncation intentional */
+	t4_write_reg(adap, PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_BASE_WIN, 0),
+		     (bar0 + MEMWIN0_BASE) | V_BIR(0) |
+		     V_WINDOW(ilog2(MEMWIN0_APERTURE) - 10));
+	t4_write_reg(adap, PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_BASE_WIN, 1),
+		     (bar0 + MEMWIN1_BASE) | V_BIR(0) |
+		     V_WINDOW(ilog2(MEMWIN1_APERTURE) - 10));
+	t4_write_reg(adap, PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_BASE_WIN, 2),
+		     (bar0 + MEMWIN2_BASE) | V_BIR(0) |
+		     V_WINDOW(ilog2(MEMWIN2_APERTURE) - 10));
+}
+
+/*
+ * Max # of ATIDs.  The absolute HW max is 16K but we keep it lower.
+ */
+#define MAX_ATIDS 8192U
+
+/*
+ * Phase 0 of initialization: contact FW, obtain config, perform basic init.
+ */
+static int adap_init0(struct adapter *adap)
+{
+	int ret;
+	u32 v, port_vec;
+	enum dev_state state;
+	u32 params[7], val[7];
+	struct fw_caps_config_cmd c;
+
+	ret = t4_check_fw_version(adap);
+	if (ret == -EINVAL) {
+		ret = upgrade_fw(adap);
+		if (ret == 0)                          /* recache FW version */
+			ret = t4_check_fw_version(adap);
+	}
+	if (ret < 0)
+		return ret;
+
+	/* contact FW, request master */
+	ret = t4_fw_hello(adap, 0, 0, MASTER_MUST, &state);
+	if (ret < 0) {
+		CH_ERR(adap, "could not connect to FW, error %d\n", ret);
+		return ret;
+	}
+
+	/* reset device */
+	ret = t4_fw_reset(adap, 0, F_PIORSTMODE | F_PIORST);
+	if (ret < 0)
+		goto bye;
+
+	/* get device capabilities */
+	memset(&c, 0, sizeof(c));
+	c.op_to_write = htonl(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
+			      F_FW_CMD_REQUEST | F_FW_CMD_READ);
+	c.retval_len16 = htonl(FW_LEN16(c));
+	ret = t4_wr_mbox(adap, 0, &c, sizeof(c), &c);
+	if (ret < 0)
+		goto bye;
+
+	/* select capabilities we'll be using */
+	if (c.niccaps & htons(FW_CAPS_CONFIG_NIC_VM)) {
+		if (!vf_acls)
+			c.niccaps ^= htons(FW_CAPS_CONFIG_NIC_VM);
+		else
+			c.niccaps = htons(FW_CAPS_CONFIG_NIC_VM);
+	} else if (vf_acls) {
+		CH_ERR(adap, "virtualization ACLs not supported");
+		goto bye;
+	}
+	c.op_to_write = htonl(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
+			      F_FW_CMD_REQUEST | F_FW_CMD_WRITE);
+	ret = t4_wr_mbox(adap, 0, &c, sizeof(c), NULL);
+	if (ret < 0)
+		goto bye;
+
+	ret = t4_config_glbl_rss(adap, 0,
+				 FW_RSS_GLB_CONFIG_CMD_MODE_BASICVIRTUAL,
+				 F_FW_RSS_GLB_CONFIG_CMD_TNLMAPEN |
+				 F_FW_RSS_GLB_CONFIG_CMD_TNLALLLKP);
+	if (ret < 0)
+		goto bye;
+
+	ret = t4_cfg_pfvf(adap, 0, 0, 0, 64, 64, 64, 0, 0, 4,
+			  M_FW_PFVF_CMD_CMASK, M_FW_PFVF_CMD_PMASK, 16,
+			  FW_CMD_CAP_PF, FW_CMD_CAP_PF);
+	if (ret < 0)
+		goto bye;
+
+	for (v = 0; v < SGE_NTIMERS - 1; v++)
+		adap->sge.timer_val[v] = min(intr_holdoff[v], MAX_SGE_TIMERVAL);
+	adap->sge.timer_val[SGE_NTIMERS - 1] = MAX_SGE_TIMERVAL;
+	adap->sge.counter_val[0] = 1;
+	for (v = 1; v < SGE_NCOUNTERS; v++)
+		adap->sge.counter_val[v] = min(intr_cnt[v - 1], M_THRESHOLD_0);
+	t4_sge_init(adap);
+
+	/* get basic stuff going */
+	ret = t4_early_init(adap, 0);
+	if (ret < 0)
+		goto bye;
+
+#define FW_PARAM_DEV(param) \
+	V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
+	V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param)
+
+#define FW_PARAM_PFVF(param) \
+	V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
+	V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param)
+
+	params[0] = FW_PARAM_DEV(PORTVEC);
+	params[1] = FW_PARAM_PFVF(L2T_START);
+	params[2] = FW_PARAM_PFVF(L2T_END);
+	params[3] = FW_PARAM_PFVF(FILTER_START);
+	params[4] = FW_PARAM_PFVF(FILTER_END);
+	ret = t4_query_params(adap, 0, 5, params, val);
+	if (ret < 0)
+		goto bye;
+	port_vec = val[0];
+	adap->tids.ftid_base = val[3];
+	adap->tids.nftids = val[4] - val[3] + 1;
+
+	if (c.ofldcaps) {
+		/* query offload-related parameters */
+		params[0] = FW_PARAM_DEV(NTID);
+		params[1] = FW_PARAM_PFVF(SERVER_START);
+		params[2] = FW_PARAM_PFVF(SERVER_END);
+		params[3] = FW_PARAM_PFVF(TDDP_START);
+		params[4] = FW_PARAM_PFVF(TDDP_END);
+		params[5] = FW_PARAM_DEV(FLOWC_BUFFIFO_SZ);
+		ret = t4_query_params(adap, 0, 6, params, val);
+		if (ret < 0)
+			goto bye;
+		adap->tids.ntids = val[0];
+		adap->tids.natids = min(adap->tids.ntids / 2, MAX_ATIDS);
+		adap->tids.stid_base = val[1];
+		adap->tids.nstids = val[2] - val[1] + 1;
+		adap->vres.ddp.start = val[3];
+		adap->vres.ddp.size = val[4] - val[3] + 1;
+		adap->params.ofldq_wr_cred = val[5];
+		adap->params.offload = 1;
+	}
+	if (c.rdmacaps) {
+		params[0] = FW_PARAM_PFVF(STAG_START);
+		params[1] = FW_PARAM_PFVF(STAG_END);
+		params[2] = FW_PARAM_PFVF(RQ_START);
+		params[3] = FW_PARAM_PFVF(RQ_END);
+		params[4] = FW_PARAM_PFVF(PBL_START);
+		params[5] = FW_PARAM_PFVF(PBL_END);
+		ret = t4_query_params(adap, 0, 6, params, val);
+		if (ret < 0)
+			goto bye;
+		adap->vres.stag.start = val[0];
+		adap->vres.stag.size = val[1] - val[0] + 1;
+		adap->vres.rq.start = val[2];
+		adap->vres.rq.size = val[3] - val[2] + 1;
+		adap->vres.pbl.start = val[4];
+		adap->vres.pbl.size = val[5] - val[4] + 1;
+	}
+	if (c.iscsicaps) {
+		params[0] = FW_PARAM_PFVF(ISCSI_START);
+		params[1] = FW_PARAM_PFVF(ISCSI_END);
+		ret = t4_query_params(adap, 0, 2, params, val);
+		if (ret < 0)
+			goto bye;
+		adap->vres.iscsi.start = val[0];
+		adap->vres.iscsi.size = val[1] - val[0] + 1;
+	}
+#undef FW_PARAM_PFVF
+#undef FW_PARAM_DEV
+
+	adap->params.nports = hweight32(port_vec);
+	adap->params.portvec = port_vec;
+	adap->flags |= FW_OK;
+
+	/* These are finalized by FW initialization; load their values now */
+	v = t4_read_reg(adap, A_TP_TIMER_RESOLUTION);
+	adap->params.tp.tre = G_TIMERRESOLUTION(v);
+	t4_read_mtu_tbl(adap, adap->params.mtus, NULL);
+	t4_load_mtus(adap, adap->params.mtus, adap->params.a_wnd,
+		     adap->params.b_wnd);
+
+	/* tweak some settings */
+	t4_write_reg(adap, A_TP_SHIFT_CNT, V_SYNSHIFTMAX(6) |
+		     V_RXTSHIFTMAXR1(4) | V_RXTSHIFTMAXR2(15) |
+		     V_PERSHIFTBACKOFFMAX(8) | V_PERSHIFTMAX(8) |
+		     V_KEEPALIVEMAXR1(4) | V_KEEPALIVEMAXR2(9));
+	t4_write_reg(adap, A_ULP_RX_TDDP_PSZ, V_HPZ0(PAGE_SHIFT - 12));
+	t4_write_reg(adap, A_TP_PIO_ADDR, A_TP_INGRESS_CONFIG);
+	v = t4_read_reg(adap, A_TP_PIO_DATA);
+	t4_write_reg(adap, A_TP_PIO_DATA, v & ~F_CSUM_HAS_PSEUDO_HDR);
+	setup_memwin(adap);
+	return 0;
+
+	/*
+	 * If a command timed out or failed with EIO, the FW is not operating
+	 * within its spec or something catastrophic has happened to the
+	 * HW/FW; stop issuing commands.
+	 */
+bye:	if (ret != -ETIMEDOUT && ret != -EIO)
+		t4_fw_bye(adap, 0);
+	return ret;
+}
+
+static inline bool is_10g_port(const struct link_config *lc)
+{
+	return (lc->supported & FW_PORT_CAP_SPEED_10G) != 0;
+}
+
+static inline void init_rspq(struct sge_rspq *q, u8 timer_idx, u8 pkt_cnt_idx,
+			     unsigned int size, unsigned int iqe_size)
+{
+	q->intr_params = V_QINTR_TIMER_IDX(timer_idx) |
+			 V_QINTR_CNT_EN(pkt_cnt_idx < SGE_NCOUNTERS);
+	q->pktcnt_idx = pkt_cnt_idx < SGE_NCOUNTERS ? pkt_cnt_idx : 0;
+	q->iqe_len = iqe_size;
+	q->size = size;
+}
+
+/*
+ * Perform default configuration of DMA queues depending on the number and type
+ * of ports we found and the number of available CPUs.  Most settings can be
+ * modified by the admin prior to actual use.
+ */
+static void __devinit cfg_queues(struct adapter *adap)
+{
+	struct sge *s = &adap->sge;
+	int i, q10g = 0, n10g = 0, qidx = 0;
+
+	for_each_port(adap, i)
+		n10g += is_10g_port(&adap2pinfo(adap, i)->link_cfg);
+
+	/*
+	 * We default to 1 queue per non-10G port and up to as many queues as
+	 * there are CPU cores per 10G port.
+	 */
+	if (n10g)
+		q10g = (MAX_ETH_QSETS - (adap->params.nports - n10g)) / n10g;
+	if (q10g > num_online_cpus())
+		q10g = num_online_cpus();
+
+	for_each_port(adap, i) {
+		struct port_info *pi = adap2pinfo(adap, i);
+
+		pi->first_qset = qidx;
+		pi->nqsets = is_10g_port(&pi->link_cfg) ? q10g : 1;
+		qidx += pi->nqsets;
+	}
+
+	s->ethqsets = qidx;
+	s->max_ethqsets = qidx;   /* MSI-X may lower it later */
+
+	if (is_offload(adap)) {
+		/*
+		 * For offload we use 1 queue/channel if all ports are up to 1G,
+		 * otherwise we divide all available queues amongst the channels
+		 * capped by the number of available cores.
+		 */
+		if (n10g) {
+			i = min_t(int, ARRAY_SIZE(s->ofldrxq),
+				  num_online_cpus());
+			s->ofldqsets = roundup(i, adap->params.nports);
+		} else
+			s->ofldqsets = adap->params.nports;
+		/* For RDMA one Rx queue per channel suffices */
+		s->rdmaqs = adap->params.nports;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(s->ethrxq); i++) {
+		struct sge_eth_rxq *r = &s->ethrxq[i];
+
+		init_rspq(&r->rspq, 0, 0, 1024, 64);
+		r->fl.size = 72;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(s->ethtxq); i++)
+		s->ethtxq[i].q.size = 1024;
+
+	for (i = 0; i < ARRAY_SIZE(s->ctrlq); i++)
+		s->ctrlq[i].q.size = 512;
+
+	for (i = 0; i < ARRAY_SIZE(s->ofldtxq); i++)
+		s->ofldtxq[i].q.size = 1024;
+
+	for (i = 0; i < ARRAY_SIZE(s->ofldrxq); i++) {
+		struct sge_ofld_rxq *r = &s->ofldrxq[i];
+
+		init_rspq(&r->rspq, 0, 0, 1024, 64);
+		r->rspq.uld = CXGB4_ULD_ISCSI;
+		r->fl.size = 72;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(s->rdmarxq); i++) {
+		struct sge_ofld_rxq *r = &s->rdmarxq[i];
+
+		init_rspq(&r->rspq, 0, 0, 511, 64);
+		r->rspq.uld = CXGB4_ULD_RDMA;
+		r->fl.size = 72;
+	}
+
+	init_rspq(&s->fw_evtq, E_TIMERIDX_NONE, 0, 512, 64);
+	init_rspq(&s->intrq, E_TIMERIDX_NONE, 0, 2 * MAX_INGQ, 64);
+}
+
+/*
+ * Reduce the number of Ethernet queues across all ports to at most n.
+ * The caller guarantees that n allows at least one queue per port.
+ */
+static void __devinit reduce_ethqs(struct adapter *adap, int n)
+{
+	int i;
+	struct port_info *pi;
+
+	while (n < adap->sge.ethqsets)
+		for_each_port(adap, i) {
+			pi = adap2pinfo(adap, i);
+			if (pi->nqsets > 1) {
+				pi->nqsets--;
+				adap->sge.ethqsets--;
+				if (adap->sge.ethqsets <= n)
+					break;
+			}
+		}
+
+	n = 0;
+	for_each_port(adap, i) {
+		pi = adap2pinfo(adap, i);
+		pi->first_qset = n;
+		n += pi->nqsets;
+	}
+}
+
+/* 2 MSI-X vectors needed for the FW queue and non-data interrupts */
+#define EXTRA_VECS 2
+
+static int __devinit enable_msix(struct adapter *adap)
+{
+	int ofld_need = 0;
+	int i, err, want, need;
+	struct sge *s = &adap->sge;
+	unsigned int nchan = adap->params.nports;
+	struct msix_entry entries[MAX_INGQ + 1];
+
+	for (i = 0; i < ARRAY_SIZE(entries); ++i)
+		entries[i].entry = i;
+
+	want = s->max_ethqsets + EXTRA_VECS;
+	if (is_offload(adap)) {
+		want += s->rdmaqs + s->ofldqsets;
+		/* need nchan for each possible ULD */
+		ofld_need = 2 * nchan;
+	}
+	need = adap->params.nports + EXTRA_VECS + ofld_need;
+
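+	/*
+	 * pci_enable_msix() returns 0 on success, a negative errno on failure,
+	 * or, when it cannot allocate "want" vectors, the number of vectors it
+	 * could allocate.  Retry with that smaller count until we succeed or
+	 * fall below our minimum.
+	 */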
+	while ((err = pci_enable_msix(adap->pdev, entries, want)) >= need)
+		want = err;
+
+	if (!err) {
+		/*
+		 * Distribute available vectors to the various queue groups.
+		 * Every group gets its minimum requirement and NIC gets top
+		 * priority for leftovers.
+		 */
+		i = want - EXTRA_VECS - ofld_need;
+		if (i < s->max_ethqsets) {
+			s->max_ethqsets = i;
+			if (i < s->ethqsets)
+				reduce_ethqs(adap, i);
+		}
+		if (is_offload(adap)) {
+			i = want - EXTRA_VECS - s->max_ethqsets;
+			i -= ofld_need - nchan;
+			s->ofldqsets = (i / nchan) * nchan;  /* round down */
+		}
+		for (i = 0; i < want; ++i)
+			adap->msix_info[i].vec = entries[i].vector;
+	} else if (err > 0)
+		dev_info(adap->pdev_dev, 
+			 "only %d MSI-X vectors left, not using MSI-X\n", err);
+	return err;
+}
+
+#undef EXTRA_VECS
+
+static void __devinit print_port_info(struct adapter *adap)
+{
+	static const char *base[] = {
+		"R", "KX4", "T", "KX", "T", "KR", "CX4"
+	};
+
+	int i;
+	char buf[80];
+
+	for_each_port(adap, i) {
+		struct net_device *dev = adap->port[i];
+		const struct port_info *pi = netdev_priv(dev);
+		char *bufp = buf;
+
+		if (!test_bit(i, &adap->registered_device_map))
+			continue;
+
+		if (pi->link_cfg.supported & FW_PORT_CAP_SPEED_100M)
+			bufp += sprintf(bufp, "100/");
+		if (pi->link_cfg.supported & FW_PORT_CAP_SPEED_1G)
+			bufp += sprintf(bufp, "1000/");
+		if (pi->link_cfg.supported & FW_PORT_CAP_SPEED_10G)
+			bufp += sprintf(bufp, "10G/");
+		if (bufp != buf)
+			--bufp;
+		sprintf(bufp, "BASE-%s", base[pi->port_type]);
+
+		printk(KERN_INFO "%s: Chelsio %s rev %d %s %sNIC PCIe x%d%s\n",
+		       dev->name, adap->params.vpd.id, adap->params.rev,
+		       buf, is_offload(adap) ? "R" : "", adap->params.pci.width,
+		       (adap->flags & USING_MSIX) ? " MSI-X" :
+		       (adap->flags & USING_MSI) ? " MSI" : "");
+		if (adap->name == dev->name)
+			printk(KERN_INFO "%s: S/N: %s, E/C: %s\n", adap->name,
+			       adap->params.vpd.sn, adap->params.vpd.ec);
+	}
+}
+
+#define VLAN_FEAT (NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO | NETIF_F_TSO6 |\
+		   NETIF_F_IPV6_CSUM | NETIF_F_HIGHDMA)
+
+static int __devinit init_one(struct pci_dev *pdev,
+			      const struct pci_device_id *ent)
+{
+	int func;
+	struct port_info *pi;
+	int i, err, pci_using_dac = 0;
+	struct adapter *adapter = NULL;
+
+	printk_once(KERN_INFO "%s - version %s\n", DRV_DESC, DRV_VERSION);
+
+	err = pci_request_regions(pdev, KBUILD_MODNAME);
+	if (err) {
+		/* Just info, some other driver may have claimed the device. */
+		dev_info(&pdev->dev, "cannot obtain PCI resources\n");
+		return err;
+	}
+
+	/* We control everything through PF 0 */
+	func = PCI_FUNC(pdev->devfn);
+	if (func > 0)
+		goto sriov;
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		dev_err(&pdev->dev, "cannot enable PCI device\n");
+		goto out_release_regions;
+	}
+
+	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
+		pci_using_dac = 1;
+		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+		if (err) {
+			dev_err(&pdev->dev, "unable to obtain 64-bit DMA for "
+				"coherent allocations\n");
+			goto out_disable_device;
+		}
+	} else if ((err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) != 0) {
+		dev_err(&pdev->dev, "no usable DMA configuration\n");
+		goto out_disable_device;
+	}
+
+	pci_enable_pcie_error_reporting(pdev);
+	pci_set_master(pdev);
+	pci_save_state(pdev);
+
+	adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
+	if (!adapter) {
+		err = -ENOMEM;
+		goto out_disable_device;
+	}
+
+	adapter->regs = pci_ioremap_bar(pdev, 0);
+	if (!adapter->regs) {
+		dev_err(&pdev->dev, "cannot map device registers\n");
+		err = -ENOMEM;
+		goto out_free_adapter;
+	}
+
+	adapter->pdev = pdev;
+	adapter->pdev_dev = &pdev->dev;
+	adapter->name = pci_name(pdev);
+	adapter->msg_enable = dflt_msg_enable;
+	memset(adapter->chan_map, 0xff, sizeof(adapter->chan_map));
+
+	spin_lock_init(&adapter->stats_lock);
+	spin_lock_init(&adapter->tid_release_lock);
+
+	INIT_WORK(&adapter->tid_release_task, process_tid_release_list);
+
+	err = t4_prep_adapter(adapter);
+	if (err)
+		goto out_unmap_bar;
+	err = adap_init0(adapter);
+	if (err)
+		goto out_unmap_bar;
+
+	for_each_port(adapter, i) {
+		struct net_device *netdev;
+
+		netdev = alloc_etherdev_mq(sizeof(struct port_info),
+					   MAX_ETH_QSETS);
+		if (!netdev) {
+			err = -ENOMEM;
+			goto out_free_dev;
+		}
+
+		SET_NETDEV_DEV(netdev, &pdev->dev);
+
+		adapter->port[i] = netdev;
+		pi = netdev_priv(netdev);
+		pi->adapter = adapter;
+		pi->xact_addr_filt = -1;
+		pi->rx_offload = RX_CSO;
+		pi->port_id = i;
+		netif_carrier_off(netdev);
+		netif_tx_stop_all_queues(netdev);
+		netdev->irq = pdev->irq;
+
+		netdev->features |= NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6;
+		netdev->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+		netdev->features |= NETIF_F_GRO;
+		if (pci_using_dac)
+			netdev->features |= NETIF_F_HIGHDMA;
+
+		netdev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+		netdev->vlan_features = netdev->features & VLAN_FEAT;
+
+		netdev->netdev_ops = &cxgb4_netdev_ops;
+		SET_ETHTOOL_OPS(netdev, &cxgb_ethtool_ops);
+	}
+
+	pci_set_drvdata(pdev, adapter);
+
+	if (adapter->flags & FW_OK) {
+		err = t4_port_init(adapter, 0, 0, 0);
+		if (err)
+			goto out_free_dev;
+	}
+
+	/*
+	 * Configure queues and allocate tables now, they can be needed as
+	 * soon as the first register_netdev completes.
+	 */
+	cfg_queues(adapter);
+
+	adapter->l2t = t4_init_l2t();
+	if (!adapter->l2t) {
+		/* We tolerate a lack of L2T, giving up some functionality */
+		dev_warn(&pdev->dev, "could not allocate L2T, continuing\n");
+		adapter->params.offload = 0;
+	}
+
+	if (is_offload(adapter) && tid_init(&adapter->tids) < 0) {
+		dev_warn(&pdev->dev, "could not allocate TID table, "
+			 "continuing\n");
+		adapter->params.offload = 0;
+	}
+
+	/*
+	 * The card is now ready to go.  If any errors occur during device
+	 * registration we do not fail the whole card but rather proceed only
+	 * with the ports we manage to register successfully.  However we must
+	 * register at least one net device.
+	 */
+	for_each_port(adapter, i) {
+		err = register_netdev(adapter->port[i]);
+		if (err)
+			dev_warn(&pdev->dev,
+				 "cannot register net device %s, skipping\n",
+				 adapter->port[i]->name);
+		else {
+			/*
+			 * Change the name we use for messages to the name of
+			 * the first successfully registered interface.
+			 */
+			if (!adapter->registered_device_map)
+				adapter->name = adapter->port[i]->name;
+
+			__set_bit(i, &adapter->registered_device_map);
+			adapter->chan_map[adap2pinfo(adapter, i)->tx_chan] = i;
+		}
+	}
+	if (!adapter->registered_device_map) {
+		dev_err(&pdev->dev, "could not register any net devices\n");
+		goto out_free_dev;
+	}
+
+	if (cxgb4_proc_root) {
+		adapter->proc_root = proc_mkdir(pci_name(pdev),
+						cxgb4_proc_root);
+		if (!adapter->proc_root)
+			dev_warn(&pdev->dev,
+				 "could not create /proc directory");
+		else
+			setup_proc(adapter, adapter->proc_root);
+	}
+
+	if (cxgb4_debugfs_root) {
+		adapter->debugfs_root = debugfs_create_dir(pci_name(pdev),
+							   cxgb4_debugfs_root);
+		setup_debugfs(adapter);
+	}
+
+	/* See what interrupts we'll be using */
+	if (msi > 1 && enable_msix(adapter) == 0)
+		adapter->flags |= USING_MSIX;
+	else if (msi > 0 && pci_enable_msi(pdev) == 0)
+		adapter->flags |= USING_MSI;
+
+	if (is_offload(adapter))
+		attach_ulds(adapter);
+
+	print_port_info(adapter);
+
+sriov:
+#ifdef CONFIG_PCI_IOV
+	if (func < ARRAY_SIZE(num_vf) && num_vf[func] > 0)
+		if (pci_enable_sriov(pdev, num_vf[func]) == 0)
+			dev_info(&pdev->dev,
+				 "instantiated %u virtual functions\n",
+				 num_vf[func]);
+#endif
+	return 0;
+
+ out_free_dev:
+	t4_free_mem(adapter->tids.tid_tab);
+	t4_free_mem(adapter->l2t);
+	for_each_port(adapter, i)
+		if (adapter->port[i])
+			free_netdev(adapter->port[i]);
+	if (adapter->flags & FW_OK)
+		t4_fw_bye(adapter, 0);
+ out_unmap_bar:
+	iounmap(adapter->regs);
+ out_free_adapter:
+	kfree(adapter);
+ out_disable_device:
+	pci_disable_pcie_error_reporting(pdev);
+	pci_disable_device(pdev);
+ out_release_regions:
+	pci_release_regions(pdev);
+	pci_set_drvdata(pdev, NULL);
+	return err;
+}
+
+static void __devexit remove_one(struct pci_dev *pdev)
+{
+	struct adapter *adapter = pci_get_drvdata(pdev);
+
+	pci_disable_sriov(pdev);
+
+	if (adapter) {
+		int i;
+
+		if (is_offload(adapter))
+			detach_ulds(adapter);
+		
+		t4_sge_stop(adapter);
+
+		for_each_port(adapter, i)
+			if (test_bit(i, &adapter->registered_device_map))
+				unregister_netdev(adapter->port[i]);
+
+		if (adapter->proc_root) {
+			cleanup_proc(adapter->proc_root);
+			remove_proc_entry(pci_name(pdev), cxgb4_proc_root);
+		}
+
+		if (adapter->debugfs_root)
+			debugfs_remove_recursive(adapter->debugfs_root);
+
+		t4_free_sge_resources(adapter);
+		t4_free_mem(adapter->l2t);
+		t4_free_mem(adapter->tids.tid_tab);
+		disable_msi(adapter);
+
+		for_each_port(adapter, i)
+			if (adapter->port[i])
+				free_netdev(adapter->port[i]);
+
+		if (adapter->flags & FW_OK)
+			t4_fw_bye(adapter, 0);
+		iounmap(adapter->regs);
+		kfree(adapter);
+		pci_disable_pcie_error_reporting(pdev);
+		pci_disable_device(pdev);
+		pci_release_regions(pdev);
+		pci_set_drvdata(pdev, NULL);
+	} else if (PCI_FUNC(pdev->devfn) > 0)
+		pci_release_regions(pdev);
+}
+
+static struct pci_driver cxgb4_driver = {
+	.name     = KBUILD_MODNAME,
+	.id_table = cxgb4_pci_tbl,
+	.probe    = init_one,
+	.remove   = __devexit_p(remove_one),
+};
+
+#define DRV_PROC_NAME "driver/" KBUILD_MODNAME
+
+static int __init cxgb4_init_module(void)
+{
+	int ret;
+
+	/* Debugfs support is optional, just warn if this fails */
+	cxgb4_debugfs_root = debugfs_create_dir(KBUILD_MODNAME, NULL);
+	if (!cxgb4_debugfs_root)
+		pr_warning("could not create debugfs entry, continuing\n");
+
+#ifdef CONFIG_PROC_FS
+	cxgb4_proc_root = proc_mkdir(DRV_PROC_NAME, NULL);
+	if (!cxgb4_proc_root)
+		pr_warning("could not create /proc driver directory, "
+			   "continuing\n");
+#endif
+
+	ret = pci_register_driver(&cxgb4_driver);
+	if (ret < 0) {
+		remove_proc_entry(DRV_PROC_NAME, NULL);
+		debugfs_remove(cxgb4_debugfs_root);
+	}
+	return ret;
+}
+
+static void __exit cxgb4_cleanup_module(void)
+{
+	pci_unregister_driver(&cxgb4_driver);
+	remove_proc_entry(DRV_PROC_NAME, NULL);
+	debugfs_remove(cxgb4_debugfs_root);  /* NULL ok */
+}
+
+module_init(cxgb4_init_module);
+module_exit(cxgb4_cleanup_module);
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH net-next 7/7] net: Hook up cxgb4 to Kconfig and Makefile
  2010-02-18  1:37           ` [PATCH net-next 6/7] cxgb4: Add main driver file and driver Makefile Dimitris Michailidis
@ 2010-02-18  1:37             ` Dimitris Michailidis
  0 siblings, 0 replies; 10+ messages in thread
From: Dimitris Michailidis @ 2010-02-18  1:37 UTC (permalink / raw)
  To: netdev

Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
---
 drivers/net/Kconfig  |   25 +++++++++++++++++++++++++
 drivers/net/Makefile |    1 +
 2 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index b711d54..acca6b6 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -2574,6 +2574,31 @@ config CHELSIO_T3
 	  To compile this driver as a module, choose M here: the module
 	  will be called cxgb3.
 
+config CHELSIO_T4_DEPENDS
+	tristate
+	depends on PCI && INET
+	default y
+
+config CHELSIO_T4
+	tristate "Chelsio Communications T4 Ethernet support"
+	depends on CHELSIO_T4_DEPENDS
+	select FW_LOADER
+	select MDIO
+	help
+	  This driver supports Chelsio T4-based gigabit and 10Gb Ethernet
+	  adapters.
+
+	  For general information about Chelsio and our products, visit
+	  our website at <http://www.chelsio.com>.
+
+	  For customer support, please visit our customer support page at
+	  <http://www.chelsio.com/support.htm>.
+
+	  Please send feedback to <linux-bugs@chelsio.com>.
+
+	  To compile this driver as a module choose M here; the module
+	  will be called cxgb4.
+
 config EHEA
 	tristate "eHEA Ethernet support"
 	depends on IBMEBUS && INET && SPARSEMEM
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 622cfd4..89a9884 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_IXGB) += ixgb/
 obj-$(CONFIG_IP1000) += ipg.o
 obj-$(CONFIG_CHELSIO_T1) += chelsio/
 obj-$(CONFIG_CHELSIO_T3) += cxgb3/
+obj-$(CONFIG_CHELSIO_T4) += cxgb4/
 obj-$(CONFIG_EHEA) += ehea/
 obj-$(CONFIG_CAN) += can/
 obj-$(CONFIG_BONDING) += bonding/
-- 
1.5.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH net-next 0/7] cxgb4: new driver submission
  2010-02-18  1:37 [PATCH net-next 0/7] cxgb4: new driver submission Dimitris Michailidis
  2010-02-18  1:37 ` [PATCH net-next 1/7] cxgb4: Add register and message definitions Dimitris Michailidis
@ 2010-02-18  1:59 ` David Miller
  1 sibling, 0 replies; 10+ messages in thread
From: David Miller @ 2010-02-18  1:59 UTC (permalink / raw)
  To: dm; +Cc: netdev

From: Dimitris Michailidis <dm@chelsio.com>
Date: Wed, 17 Feb 2010 17:37:35 -0800

> 
> The following 7 patches add a new driver cxgb4 for Chelsio's new 1G and 10G
> cards.  At this time this is for review and comments, I'll be sending an
> updated patch series once any review comments are incorporated.

There is lots of trailing whitespace added by your changes.
There's also a case of spaces followed by tab characters in
the initial indentation of lines.

What makes those two things so incredibly inexcusable is that the very
tools we use to add changes to the tree _tell_ you about these things.

bundle-895.mbox:6829: trailing whitespace.
	FW_STAT_TX_PORT_FRAMES_IX,	
bundle-895.mbox:6879: trailing whitespace.
	FW_STAT_RX_PORT_PPP7_IX,	
bundle-895.mbox:6985: trailing whitespace.
	FW_STAT_LB_PORT_FRAMES_IX, 	
bundle-895.mbox:6986: trailing whitespace.
	FW_STAT_LB_PORT_BCAST_IX, 
bundle-895.mbox:9918: trailing whitespace.
	int ret; 
bundle-895.mbox:10135: space before tab in indent.
       	if (mac) {
bundle-895.mbox:12533: trailing whitespace.
 *	prevent further unmapping attempts. 
bundle-895.mbox:18605: trailing whitespace.
	
bundle-895.mbox:18981: trailing whitespace.
		dev_info(adap->pdev_dev, 
bundle-895.mbox:19266: trailing whitespace.
		
fatal: 10 lines add whitespace errors.

Such automated clerical issues should be taken care of before you even
submit this for "review".  It's the same as making sure the code
compiles.

Also you should use the netdev_*() message logging helpers added
by Joe Perches instead of your local CMSG_*() hacks.
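
For illustration only (which netdev to report against is up to you at
each call site), one of the existing messages could become something
roughly like:

	/* current driver-local macro */
	CH_WARN(adap, "could not attach to the %s driver, error %ld\n",
		uld_str[uld], PTR_ERR(handle));

	/* generic helper, assuming a suitable struct net_device
	 * (e.g. adap->port[0]) is available at the call site */
	netdev_warn(adap->port[0],
		    "could not attach to the %s driver, error %ld\n",
		    uld_str[uld], PTR_ERR(handle));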

Finally, this V_*, S_*, F_* naming scheme for register values is
backwards and, if anything, very non-standard.  Please use normal macro
names for these things so your code is more readable by people who
have to look at all of the other device drivers in the tree, not
just yours.
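
As one possibility (the field name and bit position below are made up,
just to show the shape of the naming):

	/* current scheme */
	#define S_FOO_EN	3
	#define F_FOO_EN	(1U << S_FOO_EN)

	/* plainer, more conventional names */
	#define FOO_EN_SHIFT	3
	#define FOO_EN		(1U << FOO_EN_SHIFT)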

^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [PATCH net-next 0/7] cxgb4: new driver submission
@ 2010-02-18  2:56 Dimitrios Michailidis
  0 siblings, 0 replies; 10+ messages in thread
From: Dimitrios Michailidis @ 2010-02-18  2:56 UTC (permalink / raw)
  To: David Miller; +Cc: netdev



> -----Original Message-----
> From: David Miller [mailto:davem@davemloft.net]
> Sent: Wednesday, February 17, 2010 5:59 PM
> To: Dimitrios Michailidis
> Cc: netdev@vger.kernel.org
> Subject: Re: [PATCH net-next 0/7] cxgb4: new driver submission
> 
> From: Dimitris Michailidis <dm@chelsio.com>
> Date: Wed, 17 Feb 2010 17:37:35 -0800
> 
> >
> > The following 7 patches add a new driver cxgb4 for Chelsio's new 1G and
> > 10G cards.  At this time this is for review and comments, I'll be sending
> > an updated patch series once any review comments are incorporated.
> 
> There is lots of trailing whitespace added by your changes.
> There's also a case of spaces followed by tab characters in
> the initial indentation of lines.
> 
> What makes those two things so incredibly inexcusable is that the very
> tools we use to add changes to the tree _tell_ you about these things.
> 
> bundle-895.mbox:6829: trailing whitespace.
> 	FW_STAT_TX_PORT_FRAMES_IX,
> bundle-895.mbox:6879: trailing whitespace.
> 	FW_STAT_RX_PORT_PPP7_IX,
> bundle-895.mbox:6985: trailing whitespace.
> 	FW_STAT_LB_PORT_FRAMES_IX,
> bundle-895.mbox:6986: trailing whitespace.
> 	FW_STAT_LB_PORT_BCAST_IX,
> bundle-895.mbox:9918: trailing whitespace.
> 	int ret;
> bundle-895.mbox:10135: space before tab in indent.
>        	if (mac) {
> bundle-895.mbox:12533: trailing whitespace.
>  *	prevent further unmapping attempts.
> bundle-895.mbox:18605: trailing whitespace.
> 
> bundle-895.mbox:18981: trailing whitespace.
> 		dev_info(adap->pdev_dev,
> bundle-895.mbox:19266: trailing whitespace.
> 
> fatal: 10 lines add whitespace errors.
> 
> Such automated clerical issues should be taken care of before you even
> submit this for "review".  It's the same as making sure the code
> compiles.

I apologize for the whitespace damage; I'll fix that asap.


> Also you should use the netdev_*() message logging helpers added
> by Joe Perches instead of your local CMSG_*() hacks.


I saw those patches going in but didn't use them because the messages in
the driver are almost always about the adapter as a whole and not about
any one of its ports (it has several).  The CH_* macros in the driver
expand to the dev_* family of logging helpers.  I could replace them
with the netdev_* family but the netdev used for them will tend to be
arbitrary and I didn't think this was very appropriate.  Let me know.
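
Roughly, simplified, they boil down to something like:

	#define CH_WARN(adap, fmt, ...) \
		dev_warn((adap)->pdev_dev, fmt, ## __VA_ARGS__)

so the messages already go through the standard dev_* logging, with the
PCI device as the reporting device.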


> Finally, this V_*, S_*, F_* naming scheme for register values is
> backwards and if anything very non-standard.  Please use normal macro
> names for these things so you code is more readable by people who
> have to look at all of the other device drivers in the tree not
> just your's.

The first letter in these macros indicates what the macro is about (S
for shift, M for mask, F for flag, etc.) and is the same scheme used in
previous Chelsio drivers.  I can replace them if you want but I wanted
to mention that there's a rationale behind the naming.  Would it help to
add a comment at the top explaining what the first letter designates?
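
For example, something along these lines at the top of the register
headers:

	/*
	 * Register macro prefixes:
	 *   A_<REG>      - register address
	 *   S_<FIELD>    - bit offset (shift) of a field
	 *   M_<FIELD>    - unshifted mask of a field
	 *   V_<FIELD>(x) - value x shifted into the field's position
	 *   G_<FIELD>(x) - extract the field's value from word x
	 *   F_<FIELD>    - single-bit field (flag)
	 */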

Thanks for the comments.

^ permalink raw reply	[flat|nested] 10+ messages in thread
