* [PATCH 00/61] net/qede/base: qede PMD enhancements
@ 2017-02-27  7:56 Rasesh Mody
  2017-02-27  7:56 ` [PATCH 01/61] net/qede/base: return an initialized return value Rasesh Mody
                   ` (61 more replies)
  0 siblings, 62 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Hi,

This patch set adds support for the new firmware 8.18.9.0, along with
new features and bug fixes.

Please apply to dpdk-net-next for the 17.05 release.

Thanks!
Rasesh

Harish Patil (3):
  net/qede/base: add support for arfs mode
  net/qede: add ntuple and flow director filter support
  net/qede: add LRO/TSO offloads support

Rasesh Mody (58):
  net/qede/base: return an initialized return value
  send FW version and driver state to MFW
  net/qede/base: mask Rx buffer attention bits
  net/qede/base: print various indications on Tx-timeouts
  net/qede/base: utilize FW 8.18.9.0
  drivers/net/qede: upgrade the FW to 8.18.9.0
  net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2
  net/qede/base: move mask constants defining NIC type
  net/qede/base: remove attribute field from update current config
  net/qede/base: add nvram options
  net/qede/base: add comment
  net/qede/base: use default mtu from shared memory
  net/qede/base: change queue/sb-id from 8 bit to 16 bit
  net/qede/base: update MFW when default mtu is changed
  net/qede/base: prevent device init failure
  net/qede/base: add support to read personality via MFW commands
  net/qede/base: allow probe to succeed with minor HW-issues
  net/qede/base: remove unneeded step in HW init
  net/qede/base: allow only trusted VFs to be promisc/multi-promisc
  net/qede/base: qm initialization revamp
  net/qede/base: add a printout of the FW, MFW and MBI versions
  net/qede/base: check active VF queues before stopping
  net/qede/base: set the drv_type before sending load request
  net/qede/base: prevent driver load with invalid resources
  net/qede/base: add interfaces for MFW TLV request processing
  net/qede/base: fix to set pointers to NULL after freeing
  net/qede/base: L2 handler changes
  net/qede/base: add support for handling TLV request from MFW
  net/qede/base: optimize cache-line access
  net/qede/base: infrastructure changes for VF tunnelling
  net/qede/base: revise tunnel APIs/structs
  net/qede/base: add tunnelling support for VFs
  net/qede/base: formatting changes
  net/qede/base: prevent transmitter stuck condition
  net/qede/base: add mask/shift defines for resource command
  net/qede/base: add API for using MFW resource lock
  net/qede/base: remove clock slowdown option
  net/qede/base: add new image types
  net/qede/base: use L2-handles for RSS configuration
  net/qede/base: change valloc to vzalloc
  net/qede/base: add support for previous driver unload
  net/qede/base: add non-l2 dcbx tlv application support
  net/qede/base: update bulletin board with link state during init
  net/qede/base: add coalescing support for VFs
  net/qede/base: add macro for resource value message
  net/qede/base: add mailbox for resource allocation
  net/qede/base: add macro for unsupported command
  net/qede/base: add support to set max values of soft resources
  net/qede/base: add return code check
  net/qede/base: zero out MFW mailbox data
  net/qede/base: move code bits
  net/qede/base: add PF parameter
  net/qede/base: allow PMD to control vport-id and rss-eng-id
  net/qede/base: add udp ports in bulletin board message
  net/qede/base: prevent DMAE transactions during recovery
  net/qede/base: add multi-Txq support on same queue-zone for VFs
  net/qede/base: fix race cond between MFW attentions and PF stop
  net/qede/base: semantic changes

 doc/guides/nics/features/qede.ini             |    4 +
 doc/guides/nics/features/qede_vf.ini          |    2 +
 doc/guides/nics/qede.rst                      |    9 +-
 drivers/net/qede/Makefile                     |    1 +
 drivers/net/qede/base/bcm_osal.h              |   13 +-
 drivers/net/qede/base/common_hsi.h            |  191 ++-
 drivers/net/qede/base/ecore.h                 |  169 +-
 drivers/net/qede/base/ecore_chain.h           |  143 +-
 drivers/net/qede/base/ecore_cxt.c             |  297 +++-
 drivers/net/qede/base/ecore_cxt.h             |   64 +-
 drivers/net/qede/base/ecore_cxt_api.h         |   13 -
 drivers/net/qede/base/ecore_dcbx.c            |   42 +-
 drivers/net/qede/base/ecore_dcbx.h            |    4 +-
 drivers/net/qede/base/ecore_dcbx_api.h        |    4 +-
 drivers/net/qede/base/ecore_dev.c             | 2142 +++++++++++++++----------
 drivers/net/qede/base/ecore_dev_api.h         |  122 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |  816 +++++-----
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++-
 drivers/net/qede/base/ecore_hsi_eth.h         | 2069 ++++++++++++------------
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_hw.c              |   49 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1408 ++++++++++------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  172 +-
 drivers/net/qede/base/ecore_init_ops.c        |    4 +
 drivers/net/qede/base/ecore_int.c             |   55 +-
 drivers/net/qede/base/ecore_int.h             |   10 -
 drivers/net/qede/base/ecore_int_api.h         |   21 +
 drivers/net/qede/base/ecore_iov_api.h         |   45 +-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_l2.c              |  853 +++++++---
 drivers/net/qede/base/ecore_l2.h              |  149 +-
 drivers/net/qede/base/ecore_l2_api.h          |  134 +-
 drivers/net/qede/base/ecore_mcp.c             | 1018 ++++++++++--
 drivers/net/qede/base/ecore_mcp.h             |  181 ++-
 drivers/net/qede/base/ecore_mcp_api.h         |  316 +++-
 drivers/net/qede/base/ecore_mng_tlv.c         | 1535 ++++++++++++++++++
 drivers/net/qede/base/ecore_proto_if.h        |   16 +
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++---
 drivers/net/qede/base/ecore_sp_api.h          |   19 +
 drivers/net/qede/base/ecore_sp_commands.c     |  372 +++--
 drivers/net/qede/base/ecore_sp_commands.h     |   23 +-
 drivers/net/qede/base/ecore_spq.c             |   92 +-
 drivers/net/qede/base/ecore_spq.h             |   36 +-
 drivers/net/qede/base/ecore_sriov.c           |  954 ++++++++---
 drivers/net/qede/base/ecore_sriov.h           |   23 +-
 drivers/net/qede/base/ecore_vf.c              |  348 +++-
 drivers/net/qede/base/ecore_vf.h              |   85 +-
 drivers/net/qede/base/ecore_vf_api.h          |   11 +
 drivers/net/qede/base/ecore_vfpf_if.h         |   55 +-
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/mcp_public.h            |  271 ++--
 drivers/net/qede/base/nvm_cfg.h               |  475 +++++-
 drivers/net/qede/base/reg_addr.h              |   59 +
 drivers/net/qede/qede_eth_if.c                |   56 +-
 drivers/net/qede/qede_eth_if.h                |   25 +-
 drivers/net/qede/qede_ethdev.c                |  100 +-
 drivers/net/qede/qede_ethdev.h                |   42 +-
 drivers/net/qede/qede_fdir.c                  |  486 ++++++
 drivers/net/qede/qede_if.h                    |   58 +-
 drivers/net/qede/qede_main.c                  |  122 +-
 drivers/net/qede/qede_rxtx.c                  |  677 ++++++--
 drivers/net/qede/qede_rxtx.h                  |   32 +
 64 files changed, 12328 insertions(+), 5126 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
 create mode 100644 drivers/net/qede/qede_fdir.c

-- 
1.7.10.3

* [PATCH 01/61] net/qede/base: return an initialized return value
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 02/61] send FW version and driver state to MFW Rasesh Mody
                   ` (60 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure ecore_iov_mark_vf_flr() always returns an initialized return
value.
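
For illustration, a minimal stand-alone sketch of the failure mode being
fixed (simplified, not the actual ecore_iov_mark_vf_flr() body): when no
VF is marked, the loop never assigns the flag, so an uninitialized local
would be returned.

#include <stdbool.h>
#include <stdint.h>

/* Simplified sketch -- not the real ecore logic. */
static bool mark_flr_vfs(const uint32_t *p_disabled_vfs, uint16_t num_vfs)
{
	bool found = false;	/* the fix: initialize before the loop */
	uint16_t i;

	for (i = 0; i < num_vfs; i++) {
		if (p_disabled_vfs[i / 32] & (UINT32_C(1) << (i % 32)))
			found = true;	/* ... mark VF i as FLR-ed ... */
	}

	/* Without the initializer, a run where no bit is set would return
	 * an indeterminate value -- undefined behavior in C.
	 */
	return found;
}

int main(void)
{
	const uint32_t disabled[1] = { 0x0 };	/* no VF marked */

	return mark_flr_vfs(disabled, 16) ? 1 : 0;	/* correctly 0 */
}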

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 51a3a03..b2ba79b 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3166,7 +3166,7 @@ enum _ecore_status_t
 
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
-	bool found;
+	bool found = false;
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-- 
1.7.10.3

* [PATCH 02/61] send FW version and driver state to MFW
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
  2017-02-27  7:56 ` [PATCH 01/61] net/qede/base: return an initialized return value Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-03-03 10:26   ` Ferruh Yigit
  2017-02-27  7:56 ` [PATCH 03/61] net/qede/base: mask Rx buffer attention bits Rasesh Mody
                   ` (59 subsequent siblings)
  61 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to send FW version and driver state to Management FW.
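
For reference, the FW version is packed into a single 32-bit mailbox
parameter, one byte per component, exactly as in the ecore_hw_init()
hunk below. A stand-alone sketch of the same packing (the version
numbers here are illustrative):

#include <stdint.h>
#include <stdio.h>

/* Illustrative values -- the real ones come from the FW headers. */
#define FW_MAJOR_VERSION	8
#define FW_MINOR_VERSION	18
#define FW_REVISION_VERSION	9
#define FW_ENGINEERING_VERSION	0

int main(void)
{
	/* Byte layout sent with DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER:
	 * [31:24] major, [23:16] minor, [15:8] revision, [7:0] engineering.
	 */
	uint32_t drv_mb_param = (FW_MAJOR_VERSION << 24) |
				(FW_MINOR_VERSION << 16) |
				(FW_REVISION_VERSION << 8) |
				FW_ENGINEERING_VERSION;

	printf("drv_mb_param = 0x%08x\n", drv_mb_param);	/* 0x08120900 */
	return 0;
}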

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   31 ++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.c     |    7 +++++--
 drivers/net/qede/base/ecore_mcp_api.h |    3 ++-
 drivers/net/qede/qede_if.h            |    3 +++
 drivers/net/qede/qede_main.c          |   20 ++++++++++++++++++++
 5 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c5f16da..4211513 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1617,8 +1617,9 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc, mfw_rc;
-	u32 load_code, param;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	u32 load_code, param, drv_mb_param;
+	struct ecore_hwfn *p_hwfn;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1751,7 +1752,26 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		p_hwfn->hw_init_done = true;
 	}
 
-	return ECORE_SUCCESS;
+	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		drv_mb_param = (FW_MAJOR_VERSION << 24) |
+			       (FW_MINOR_VERSION << 16) |
+			       (FW_REVISION_VERSION << 8) |
+			       (FW_ENGINEERING_VERSION);
+		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+				   drv_mb_param, &load_code, &param);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_hwfn, "Failed to send firmware version\n");
+			return rc;
+		}
+
+		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
+						      p_hwfn->p_main_ptt,
+						ECORE_OV_DRIVER_STATE_DISABLED);
+	}
+
+	return rc;
 }
 
 #define ECORE_HW_STOP_RETRY_LIMIT	(10)
@@ -3138,8 +3158,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 void ecore_hw_remove(struct ecore_dev *p_dev)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	int i;
 
+	if (IS_PF(p_dev))
+		ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt,
+					ECORE_OV_DRIVER_STATE_NOT_LOADED);
+
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 64069be..8d747c2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1724,6 +1724,9 @@ enum _ecore_status_t
 	case ECORE_OV_CLIENT_USER:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
 		break;
+	case ECORE_OV_CLIENT_VENDOR_SPEC:
+		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
 		return ECORE_INVAL;
@@ -1762,9 +1765,9 @@ enum _ecore_status_t
 	}
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE,
-			   drv_state, &resp, &param);
+			   drv_mb_param, &resp, &param);
 	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+		DP_ERR(p_hwfn, "Failed to send driver state\n");
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 4e954bd..614cf67 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -181,7 +181,8 @@ enum ecore_ov_config_method {
 
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
-	ECORE_OV_CLIENT_USER
+	ECORE_OV_CLIENT_USER,
+	ECORE_OV_CLIENT_VENDOR_SPEC
 };
 
 enum ecore_ov_driver_state {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4289d0b..4b23bb9 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -150,8 +150,11 @@ struct qed_common_ops {
 			    uint16_t sb_id, enum qed_sb_type type);
 
 	bool (*can_link_change)(struct ecore_dev *edev);
+
 	void (*update_msglvl)(struct ecore_dev *edev,
 			      uint32_t dp_module, uint8_t dp_level);
+
+	int (*send_drv_state)(struct ecore_dev *edev, bool active);
 };
 
 #endif /* _QEDE_IF_H */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a4d68a..f0033a1 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -668,6 +668,25 @@ static void qed_remove(struct ecore_dev *edev)
 	ecore_hw_remove(edev);
 }
 
+static int qed_send_drv_state(struct ecore_dev *edev, bool active)
+{
+	struct ecore_hwfn *hwfn = ECORE_LEADING_HWFN(edev);
+	struct ecore_ptt *ptt;
+	int status = 0;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt)
+		return -EAGAIN;
+
+	status = ecore_mcp_ov_update_driver_state(hwfn, ptt, active ?
+						  ECORE_OV_DRIVER_STATE_ACTIVE :
+						ECORE_OV_DRIVER_STATE_DISABLED);
+
+	ecore_ptt_release(hwfn, ptt);
+
+	return status;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
@@ -681,4 +700,5 @@ static void qed_remove(struct ecore_dev *edev)
 	INIT_STRUCT_FIELD(drain, &qed_drain),
 	INIT_STRUCT_FIELD(slowpath_stop, &qed_slowpath_stop),
 	INIT_STRUCT_FIELD(remove, &qed_remove),
+	INIT_STRUCT_FIELD(send_drv_state, &qed_send_drv_state),
 };
-- 
1.7.10.3

* [PATCH 03/61] net/qede/base: mask Rx buffer attention bits
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
  2017-02-27  7:56 ` [PATCH 01/61] net/qede/base: return an initialized return value Rasesh Mody
  2017-02-27  7:56 ` [PATCH 02/61] send FW version and driver state to MFW Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 04/61] net/qede/base: print various indications on Tx-timeouts Rasesh Mody
                   ` (58 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
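
The mask value written below sets a single bit: 0x4000000 is bit 26 of
BRB_REG_INT_MASK_10. A trivial stand-alone check of that arithmetic (the
bit-to-attention mapping itself comes from the register definition, not
from this sketch):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t mask = 0x4000000;	/* value written in this patch */

	assert(mask == (UINT32_C(1) << 26));	/* exactly bit 26 set */
	return 0;
}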

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    6 ++++++
 drivers/net/qede/base/reg_addr.h  |    3 +++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 4211513..d8ef314 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1059,6 +1059,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
+	/* @@@TMP:
+	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
+	 */
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
+
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3c369aa..21cbdbd 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1141,3 +1141,6 @@
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
 #define BAR0_MAP_REG_XSDM_RAM 0x1e00000UL
+
+/* 8.18.7.0 FW */
+#define BRB_REG_INT_MASK_10 0x3401b8UL
-- 
1.7.10.3

* [PATCH 04/61] net/qede/base: print various indications on Tx-timeouts
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (2 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 03/61] net/qede/base: mask Rx buffer attention bits Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 05/61] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
                   ` (57 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Print various indications on Tx-timeouts.
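
A hypothetical caller sketch (not part of this patch) showing how the
new hook could be used from the PMD when a Tx queue appears stuck; the
print helper and error handling here are illustrative only:

/* Hypothetical usage -- dump_sb_on_tx_timeout() does not exist in the
 * tree; it only illustrates the flow through qed_get_sb_info() and
 * ecore_int_get_sb_dbg() added below.
 */
static void dump_sb_on_tx_timeout(struct ecore_dev *edev,
				  struct ecore_sb_info *sb, u16 qid)
{
	struct ecore_sb_info_dbg sb_dbg;
	int i;

	if (qed_get_sb_info(edev, sb, qid, &sb_dbg) != 0)
		return;

	DP_NOTICE(edev, false, "SB %u: IGU prod 0x%x cons 0x%x\n",
		  sb->igu_sb_id, sb_dbg.igu_prod, sb_dbg.igu_cons);
	for (i = 0; i < PIS_PER_SB; i++)
		DP_NOTICE(edev, false, "  PI[%d] = 0x%x\n", i, sb_dbg.pi[i]);
}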

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_int.c     |   27 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_int_api.h |   21 +++++++++++++++++++++
 drivers/net/qede/base/reg_addr.h      |    3 +++
 drivers/net/qede/qede_main.c          |   23 +++++++++++++++++++++++
 4 files changed, 74 insertions(+)

diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index b6b8e2d..e5a4359 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2255,3 +2255,30 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
 
 	return rc;
 }
+
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info)
+{
+	u16 sbid = p_sb->igu_sb_id;
+	int i;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_PRODUCER_MEMORY + sbid * 4);
+	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_CONSUMER_MEM + sbid * 4);
+
+	for (i = 0; i < PIS_PER_SB; i++)
+		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					      CAU_REG_PI_MEMORY +
+					      sbid * 4 * PIS_PER_SB +  i * 4);
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index a0d6a43..fdfcba8 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -41,6 +41,12 @@ struct ecore_sb_info {
 	struct ecore_dev *p_dev;
 };
 
+struct ecore_sb_info_dbg {
+	u32 igu_prod;
+	u32 igu_cons;
+	u16 pi[PIS_PER_SB];
+};
+
 struct ecore_sb_cnt_info {
 	int sb_cnt;
 	int sb_iov_cnt;
@@ -303,4 +309,19 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
  */
 void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable);
 
+/**
+ * @brief Read debug information regarding a given SB.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_sb - pointer to status block for which we want to get info.
+ * @param p_info - pointer to struct to fill with information regarding SB.
+ *
+ * @return ECORE_SUCCESS if pointer is filled; failure otherwise.
+ */
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info);
+
 #endif
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 21cbdbd..3cc7fd4 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1144,3 +1144,6 @@
 
 /* 8.18.7.0 FW */
 #define BRB_REG_INT_MASK_10 0x3401b8UL
+
+#define IGU_REG_PRODUCER_MEMORY 0x182000UL
+#define IGU_REG_CONSUMER_MEM 0x183000UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index f0033a1..a604a5b 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -687,6 +687,29 @@ static int qed_send_drv_state(struct ecore_dev *edev, bool active)
 	return status;
 }
 
+static int qed_get_sb_info(struct ecore_dev *edev, struct ecore_sb_info *sb,
+			   u16 qid, struct ecore_sb_info_dbg *sb_dbg)
+{
+	struct ecore_hwfn *hwfn = &edev->hwfns[qid % edev->num_hwfns];
+	struct ecore_ptt *ptt;
+	int rc;
+
+	if (IS_VF(edev))
+		return -EINVAL;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "Can't acquire PTT\n");
+		return -EAGAIN;
+	}
+
+	memset(sb_dbg, 0, sizeof(*sb_dbg));
+	rc = ecore_int_get_sb_dbg(hwfn, ptt, sb, sb_dbg);
+
+	ecore_ptt_release(hwfn, ptt);
+	return rc;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
-- 
1.7.10.3

* [PATCH 05/61] net/qede/base: utilize FW 8.18.9.0
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (3 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 04/61] net/qede/base: print various indications on Tx-timeouts Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 06/61] drivers/net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
                   ` (56 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This change is in preparation for working with the new FW 8.18.9.0.
Rename the defines to use an E4_ prefix and the structs to use an e4_
prefix. This renaming makes room for supporting future chipsets.
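
The rename is mechanical; a small stand-alone sketch of the before/after
usage, with the GET_FIELD() helper pattern from common_hsi.h reproduced
locally and illustrative mask/shift values:

#include <stdint.h>
#include <stdio.h>

#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))

/* Old (pre-8.18.9.0) and new E4_-prefixed names side by side. */
#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK	0x1
#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT	0
#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK	0x1
#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT	0

int main(void)
{
	uint8_t flags0 = 0x01;

	/* Same bit, same extraction -- only the macro name changes. */
	printf("old: %u new: %u\n",
	       (unsigned)GET_FIELD(flags0,
				   XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0),
	       (unsigned)GET_FIELD(flags0,
				   E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0));
	return 0;
}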

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/common_hsi.h       |   15 +-
 drivers/net/qede/base/ecore_hsi_common.h |  770 +++++------
 drivers/net/qede/base/ecore_hsi_eth.h    | 2052 +++++++++++++++---------------
 drivers/net/qede/base/ecore_iov_api.h    |    4 +-
 drivers/net/qede/base/ecore_spq.c        |   20 +-
 drivers/net/qede/base/ecore_sriov.c      |    2 +-
 drivers/net/qede/base/ecore_sriov.h      |    4 +-
 7 files changed, 1447 insertions(+), 1420 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 2f84148..59e751f 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -107,20 +107,20 @@
 #define MAX_NUM_PFS	(MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
 
-#define MAX_NUM_VFS_K2	(192)
 #define MAX_NUM_VFS_BB	(120)
-#define MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2	(192)
+#define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 /* in both BB and K2, the VF number starts from 16. so for arrays containing all
  * possible PFs and VFs - we need a constant for this size
  */
 #define MAX_FUNCTION_NUMBER_BB	(MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2	(MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 #define MAX_NUM_VPORTS_K2	(208)
 #define MAX_NUM_VPORTS_BB	(160)
@@ -149,9 +149,10 @@
 #define MAX_PHYS_VOQS		(NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES	(8)
-#define NUM_OF_LCIDS		(320)
-#define NUM_OF_LTIDS		(320)
+#define E4_NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_TASK_TYPES		(8)
+#define NUM_OF_LCIDS			(320)
+#define NUM_OF_LTIDS			(320)
 
 /* Clock values */
 #define MASTER_CLK_FREQ_E4		(375e6)
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index d978bb0..f934e68 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -75,306 +75,306 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct xstorm_core_conn_ag_ctx {
+struct e4_xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
 /* exist_in_qm1 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
 /* exist_in_qm2 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
 /* exist_in_qm3 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
 /* bit4 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
 /* bit6 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
 /* bit7 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
 /* bit9 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
 /* bit10 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
 /* bit11 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
 /* bit12 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
 /* bit13 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
 /* bit14 */
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
 /* bit15 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
 /* timer1cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
 /* timer2cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
 /* cf10 */
-#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
 /* cf11 */
-#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
 /* cf13 */
-#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
 /* cf14 */
-#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
 /* cf15 */
-#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
 /* cf_array_cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
 /* cf18 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
 /* cf19 */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
 /* cf21 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
 /* cf22 */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
 /* cf0en */
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
 /* cf1en */
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
 /* cf3en */
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
 /* cf4en */
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
 /* cf5en */
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
 /* cf6en */
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
 /* cf7en */
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
 /* cf8en */
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
 /* cf9en */
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
 /* cf11en */
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
 /* cf12en */
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
 /* cf13en */
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
 /* cf14en */
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
 /* cf15en */
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
 /* cf16en */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
 /* cf19en */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
 /* cf20en */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
 /* cf21en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
 /* cf22en */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
 /* cf23en */
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
 /* rule0en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
 /* rule1en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
 /* rule3en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
 /* rule4en */
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
 /* rule5en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
 /* rule6en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
 /* rule7en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
 /* rule8en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
 /* rule9en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
 /* rule11en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
 /* rule12en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
 /* rule13en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
 /* rule14en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
 /* rule15en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
 /* rule16en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
 /* rule17en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
 /* rule19en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
 /* rule20en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
 /* rule21en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
 /* rule22en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
 /* rule23en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
 /* rule24en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
 /* rule25en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
 /* bit17 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
 /* bit18 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
 /* bit19 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
 /* bit20 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
 /* bit21 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
 /* cf23 */
-#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -410,7 +410,7 @@ struct xstorm_core_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -428,89 +428,89 @@ struct xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct tstorm_core_conn_ag_ctx {
+struct e4_tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -532,63 +532,63 @@ struct tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_core_conn_ag_ctx {
+struct e4_ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -628,11 +628,11 @@ struct core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -1934,6 +1934,92 @@ enum dmae_cmd_src_enum {
 };
 
 
+struct e4_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
 /*
  * IGU cleanup command
  */
@@ -2017,44 +2103,6 @@ struct igu_msix_vector {
 };
 
 
-struct mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
 /*
  * per encapsulation type enabling flags
  */
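
The E4_ prefix above only renames the macros; the way each MASK/SHIFT pair
is consumed is unchanged. A minimal, self-contained sketch of that access
pattern follows, assuming field helpers in the style of the driver's generic
GET_FIELD/SET_FIELD macros (redefined locally so the sketch compiles on its
own); the two field definitions are copied from the hunk above, but the
values written and the printed output are illustrative only.

#include <stdint.h>
#include <stdio.h>

typedef uint8_t u8;

/* Local stand-ins for the ecore-style GET_FIELD/SET_FIELD helpers
 * (assumed form): extract or insert a field via its *_MASK/*_SHIFT pair.
 */
#define GET_FIELD(value, name) \
	(((value) >> name##_SHIFT) & name##_MASK)
#define SET_FIELD(value, name, flag) \
do { \
	(value) &= (u8)~(name##_MASK << name##_SHIFT); \
	(value) |= (u8)((flag) << name##_SHIFT); \
} while (0)

/* Two of the renamed tstorm definitions from the diff above. */
#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK   0x3
#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT  0
#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK   0x3
#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT  6

int main(void)
{
	u8 flags1 = 0;

	/* Pack two independent 2-bit fields into one flags byte. */
	SET_FIELD(flags1, E4_TSTORM_CORE_CONN_AG_CTX_CF1, 2);
	SET_FIELD(flags1, E4_TSTORM_CORE_CONN_AG_CTX_CF4, 1);

	/* Prints: flags1=0x42 cf1=2 cf4=1 */
	printf("flags1=0x%02x cf1=%u cf4=%u\n", flags1,
	       GET_FIELD(flags1, E4_TSTORM_CORE_CONN_AG_CTX_CF1),
	       GET_FIELD(flags1, E4_TSTORM_CORE_CONN_AG_CTX_CF4));
	return 0;
}
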
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index e8373d7..9d2a118 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,315 +34,315 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct xstorm_eth_conn_ag_ctx {
+struct e4_xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
 /* bit4 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
 /* bit6 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
 /* bit7 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
 /* bit9 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
 /* bit10 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
 /* bit11 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
 /* bit12 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
 /* bit13 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
 /* bit14 */
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
 /* bit15 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
 /* cf4 */
-#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
 /* cf5 */
-#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
 /* cf6 */
-#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
 /* cf7 */
-#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
 /* cf8 */
-#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
 /* cf9 */
-#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
 /* cf10 */
-#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
 /* cf11 */
-#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
 /* cf13 */
-#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
 /* cf14 */
-#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
 /* cf15 */
-#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
 /* cf18 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
 /* cf19 */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
 /* cf21 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
 /* cf22 */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
 /* cf0en */
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
 /* cf1en */
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
 /* cf3en */
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
 /* cf4en */
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
 /* cf5en */
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
 /* cf6en */
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
 /* cf7en */
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
 /* cf8en */
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
 /* cf9en */
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
 /* cf11en */
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
 /* cf12en */
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
 /* cf13en */
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
 /* cf14en */
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
 /* cf15en */
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
 /* cf16en */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
 /* cf19en */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
 /* cf20en */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
 /* cf21en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
 /* cf22en */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
 /* cf23en */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
 /* rule0en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
 /* rule1en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
 /* rule3en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
 /* rule4en */
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
 /* rule5en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
 /* rule6en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
 /* rule7en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
 /* rule8en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
 /* rule9en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
 /* bit17 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
 /* bit18 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
 /* bit19 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
 /* bit20 */
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
 /* bit21 */
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
 /* cf23 */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
+	__le16 e5_reserved1 /* physical_q1 */;
 	__le16 edpm_num_bds /* physical_q2 */;
 	__le16 tx_bd_cons /* word3 */;
 	__le16 tx_bd_prod /* word4 */;
@@ -375,7 +375,7 @@ struct xstorm_eth_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -400,47 +400,47 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct ystorm_eth_conn_ag_ctx {
+struct e4_ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -454,89 +454,89 @@ struct ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct tstorm_eth_conn_ag_ctx {
+struct e4_tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -558,88 +558,88 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_eth_conn_ag_ctx {
+struct e4_ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
 /* timer0cf */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
 /* timer1cf */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
 /* timer2cf */
-#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
 /* cf4 */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
 /* cf5 */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
 /* cf6 */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
 /* cf0en */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
 /* cf1en */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
 /* cf2en */
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
 /* cf3en */
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
 /* cf4en */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
 /* cf5en */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
 /* cf6en */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
 /* rule0en */
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
 /* rule1en */
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
 /* rule2en */
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
 /* rule3en */
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
 /* rule4en */
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
 /* rule5en */
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
 /* rule6en */
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
 /* rule7en */
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
 /* rule8en */
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -678,15 +678,15 @@ struct eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1480,6 +1480,668 @@ struct vport_update_ramrod_data {
 
 
 
+struct E4XstormEthConnAgCtxDqExtLdPart {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
+/* bit6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
+/* bit7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
+	u8 flags1;
+/* bit8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
+/* bit9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
+/* bit12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
+/* bit13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
+/* bit14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
+/* timer1cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
+/* timer2cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
+	u8 flags3;
+/* cf4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
+/* cf5 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
+/* cf6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
+/* cf7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
+	u8 flags4;
+/* cf8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
+/* cf9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
+/* cf10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
+/* cf11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
+	u8 flags5;
+/* cf12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
+/* cf13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
+/* cf14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
+/* cf15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
+	u8 flags6;
+/* cf16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
+/* cf18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
+/* cf19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+/* cf20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
+/* cf21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
+/* cf22 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
+/* cf23 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+};
+
+
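
The aggregative contexts are fixed-layout structures shared with the
firmware, so their sizes must not drift when the toolchain changes. By
manual count the DQ ext-load part above is 56 bytes, assuming natural
alignment leaves no padding; the standalone sketch below only audits that
arithmetic (in the driver itself one would presumably guard the real
struct with something like OSAL_BUILD_BUG_ON(sizeof(...) != 56), assuming
that macro behaves like the kernel's BUILD_BUG_ON):

#include <assert.h>

/* Manual size audit of E4XstormEthConnAgCtxDqExtLdPart; each term
 * follows the member order in the definition above.
 */
enum {
	E4_X_DQ_EXT_LD_PART_SIZE =
		2	/* reserved0 + eth_state */
		+ 15	/* flags0 .. flags14 */
		+ 1	/* edpm_event_id */
		+ 7 * 2	/* physical_q0 .. conn_dpi */
		+ 4	/* byte3 .. byte6 */
		+ 5 * 4	/* reg0 .. reg4 */
};

int main(void)
{
	assert(E4_X_DQ_EXT_LD_PART_SIZE == 56);
	return 0;
}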
+struct e4_mstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_xstorm_eth_hw_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+	u8 flags1;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+/* bit12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
+/* bit13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
+/* bit14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+/* timer1cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+/* timer2cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+	u8 flags3;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+	u8 flags4;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+	u8 flags5;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+	u8 flags6;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+};
+
+
+
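Note that e4_xstorm_eth_hw_conn_ag_ctx ends at conn_dpi: its 32 bytes
are, field for field, the leading portion of the 56-byte DQ ext-load
part, with the trailing byte3..byte6 and reg0..reg4 dropped. A
hypothetical illustration of that prefix relationship (not a driver API),
assuming the two layouts stay prefix-compatible:

#include <string.h>

/* Hypothetical helper: seed the larger ext-load context from the
 * 32-byte HW context, then let the caller fill the trailing regs.
 */
static inline void
seed_dq_ext_ld_from_hw(struct E4XstormEthConnAgCtxDqExtLdPart *dst,
		       const struct e4_xstorm_eth_hw_conn_ag_ctx *src)
{
	memset(dst, 0, sizeof(*dst));
	memcpy(dst, src, sizeof(*src)); /* shared 32-byte prefix */
}
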
 /*
  * GFT CAM line struct
  */
@@ -1730,690 +2392,4 @@ enum gft_vlan_select {
 };
 
 
-struct mstorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-/* exist_in_qm1 */
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
-	u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-struct xstormEthConnAgCtxDqExtLdPart {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-	u8 byte3 /* byte3 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	u8 byte6 /* byte6 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-};
-
-
-
-struct xstorm_eth_hw_conn_ag_ctx {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-};
-
-
 #endif /* __ECORE_HSI_ETH__ */
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 24a43d3..9775360 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -701,7 +701,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ * @return E4_MAX_NUM_VFS in case no further active VFs, otherwise index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -709,7 +709,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS;						\
+	     _i < E4_MAX_NUM_VFS;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
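
As an aside on the loop above: ecore_iov_get_next_active_vf() returns the
relative index of the next active VF at or after rel_vf_id, or the
E4_MAX_NUM_VFS sentinel once none remain, so the same constant serves as the
loop bound. A minimal usage sketch (count_active_vfs() is a hypothetical
helper, not part of this patch):

/* Hypothetical helper: count the active VFs on a PF. */
static u16 count_active_vfs(struct ecore_hwfn *p_hwfn)
{
	u16 vf_id, cnt = 0;

	/* Starts at the first active VF and stops once the sentinel
	 * E4_MAX_NUM_VFS comes back, skipping inactive slots in between.
	 */
	ecore_for_each_vf(p_hwfn, vf_id)
		cnt++;

	return cnt;
}
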
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index d55a448..066f3fb 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -191,15 +191,17 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
-	SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-	SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-	/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-	 *           XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-	 */
-	SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-		  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
+		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
+		 */
+		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
+			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	}
 
 	/* CDU validation - FIXME currently disabled */
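
Note that the E4_XSTORM_* names passed above are prefixes, not values:
SET_FIELD() token-pastes _MASK and _SHIFT onto them, which is what the paired
defines filling the HSI headers exist for. A sketch of the convention,
assuming the usual ecore definitions (the real macros live in the base
headers and may differ in detail):

#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & name##_MASK)

#define SET_FIELD(value, name, flag) \
do { \
	(value) &= ~((u64)name##_MASK << name##_SHIFT); \
	(value) |= (((u64)(flag) & (u64)name##_MASK) << name##_SHIFT); \
} while (0)

Under that reading, the first call above simply sets the single bit at
E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT within flags10.
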
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b2ba79b..cda4516 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3489,7 +3489,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS;
+	return E4_MAX_NUM_VFS;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 884a90c..e9ccc79 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -15,7 +15,7 @@
 #include "ecore_hsi_common.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -152,7 +152,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS];
+	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 	u16			base_vport_id;
-- 
1.7.10.3


* [PATCH 06/61] drivers/net/qede: upgrade the FW to 8.18.9.0
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (4 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 05/61] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 07/61] net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2 Rasesh Mody
                   ` (55 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch adds the changes required to upgrade the FW to 8.18.9.0.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h              |    1 +
 drivers/net/qede/base/common_hsi.h            |  176 +++-
 drivers/net/qede/base/ecore_dcbx.c            |    4 +-
 drivers/net/qede/base/ecore_dev.c             |  204 ++--
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   46 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++--
 drivers/net/qede/base/ecore_hsi_eth.h         |   17 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1378 ++++++++++++++++---------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  161 ++-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++++-----
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/reg_addr.h              |   53 +
 drivers/net/qede/qede_main.c                  |    2 +-
 17 files changed, 1882 insertions(+), 1122 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index a20b318..5338f27 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -394,6 +394,7 @@ void qede_vf_fill_driver_data(struct ecore_hwfn *, struct vf_pf_resc_request *,
 #define OSAL_STRCPY(dst, string) strcpy(dst, string)
 #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
 #define OSAL_STRCMP(str1, str2) strcmp(str1, str2)
+#define OSAL_STRTOUL(str, base, res) 0
 
 #define OSAL_INLINE inline
 #define OSAL_REG_ADDR(_p_hwfn, _offset) \
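
OSAL_STRTOUL is stubbed to a constant 0 above, so base-driver paths that
parse numeric strings become no-ops in this PMD. An environment that wanted
the real behaviour could map it onto strtoul(), roughly as below (a sketch
under that assumption, not the PMD's definition; it uses a GCC statement
expression):

#include <stdlib.h>

/* Hypothetical non-stub mapping: parse `str` in `base` into `*res`
 * and evaluate to 0, matching the stub's return value.
 */
#define OSAL_STRTOUL(str, base, res) \
	({ *(res) = strtoul((str), NULL, (base)); 0; })
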
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 59e751f..cbcde22 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -78,8 +78,16 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-#define MAX_NUM_LL2_RX_QUEUES					32
-#define MAX_NUM_LL2_TX_STATS_COUNTERS			32
+/*
+ * Usually LL2 queues are opened in TX-RX pairs.
+ * There is a hard restriction on the number of RX queues (limited by Tstorm
+ * RAM) and on the number of TX counters (Pstorm RAM).
+ * The number of TX queues is almost unlimited.
+ * The constants differ so as to allow asymmetric LL2 connections.
+ */
+
+#define MAX_NUM_LL2_RX_QUEUES					48
+#define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
 /****************************************************************************/
@@ -89,8 +97,8 @@
 
 
 #define FW_MAJOR_VERSION		8
-#define FW_MINOR_VERSION		14
-#define FW_REVISION_VERSION		6
+#define FW_MINOR_VERSION		18
+#define FW_REVISION_VERSION		9
 #define FW_ENGINEERING_VERSION	0
 
 /***********************/
@@ -110,6 +118,7 @@
 #define MAX_NUM_VFS_BB	(120)
 #define MAX_NUM_VFS_K2	(192)
 #define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define COMMON_MAX_NUM_VFS (240)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
@@ -177,6 +186,13 @@
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
 
+#define	CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT				(0)
+#define	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT	(1)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE				(2)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_REGION				(3)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
+
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -472,7 +488,6 @@
 #define PXP_BAR_DQ                                          1
 
 /* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS		12
 #define PXP_PER_PF_ENTRY_SIZE		8
 #define PXP_NUM_GLOBAL_WINDOWS		243
 #define PXP_GLOBAL_ENTRY_SIZE		4
@@ -497,6 +512,8 @@
 #define PXP_PF_ME_OPAQUE_ADDR		0x1f8
 #define PXP_PF_ME_CONCRETE_ADDR		0x1fc
 
+#define PXP_NUM_PF_WINDOWS		12
+
 #define PXP_EXTERNAL_BAR_PF_WINDOW_START	0x1000
 #define PXP_EXTERNAL_BAR_PF_WINDOW_NUM		PXP_NUM_PF_WINDOWS
 #define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE	0x1000
@@ -519,8 +536,6 @@
 	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
 
 /* PF BAR */
-/*#define PXP_BAR0_START_GRC 0x1000 */
-/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */
 #define PXP_BAR0_START_GRC                      0x0000
 #define PXP_BAR0_GRC_LENGTH                     0x1C00000
 #define PXP_BAR0_END_GRC                        \
@@ -589,7 +604,7 @@
 #define SDM_OP_GEN_TRIG_AGG_INT			2
 #define SDM_OP_GEN_TRIG_LOADER			4
 #define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
-#define SDM_OP_GEN_TRIG_RELEASE_THREAD	7
+#define SDM_OP_GEN_TRIG_INC_ORDER_CNT	9
 
 /***********************************************************/
 /* Completion types                                        */
@@ -612,6 +627,7 @@
 #define SDM_COMP_TYPE_RELEASE_THREAD	7
 /* Write to local RAM as a completion */
 #define SDM_COMP_TYPE_RAM		8
+#define SDM_COMP_TYPE_INC_ORDER_CNT	9 /* Applicable only for E4 */
 
 
 /******************/
@@ -881,7 +897,7 @@ enum db_dest {
  */
 enum db_dpm_type {
 	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
-	DPM_ROCE /* RoCE DPM- to NIG */,
+	DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */,
 /* L2 DPM inline- to PBF, with packet data on doorbell */
 	DPM_L2_INLINE,
 	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
@@ -968,42 +984,42 @@ struct db_pwm_addr {
 };
 
 /*
- * Parameters to RoCE firmware, passed in EDPM doorbell
+ * Parameters to RDMA firmware, passed in EDPM doorbell
  */
-struct db_roce_dpm_params {
+struct db_rdma_dpm_params {
 	__le32 params;
 /* Size in QWORD-s of the DPM burst */
-#define DB_ROCE_DPM_PARAMS_SIZE_MASK            0x3F
-#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT           0
-/* Type of DPM transacation (DPM_ROCE) (use enum db_dpm_type) */
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK        0x3
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT       6
-/* opcode for ROCE operation */
-#define DB_ROCE_DPM_PARAMS_OPCODE_MASK          0xFF
-#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT         8
+#define DB_RDMA_DPM_PARAMS_SIZE_MASK            0x3F
+#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT           0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK        0x3
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT       6
+/* opcode for RDMA operation */
+#define DB_RDMA_DPM_PARAMS_OPCODE_MASK          0xFF
+#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT         8
 /* the size of the WQE payload in bytes */
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT       16
-#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK       0x1
-#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT      27
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT       16
+#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK       0x1
+#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT      27
 /* RoCE completion flag */
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
-#define DB_ROCE_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
-#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT          29
-#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK       0x3
-#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT      30
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_RDMA_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
+#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT          29
+#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK       0x3
+#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT      30
 };
 
 /*
- * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a
+ * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
  */
-struct db_roce_dpm_data {
+struct db_rdma_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 prod_val /* aggregated value to update */;
-/* parameters passed to RoCE firmware */
-	struct db_roce_dpm_params params;
+/* parameters passed to RDMA firmware */
+	struct db_rdma_dpm_params params;
 };
 
 /* Igu interrupt command */
@@ -1136,6 +1152,68 @@ struct parsing_and_err_flags {
 
 
 /*
+ * Parsing error flags bitmap.
+ */
+struct parsing_err_flags {
+	__le16 flags;
+/* MAC error indication */
+#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1
+#define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT                         0
+/* truncation error indication */
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK                        0x1
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_SHIFT                       1
+/* packet too small indication */
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK                      0x1
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT                     2
+/* Header Missing Tag */
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT               3
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK             0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT            4
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK    0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT   5
+/* set this error if: 1. total-len is smaller than hdr-len 2. total-ip-len
+ * indicates a number bigger than the real packet length 3. tunneling:
+ * total-ip-length of the outer header points to an offset smaller than
+ * the one pointed to by the total-ip-len of the inner hdr.
+ */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK           0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT          6
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK                  0x1
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT                 7
+/* from frame cracker output. for either TCP or UDP */
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK          0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT         8
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK               0x1
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT              9
+/* checksum was calculated and its value isn't 0xffff, or the L4 checksum
+ * wasn't calculated for any reason (e.g. the udp/ipv4 checksum is 0).
+ */
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK               0x1
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT              10
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK        0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT       11
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK  0x1
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12
+/* set if geneve option size was over 32 byte */
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK            0x1
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT           13
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK           0x1
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT          14
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK              0x1
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT             15
+};
+
+
+/*
  * Pb context
  */
 struct pb_context {
@@ -1492,49 +1570,57 @@ struct tdif_task_context {
 struct timers_context {
 	__le32 logical_client_0;
 /* Expiration time of logical client 0 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED0_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED0_SHIFT            27
 /* Valid bit of logical client 0 */
 #define TIMERS_CONTEXT_VALIDLC0_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
 /* Active bit of logical client 0 */
 #define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED0_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED0_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
 	__le32 logical_client_1;
 /* Expiration time of logical client 1 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED2_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED2_SHIFT            27
 /* Valid bit of logical client 1 */
 #define TIMERS_CONTEXT_VALIDLC1_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
 /* Active bit of logical client 1 */
 #define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED3_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED3_SHIFT            30
 	__le32 logical_client_2;
 /* Expiration time of logical client 2 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED4_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED4_SHIFT            27
 /* Valid bit of logical client 2 */
 #define TIMERS_CONTEXT_VALIDLC2_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
 /* Active bit of logical client 2 */
 #define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED2_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED2_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED5_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED5_SHIFT            30
 	__le32 host_expiration_fields;
 /* Expiration time on host (closest one) */
-#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0xFFFFFFF
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0x7FFFFFF
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_RESERVED6_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED6_SHIFT            27
 /* Valid bit of host expiration */
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
-#define TIMERS_CONTEXT_RESERVED3_MASK             0x7
-#define TIMERS_CONTEXT_RESERVED3_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED7_MASK             0x7
+#define TIMERS_CONTEXT_RESERVED7_SHIFT            29
 };
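
The practical effect of the timers_context change: each expiration field
loses its top bit (mask 0xFFFFFFF -> 0x7FFFFFF) to a newly reserved bit 27,
so any reader must apply the updated mask or it will pick that bit up. A
hypothetical reader, using the GET_FIELD convention and assuming an
OSAL_LE32_TO_CPU byte-swap helper:

/* Extract logical client 0's expiration time; the 27-bit mask excludes
 * the reserved bit 27 and the valid/active bits 28-29.
 */
static u32 timers_ctx_lc0_expiration(const struct timers_context *p_ctx)
{
	u32 lc0 = OSAL_LE32_TO_CPU(p_ctx->logical_client_0);

	return GET_FIELD(lc0, TIMERS_CONTEXT_EXPIRATIONTIMELC0);
}
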
 
 
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 9ce6dc4..ca3aece 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -126,7 +126,7 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	else if (enable)
 		p_data->arr[type].update = UPDATE_DCB;
 	else
-		p_data->arr[type].update = DONT_UPDATE_DCB_DHCP;
+		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
 	if (p_hwfn->hw_info.personality == personality) {
@@ -938,7 +938,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 	p_dest->pf_id = p_src->pf_id;
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
-	p_dest->update_eth_dcb_data_flag = update_flag;
+	p_dest->update_eth_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
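
With the misnamed "DHCP" enum gone, the mode selection in
ecore_dcbx_default_tlv() is a plain three-way choice. Restated as a
hypothetical helper (pick_update_mode() is illustrative only):

static enum dcb_dscp_update_mode pick_update_mode(bool dscp_change,
						  bool enable)
{
	if (dscp_change)
		return UPDATE_DCB_DSCP;	/* update vlan pri and dscp */
	if (enable)
		return UPDATE_DCB;	/* update l2 (vlan) priority only */
	return DONT_UPDATE_DCB_DSCP;	/* leave dcb data untouched */
}
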
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d8ef314..43bfd05 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -822,7 +822,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 	int hw_mode = 0;
 
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
-		hw_mode |= 1 << MODE_BB_B0;
+		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
 	} else {
@@ -894,29 +894,36 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u32 pl_hv = 1;
 	int i;
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		pl_hv |= 0x600;
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev))
+			pl_hv |= 0x600;
+	}
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
+	if (CHIP_REV_IS_EMUL(p_dev) &&
+	    (ECORE_IS_AH(p_dev)))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
+			 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB_B0, 4);
+	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-			 (p_hwfn->p_dev->num_ports_in_engines >> 1));
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev)) {
+			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+				 (p_dev->num_ports_in_engines >> 1));
 
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-			 p_hwfn->p_dev->num_ports_in_engines == 4 ? 0 : 3);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+				 p_dev->num_ports_in_engines == 4 ? 0 : 3);
+		}
 	}
 
 	/* Poll on RBC */
@@ -1059,12 +1066,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
-	/* @@@TMP:
-	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
-	 */
-	if (ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
-
 	return rc;
 }
 
@@ -1080,20 +1081,19 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n",
-		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) |
+		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) |
 		   (8 << PMEG_IF_BYTE_COUNT),
 		   (reg_type << 25) | (addr << 8) | port,
 		   (u32)((data >> 32) & 0xffffffff),
 		   (u32)(data & 0xffffffff));
 
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0,
-		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) &
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB,
+		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) &
 		  0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT));
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB,
 		 (reg_type << 25) | (addr << 8) | port);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
-		 data & 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff);
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB,
 		 (data >> 32) & 0xffffffff);
 }
 
@@ -1109,48 +1109,13 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 #define XLMAC_PAUSE_CTRL (0x60d)
 #define XLMAC_PFC_CTRL (0x60e)
 
-static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
-	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0 + (port << 2) * NWM_REG_MAC0_SIZE;
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_SHIFT) |
-		 (port << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_SHIFT)
-		 | (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS,
-		 (0xA << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_SHIFT) |
-		 (8 << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG, 0xa853);
-}
-
-static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt)
-{
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
 	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
 
-	if (ECORE_IS_AH(p_hwfn->p_dev)) {
-		ecore_emul_link_init_ah(p_hwfn, p_ptt);
-		return;
-	}
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -1179,8 +1144,53 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_link_init(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt, u8 port)
+static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	u8 port = p_hwfn->port_id;
+	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
+
+	DP_INFO(p_hwfn->p_dev, "Configuring Emulation Link %02x\n", port);
+
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+		 (port <<
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+		 (0xA <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		 (8 <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
+		 0xa853);
+}
+
+static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt)
+{
+	if (ECORE_IS_AH(p_hwfn->p_dev))
+		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
+	else /* BB */
+		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+}
+
+static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,  u8 port)
 {
 	int port_offset = port ? 0x800 : 0;
 	u32 xmac_rxctrl = 0;
@@ -1193,10 +1203,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
 		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Set */
 
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1);
 
 	/* Set the number of ports on the Warp Core to 10G */
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE, 3);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE_BB, 3);
 
 	/* Soft reset of XMAC */
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
@@ -1207,20 +1217,21 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 
 	/* FIXME: move to common end */
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE + port_offset, 0x20);
+		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE_BB + port_offset, 0x20);
 
 	/* Set Max packet size: initialize XMAC block register for port 0 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE + port_offset, 0x2710);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE_BB + port_offset, 0x2710);
 
 	/* CRC append for Tx packets: init XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO + port_offset, 0xC800);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO_BB + port_offset, 0xC800);
 
 	/* Enable TX and RX: initialize XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL + port_offset,
-		 XMAC_REG_CTRL_TX_EN | XMAC_REG_CTRL_RX_EN);
-	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset);
-	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE;
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset, xmac_rxctrl);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL_BB + port_offset,
+		 XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB);
+	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt,
+			       XMAC_REG_RX_CTRL_BB + port_offset);
+	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB;
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL_BB + port_offset, xmac_rxctrl);
 }
 #endif
 
@@ -1241,7 +1252,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
 		if (ECORE_IS_AH(p_hwfn->p_dev))
 			return ECORE_SUCCESS;
-		ecore_link_init(p_hwfn, p_ptt, p_hwfn->port_id);
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		if (p_hwfn->p_dev->num_hwfns > 1) {
 			/* Activate OPTE in CMT */
@@ -1675,7 +1687,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 * out that these registers get initialized during the call to
 		 * ecore_mcp_load_req request. So we need to reread them here
 		 * to get the proper shadow register value.
-		 * Note: This is a workaround for the missinginig MFW
+		 * Note: This is a workaround for the missing MFW
 		 * initialization. It may be removed once the implementation
 		 * is done.
 		 */
@@ -2041,22 +2053,22 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_88_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_8C_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_90_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_90_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_94_F0_BB, 0);
 	}
 
 	/* Clean Previous errors if such exist */
@@ -2651,7 +2663,12 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 	 * In case of CMT in BB, only the "even" functions are enabled, and thus
 	 * the number of functions for both hwfns is learnt from the same bits.
 	 */
-	reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
+	if (ECORE_IS_BB(p_dev) || ECORE_IS_AH(p_dev)) {
+		reg_function_hide = ecore_rd(p_hwfn, p_ptt,
+					     MISCS_REG_FUNCTION_HIDE_BB_K2);
+	} else { /* E5 */
+		reg_function_hide = 0;
+	}
 
 	if (reg_function_hide & 0x1) {
 		if (ECORE_IS_BB(p_dev)) {
@@ -2717,8 +2734,7 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 		port_mode = 1;
 	else
 #endif
-		port_mode = ecore_rd(p_hwfn, p_ptt,
-				     CNIG_REG_NW_PORT_MODE_BB_B0);
+	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
 
 	if (port_mode < 3) {
 		p_hwfn->p_dev->num_ports_in_engines = 1;
@@ -2733,8 +2749,8 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
+static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt)
 {
 	u32 port;
 	int i;
@@ -2763,7 +2779,8 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
+					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					(i * 4));
 			if (port & 1)
 				p_hwfn->p_dev->num_ports_in_engines++;
 		}
@@ -2775,7 +2792,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
 	else
-		ecore_hw_info_port_num_ah(p_hwfn, p_ptt);
+		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
 }
 
 static enum _ecore_status_t
@@ -3084,12 +3101,13 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7);
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
 	}
 #endif
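
Most of the ecore_dev.c churn is mechanical: registers gain a _BB, _K2_E5 or
_BB_K2 suffix naming the chip families that implement them, and call sites
branch on the family. The indirect-access clearing above, for instance, is
equivalent to this table-driven sketch (not the patch's code, just a
restatement):

static void clear_pgl_indirect_access(struct ecore_hwfn *p_hwfn)
{
	static const u32 ah_regs[] = {
		PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5,
		PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5,
		PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5,
		PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5,
	};
	static const u32 bb_regs[] = {
		PGLUE_B_REG_PGL_ADDR_88_F0_BB,
		PGLUE_B_REG_PGL_ADDR_8C_F0_BB,
		PGLUE_B_REG_PGL_ADDR_90_F0_BB,
		PGLUE_B_REG_PGL_ADDR_94_F0_BB,
	};
	const u32 *regs = ECORE_IS_AH(p_hwfn->p_dev) ? ah_regs : bb_regs;
	int i;

	/* Clear the four PGLUE_B indirect-access windows for this family */
	for (i = 0; i < 4; i++)
		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, regs[i], 0);
}
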
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 070588d..2acd864 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,43 +10,43 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
 
 #endif
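
The dropped "Chips:" annotations were comment noise; the window addresses
themselves are unchanged. For context, a GTT window is a fixed BAR0 aperture,
so reaching one needs only the mapped register view and no runtime PTT
window. A minimal sketch, assuming regview is the mapped BAR0:

static void gtt_write_example(void *regview, u32 gtt_offset, u32 val)
{
	/* e.g. gtt_offset = GTT_BAR0_MAP_REG_TSDM_RAM */
	volatile u32 *addr = (volatile u32 *)((u8 *)regview + gtt_offset);

	*addr = val;
}
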
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index f934e68..3042ed5 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -836,7 +836,12 @@ struct core_rx_fast_path_cqe {
 	__le16 packet_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-	__le32 reserved[4];
+/* bit-map: each bit represents a specific error. Error indications are
+ * provided by the frame cracker; see the spec for a detailed description.
+ */
+	struct parsing_err_flags err_flags;
+	__le16 reserved0;
+	__le32 reserved1[3];
 };
 
 /*
@@ -1042,13 +1047,13 @@ struct core_tx_stop_ramrod_data {
 /*
  * Enum flag for what type of dcb data to update
  */
-enum dcb_dhcp_update_flag {
+enum dcb_dscp_update_mode {
 /* use when no change should be done to dcb data */
-	DONT_UPDATE_DCB_DHCP,
+	DONT_UPDATE_DCB_DSCP,
 	UPDATE_DCB /* use to update only l2 (vlan) priority */,
-	UPDATE_DSCP /* use to update only l3 dhcp */,
-	UPDATE_DCB_DSCP /* update vlan pri and dhcp */,
-	MAX_DCB_DHCP_UPDATE_FLAG
+	UPDATE_DSCP /* use to update only l3 dscp */,
+	UPDATE_DCB_DSCP /* update vlan pri and dscp */,
+	MAX_DCB_DSCP_UPDATE_FLAG
 };
 
 
@@ -1232,6 +1237,10 @@ enum iwarp_ll2_tx_queues {
 	IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
 /* LL2 queue for unaligned packets sent aligned by the driver */
 	IWARP_LL2_ALIGNED_TX_QUEUE,
+/* LL2 queue for unaligned packets sent aligned and was right-trimmed by the
+ * driver
+ */
+	IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE,
 	IWARP_LL2_ERROR /* Error indication */,
 	MAX_IWARP_LL2_TX_QUEUES
 };
@@ -1446,13 +1455,13 @@ struct pf_update_tunnel_config {
  */
 struct pf_update_ramrod_data {
 	u8 pf_id;
-	u8 update_eth_dcb_data_flag /* Update Eth DCB  data indication */;
-	u8 update_fcoe_dcb_data_flag /* Update FCOE DCB  data indication */;
-	u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB  data indication */;
-	u8 update_roce_dcb_data_flag /* Update ROCE DCB  data indication */;
+	u8 update_eth_dcb_data_mode /* Update Eth DCB  data indication */;
+	u8 update_fcoe_dcb_data_mode /* Update FCOE DCB  data indication */;
+	u8 update_iscsi_dcb_data_mode /* Update iSCSI DCB  data indication */;
+	u8 update_roce_dcb_data_mode /* Update ROCE DCB  data indication */;
 /* Update RROCE (RoceV2) DCB  data indication */
-	u8 update_rroce_dcb_data_flag;
-	u8 update_iwarp_dcb_data_flag /* Update IWARP DCB  data indication */;
+	u8 update_rroce_dcb_data_mode;
+	u8 update_iwarp_dcb_data_mode /* Update IWARP DCB  data indication */;
 	u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
 	struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
 	struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
@@ -1611,6 +1620,8 @@ struct tstorm_per_port_stat {
 	struct regpair fcoe_irregular_pkt;
 /* packet is an ROCE irregular packet */
 	struct regpair roce_irregular_pkt;
+/* packet is an IWARP irregular packet */
+	struct regpair iwarp_irregular_pkt;
 /* packet is an ETH irregular packet */
 	struct regpair eth_irregular_pkt;
 /* packet is an TOE irregular packet */
@@ -1861,8 +1872,11 @@ struct dmae_cmd {
 #define DMAE_CMD_SRC_VF_ID_SHIFT       0
 #define DMAE_CMD_DST_VF_ID_MASK        0xFF /* Destination VF id */
 #define DMAE_CMD_DST_VF_ID_SHIFT       8
-	__le32 comp_addr_lo /* PCIe completion address low or grc address */;
-/* PCIe completion address high or reserved (if completion address is in GRC) */
+/* PCIe completion address low in bytes or GRC completion address in DW */
+	__le32 comp_addr_lo;
+/* PCIe completion address high in bytes or reserved (if completion address is
+ * in GRC)
+ */
 	__le32 comp_addr_hi;
 	__le32 comp_val /* Value to write to completion address */;
 	__le32 crc32 /* crc16 result */;
@@ -2250,10 +2264,6 @@ struct sdm_op_gen {
 #define SDM_OP_GEN_RESERVED_SHIFT   20
 };
 
-
-
-
-
 struct ystorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index effb6ed..917e8f4 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -93,10 +93,12 @@ enum block_addr {
 	GRCBASE_PHY_PCIE = 0x620000,
 	GRCBASE_LED = 0x6b8000,
 	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_RGFS = 0x19d0000,
-	GRCBASE_TGFS = 0x19e0000,
-	GRCBASE_PTLD = 0x19f0000,
-	GRCBASE_YPLD = 0x1a10000,
+	GRCBASE_RGFS = 0x1fa0000,
+	GRCBASE_RGSRC = 0x1fa8000,
+	GRCBASE_TGFS = 0x1fb0000,
+	GRCBASE_TGSRC = 0x1fb8000,
+	GRCBASE_PTLD = 0x1fc0000,
+	GRCBASE_YPLD = 0x1fe0000,
 	GRCBASE_MISC_AEU = 0x8000,
 	GRCBASE_BAR0_MAP = 0x1c00000,
 	MAX_BLOCK_ADDR
@@ -184,7 +186,9 @@ enum block_id {
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
 	BLOCK_RGFS,
+	BLOCK_RGSRC,
 	BLOCK_TGFS,
+	BLOCK_TGSRC,
 	BLOCK_PTLD,
 	BLOCK_YPLD,
 	BLOCK_MISC_AEU,
@@ -208,6 +212,10 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
+	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
+	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
+	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -219,8 +227,8 @@ enum bin_dbg_buffer_type {
 struct dbg_attn_bit_mapping {
 	__le16 data;
 /* The index of an attention in the blocks attentions list
- * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits
- * (if is_unused_idx_cnt=1)
+ * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits
+ * (if is_unused_bit_cnt=1)
  */
 #define DBG_ATTN_BIT_MAPPING_VAL_MASK                0x7FFF
 #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT               0
@@ -269,10 +277,10 @@ struct dbg_attn_reg_result {
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT  0
 /* Number of attention indexes in this register */
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24
+/* The offset of this register's attentions within the block's attentions
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le16 reserved;
@@ -289,7 +297,7 @@ struct dbg_attn_block_result {
 /* Value from dbg_attn_type enum */
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
-/* Number of registers in the blok in which at least one attention bit is set */
+/* Number of registers in block in which at least one attention bit is set */
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK   0x3F
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT  2
 /* Offset of this registers block attention names in the attention name offsets
@@ -324,17 +332,17 @@ struct dbg_mode_hdr {
  */
 struct dbg_attn_reg {
 	struct dbg_mode_hdr mode /* Mode header */;
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+/* The offset of this register's attentions within the block's attentions
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le32 data;
 /* STS attention register GRC address (in dwords) */
 #define DBG_ATTN_REG_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_STS_ADDRESS_SHIFT  0
-/* Number of attention indexes in this register */
-#define DBG_ATTN_REG_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24
+/* Number of attentions in this register */
+#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24
 /* STS_CLR attention register GRC address (in dwords) */
 	__le32 sts_clr_address;
 /* MASK attention register GRC address (in dwords) */
@@ -354,6 +362,53 @@ enum dbg_attn_type {
 
 
 /*
+ * Debug Bus block data
+ */
+struct dbg_bus_block {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the Debug Bus lines array. */
+	__le16 lines_offset;
+};
+
+
+/*
+ * Debug Bus block user data
+ */
+struct dbg_bus_block_user_data {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the debug bus line name offsets array. */
+	__le16 names_offset;
+};
+
+
+/*
+ * Block Debug line data
+ */
+struct dbg_bus_line {
+	u8 data;
+/* Number of groups in the line (0-3) */
+#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF
+#define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0
+/* Indicates if this is a 128b line (0) or a 256b line (1). */
+#define DBG_BUS_LINE_IS_256B_MASK        0x1
+#define DBG_BUS_LINE_IS_256B_SHIFT       4
+#define DBG_BUS_LINE_RESERVED_MASK       0x7
+#define DBG_BUS_LINE_RESERVED_SHIFT      5
+/* Four 2-bit values, indicating the size of each group minus 1 (i.e.
+ * value=0 means size=1, value=1 means size=2, etc), starting from lsb.
+ * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1).
+ */
+	u8 group_sizes;
+};
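The group_sizes packing above (four 2-bit size-minus-one fields, starting
from the lsb) decodes as in this minimal sketch; the helper name is
illustrative:

	/* Size of group 0..3, in dwords (is_256b=0) or qwords (is_256b=1);
	 * each 2-bit field stores size-1.
	 */
	static u8 dbg_bus_line_group_size(const struct dbg_bus_line *line,
					  u8 group)
	{
		return ((line->group_sizes >> (group * 2)) & 0x3) + 1;
	}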
+
+
+/*
  * condition header for registers dump
  */
 struct dbg_dump_cond_hdr {
@@ -377,8 +432,11 @@ struct dbg_dump_mem {
 /* register size (in dwords) */
 #define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF
 #define DBG_DUMP_MEM_LENGTH_SHIFT       0
-#define DBG_DUMP_MEM_RESERVED_MASK      0xFF
-#define DBG_DUMP_MEM_RESERVED_SHIFT     24
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1
+#define DBG_DUMP_MEM_WIDE_BUS_SHIFT     24
+#define DBG_DUMP_MEM_RESERVED_MASK      0x7F
+#define DBG_DUMP_MEM_RESERVED_SHIFT     25
 };
 
 
@@ -388,10 +446,13 @@ struct dbg_dump_mem {
 struct dbg_dump_reg {
 	__le32 data;
 /* register address (in dwords) */
-#define DBG_DUMP_REG_ADDRESS_MASK  0xFFFFFF
-#define DBG_DUMP_REG_ADDRESS_SHIFT 0
-#define DBG_DUMP_REG_LENGTH_MASK   0xFF /* register size (in dwords) */
-#define DBG_DUMP_REG_LENGTH_SHIFT  24
+#define DBG_DUMP_REG_ADDRESS_MASK   0x7FFFFF /* register address (in dwords) */
+#define DBG_DUMP_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1
+#define DBG_DUMP_REG_WIDE_BUS_SHIFT 23
+#define DBG_DUMP_REG_LENGTH_MASK    0xFF /* register size (in dwords) */
+#define DBG_DUMP_REG_LENGTH_SHIFT   24
 };
 
 
@@ -424,8 +485,11 @@ struct dbg_idle_chk_cond_hdr {
 struct dbg_idle_chk_cond_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
@@ -441,8 +505,11 @@ struct dbg_idle_chk_cond_reg {
 struct dbg_idle_chk_info_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
@@ -544,17 +611,21 @@ enum dbg_idle_chk_severity_types {
  * Debug Bus block data
  */
 struct dbg_bus_block_data {
-/* Indicates if the block is enabled for recording (0/1) */
-	u8 enabled;
-	u8 hw_id /* HW ID associated with the block */;
+	__le16 data;
+/* 4-bit value: bit i set -> dword/qword i is enabled. */
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
+/* Number of dwords/qwords to shift right the debug data (0-3) */
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
+/* 4-bit value: bit i set -> dword/qword i is forced valid. */
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
+/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
 	u8 line_num /* Debug line number to select */;
-	u8 right_shift /* Number of units to  right the debug data (0-3) */;
-	u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
-/* 4-bit value: bit i set -> unit i is forced valid. */
-	u8 force_valid;
-/* 4-bit value: bit i set -> unit i frame bit is forced. */
-	u8 force_frame;
-	u8 reserved;
+	u8 hw_id /* HW ID associated with the block */;
 };
 
 
@@ -604,6 +675,21 @@ enum dbg_bus_constraint_ops {
 
 
 /*
+ * Debug Bus trigger state data
+ */
+struct dbg_bus_trigger_state_data {
+	u8 data;
+/* 4-bit value: bit i set -> dword i of the trigger state block
+ * (after right shift) is enabled.
+ */
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
+/* 4-bit value: bit i set -> dword i is compared by a constraint */
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+};
+
+/*
  * Debug Bus memory address
  */
 struct dbg_bus_mem_addr {
@@ -650,14 +736,8 @@ struct dbg_bus_storm_eid_mask_params {
  * Debug Bus Storm data
  */
 struct dbg_bus_storm_data {
-/* Indicates if the Storm is enabled for fast debug recording (0/1) */
-	u8 fast_enabled;
-/* Fast debug Storm mode, valid only if fast_enabled is set */
-	u8 fast_mode;
-/* Indicates if the Storm is enabled for slow debug recording (0/1) */
-	u8 slow_enabled;
-/* Slow debug Storm mode, valid only if slow_enabled is set */
-	u8 slow_mode;
+	u8 enabled /* indicates if the Storm is enabled for recording */;
+	u8 mode /* Storm debug mode, valid only if the Storm is enabled */;
 	u8 hw_id /* HW ID associated with the Storm */;
 	u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
 /* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
@@ -667,7 +747,6 @@ struct dbg_bus_storm_data {
 	u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
 /* EID filter params to filter on. Valid only if eid_filter_en is set. */
 	union dbg_bus_storm_eid_params eid_filter_params;
-	__le16 reserved;
 /* CID to filter on. Valid only if cid_filter_en is set. */
 	__le32 cid;
 };
@@ -679,20 +758,18 @@ struct dbg_bus_data {
 	__le32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
 	u8 hw_dwords /* HW dwords per cycle */;
-	u8 next_hw_id /* Next HW ID to be associated with an input */;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	__le16 hw_id_mask;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
-	u8 next_trigger_state /* ID of next trigger state to be added */;
-/* ID of next filter/trigger constraint to be added */
-	u8 next_constraint_id;
 	u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
 	u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
 /* Indicates if timestamp recording is enabled (0/1) */
 	u8 timestamp_input_en;
 	u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
-/* Indicates if the recording trigger is enabled (0/1) */
-	u8 trigger_en;
 /* If true, the next added constraint belong to the filter. Otherwise,
  * it belongs to the last added trigger state. Valid only if either filter or
  * triggers are enabled.
@@ -706,6 +783,14 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
+	__le16 reserved;
+/* Indicates if the recording trigger is enabled (0/1) */
+	u8 trigger_en;
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+	u8 next_trigger_state /* ID of next trigger state to be added */;
+/* ID of next filter/trigger constraint to be added */
+	u8 next_constraint_id;
 /* If true, all inputs are associated with HW ID 0. Otherwise, each input is
  * assigned a different HW ID (0/1)
  */
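The new hw_id_mask replaces the single next_hw_id byte; its 3-bits-per-dword
layout decodes as in this sketch (helper name illustrative, with the __le16
already converted to CPU order):

	static u8 dbg_bus_hw_id_of_dword(u16 hw_id_mask, u8 dword)
	{
		return (hw_id_mask >> (dword * 3)) & 0x7;
	}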
@@ -716,7 +801,6 @@ struct dbg_bus_data {
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
-	__le16 reserved;
 /* Debug Bus data for each block */
 	struct dbg_bus_block_data blocks[88];
 /* Debug Bus data for each block */
@@ -748,17 +832,6 @@ enum dbg_bus_frame_modes {
 
 
 /*
- * Debug bus input types
- */
-enum dbg_bus_input_types {
-	DBG_BUS_INPUT_TYPE_STORM,
-	DBG_BUS_INPUT_TYPE_BLOCK,
-	MAX_DBG_BUS_INPUT_TYPES
-};
-
-
-
-/*
  * Debug bus other engine mode
  */
 enum dbg_bus_other_engine_modes {
@@ -852,6 +925,7 @@ enum dbg_bus_targets {
 };
 
 
+
 /*
  * GRC Dump data
  */
@@ -987,7 +1061,10 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_MULTI_BLOCKS_WITH_FILTER,
+	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_NON_MATCHING_LINES,
+	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_DBG_BUS_IN_USE,
 	MAX_DBG_STATUS
 };
 
@@ -1028,7 +1105,7 @@ struct dbg_tools_data {
 /* Indicates if a block is in reset state (0/1) */
 	u8 block_in_reset[88];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID (from enum platform_ids) */;
+	u8 platform_id /* Platform ID */;
 	u8 initialized /* Indicates if the data was initialized */;
 	u8 reserved;
 };
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 9d2a118..397c408 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -739,6 +739,7 @@ enum eth_error_code {
 	ETH_FILTERS_VNI_ADD_FAIL_FULL,
 /* vni add filters command failed due to duplicate VNI filter */
 	ETH_FILTERS_VNI_ADD_FAIL_DUP,
+	ETH_FILTERS_GFT_UPDATE_FAIL /* Fail update GFT filter. */,
 	MAX_ETH_ERROR_CODE
 };
 
@@ -982,8 +983,10 @@ struct eth_vport_rss_config {
 	u8 rss_id;
 	u8 rss_mode /* The RSS mode for this function */;
 	u8 update_rss_key /* if set update the rss key */;
-	u8 update_rss_ind_table /* if set update the indirection table */;
-	u8 update_rss_capabilities /* if set update the capabilities */;
+/* if set update the indirection table values */
+	u8 update_rss_ind_table;
+/* if set update the capabilities and indirection table size. */
+	u8 update_rss_capabilities;
 	u8 tbl_size /* rss mask (Tbl size) */;
 	__le32 reserved2[2];
 /* RSS indirection table */
@@ -1267,7 +1270,10 @@ struct rx_update_gft_filter_data {
 /* Use enum to set type of flow using gft HW logic blocks */
 	u8 filter_type;
 	u8 filter_action /* Use to set type of action on filter */;
-	u8 reserved;
+/* 0 - don't assert in case of error, just return an error code. 1 - assert
+ * in case of error.
+ */
+	u8 assert_on_error;
 };
 
 
@@ -2290,8 +2296,7 @@ enum gft_profile_upper_protocol_type {
  * GFT RAM line struct
  */
 struct gft_ram_line {
-	__le32 low32bits;
-/*  (use enum gft_vlan_select) */
+	__le32 lo;
 #define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3
 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT             0
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK          0x1
@@ -2354,7 +2359,7 @@ struct gft_ram_line {
 #define GFT_RAM_LINE_DST_PORT_SHIFT                30
 #define GFT_RAM_LINE_SRC_PORT_MASK                 0x1
 #define GFT_RAM_LINE_SRC_PORT_SHIFT                31
-	__le32 high32bits;
+	__le32 hi;
 #define GFT_RAM_LINE_DSCP_MASK                     0x1
 #define GFT_RAM_LINE_DSCP_SHIFT                    0
 #define GFT_RAM_LINE_OVER_IP_PROTOCOL_MASK         0x1
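A minimal sketch of composing a RAM line with the renamed lo/hi words, using
the SET_FIELD convention seen elsewhere in these files (the chosen fields
are illustrative only):

	u32 lo = 0, hi = 0;

	SET_FIELD(lo, GFT_RAM_LINE_VLAN_SELECT, 0); /* value from gft_vlan_select */
	SET_FIELD(lo, GFT_RAM_LINE_DST_PORT, 1);    /* match on destination port */
	SET_FIELD(hi, GFT_RAM_LINE_DSCP, 1);        /* match on DSCP */
	/* ram_line.lo and ram_line.hi then take the little-endian forms */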
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index d07549c..1f57e9b 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -22,43 +22,13 @@
 /* Max size in dwords of a zipped array */
 #define MAX_ZIPPED_SIZE			8192
 
-enum init_modes {
-	MODE_BB_A0_DEPRECATED,
-	MODE_BB_B0,
-	MODE_K2,
-	MODE_ASIC,
-	MODE_EMUL_REDUCED,
-	MODE_EMUL_FULL,
-	MODE_FPGA,
-	MODE_CHIPSIM,
-	MODE_SF,
-	MODE_MF_SD,
-	MODE_MF_SI,
-	MODE_PORTS_PER_ENG_1,
-	MODE_PORTS_PER_ENG_2,
-	MODE_PORTS_PER_ENG_4,
-	MODE_100G,
-	MODE_E5,
-	MAX_INIT_MODES
-};
-
-enum init_phases {
-	PHASE_ENGINE,
-	PHASE_PORT,
-	PHASE_PF,
-	PHASE_VF,
-	PHASE_QM_PF,
-	MAX_INIT_PHASES
+enum chip_ids {
+	CHIP_BB,
+	CHIP_K2,
+	CHIP_E5,
+	MAX_CHIP_IDS
 };
 
-enum init_split_types {
-	SPLIT_TYPE_NONE,
-	SPLIT_TYPE_PORT,
-	SPLIT_TYPE_PF,
-	SPLIT_TYPE_PORT_PF,
-	SPLIT_TYPE_VF,
-	MAX_INIT_SPLIT_TYPES
-};
 
 struct fw_asserts_ram_section {
 /* The offset of the section in the RAM in RAM lines (64-bit units) */
@@ -196,8 +166,46 @@ struct init_array_pattern_hdr {
 };
 
 
+enum init_modes {
+	MODE_BB_A0_DEPRECATED,
+	MODE_BB,
+	MODE_K2,
+	MODE_ASIC,
+	MODE_EMUL_REDUCED,
+	MODE_EMUL_FULL,
+	MODE_FPGA,
+	MODE_CHIPSIM,
+	MODE_SF,
+	MODE_MF_SD,
+	MODE_MF_SI,
+	MODE_PORTS_PER_ENG_1,
+	MODE_PORTS_PER_ENG_2,
+	MODE_PORTS_PER_ENG_4,
+	MODE_100G,
+	MODE_E5,
+	MAX_INIT_MODES
+};
 
 
+enum init_phases {
+	PHASE_ENGINE,
+	PHASE_PORT,
+	PHASE_PF,
+	PHASE_VF,
+	PHASE_QM_PF,
+	MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+	SPLIT_TYPE_NONE,
+	SPLIT_TYPE_PORT,
+	SPLIT_TYPE_PF,
+	SPLIT_TYPE_PORT_PF,
+	SPLIT_TYPE_VF,
+	MAX_INIT_SPLIT_TYPES
+};
+
 
 /*
  * init array types
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 77f9152..af0deaa 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -17,112 +17,156 @@
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-enum CmInterfaceEnum {
-	MCM_SEC,
-	MCM_PRI,
-	UCM_SEC,
-	UCM_PRI,
-	TCM_SEC,
-	TCM_PRI,
-	YCM_SEC,
-	YCM_PRI,
-	XCM_SEC,
-	XCM_PRI,
-	NUM_OF_CM_INTERFACES
+
+#define CDU_VALIDATION_DEFAULT_CFG 61
+
+static u16 con_region_offsets[3][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+};
+static u16 task_region_offsets[1][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
-/* general constants */
-#define QM_PQ_MEM_4KB(pq_size) \
-(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) \
-(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
-#define QM_INVALID_PQ_ID			0xffff
-/* feature enable */
-#define QM_BYPASS_EN				1
-#define QM_BYTE_CRD_EN				1
-/* other PQ constants */
-#define QM_OTHER_PQS_PER_PF			4
-/* WFQ constants */
-#define QM_WFQ_UPPER_BOUND			62500000
+
+/* General constants */
+#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
+				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
+				  0)
+#define QM_INVALID_PQ_ID		0xffff
+
+/* Feature enable */
+#define QM_BYPASS_EN			1
+#define QM_BYTE_CRD_EN			1
+
+/* Other PQ constants */
+#define QM_OTHER_PQS_PER_PF		4
+
+/* WFQ constants: */
+
+/* Upper bound in MB, 10 * burst size of 1ms in 50Gbps */
+#define QM_WFQ_UPPER_BOUND		62500000
+
+/* Bit of VOQ in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
+
+/* Bit of PF in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_PF_SHIFT		5
+
+/* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
-#define QM_WFQ_MAX_INC_VAL			43750000
-/* RL constants */
-#define QM_RL_UPPER_BOUND			62500000
-#define QM_RL_PERIOD				5
+
+/* 0.7 * upper bound (62500000) */
+#define QM_WFQ_MAX_INC_VAL		43750000
+
+/* RL constants: */
+
+/* Upper bound is set to 10 * burst size of 1ms in 50Gbps */
+#define QM_RL_UPPER_BOUND		62500000
+
+/* Period in us */
+#define QM_RL_PERIOD			5
+
+/* Period in 25MHz cycles */
 #define QM_RL_PERIOD_CLK_25M		(25 * QM_RL_PERIOD)
-#define QM_RL_MAX_INC_VAL			43750000
-/* RL increment value - the factor of 1.01 was added after seeing only
- * 99% factor reached in a 25Gbps port with DPDK RFC 2544 test.
- * In this scenario the PF RL was reducing the line rate to 99% although
- * the credit increment value was the correct one and FW calculated
- * correct packet sizes. The reason for the inaccuracy of the RL is
- * unknown at this point.
+
+/* 0.7 * upper bound (62500000) */
+#define QM_RL_MAX_INC_VAL		43750000
+
+/* RL increment value - rate is specified in mbps. The factor of 1.01 was
+ * added after seeing only 99% factor reached in a 25Gbps port with DPDK RFC
+ * 2544 test. In this scenario the PF RL was reducing the line rate to 99%
+ * although the credit increment value was the correct one and FW calculated
+ * correct packet sizes. The reason for the inaccuracy of the RL is unknown at
+ * this point.
  */
-/* rate in mbps */
 #define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? rate : 1000000) * \
-					QM_RL_PERIOD * 101) / (8 * 100)), 1)
+				       QM_RL_PERIOD * 101) / (8 * 100)), 1)
+
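As a worked example of the macro above: for rate = 25000 (25Gbps),
QM_RL_INC_VAL yields (25000 * 5 * 101) / 800 = 15781, i.e. the nominal
increment of 15625 credits per period scaled up by the 1.01 compensation
factor.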
 /* AFullOprtnstcCrdMask constants */
 #define QM_OPPOR_LINE_VOQ_DEF		1
 #define QM_OPPOR_FW_STOP_DEF		0
 #define QM_OPPOR_PQ_EMPTY_DEF		1
-/* Command Queue constants */
-#define PBF_CMDQ_PURE_LB_LINES			150
+
+/* Command Queue constants: */
+
+/* Pure LB CmdQ lines (+spare) */
+#define PBF_CMDQ_PURE_LB_LINES		150
+
 #define PBF_CMDQ_LINES_RT_OFFSET(voq) \
-(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
-voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
-- PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \
+	  PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+
 #define PBF_BTB_GUARANTEED_RT_OFFSET(voq) \
-(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
-(PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \
+	  PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+
 #define QM_VOQ_LINE_CRD(pbf_cmd_lines) \
 ((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
+
 /* BTB: blocks constants (block size = 256B) */
-#define BTB_JUMBO_PKT_BLOCKS 38	/* 256B blocks in 9700B packet */
-/* headroom per-port */
-#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS
+
+/* 256B blocks in 9700B packet */
+#define BTB_JUMBO_PKT_BLOCKS		38
+
+/* Headroom per-port */
+#define BTB_HEADROOM_BLOCKS		BTB_JUMBO_PKT_BLOCKS
 #define BTB_PURE_LB_FACTOR		10
-#define BTB_PURE_LB_RATIO		7 /* factored (hence really 0.7) */
+
+/* Factored (hence really 0.7) */
+#define BTB_PURE_LB_RATIO		7
+
 /* QM stop command constants */
-#define QM_STOP_PQ_MASK_WIDTH			32
-#define QM_STOP_CMD_ADDR				0x2
-#define QM_STOP_CMD_STRUCT_SIZE			2
+#define QM_STOP_PQ_MASK_WIDTH		32
+#define QM_STOP_CMD_ADDR		2
+#define QM_STOP_CMD_STRUCT_SIZE		2
 #define QM_STOP_CMD_PAUSE_MASK_OFFSET	0
 #define QM_STOP_CMD_PAUSE_MASK_SHIFT	0
-#define QM_STOP_CMD_PAUSE_MASK_MASK		0xffffffff /* @DPDK */
-#define QM_STOP_CMD_GROUP_ID_OFFSET		1
-#define QM_STOP_CMD_GROUP_ID_SHIFT		16
-#define QM_STOP_CMD_GROUP_ID_MASK		15
-#define QM_STOP_CMD_PQ_TYPE_OFFSET		1
-#define QM_STOP_CMD_PQ_TYPE_SHIFT		24
-#define QM_STOP_CMD_PQ_TYPE_MASK		1
-#define QM_STOP_CMD_MAX_POLL_COUNT		100
-#define QM_STOP_CMD_POLL_PERIOD_US		500
+#define QM_STOP_CMD_PAUSE_MASK_MASK	0xffffffff /* @DPDK */
+#define QM_STOP_CMD_GROUP_ID_OFFSET	1
+#define QM_STOP_CMD_GROUP_ID_SHIFT	16
+#define QM_STOP_CMD_GROUP_ID_MASK	15
+#define QM_STOP_CMD_PQ_TYPE_OFFSET	1
+#define QM_STOP_CMD_PQ_TYPE_SHIFT	24
+#define QM_STOP_CMD_PQ_TYPE_MASK	1
+#define QM_STOP_CMD_MAX_POLL_COUNT	100
+#define QM_STOP_CMD_POLL_PERIOD_US	500
+
 /* QM command macros */
-#define QM_CMD_STRUCT_SIZE(cmd)	cmd##_STRUCT_SIZE
+#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
-SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+
 /* QM: VOQ macros */
 #define PHYS_VOQ(port, tc, max_phys_tcs_per_port) \
-((port) * (max_phys_tcs_per_port) + (tc))
-#define LB_VOQ(port)				(MAX_PHYS_VOQS + (port))
+	((port) * (max_phys_tcs_per_port) + (tc))
+#define LB_VOQ(port)				 (MAX_PHYS_VOQS + (port))
 #define VOQ(port, tc, max_phys_tcs_per_port) \
-((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : LB_VOQ(port))
+	((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : \
+				 LB_VOQ(port))
+
+
 /******************** INTERNAL IMPLEMENTATION *********************/
+
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		/* enable RLs for all VOQs */
+		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
 			     (1 << MAX_NUM_VOQS) - 1);
-		/* write RL period */
+
+		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET,
 				     QM_RL_UPPER_BOUND);
@@ -133,7 +177,8 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (pf_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -145,12 +190,13 @@ static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
 		     vport_rl_en ? 1 : 0);
 	if (vport_rl_en) {
-		/* write RL period (use timer 0 only) */
+		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
@@ -163,7 +209,8 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET,
 		     vport_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (vport_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -176,7 +223,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
 					 u8 voq, u16 cmdq_lines)
 {
 	u32 qm_line_crd;
+
 	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
+
 	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
 	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
@@ -192,38 +241,43 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u8 tc, voq, port_id, num_tcs_in_port;
-	/* clear PBF lines for all VOQs */
+
+	/* Clear PBF lines for all VOQs */
 	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
 		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			u16 phys_lines, phys_lines_per_tc;
-			/* find #lines to divide between active physical TCs */
-			phys_lines =
-			    port_params[port_id].num_pbf_cmd_lines -
-			    PBF_CMDQ_PURE_LB_LINES;
-			/* find #lines per active physical TC */
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-						tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			}
-			phys_lines_per_tc = phys_lines / num_tcs_in_port;
-			/* init registers per active TC */
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-							max_phys_tcs_per_port);
-					ecore_cmdq_lines_voq_rt_init(p_hwfn,
-							voq, phys_lines_per_tc);
-				}
+		u16 phys_lines, phys_lines_per_tc;
+
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Find #lines to divide between the active physical TCs */
+		phys_lines = port_params[port_id].num_pbf_cmd_lines -
+			     PBF_CMDQ_PURE_LB_LINES;
+
+		/* Find #lines per active physical TC */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+		phys_lines_per_tc = phys_lines / num_tcs_in_port;
+
+		/* Init registers per active TC */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
+							     phys_lines_per_tc);
 			}
-			/* init registers for pure LB TC */
-			ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
-						     PBF_CMDQ_PURE_LB_LINES);
 		}
+
+		/* Init registers for pure LB TC */
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
+					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
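As a worked example of the split above (numbers for illustration only): a
port with num_pbf_cmd_lines = 338 and three active physical TCs has
phys_lines = 338 - 150 = 188, giving 62 lines per TC after integer division,
while the pure LB VOQ keeps its fixed PBF_CMDQ_PURE_LB_LINES = 150.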
 
@@ -253,50 +307,51 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, voq, port_id, num_tcs_in_port;
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
+	u8 tc, voq, port_id, num_tcs_in_port;
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			/* subtract headroom blocks */
-			usable_blocks =
-			    port_params[port_id].num_btb_blocks -
-			    BTB_HEADROOM_BLOCKS;
-/* find blocks per physical TC. use factor to avoid floating arithmethic */
-
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
-				if (((port_params[port_id].active_phys_tcs >>
-								tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			pure_lb_blocks =
-			    (usable_blocks * BTB_PURE_LB_FACTOR) /
-			    (num_tcs_in_port *
-			     BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
-			pure_lb_blocks =
-			    OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
-				       pure_lb_blocks / BTB_PURE_LB_FACTOR);
-			phys_blocks =
-			    (usable_blocks -
-			     pure_lb_blocks) /
-			     num_tcs_in_port;
-			/* init physical TCs */
-			for (tc = 0;
-			     tc < NUM_OF_PHYS_TCS;
-			     tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-						       max_phys_tcs_per_port);
-					STORE_RT_REG(p_hwfn,
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Subtract headroom blocks */
+		usable_blocks = port_params[port_id].num_btb_blocks -
+				BTB_HEADROOM_BLOCKS;
+
+		/* Find blocks per physical TC. Use a factor to avoid
+		 * floating-point arithmetic.
+		 */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+
+		pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) /
+				  (num_tcs_in_port * BTB_PURE_LB_FACTOR +
+				   BTB_PURE_LB_RATIO);
+		pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
+					    pure_lb_blocks /
+					    BTB_PURE_LB_FACTOR);
+		phys_blocks = (usable_blocks - pure_lb_blocks) /
+			      num_tcs_in_port;
+
+		/* Init physical TCs */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				STORE_RT_REG(p_hwfn,
 					     PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					     phys_blocks);
-				}
 			}
-			/* init pure LB TC */
-			STORE_RT_REG(p_hwfn,
-				     PBF_BTB_GUARANTEED_RT_OFFSET(
-					LB_VOQ(port_id)), pure_lb_blocks);
 		}
+
+		/* Init pure LB TC */
+		STORE_RT_REG(p_hwfn,
+			     PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ(port_id)),
+			     pure_lb_blocks);
 	}
 }
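Worked numbers for the fixed-point split above (illustrative only): with
usable_blocks = 1000 and four active TCs, pure_lb_blocks =
(1000 * 10) / (4 * 10 + 7) = 212, then 212 / 10 = 21, which is raised to the
BTB_JUMBO_PKT_BLOCKS floor of 38; phys_blocks = (1000 - 38) / 4 = 240 per TC.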
 
@@ -317,57 +372,69 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
 {
-	u16 i, pq_id, pq_group;
-	u16 num_pqs = num_pf_pqs + num_vf_pqs;
-	u16 first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
-	u16 last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
-	/* a bit per Tx PQ indicating if the PQ is associated with a VF */
+	/* A bit per Tx PQ indicating if the PQ is associated with a VF */
 	u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 };
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
-	u32 vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* set mapping from PQ group to PF */
+	u16 num_pqs, first_pq_group, last_pq_group, i, pq_id, pq_group;
+	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+
+	num_pqs = num_pf_pqs + num_vf_pqs;
+
+	first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
+	last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
+
+	pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
+	vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Set mapping from PQ group to PF */
 	for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++)
 		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group,
 			     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_pf_cids));
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_vf_cids));
-	/* go over all Tx PQs */
+
+	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		struct qm_rf_pq_map tx_pq_map;
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
-		bool is_vf_pq = (i >= num_pf_pqs);
-		/* added to avoid compilation warning */
 		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		bool rl_valid = pq_params[i].rl_valid &&
-				pq_params[i].vport_id < max_qm_global_rls;
-		/* update first Tx PQ of VPORT/TC */
-		u8 vport_id_in_pf = pq_params[i].vport_id - start_vport;
-		u16 first_tx_pq_id =
-		    vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].
-								tc_id];
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq, rl_valid;
+		u8 voq, vport_id_in_pf;
+		u16 first_tx_pq_id;
+
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		is_vf_pq = (i >= num_pf_pqs);
+		rl_valid = pq_params[i].rl_valid && pq_params[i].vport_id <
+			   max_qm_global_rls;
+
+		/* Update first Tx PQ of VPORT/TC */
+		vport_id_in_pf = pq_params[i].vport_id - start_vport;
+		first_tx_pq_id =
+		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			/* create new VP PQ */
+			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
 			    first_tx_pq_id[pq_params[i].tc_id] = pq_id;
 			first_tx_pq_id = pq_id;
-			/* map VP PQ to VOQ and PF */
+
+			/* Map VP PQ to VOQ and PF */
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id,
 				     (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id <<
 							QM_WFQ_VP_PQ_PF_SHIFT));
 		}
-		/* check RL ID */
+
+		/* Check RL ID */
 		if (pq_params[i].rl_valid && pq_params[i].vport_id >=
 							max_qm_global_rls)
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config");
-		/* fill PQ map entry */
+				  "Invalid VPORT ID for rate limiter config\n");
+
+		/* Fill PQ map entry */
 		OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map));
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID,
@@ -378,17 +445,17 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP,
 			  pq_params[i].wrr_group);
-		/* write PQ map entry to CAM */
+
+		/* Write PQ map entry to CAM */
 		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
 			     *((u32 *)&tx_pq_map));
-		/* set base address */
+
+		/* Set base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
 			     mem_addr_4kb);
-		/* check if VF PQ */
+
+		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
-			/* if PQ is associated with a VF, add indication to PQ
-			 * VF mask
-			 */
 			tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |=
 				(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
 			mem_addr_4kb += vport_pq_mem_4kb;
@@ -396,12 +463,12 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 			mem_addr_4kb += pq_mem_4kb;
 		}
 	}
-	/* store Tx PQ VF mask to size select register */
-	for (i = 0; i < num_tx_pq_vf_masks; i++) {
+
+	/* Store Tx PQ VF mask to size select register */
+	for (i = 0; i < num_tx_pq_vf_masks; i++)
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
-	}
 }
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -411,20 +478,26 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				       u32 num_pf_cids,
 				       u32 num_tids, u32 base_mem_addr_4kb)
 {
-	u16 i, pq_id;
-/* a single other PQ grp is used in each PF, where PQ group i is used in PF i */
-
-	u16 pq_group = pf_id;
-	u32 pq_size = num_pf_cids + num_tids;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* map PQ group to PF */
+	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
+	u16 i, pq_id, pq_group;
+
+	/* A single other PQ group is used in each PF, where PQ group i is used
+	 * in PF i.
+	 */
+	pq_group = pf_id;
+	pq_size = num_pf_cids + num_tids;
+	pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Map PQ group to PF */
 	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group,
 		     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET,
 		     QM_PQ_SIZE_256B(pq_size));
-	/* set base address */
+
+	/* Set base address */
 	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE;
 	     i < QM_OTHER_PQS_PER_PF; i++, pq_id++) {
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id,
@@ -432,7 +505,10 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		mem_addr_4kb += pq_mem_4kb;
 	}
 }
-/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF WFQ runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 port_id,
 				u8 pf_id,
@@ -441,76 +517,89 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u16 num_tx_pqs,
 				struct init_qm_pq_params *pq_params)
 {
+	u32 inc_val, crd_reg_offset;
+	u8 voq;
 	u16 i;
-	u32 inc_val;
-	u32 crd_reg_offset =
-	    (pf_id <
-	     MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
-	     QM_REG_WFQPFCRD_MSB_RT_OFFSET) + (pf_id % MAX_NUM_PFS_BB);
+
+	crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
+			  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			 (pf_id % MAX_NUM_PFS_BB);
+
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (i = 0; i < num_tx_pqs; i++) {
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
 	return 0;
 }
-/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF RL runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
 		     (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
+
 	return 0;
 }
-/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on
- * error.
+
+/* Prepare VPORT WFQ runtime init values for the specified VPORTs.
+ * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u8 tc, i;
+	u16 vport_pq_id;
 	u32 inc_val;
-	/* go over all PF VPORTs */
+	u8 tc, i;
+
+	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (vport_params[i].vport_wfq) {
-			inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
-			if (inc_val > QM_WFQ_MAX_INC_VAL) {
-				DP_NOTICE(p_hwfn, true,
-					  "Invalid VPORT WFQ weight config");
-				return -1;
-			}
-			/* each VPORT can have several VPORT PQ IDs for
-			 * different TCs
-			 */
-			for (tc = 0; tc < NUM_OF_TCS; tc++) {
-				u16 vport_pq_id =
-				    vport_params[i].first_tx_pq_id[tc];
-				if (vport_pq_id != QM_INVALID_PQ_ID) {
-					STORE_RT_REG(p_hwfn,
-						  QM_REG_WFQVPCRD_RT_OFFSET +
-						  vport_pq_id,
-						  (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-					STORE_RT_REG(p_hwfn,
-						QM_REG_WFQVPWEIGHT_RT_OFFSET
-						     + vport_pq_id, inc_val);
-				}
+		if (!vport_params[i].vport_wfq)
+			continue;
+
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		if (inc_val > QM_WFQ_MAX_INC_VAL) {
+			DP_NOTICE(p_hwfn, true,
+				  "Invalid VPORT WFQ weight configuration\n");
+			return -1;
+		}
+
+		/* Each VPORT can have several VPORT PQ IDs for various TCs */
+		for (tc = 0; tc < NUM_OF_TCS; tc++) {
+			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
+			if (vport_pq_id != QM_INVALID_PQ_ID) {
+				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+					     vport_pq_id,
+					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+				STORE_RT_REG(p_hwfn,
+					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
+					     vport_pq_id, inc_val);
 			}
 		}
 	}
@@ -526,19 +615,23 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 				  struct init_qm_vport_params *vport_params)
 {
 	u8 i, vport_id;
+	u32 inc_val;
+
 	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
-	/* go over all PF VPORTs */
+
+	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
-		u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
+		inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
 		if (inc_val > QM_RL_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration");
+				  "Invalid VPORT rate-limit configuration\n");
 			return -1;
 		}
+
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
 			     (u32)QM_RL_CRD_REG_SIGN_BIT);
 		STORE_RT_REG(p_hwfn,
@@ -547,6 +640,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
 			     inc_val);
 	}
+
 	return 0;
 }
 
@@ -554,17 +648,20 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
 	u32 reg_val, i;
-	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && reg_val == 0;
+
+	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val;
 	     i++) {
 		OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US);
 		reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY);
 	}
-	/* check if timeout while waiting for SDM command ready */
+
+	/* Check if timeout while waiting for SDM command ready */
 	if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
 			   "Timeout waiting for QM SDM cmd ready signal\n");
 		return false;
 	}
+
 	return true;
 }
 
@@ -574,15 +671,19 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 {
 	if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt))
 		return false;
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDADDR, cmd_addr);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATALSB, cmd_data_lsb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATAMSB, cmd_data_msb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 1);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 0);
+
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
+
 /******************** INTERFACE IMPLEMENTATION *********************/
+
 u32 ecore_qm_pf_mem_size(u8 pf_id,
 			 u32 num_pf_cids,
 			 u32 num_vf_cids,
@@ -603,32 +704,42 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    struct init_qm_port_params
 			    port_params[MAX_NUM_PORTS])
 {
-	/* init AFullOprtnstcCrdMask */
-	u32 mask =
-	    (QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-	    (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-	    (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-	    (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-	    (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-	    (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-	    (QM_OPPOR_FW_STOP_DEF << QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-	    (QM_OPPOR_PQ_EMPTY_DEF <<
-	     QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	u32 mask;
+
+	/* Init AFullOprtnstcCrdMask */
+	mask = (QM_OPPOR_LINE_VOQ_DEF <<
+		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
+		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
+		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
+		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
+		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
+		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
+		(QM_OPPOR_FW_STOP_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
+		(QM_OPPOR_PQ_EMPTY_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
-	/* enable/disable PF RL */
+
+	/* Enable/disable PF RL */
 	ecore_enable_pf_rl(p_hwfn, pf_rl_en);
-	/* enable/disable PF WFQ */
+
+	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
-	/* enable/disable VPORT RL */
+
+	/* Enable/disable VPORT RL */
 	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
-	/* enable/disable VPORT WFQ */
+
+	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
-	/* init PBF CMDQ line credit */
+
+	/* Init PBF CMDQ line credit */
 	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
-	/* init BTB blocks in PBF */
+
+	/* Init BTB blocks in PBF */
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
+
 	return 0;
 }
 
@@ -651,66 +762,86 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
+	u32 other_mem_size_4kb;
 	u8 tc, i;
-	u32 other_mem_size_4kb =
-	    QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF;
-	/* clear first Tx PQ ID array for each VPORT */
+
+	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
+			     QM_OTHER_PQS_PER_PF;
+
+	/* Clear first Tx PQ ID array for each VPORT */
 	for (i = 0; i < num_vports; i++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
-	/* map Other PQs (if any) */
+
+	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
 	ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids,
 				   num_tids, 0);
 #endif
-	/* map Tx PQs */
+
+	/* Map Tx PQs */
 	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id,
 				max_phys_tcs_per_port, is_first_pf, num_pf_cids,
 				num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
 				start_vport, other_mem_size_4kb, pq_params,
 				vport_params);
-	/* init PF WFQ */
+
+	/* Init PF WFQ */
 	if (pf_wfq)
 		if (ecore_pf_wfq_rt_init
 		    (p_hwfn, port_id, pf_id, pf_wfq, max_phys_tcs_per_port,
-		     num_pf_pqs + num_vf_pqs, pq_params) != 0)
+		     num_pf_pqs + num_vf_pqs, pq_params))
 			return -1;
-	/* init PF RL */
-	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl) != 0)
+
+	/* Init PF RL */
+	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
-	/* set VPORT WFQ */
-	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params) != 0)
+
+	/* Set VPORT WFQ */
+	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
-	/* set VPORT RL */
+
+	/* Set VPORT RL */
 	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, vport_params) != 0)
+	    (p_hwfn, start_vport, num_vports, vport_params))
 		return -1;
+
 	return 0;
 }
 
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq)
 {
-	u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	u32 inc_val;
+
+	inc_val = QM_WFQ_INC_VAL(pf_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+
 	return 0;
 }
 
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -718,20 +849,25 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
 {
+	u16 vport_pq_id;
+	u32 inc_val;
 	u8 tc;
-	u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
+
+	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT WFQ weight configuration");
+			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		u16 vport_pq_id = first_tx_pq_id[tc];
+		vport_pq_id = first_tx_pq_id[tc];
 		if (vport_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
 				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
 		}
 	}
+
 	return 0;
 }
 
@@ -739,20 +875,24 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl)
 {
 	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+
 	if (vport_id >= max_qm_global_rls) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
+
 	inc_val = QM_RL_INC_VAL(vport_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration");
+			  "Invalid VPORT rate-limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -762,15 +902,20 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
 {
 	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
-	u32 pq_mask = 0, last_pq = start_pq + num_pqs - 1, pq_id;
-	/* set command's PQ type */
+	u32 pq_mask = 0, last_pq, pq_id;
+
+	last_pq = start_pq + num_pqs - 1;
+
+	/* Set command's PQ type */
 	QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PQ_TYPE, is_tx_pq ? 0 : 1);
-	/* go over requested PQs */
+
+	/* Go over requested PQs */
 	for (pq_id = start_pq; pq_id <= last_pq; pq_id++) {
-		/* set PQ bit in mask (stop command only) */
+		/* Set PQ bit in mask (stop command only) */
 		if (!is_release_cmd)
 			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));
-		/* if last PQ or end of PQ mask, write command */
+
+		/* If last PQ or end of PQ mask, write command */
 		if ((pq_id == last_pq) ||
 		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
 		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
@@ -785,68 +930,92 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			pq_mask = 0;
 		}
 	}
+
 	return true;
 }
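As an example of the 32-bit chunking above: stopping PQs 30..33 issues the
command twice, once when pq_id reaches the end of the first mask window
(bits 30-31 set for PQs 30-31) and once at last_pq (bits 0-1 set for
PQs 32-33).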
 
+
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
 #define NIG_LB_ETS_CLIENT_OFFSET	1
 #define NIG_ETS_MIN_WFQ_BYTES		1600
+
 /* NIG: ETS constants */
 #define NIG_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
 /* NIG: RL constants */
-#define NIG_RL_BASE_TYPE			1	/* byte base type */
-#define NIG_RL_PERIOD				1	/* in us */
+
+/* Byte base type value */
+#define NIG_RL_BASE_TYPE		1
+
+/* Period in us */
+#define NIG_RL_PERIOD			1
+
+/* Period in 25MHz cycles */
 #define NIG_RL_PERIOD_CLK_25M		(25 * NIG_RL_PERIOD)
+
+/* Rate in mbps */
 #define NIG_RL_INC_VAL(rate)		(((rate) * NIG_RL_PERIOD) / 8)
+
 #define NIG_RL_MAX_VAL(inc_val, mtu) \
-(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+
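In this byte-based scheme, NIG_RL_INC_VAL(25000), for example, works out to
(25000 * 1) / 8 = 3125 bytes credited per 1us period, which is exactly
25Gbps.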
 /* NIG: packet priority configuration constants */
-#define NIG_PRIORITY_MAP_TC_BITS 4
+#define NIG_PRIORITY_MAP_TC_BITS	4
+
+
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct init_ets_req *req, bool is_lb)
 {
-	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u8 num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	u8 tc_client_offset =
-	    is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_weight_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-	u32 tc_bound_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
+	u32 tc_bound_base_addr, tc_bound_addr_diff;
+	u8 sp_tc_map = 0, wfq_tc_map = 0;
+	u8 tc, num_tc, tc_client_offset;
+
+	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
+				   NIG_TX_ETS_CLIENT_OFFSET;
+	min_weight = 0xffffffff;
+	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < num_tc; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
-	/* write SP map */
+
+	/* Write SP map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
 		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
 		 (sp_tc_map << tc_client_offset));
-	/* write WFQ map */
+
+	/* Write WFQ map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
 		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
@@ -854,22 +1023,23 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	/* write WFQ weights */
 	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_weight_base_addr +
-				 tc_weight_addr_diff * tc_client_offset,
-				 byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_bound_base_addr +
-				 tc_bound_addr_diff * tc_client_offset,
-				 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
+			 tc_weight_addr_diff * tc_client_offset, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
+			 tc_bound_addr_diff * tc_client_offset,
+			 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
 	}
 }
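
The WFQ translation in the loop above normalizes every TC weight against the
smallest one, so the minimum-weight TC always receives exactly
NIG_ETS_MIN_WFQ_BYTES of credit and the rest scale linearly. A worked example
with assumed weights {1, 3} and a 1500-byte MTU (illustrative inputs, not
from the patch):

	byte_weight = 1600 * weight / min_weight  ->  {1600, 4800}
	upper_bound = 2 * max(byte_weight, mtu)   ->  {3200, 9600}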
 
@@ -877,16 +1047,18 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  struct init_nig_lb_rl_req *req)
 {
-	u8 tc;
 	u32 ctrl, inc_val, reg_offset;
-	/* disable global MAC+LB RL */
+	u8 tc;
+
+	/* Disable global MAC+LB RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global MAC+LB RL */
+
+	/* Configure and enable global MAC+LB RL */
 	if (req->lb_mac_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
@@ -894,20 +1066,23 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 <<
 		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 	}
-	/* disable global LB-only RL */
+
+	/* Disable global LB-only RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global LB-only RL */
+
+	/* Configure and enable global LB-only RL */
 	if (req->lb_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_rate);
@@ -915,41 +1090,41 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 	}
-	/* per-TC RLs */
+
+	/* Per-TC RLs */
 	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
 	     tc++, reg_offset += 4) {
-		/* disable TC RL */
+		/* Disable TC RL */
 		ctrl =
 		    NIG_RL_BASE_TYPE <<
 		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-		/* configure and enable TC RL */
-		if (req->tc_rate[tc]) {
-			/* configure */
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
-				 reg_offset, NIG_RL_PERIOD_CLK_25M);
-			inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-				 reg_offset, inc_val);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
-				 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-			/* enable */
-			ctrl |=
-			    1 <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset,
-				 ctrl);
-		}
+
+		/* Configure and enable TC RL */
+		if (!req->tc_rate[tc])
+			continue;
+
+		/* Configure */
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
+			 reg_offset, NIG_RL_PERIOD_CLK_25M);
+		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
+			 reg_offset, inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
+			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
+
+		/* Enable */
+		ctrl |= 1 <<
+			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
+			 reg_offset, ctrl);
 	}
 }
 
@@ -957,20 +1132,23 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct init_nig_pri_tc_map_req *req)
 {
-	u8 pri, tc;
-	u32 pri_tc_mask = 0;
 	u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
+	u32 pri_tc_mask = 0;
+	u8 pri, tc;
+
 	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
-		if (req->pri[pri].valid) {
-			pri_tc_mask |=
-			    (req->pri[pri].
-			     tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS));
-			tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
-		}
+		if (!req->pri[pri].valid)
+			continue;
+
+		pri_tc_mask |= (req->pri[pri].tc_id <<
+				(pri * NIG_PRIORITY_MAP_TC_BITS));
+		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
 	}
-	/* write priority -> TC mask */
+
+	/* Write priority -> TC mask */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-	/* write TC -> priority mask */
+
+	/* Write TC -> priority mask */
 	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
 			 tc_pri_mask[tc]);
@@ -979,110 +1157,133 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+
 /* PRS: ETS configuration constants */
-#define PRS_ETS_MIN_WFQ_BYTES			1600
+#define PRS_ETS_MIN_WFQ_BYTES		1600
 #define PRS_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
+
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_ets_req *req)
 {
+	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
 	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
+	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
+			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
+	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
+			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
+
 	/* write SP map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
+
 	/* write WFQ map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
 		 wfq_tc_map);
+
 	/* write WFQ weights */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 +
-				 tc * tc_weight_addr_diff, byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-				 tc * tc_bound_addr_diff,
-				 PRS_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
+			 tc_weight_addr_diff, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
+			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
+								   req->mtu));
 	}
 }
 
+
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
 #define BRB_TOTAL_RAM_BLOCKS_K2	5632
-#define BRB_BLOCK_SIZE			128	/* in bytes */
+#define BRB_BLOCK_SIZE		128
 #define BRB_MIN_BLOCKS_PER_TC	9
-#define BRB_HYST_BYTES			10240
-#define BRB_HYST_BLOCKS			(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-/*
- * temporary big RAM allocation - should be updated
- */
+#define BRB_HYST_BYTES		10240
+#define BRB_HYST_BLOCKS		(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
+
+/* Temporary big RAM allocation - should be updated */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
 {
-	u8 port, active_ports = 0;
+	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
 	u32 active_port_blocks, reg_offset = 0;
-	u32 tc_headroom_blocks =
-	    (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE);
-	u32 min_pkt_size_blocks =
-	    (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE);
-	u32 total_blocks =
-	    ECORE_IS_K2(p_hwfn->
-			p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
-	    BRB_TOTAL_RAM_BLOCKS_BB;
-	/* find number of active ports */
+	u8 port, active_ports = 0;
+
+	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
+					       BRB_BLOCK_SIZE);
+	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
+						BRB_BLOCK_SIZE);
+	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
+						    BRB_TOTAL_RAM_BLOCKS_BB;
+
+	/* Find number of active ports */
 	for (port = 0; port < MAX_NUM_PORTS; port++)
 		if (req->num_active_tcs[port])
 			active_ports++;
+
 	active_port_blocks = (u32)(total_blocks / active_ports);
+
 	for (port = 0; port < req->max_ports_per_engine; port++) {
-		/* calculate per-port sizes */
-		u32 tc_guaranteed_blocks =
-		    (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE);
-		u32 port_blocks =
-		    req->num_active_tcs[port] ? active_port_blocks : 0;
-		u32 port_guaranteed_blocks =
-		    req->num_active_tcs[port] * tc_guaranteed_blocks;
-		u32 port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		u32 full_xoff_th =
-		    req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC;
-		u32 full_xon_th = full_xoff_th + min_pkt_size_blocks;
-		u32 pause_xoff_th = tc_headroom_blocks;
-		u32 pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+		u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
+		u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
+		u32 tc_guaranteed_blocks;
 		u8 tc;
-		/* init total size per port */
+
+		/* Calculate per-port sizes */
+		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
+							 BRB_BLOCK_SIZE);
+		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
+							  0;
+		port_guaranteed_blocks = req->num_active_tcs[port] *
+					 tc_guaranteed_blocks;
+		port_shared_blocks = port_blocks - port_guaranteed_blocks;
+		full_xoff_th = req->num_active_tcs[port] *
+			       BRB_MIN_BLOCKS_PER_TC;
+		full_xon_th = full_xoff_th + min_pkt_size_blocks;
+		pause_xoff_th = tc_headroom_blocks;
+		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+
+		/* Init total size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
 			 port_blocks);
-		/* init shared size per port */
+
+		/* Init shared size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
 			 port_shared_blocks);
+
 		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
-			/* clear init values for non-active TCs */
+			/* Clear init values for non-active TCs */
 			if (tc == req->num_active_tcs[port]) {
 				tc_guaranteed_blocks = 0;
 				full_xoff_th = 0;
@@ -1090,15 +1291,18 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 				pause_xoff_th = 0;
 				pause_xon_th = 0;
 			}
-			/* init guaranteed size per TC */
+
+			/* Init guaranteed size per TC */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
 				 tc_guaranteed_blocks);
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
 				 BRB_HYST_BLOCKS);
-/* init pause/full thresholds per physical TC - for loopback traffic */
 
+			/* Init pause/full thresholds per physical TC - for
+			 * loopback traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1111,7 +1315,10 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
-/* init pause/full thresholds per physical TC - for main traffic */
+
+			/* Init pause/full thresholds per physical TC - for
+			 * main traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1128,23 +1335,25 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
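
To make the BRB partitioning concrete, here is the per-port block math as a
standalone sketch. The K2 block count and block size come from the defines
above; the two-port, four-TC, 8 KB-guaranteed inputs are assumed example
values:

	#include <stdint.h>
	#include <stdio.h>

	#define BRB_BLOCK_SIZE	128
	#define DIV_ROUND_UP(a, b) (((a) + (b) - 1) / (b))

	int main(void)
	{
		uint32_t total_blocks = 5632;		/* BRB_TOTAL_RAM_BLOCKS_K2 */
		uint32_t active_ports = 2, num_tcs = 4;	/* assumed inputs */
		uint32_t tc_guaranteed = DIV_ROUND_UP(8192, BRB_BLOCK_SIZE);
		uint32_t port_blocks = total_blocks / active_ports;
		uint32_t port_guaranteed = num_tcs * tc_guaranteed;
		uint32_t port_shared = port_blocks - port_guaranteed;

		/* prints: per port: 2816 total, 256 guaranteed, 2560 shared */
		printf("per port: %u total, %u guaranteed, %u shared\n",
		       port_blocks, port_guaranteed, port_shared);
		return 0;
	}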
 
-/*In MF should be called once per engine to set EtherType of OuterTag*/
+/* In MF should be called once per engine to set EtherType of OuterTag */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update NIG register */
+
+	/* Update NIG register */
 	STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update PBF register */
+
+	/* Update PBF register */
 	STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
 }
 
-/*In MF should be called once per port to set EtherType of OuterTag*/
+/* In MF should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update DORQ register */
+	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
@@ -1154,11 +1363,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
 }
 
@@ -1166,23 +1377,26 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt, bool vxlan_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 			   PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT,
 			   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 				   NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT,
 				   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ register */
+
+	/* Update DORQ register */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN,
 		 vxlan_enable ? 1 : 0);
 }
@@ -1192,7 +1406,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  bool eth_gre_enable, bool ip_gre_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1202,10 +1417,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1214,7 +1430,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ registers */
+
+	/* Update DORQ registers */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN,
 		 eth_gre_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN,
@@ -1224,11 +1441,13 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port);
 }
 
@@ -1237,7 +1456,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable, bool ip_geneve_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT,
@@ -1247,37 +1467,44 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		   ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
 		 eth_geneve_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE,
 		 ip_geneve_enable ? 1 : 0);
-	/* EDPM with geneve tunnel not supported in BB_B0 */
+
+	/* EDPM with geneve tunnel not supported in BB */
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev))
 		return;
-	/* update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN,
+
+	/* Update DORQ registers */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
 		 ip_geneve_enable ? 1 : 0);
 }
 
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
-#define PARSER_ETH_CONN_CM_HDR (0x0)
+#define PARSER_ETH_CONN_CM_HDR 0
 #define CAM_LINE_SIZE sizeof(u32)
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
-	/* set RFS event ID to be awakened i Tstorm By Prs */
-	u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+	u32 rfs_cm_hdr_event_id;
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
+	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
 	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
@@ -1298,39 +1525,48 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct gft_ram_line ramLine;
 	u32 *ramLinePointer = (u32 *)&ramLine;
 	int i;
+
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - ipv4 or ipv6");
+
 	if (!tcp && !udp)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - udp or tcp");
-	/* set RFS event ID to be awakened i Tstorm By Prs */
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
 	rfs_cm_hdr_event_id |=  T_ETH_PACKET_MATCH_RFS_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |=  PARSER_ETH_CONN_CM_HDR <<
 	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+
 	/* Configure Registers for RFS mode */
-/* enable gft search */
+
+	/* Enable GFT search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0); /* do not load
 							     * context, only cid,
 							     * in PRS on match
 							     */
 	camLine.cam_line_mapped.camline = 0;
-	/* cam line is now valid!! */
+
+	/* Cam line is now valid!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_VALID, 1);
-	/* filters are per PF!! */
+
+	/* Filters are per PF!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1);
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
+
 	if (!(tcp && udp)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1);
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
 		if (tcp)
 			SET_FIELD(camLine.cam_line_mapped.camline,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
@@ -1340,6 +1576,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
 				  GFT_PROFILE_UDP_PROTOCOL);
 	}
+
 	if (!(ipv4 && ipv6)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
 			  GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1);
@@ -1352,44 +1589,53 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_IP_VERSION,
 				  GFT_PROFILE_IPV6);
 	}
-	/* write characteristics to cam */
+
+	/* Write characteristics to cam */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
 	    camLine.cam_line_mapped.camline);
 	camLine.cam_line_mapped.camline =
 	    ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
-	/* write line to RAM - compare to filter 4 tuple */
-	ramLine.low32bits = 0;
-	ramLine.high32bits = 0;
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1);
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1);
-	/* each iteration write to reg */
+
+	/* Write line to RAM - compare to filter 4 tuple */
+	ramLine.lo = 0;
+	ramLine.hi = 0;
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_DST_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_SRC_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_SRC_PORT, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_DST_PORT, 1);
+
+	/* Each iteration write to reg */
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * pf_id +
 			 i * REG_SIZE, *(ramLinePointer + i));
-	/* set default profile so that no filter match will happen */
-	ramLine.low32bits = 0xffff;
-	ramLine.high32bits = 0xffff;
+
+	/* Set default profile so that no filter match will happen */
+	ramLine.lo = 0xffff;
+	ramLine.hi = 0xffff;
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH +
 			 i * REG_SIZE, *(ramLinePointer + i));
 }
 
-/* Configure VF zone size mode*/
+/* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
 				    bool runtime_init)
 {
 	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
 	u32 msdm_vf_offset_mask;
+
 	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
 		msdm_vf_size_log += 1;
 	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
 		msdm_vf_size_log += 2;
+
 	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
+
 	if (runtime_init) {
 		STORE_RT_REG(p_hwfn,
 			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
@@ -1405,12 +1651,13 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* get mstorm statistics for offset by VF zone size mode*/
+/* Get mstorm statistics for offset by VF zone size mode */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id,
 				       u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
+
 	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
 	    (stat_cnt_id > MAX_NUM_PFS)) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
@@ -1420,16 +1667,18 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 			    (stat_cnt_id - MAX_NUM_PFS);
 	}
+
 	return offset;
 }
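
The correction above reflects that every VF zone preceding the counter has
grown by one extra default-size zone in double mode, or three in quad mode:

	offset += k * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG)
		    * (stat_cnt_id - MAX_NUM_PFS)
	k = 1 (double mode), k = 3 (quad mode)

For instance, assuming a default zone-size log of 7 (an illustrative value;
the real constant is defined elsewhere), a stat_cnt_id two past MAX_NUM_PFS
gains 1 * 128 * 2 = 256 bytes in double mode and 3 * 128 * 2 = 768 bytes in
quad mode.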
 
-/* get mstorm VF producer offset by VF zone size mode*/
+/* Get mstorm VF producer offset by VF zone size mode */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 					 u8 vf_id,
 					 u8 vf_queue_id,
 					 u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
+
 	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
@@ -1438,5 +1687,166 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 				  vf_id;
 	}
+
 	return offset;
 }
+
+/* Calculate CRC8 of first 4 bytes in buf */
+static u8 ecore_calc_crc8(const u8 *buf)
+{
+	u32 i, j, crc = 0xff << 8;
+
+	/* CRC-8 polynomial */
+	#define POLY 0x1070
+
+	for (j = 0; j < 4; j++, buf++) {
+		crc ^= (*buf << 8);
+		for (i = 0; i < 8; i++) {
+			if (crc & 0x8000)
+				crc ^= (POLY << 3);
+
+			crc <<= 1;
+		}
+	}
+
+	return (u8)(crc >> 8);
+}
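
For anyone wanting to sanity-check this CRC outside the driver, here is the
same algorithm as a standalone program (init value 0xff, polynomial 0x107,
i.e. x^8 + x^2 + x + 1; the input bytes are an arbitrary example, not data
from the patch):

	#include <stdint.h>
	#include <stdio.h>

	static uint8_t calc_crc8(const uint8_t *buf)
	{
		uint32_t i, j, crc = 0xff << 8;

		for (j = 0; j < 4; j++, buf++) {
			crc ^= (uint32_t)*buf << 8;
			for (i = 0; i < 8; i++) {
				if (crc & 0x8000)
					crc ^= 0x1070 << 3;	/* POLY << 3 */
				crc <<= 1;
			}
		}
		return (uint8_t)(crc >> 8);
	}

	int main(void)
	{
		const uint8_t msg[4] = { 0x12, 0x34, 0x56, 0x78 };	/* example */

		printf("crc8 = 0x%02x\n", calc_crc8(msg));
		return 0;
	}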
+
+/* Calculate and return CDU validation byte per connection type / region /
+ * cid.
+ */
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region,
+					 u32 cid)
+{
+	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
+	u8 crc, validation_byte = 0;
+	u32 validation_string = 0;
+	const u8 *data_to_crc_rev;
+	u8 data_to_crc[4];
+
+	data_to_crc_rev = (const u8 *)&validation_string;
+
+	/*
+	 * The CRC is calculated on the String-to-compress:
+	 * [31:8]  = {CID[31:20],CID[11:0]}
+	 * [7:4]   = Region
+	 * [3:0]   = Type
+	 */
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+		validation_string |= ((region & 0xF) << 4);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+		validation_string |= (conn_type & 0xF);
+
+	/* Convert to big-endian (ntoh()) */
+	data_to_crc[0] = data_to_crc_rev[3];
+	data_to_crc[1] = data_to_crc_rev[2];
+	data_to_crc[2] = data_to_crc_rev[1];
+	data_to_crc[3] = data_to_crc_rev[0];
+
+	crc = ecore_calc_crc8(data_to_crc);
+
+	validation_byte |= ((validation_cfg >>
+			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
+
+	if ((validation_cfg >>
+	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+	else
+		validation_byte |= crc & 0x7F;
+
+	return validation_byte;
+}
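
Summarizing the byte assembled above, as read directly from the code
(CDU_VALIDATION_DEFAULT_CFG itself is defined elsewhere and not shown in
this hunk):

	bit 7      : "use active" configuration bit
	bits [6:3] : connection type, low 4 bits  \ when type-validation
	bits [2:0] : CRC-8, low 3 bits            / is configured
	bits [6:0] : CRC-8, low 7 bits            - otherwise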
+
+/* Calculate and set validation bytes for session context */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+}
+
+/* Calculate and set validation bytes for task context */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
+{
+	u8 *p_ctx, *region1_val_ptr;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+}
+
+/* Memset session context to 0 while preserving validation bytes */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+	u8 x_val, t_val, u_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	x_val = *x_val_ptr;
+	t_val = *t_val_ptr;
+	u_val = *u_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = x_val;
+	*t_val_ptr = t_val;
+	*u_val_ptr = u_val;
+}
+
+/* Memset task context to 0 while preserving validation bytes */
+void ecore_memset_task_ctx(void *p_ctx_mem, const u32 ctx_size,
+			   const u8 ctx_type)
+{
+	u8 *p_ctx, *region1_val_ptr;
+	u8 region1_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	region1_val = *region1_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = region1_val;
+}
+
+/* Enable and configure context validation */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	u32 ctx_validation;
+
+	/* Enable validation for connection region 3 - bits [31:24] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
+
+	/* Enable validation for connection region 5 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
+
+	/* Enable validation for task region 1 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
+}
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 9df0e7d..2d1ab7c 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -8,20 +8,22 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* forward declarations */
+/* Forward declarations */
+
 struct init_qm_pq_params;
+
 /**
- * @brief ecore_qm_pf_mem_size - prepare QM ILT sizes
+ * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes
  *
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
- * @param pf_id			- physical function ID
- * @param num_pf_cids	- number of connections used by this PF
- * @param num_vf_cids	- number of connections used by VFs of this PF
- * @param num_tids		- number of tasks used by this PF
- * @param num_pf_pqs	- number of PQs used by this PF
- * @param num_vf_pqs	- number of PQs used by VFs of this PF
+ * @param pf_id -	physical function ID
+ * @param num_pf_cids - number of connections used by this PF
+ * @param num_vf_cids -	number of connections used by VFs of this PF
+ * @param num_tids -	number of tasks used by this PF
+ * @param num_pf_pqs -	number of PQs used by this PF
+ * @param num_vf_pqs -	number of PQs used by VFs of this PF
  *
  * @return The required host memory size in 4KB units.
  */
@@ -31,6 +33,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
 						 u16 num_vf_pqs);
+
 /**
  * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
  *                                  phase
@@ -38,10 +41,10 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
  * @param p_hwfn
  * @param max_ports_per_engine	- max number of ports per engine in HW
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param pf_rl_en				- enable per-PF rate limiters
- * @param pf_wfq_en				- enable per-PF WFQ
- * @param vport_rl_en			- enable per-VPORT rate limiters
- * @param vport_wfq_en			- enable per-VPORT WFQ
+ * @param pf_rl_en		- enable per-PF rate limiters
+ * @param pf_wfq_en		- enable per-PF WFQ
+ * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param vport_wfq_en		- enable per-VPORT WFQ
  * @param port_params - array of size MAX_NUM_PORTS with params for each port
  *
  * @return 0 on success, -1 on error.
@@ -54,22 +57,24 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			 bool vport_rl_en,
 			 bool vport_wfq_en,
 			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
  *
  * @param p_hwfn
  * @param p_ptt			- ptt window used for writing the registers
- * @param port_id				- port ID
- * @param pf_id					- PF ID
+ * @param port_id		- port ID
+ * @param pf_id			- PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param is_first_pf			- 1 = first PF in engine, 0 = othwerwise
- * @param num_pf_cids			- number of connections used by this PF
+ * @param is_first_pf		- 1 = first PF in engine, 0 = otherwise
+ * @param num_pf_cids		- number of connections used by this PF
  * @param num_vf_cids		- number of connections used by VFs of this PF
- * @param num_tids			- number of tasks used by this PF
- * @param start_pq			- first Tx PQ ID associated with this PF
- * @param num_pf_pqs	- number of Tx PQs associated with this PF (non-VF)
- * @param num_vf_pqs			- number of Tx PQs associated with a VF
- * @param start_vport			- first VPORT ID associated with this PF
+ * @param num_tids		- number of tasks used by this PF
+ * @param start_pq		- first Tx PQ ID associated with this PF
+ * @param num_pf_pqs		- number of Tx PQs associated with this PF
+ *                                (non-VF)
+ * @param num_vf_pqs		- number of Tx PQs associated with a VF
+ * @param start_vport		- first VPORT ID associated with this PF
  * @param num_vports - number of VPORTs associated with this PF
  * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must
  *		   be 0. otherwise, the weight must be non-zero.
@@ -100,6 +105,7 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 				u32 pf_rl,
 				struct init_qm_pq_params *pq_params,
 				struct init_qm_vport_params *vport_params);
+
 /**
  * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
  *
@@ -114,11 +120,12 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u8 pf_id,
 					  u16 pf_wfq);
+
 /**
- * @brief ecore_init_pf_rl  Initializes the rate limit of the specified PF
+ * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF
  *
  * @param p_hwfn
- * @param p_ptt	- ptt window used for writing the registers
+ * @param p_ptt - ptt window used for writing the registers
  * @param pf_id	- PF ID
  * @param pf_rl	- rate limit in Mb/sec units
  *
@@ -128,6 +135,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 u8 pf_id,
 					 u32 pf_rl);
+
 /**
  * @brief ecore_init_vport_wfq  Initializes the WFQ weight of specified VPORT
  *
@@ -144,10 +152,12 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
 						 u16 vport_wfq);
+
 /**
- * @brief ecore_init_vport_rl  Initializes the rate limit of the specified VPORT
+ * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
+ * VPORT.
  *
- * @param p_hwfn
+ * @param p_hwfn	- HW device data
  * @param p_ptt		- ptt window used for writing the registers
  * @param vport_id	- VPORT ID
  * @param vport_rl	- rate limit in Mb/sec units
@@ -158,6 +168,7 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u8 vport_id,
 						u32 vport_rl);
+
 /**
  * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
  *
@@ -178,6 +189,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 							u16 start_pq,
 							u16 num_pqs);
 #ifndef UNUSED_HSI_FUNC
+
 /**
  * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
  *
@@ -193,6 +205,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req,
 						bool is_lb);
+
 /**
  * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
  *
@@ -205,6 +218,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
 				  struct init_nig_lb_rl_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
  *
@@ -216,6 +230,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt,
 					   struct init_nig_pri_tc_map_req *req);
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
@@ -229,6 +244,7 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
@@ -242,6 +258,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_brb_ram_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
@@ -250,22 +267,24 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
  *                                             if engine
  *  is in BD mode.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
+
 /**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
  *                                           input ethType. Should be called
  *                                           once per port.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
  *                                    port
@@ -276,15 +295,17 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       u16 dest_port);
+
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param vxlan_enable - vxlan enable flag.
+ * @param p_ptt		- ptt window used for writing the registers.
+ * @param vxlan_enable	- vxlan enable flag.
  */
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool vxlan_enable);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -296,6 +317,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  bool eth_gre_enable,
 			  bool ip_gre_enable);
+
 /**
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
@@ -306,6 +328,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
 				u16 dest_port);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -318,6 +341,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable,
 			     bool ip_geneve_enable);
 #ifndef UNUSED_HSI_FUNC
+
 /**
 * @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
 *
@@ -325,16 +349,16 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
+
 /**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
-*
-* @param p_ptt             - ptt window used for writing the registers.
-* @param pf_id - pf on which to enable RFS.
-* @param tcp -  set profile tcp packets.
-* @param udp -  set profile udp  packet.
-* @param ipv4 - set profile ipv4 packet.
-* @param ipv6 - set profile ipv6 packet.
+* @param p_ptt	- ptt window used for writing the registers.
+* @param pf_id	- pf on which to enable RFS.
+* @param tcp	- set profile tcp packets.
+* @param udp	- set profile udp packets.
+* @param ipv4	- set profile ipv4 packets.
+* @param ipv6	- set profile ipv6 packets.
 */
 void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt,
@@ -344,6 +368,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	bool ipv4,
 	bool ipv6);
 #endif /* UNUSED_HSI_FUNC */
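
A hypothetical call site, to make the parameter semantics concrete. The
argument order follows the parameter list above, and p_hwfn/p_ptt are
assumed to have been acquired already (sketch only, not from the patch):

	/* Enable RFS profiles for TCP and UDP over IPv4 only, on PF 0 */
	ecore_set_rfs_mode_enable(p_hwfn, p_ptt, 0 /* pf_id */,
				  true /* tcp */, true /* udp */,
				  true /* ipv4 */, false /* ipv6 */);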
+
 /**
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
@@ -357,18 +382,20 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
+
 /**
-* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF
-*                                             zone size mode.
+ * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
+ * VF zone size mode.
 *
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id, u16 vf_zone_size_mode);
+
 /**
-* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
-*                                               size mode.
+ * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
+ * size mode.
 *
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
@@ -376,4 +403,58 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
 					 vf_queue_id, u16 vf_zone_size_mode);
+/**
+ * @brief ecore_enable_context_validation - Enable and configure context
+ *                                          validation.
+ *
+ * @param p_ptt - ptt window used for writing the registers.
+ */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt);
+/**
+ * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
+ *                                            session context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param cid                 -  context cid.
+ */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid);
+/**
+ * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
+ *                                         context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param tid                 -  context tid.
+ */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid);
+/**
+ * @brief ecore_memset_session_ctx - Memset session context to 0 while
+ *                                   preserving validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size,
+			      u8 ctx_type);
+/**
+ * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
+ *                                validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size,
+			   u8 ctx_type);
 #endif
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index aad9012..b4bfe89 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -185,5 +185,13 @@
 #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
 	((rdma_stat_counter_id) * IRO[46].m1))
 #define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+/* Xstorm iWARP rxmit stats */
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \
+	((pf_id) * IRO[47].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size)
+/* Tstorm RoCE Event Statistics */
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \
+	((roce_pf_id) * IRO[48].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size)
 
 #endif /* __IRO_H__ */
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 4ff7e95..6764bfa 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -9,13 +9,13 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[47] = {
+static const struct iro iro_arr[49] = {
 /* YSTORM_FLOW_CONTROL_MODE_OFFSET */
 	{      0x0,      0x0,      0x0,      0x0,      0x8},
 /* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb0,     0x78,      0x0,      0x0,     0x78},
+	{   0x4cb0,     0x80,      0x0,      0x0,     0x80},
 /* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6318,     0x20,      0x0,      0x0,     0x20},
+	{   0x6518,     0x20,      0x0,      0x0,     0x20},
 /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
 	{    0xb00,      0x8,      0x0,      0x0,      0x4},
 /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
@@ -41,7 +41,7 @@
 /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
 	{    0xa28,      0x8,      0x0,      0x0,      0x8},
 /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x60f8,     0x10,      0x0,      0x0,     0x10},
+	{   0x61f8,     0x10,      0x0,      0x0,     0x10},
 /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
 	{   0xb820,     0x30,      0x0,      0x0,     0x30},
 /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
@@ -53,7 +53,7 @@
 /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
 	{   0x53a0,     0x80,      0x4,      0x0,      0x4},
 /* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc8f0,      0x0,      0x0,      0x0,      0x4},
+	{   0xc7c8,      0x0,      0x0,      0x0,      0x4},
 /* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
 	{   0x4ba0,     0x80,      0x0,      0x0,     0x20},
 /* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
@@ -63,13 +63,13 @@
 /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
 	{   0x2b48,     0x80,      0x0,      0x0,     0x38},
 /* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xf188,     0x78,      0x0,      0x0,     0x78},
+	{   0xf1b0,     0x78,      0x0,      0x0,     0x78},
 /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
 	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
 /* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xacf0,      0x0,      0x0,      0x0,     0xf0},
+	{   0xaef8,      0x0,      0x0,      0x0,     0xf0},
 /* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xade0,      0x8,      0x0,      0x0,      0x8},
+	{   0xafe8,      0x8,      0x0,      0x0,      0x8},
 /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
 	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
 /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
@@ -85,9 +85,9 @@
 /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
 	{    0xb78,     0x10,      0x8,      0x0,      0x2},
 /* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd888,     0x38,      0x0,      0x0,     0x24},
+	{   0xd9a8,     0x38,      0x0,      0x0,     0x24},
 /* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12c38,     0x10,      0x0,      0x0,      0x8},
+	{  0x12988,     0x10,      0x0,      0x0,      0x8},
 /* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
 	{  0x11aa0,     0x38,      0x0,      0x0,     0x18},
 /* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
@@ -97,13 +97,17 @@
 /* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
 	{  0x101f8,     0x10,      0x0,      0x0,     0x10},
 /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xdd08,     0x48,      0x0,      0x0,     0x38},
+	{   0xde28,     0x48,      0x0,      0x0,     0x38},
 /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
 	{  0x10660,     0x20,      0x0,      0x0,     0x20},
 /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
 	{   0x2b80,     0x80,      0x0,      0x0,     0x10},
 /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5000,     0x10,      0x0,      0x0,     0x10},
+	{   0x5020,     0x10,      0x0,      0x0,     0x10},
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
+	{   0xc9b0,     0x30,      0x0,      0x0,     0x10},
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
+	{   0xeec0,     0x10,      0x0,      0x0,     0x10},
 };
 
 #endif /* __IRO_VALUES_H__ */
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 01a29e3..846dc6d 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -115,339 +115,338 @@
 #define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            28716
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
 #define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            29132
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29644
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29645
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29646
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29647
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29648
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29649
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29650
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29651
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29652
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29653
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29654
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29655
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29656
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29657
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29658
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29659
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29660
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29661
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29662
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29663
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29664
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29665
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29666
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29667
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29668
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29669
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29670
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29671
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29672
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29673
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29674
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29675
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29676
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29677
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29678
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29679
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29680
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29681
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29682
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29683
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29684
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29685
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29686
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29687
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29688
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29689
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29690
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29691
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29692
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29693
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29694
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29695
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29696
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29697
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29698
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29699
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29700
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29701
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29702
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29703
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29704
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29705
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29706
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29707
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29708
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29709
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29710
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29711
-#define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29839
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29859
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29879
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29880
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29881
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29882
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29883
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29884
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29885
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29886
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29887
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29888
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29889
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29890
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29891
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29892
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29893
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29894
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29895
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29896
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29897
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29898
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29899
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29900
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29901
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29902
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29903
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29904
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29905
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29906
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29907
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29908
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29909
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29910
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29911
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29912
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29913
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29914
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29915
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29916
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29917
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29918
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29919
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29920
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29921
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29922
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29923
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29924
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29925
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29926
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29927
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29928
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29929
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29930
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29931
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29932
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29933
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29934
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29935
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29936
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29937
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29938
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29939
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29940
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29941
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29942
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29943
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29944
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29945
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29946
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29947
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29948
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29949
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29950
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29951
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29952
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29953
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29954
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29955
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29956
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29957
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29958
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29959
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29960
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29961
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29962
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29963
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29964
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29965
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29966
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29967
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29968
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29969
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29970
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29971
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29972
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29973
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29974
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29975
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29976
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29977
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29978
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29979
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29980
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29981
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29982
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29983
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29984
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29985
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29986
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29987
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29988
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29989
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29990
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29991
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29992
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29993
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29994
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29995
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29996
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29997
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29998
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29740
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29741
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29742
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29743
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29744
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29745
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29746
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29747
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29748
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29749
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29750
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29751
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29752
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29753
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29754
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29755
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29756
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29757
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29758
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29759
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29760
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29761
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29762
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29763
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29764
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29765
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29766
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29767
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29768
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29769
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29770
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29771
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29772
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29773
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29774
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29775
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29776
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29777
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29778
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29779
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29780
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29781
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29782
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29783
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29784
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29785
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29786
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29787
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29788
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29789
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29790
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29791
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29792
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29793
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29794
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29795
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29796
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29797
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29798
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29799
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29800
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29801
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29802
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29803
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29804
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29805
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29806
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29807
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29935
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29936
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29937
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29938
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29939
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29940
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29941
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29942
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29943
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29944
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29945
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29946
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29947
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29948
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29949
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29950
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29951
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29952
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29953
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29954
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29955
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29956
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29957
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29958
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29959
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29960
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29961
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29962
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29963
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29964
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29965
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29966
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29967
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29968
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29969
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29970
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29971
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29972
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29973
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29974
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29975
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29976
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29977
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29978
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29979
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29980
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29981
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29982
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29983
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29984
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29985
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29986
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29987
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29988
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29989
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29990
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29991
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29992
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29993
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29994
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29995
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29996
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29997
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29998
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29999
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 30000
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 30001
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 30002
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 30003
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 30004
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 30005
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 30006
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 30007
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 30008
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 30009
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 30010
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 30011
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 30012
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 30013
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 30014
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 30015
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 30016
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 30017
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 30018
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 30019
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 30020
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 30021
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 30022
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 30023
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 30024
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 30025
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               30026
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               30027
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               30028
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               30029
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               30030
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               30031
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               30032
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               30033
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               30034
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               30035
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              30036
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              30037
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              30038
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              30039
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              30040
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              30041
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             30042
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             30043
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        30044
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        30045
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          30046
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          30047
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          30048
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          30049
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          30050
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          30051
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          30052
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          30053
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               30054
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30254
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30310
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30510
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30566
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30766
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30767
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30768
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30769
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30822
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30823
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30824
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30825
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30785
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30841
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    30801
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30857
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 30817
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30818
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30819
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30873
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30874
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30875
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30835
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30891
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   30851
-#define QM_REG_WFQPFCRD_RT_SIZE                                     160
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                31011
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                31012
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31013
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30907
+#define QM_REG_WFQPFCRD_RT_SIZE                                     256
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                31163
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                31164
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31165
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    31525
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31677
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32037
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32189
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   32549
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32701
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   33061
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   33213
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33573
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33733
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     33734
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     33735
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     33736
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     33737
-#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      33738
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  33739
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           33740
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33725
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 34045
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             34081
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34117
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34118
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34119
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34120
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34121
+#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      34122
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34123
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34124
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      33744
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      34128
 #define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE                        4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        33748
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34132
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           33752
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     33753
+#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           34136
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34137
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        33785
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34169
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      33801
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34185
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             33817
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34201
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   33833
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34217
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              33849
-#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    33850
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           33851
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           33852
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           33853
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       33854
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       33855
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       33856
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       33857
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    33858
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    33859
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    33860
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    33861
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        33862
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     33863
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33864
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      33865
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    33866
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       33867
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                33868
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    33869
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       33870
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                33871
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    33872
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       33873
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                33874
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    33875
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       33876
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                33877
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    33878
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       33879
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                33880
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    33881
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       33882
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                33883
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    33884
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       33885
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                33886
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    33887
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       33888
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                33889
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    33890
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       33891
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                33892
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    33893
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       33894
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                33895
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   33896
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      33897
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               33898
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   33899
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      33900
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               33901
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   33902
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      33903
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               33904
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   33905
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      33906
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               33907
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   33908
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      33909
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               33910
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   33911
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      33912
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               33913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   33914
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      33915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               33916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   33917
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      33918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               33919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   33920
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      33921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               33922
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   33923
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      33924
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               33925
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                33926
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34233
+#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    34234
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34235
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34236
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34237
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34238
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34239
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34240
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34241
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34242
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34243
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34244
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34245
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34246
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34247
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34248
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34249
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34250
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34251
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34252
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34253
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34254
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34255
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34256
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34257
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34258
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34259
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34260
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34261
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34262
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34263
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34264
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34265
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34266
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34267
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34268
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34269
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34270
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34271
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34272
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34273
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34274
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34275
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34276
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34277
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34278
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34279
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34280
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34281
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34282
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34283
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34284
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34285
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34286
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34287
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34288
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34289
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34290
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34291
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34292
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34293
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34294
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34295
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34296
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34297
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34298
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34299
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34300
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34301
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34302
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34303
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34304
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34305
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34306
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34307
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34308
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34309
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34310
 
-#define RUNTIME_ARRAY_SIZE 33927
+#define RUNTIME_ARRAY_SIZE 34311
 
 #endif /* __RT_DEFS_H__ */
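
As a sanity note, RUNTIME_ARRAY_SIZE stays one past the highest runtime
offset (XCM_REG_CON_PHY_Q3_RT_OFFSET = 34310, hence 34311). A minimal
compile-time check one could keep next to these definitions, purely
illustrative and not part of the patch:

	_Static_assert(RUNTIME_ARRAY_SIZE == XCM_REG_CON_PHY_Q3_RT_OFFSET + 1,
		       "RUNTIME_ARRAY_SIZE out of sync with the RT offsets");
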
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index d2ebce8..6dc969b 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -182,7 +182,7 @@ struct eth_tx_1st_bd_flags {
 struct eth_tx_data_1st_bd {
 /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */
 	__le16 vlan;
-/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least
+/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least
  * 3 in LSO (or Tunnel with IPv6+ext) packet.
  */
 	u8 nbds;
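
The relaxed rule (at least 1 BD for non-LSO, down from 2) translates to
a Tx-path sanity check along these lines; a hedged sketch, with the
variable names assumed rather than taken from the driver:

	/* non-LSO needs >= 1 BD; LSO (or tunnel with IPv6+ext) needs >= 3 */
	u8 min_nbds = is_lso ? 3 : 1;

	if (first_bd->data.nbds < min_nbds)
		return ECORE_INVAL;
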
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3cc7fd4..f9920f3 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1147,3 +1147,56 @@
 
 #define IGU_REG_PRODUCER_MEMORY 0x182000UL
 #define IGU_REG_CONSUMER_MEM 0x183000UL
+
+#define CDU_REG_CCFC_CTX_VALID0 0x580400UL
+#define CDU_REG_CCFC_CTX_VALID1 0x580404UL
+#define CDU_REG_TCFC_CTX_VALID0 0x580408UL
+
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
+#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
+#define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
+#define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
+#define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
+#define NWM_REG_MAC0_K2_E5 0x800400UL
+#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
+#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
+#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
+#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
+#define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
+#define XMAC_REG_MODE_BB 0x210008UL
+#define XMAC_REG_RX_MAX_SIZE_BB  0x210040UL
+#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL
+#define XMAC_REG_CTRL_BB 0x210000UL
+#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0)
+#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1)
+#define XMAC_REG_RX_CTRL_BB 0x210030UL
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12)
+
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
+#define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
+#define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
+#define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
+#define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
+#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+
+#define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a604a5b..332b1f8 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -21,7 +21,7 @@
 char fw_file[PATH_MAX];
 
 const char *QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.14.6.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.18.9.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 07/61] net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (5 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 06/61] drivers/net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 08/61] net/qede/base: move mask constants defining NIC type Rasesh Mody
                   ` (54 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Decrease MAX_HWFNS_PER_DEVICE from 4 to 2
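
For context, this constant bounds the per-device array of HW-function
contexts, so the change also shrinks struct ecore_dev itself. A minimal
sketch of the usage pattern, assuming the field names num_hwfns and
hwfns[] as in the base driver:

	int i;

	/* num_hwfns can never exceed MAX_HWFNS_PER_DEVICE */
	for (i = 0; i < p_dev->num_hwfns; i++) {
		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];

		/* per-function init/teardown work goes here */
	}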

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b2f4910..d14f99c 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,7 +28,7 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
-#define MAX_HWFNS_PER_DEVICE	(4)
+#define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 08/61] net/qede/base: move mask constants defining NIC type
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (6 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 07/61] net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2 Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 09/61] net/qede/base: remove attribute field from update current config Rasesh Mody
                   ` (53 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move mask constants defining NIC type to ecore.h
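
For reference, the sketch below shows the classification these masks
support, loosely following ecore_get_dev_info() (the function they move
out of in the diff below); the surrounding code is abbreviated:

	/* distinguish AH (K2) from BB silicon by PCI device ID */
	if ((p_dev->device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH)
		p_dev->type = ECORE_DEV_TYPE_AH;
	else
		p_dev->type = ECORE_DEV_TYPE_BB;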

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    4 ++++
 drivers/net/qede/base/ecore_dev.c |    4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d14f99c..a6cf52e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -625,6 +625,10 @@ struct ecore_dev {
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
 
+#define ECORE_DEV_ID_MASK	0xff00
+#define ECORE_DEV_ID_MASK_BB	0x1600
+#define ECORE_DEV_ID_MASK_AH	0x8000
+
 	u16 vendor_id;
 	u16 device_id;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 43bfd05..2fe9d04 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2896,10 +2896,6 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
 }
 
-#define ECORE_DEV_ID_MASK	0xff00
-#define ECORE_DEV_ID_MASK_BB	0x1600
-#define ECORE_DEV_ID_MASK_AH	0x8000
-
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 09/61] net/qede/base: remove attribute field from update current config
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (7 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 08/61] net/qede/base: move mask constants defining NIC type Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 10/61] net/qede/base: add nvram options Rasesh Mody
                   ` (52 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove the attribute field from the update_current_config() API; the
Management FW only needs to know the last entity that configured the
device.
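
For callers the config argument simply drops out; a before/after sketch
of a typical call site (enum values taken from the diff below):

	/* before */
	rc = ecore_mcp_ov_update_current_config(p_hwfn, p_ptt,
						ECORE_OV_CONFIG_MTU,
						ECORE_OV_CLIENT_DRV);

	/* after */
	rc = ecore_mcp_ov_update_current_config(p_hwfn, p_ptt,
						ECORE_OV_CLIENT_DRV);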

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    5 ++---
 drivers/net/qede/base/ecore_mcp_api.h |    8 --------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 8d747c2..cc69b65 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1710,14 +1710,13 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client)
 {
 	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
 
-	switch (config) {
+	switch (client) {
 	case ECORE_OV_CLIENT_DRV:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
 		break;
@@ -1728,7 +1727,7 @@ enum _ecore_status_t
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
+		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 614cf67..72a58e4 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -173,12 +173,6 @@ enum ecore_mcp_protocol_type {
 };
 #endif
 
-enum ecore_ov_config_method {
-	ECORE_OV_CONFIG_MTU,
-	ECORE_OV_CONFIG_MAC,
-	ECORE_OV_CONFIG_WOL
-};
-
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
 	ECORE_OV_CLIENT_USER,
@@ -453,7 +447,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param config - Configuation that has been updated
  *  @param client - ecore client type
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
@@ -461,7 +454,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client);
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 10/61] net/qede/base: add nvram options
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (8 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 09/61] net/qede/base: remove attribute field from update current config Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 11/61] net/qede/base: add comment Rasesh Mody
                   ` (51 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several NVRAM options, among them MCOT, FEC selection, temperature
threshold and Reset On LAN (ROL).
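
Each option follows the header's usual MASK/OFFSET encoding, so a
consumer decodes it with a shift and a mask; a minimal sketch using the
new ROL option, where cfg_word stands in for whichever nvm_cfg1_glob
word carries the option:

	u32 rol = (cfg_word & NVM_CFG1_GLOB_RESET_ON_LAN_MASK) >>
		  NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET;
	bool rol_enabled = (rol == NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED);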

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |  465 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 461 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 68abc2d..4202337 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,13 +13,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     9/6/2016
+ * Created:     12/15/2016
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
+#define NVM_CFG_version 0x81805
+
+#define NVM_CFG_new_option_seq 15
+
+#define NVM_CFG_removed_option_seq 0
+
+#define NVM_CFG_updated_value_seq 1
+
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
 		#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
@@ -242,6 +250,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+	/*  ROL enable */
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1
 	u32 f_lane_cfg1; /* 0x38 */
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
@@ -470,6 +483,15 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
 		#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
 		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+	/*  Select package id method */
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1
 	u32 manufacture_time; /* 0x70 */
 		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
 		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
@@ -480,6 +502,11 @@ struct nvm_cfg1_glob {
 	/*  Max MSIX for Ethernet in default mode */
 		#define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000
 		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18
+	/*  PF Mapping */
+		#define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000
+		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
+		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
+		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -489,6 +516,47 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
 		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
 		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+	/*  Max. continuous operating temperature */
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16
+	/*  GPIO which triggers run-time port swap according to the map
+	 *  specified in option 205
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20
 	u32 generic_cont1; /* 0x78 */
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
@@ -508,6 +576,17 @@ struct nvm_cfg1_glob {
 	/*  PCIe Preset value - applies only if option 194 is enabled */
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19
+	/*  Port mapping to be used when the run-time GPIO for port-swap is
+	 *  defined and set.
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -515,6 +594,44 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
+	/*  If set to other than NA, 0 - Normal operation, 1 - Thermal event
+	 *  occurred
+	 */
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20
 	u32 mbi_date; /* 0x80 */
 	u32 misc_sig; /* 0x84 */
 	/*  Define the GPIO mapping to switch i2c mux */
@@ -555,6 +672,81 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+	/*  Interrupt signal used for SMBus/I2C management interface
+	 *  0 = Interrupt event occurred
+	 *  1 = Normal
+	 */
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20
+	/*  Set aLOM FAN on GPIO */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20
 	u32 device_capabilities; /* 0x88 */
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
@@ -591,11 +783,262 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
 			0x80
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100
-	u32 reserved[41]; /* 0x9C */
+	/* @DPDK */
+	u32 reserved1[12]; /* 0x9C */
+	u32 oem1_number[8]; /* 0xCC */
+	u32 oem2_number[8]; /* 0xEC */
+	u32 mps25_active_txfir_pre; /* 0x10C */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
+	u32 mps25_active_txfir_main; /* 0x110 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
+	u32 mps25_active_txfir_post; /* 0x114 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
+	u32 features; /* 0x118 */
+	/*  Set the Aux Fan on temperature  */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+	/*  Set NC-SI package ID */
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+	/*  PMBUS Clock GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+	/*  PMBUS Data GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
+	u32 tx_rx_eq_25g_hlpc; /* 0x11C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
+	u32 tx_rx_eq_25g_llpc; /* 0x120 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
+	u32 tx_rx_eq_25g_ac; /* 0x124 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
+	u32 tx_rx_eq_10g_pc; /* 0x128 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
+	u32 tx_rx_eq_10g_ac; /* 0x12C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
+	u32 tx_rx_eq_1g; /* 0x130 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
+	u32 tx_rx_eq_25g_bt; /* 0x134 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
+	u32 tx_rx_eq_10g_bt; /* 0x138 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
+	u32 generic_cont4; /* 0x13C */
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	u32 reserved[58]; /* 0x140 */
 };
 
 struct nvm_cfg1_path {
-	u32 reserved[30]; /* 0x0 */
+	u32 reserved[1]; /* 0x0 */
 };
 
 struct nvm_cfg1_port {
@@ -749,6 +1192,15 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
+		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
+		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
+		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
+		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1451,12 +1903,17 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
 	u32 reserved[8]; /* 0x30 */
 };
 
 struct nvm_cfg1 {
 	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
 	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
 	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
 };
-- 
1.7.10.3

* [PATCH 11/61] net/qede/base: add comment
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (9 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 10/61] net/qede/base: add nvram options Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 12/61] net/qede/base: use default mtu from shared memory Rasesh Mody
                   ` (50 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a comment for the endianness manipulation in
ecore_mcp_send_drv_version().
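
As a worked example of what the loop below does (a stand-alone sketch
assuming a little-endian host; __builtin_bswap32() stands in for
OSAL_CPU_TO_BE32):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char name[9] = "qede-pmd";
          uint32_t val;

          memcpy(&val, name, 4);        /* LE host: val = 0x65646571 */
          val = __builtin_bswap32(val); /* -> 0x71656465             */
          memcpy(name, &val, 4);        /* memory now holds "edeq"   */
          printf("%s\n", name);         /* prints "edeq-pmd"         */
          return 0;
  }

The MFW expects each 4-byte chunk of the driver name in big-endian
dword layout, hence the per-dword byte swap.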

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cc69b65..afd0685 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1663,6 +1663,7 @@ enum _ecore_status_t
 	p_drv_version->version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
+		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
 		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
-- 
1.7.10.3

* [PATCH 12/61] net/qede/base: use default mtu from shared memory
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (10 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 11/61] net/qede/base: add comment Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
                   ` (49 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Read and use the default MTU value from shared memory.
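
The value propagates through the layers roughly as follows (a sketch
of the flow across the files touched below, not literal code):

  shmem_info.mtu_size             (MFW shared memory)
    -> mcp_info->func_info.mtu    (ecore_mcp.c)
    -> p_hwfn->hw_info.mtu        (ecore_dev.c)
    -> dev_info->mtu              (qede_main.c)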

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    2 ++
 drivers/net/qede/base/ecore_dev.c     |    3 +++
 drivers/net/qede/base/ecore_mcp.c     |    5 +++++
 drivers/net/qede/base/ecore_mcp_api.h |    2 ++
 drivers/net/qede/qede_if.h            |    1 +
 drivers/net/qede/qede_main.c          |    2 ++
 6 files changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index a6cf52e..25c96f8 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -377,6 +377,8 @@ struct ecore_hw_info {
 
 	/* Default DCBX mode */
 	u8 dcbx_mode;
+
+	u16 mtu;
 };
 
 struct ecore_hw_cid_data {
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2fe9d04..2c768d8 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2887,6 +2887,9 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 
 	ecore_get_num_funcs(p_hwfn, p_ptt);
 
+	if (ecore_mcp_is_init(p_hwfn))
+		p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu;
+
 	/* In case of forcing the driver's default resource allocation, calling
 	 * ecore_hw_get_resc() should come after initializing the personality
 	 * and after getting the number of functions, since the calculation of
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index afd0685..b744c42 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1432,6 +1432,11 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
+	info->mtu = (u16)shmem_info.mtu_size;
+
+	if (info->mtu == 0)
+		info->mtu = 1500;
+
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 72a58e4..1be22dd 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -84,6 +84,8 @@ struct ecore_mcp_function_info {
 
 #define ECORE_MCP_VLAN_UNSET		(0xffff)
 	u16 ovlan;
+
+	u16 mtu;
 };
 
 struct ecore_mcp_nvm_common {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4b23bb9..18404fb 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -34,6 +34,7 @@ struct qed_dev_info {
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
+	u16 mtu;
 	/* To be added... */
 };
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 332b1f8..e76346e 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -365,6 +365,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 				      &dev_info->mfw_rev, NULL);
 	}
 
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	return 0;
 }
 
-- 
1.7.10.3

* [PATCH 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (11 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 12/61] net/qede/base: use default mtu from shared memory Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 14/61] net/qede/base: update MFW when default mtu is changed Rasesh Mody
                   ` (48 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the queue/sb-id values from 8-bit fields to 16-bit fields.
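
The motivation is that FW queue ids may exceed what a u8 can hold, and
the old narrowing casts silently truncated them. A tiny illustration
with a hypothetical queue id:

  u16 fw_tx_qid = 300;          /* plausible once VF queues multiply */
  u8 old_qid = (u8)fw_tx_qid;   /* silently truncates to 44          */
  u16 new_qid = fw_tx_qid;      /* 16-bit field keeps the full value */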

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |    8 ++++----
 drivers/net/qede/base/ecore_dev_api.h |    4 ++--
 drivers/net/qede/base/ecore_l2.c      |    2 +-
 drivers/net/qede/base/ecore_l2_api.h  |    2 +-
 drivers/net/qede/base/ecore_sriov.c   |    4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2c768d8..ea087a7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3884,7 +3884,7 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3905,7 +3905,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3927,7 +3927,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3949,7 +3949,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 0dee68a..e7332ac 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -535,7 +535,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
  */
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 /**
  * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
@@ -553,6 +553,6 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
  */
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 22bb43d..1379a1b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -212,7 +212,7 @@ enum _ecore_status_t
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
 		rc = ecore_fw_l2_queue(p_hwfn,
-				       (u8)p_rss->rss_ind_table[i],
+				       p_rss->rss_ind_table[i],
 				       &abs_l2_queue);
 		if (rc != ECORE_SUCCESS)
 			return rc;
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 247316b..8f7b614 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -37,7 +37,7 @@ struct ecore_queue_start_common_params {
 	/* q_zone_id is relative, may be different from queue id
 	 * currently used by Tx-only, upper-bounded by number of FW-queues
 	 */
-	u8 qzone_id;
+	u16 qzone_id;
 
 	/* stats_id is relative or absolute depends on function */
 	u8 stats_id;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index cda4516..a6c4b6e 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2120,8 +2120,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
-- 
1.7.10.3

* [PATCH 14/61] net/qede/base: update MFW when default mtu is changed
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (12 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 15/61] net/qede/base: prevent device init failure Rasesh Mody
                   ` (47 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Send a mailbox command to the management FW when the MTU changes.
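
A sketch of what such an update boils down to (assuming the existing
ecore_mcp_cmd() mailbox wrapper and an OV_UPDATE_MTU drv-mb opcode;
the exact parameter encoding may differ):

  enum _ecore_status_t
  ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn,
                          struct ecore_ptt *p_ptt, u16 mtu)
  {
          u32 resp = 0, param = 0;

          return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_MTU,
                               (u32)mtu, &resp, &param);
  }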

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   11 +++++++++++
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index ea087a7..73bd008 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1637,6 +1637,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	u32 load_code, param, drv_mb_param;
+	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
 	int i;
 
@@ -1656,6 +1657,12 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		/* If management didn't provide a default, set one of our own */
+		if (!p_hwfn->hw_info.mtu) {
+			p_hwfn->hw_info.mtu = 1500;
+			b_default_mtu = false;
+		}
+
 		if (IS_VF(p_dev)) {
 			p_hwfn->b_int_enabled = 1;
 			continue;
@@ -1784,6 +1791,10 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			return rc;
 		}
 
+		if (!b_default_mtu)
+			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						p_hwfn->hw_info.mtu);
+
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index b744c42..d3f0fbd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1439,9 +1439,6 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->mtu = (u16)shmem_info.mtu_size;
 
-	if (info->mtu == 0)
-		info->mtu = 1500;
-
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
-- 
1.7.10.3

* [PATCH 15/61] net/qede/base: prevent device init failure
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (13 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 14/61] net/qede/base: update MFW when default mtu is changed Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 16/61] net/qede/base: add support to read personality via MFW commands Rasesh Mody
                   ` (46 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

The device initialization flow should not fail just because a FW
interface command is not available.
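
The fix converts each optional MFW call in ecore_hw_init() to a
log-and-continue pattern (shape of the change only; the command name
here is a placeholder):

  rc = optional_mfw_command(p_hwfn);   /* placeholder for the calls below */
  if (rc != ECORE_SUCCESS)
          DP_INFO(p_hwfn, "Command failed; continuing init\n");
  /* no 'return rc' here - initialization carries on */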

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 73bd008..c8e28d7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1786,18 +1786,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(p_hwfn, "Failed to send firmware version\n");
-			return rc;
-		}
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update firmware version\n");
 
 		if (!b_default_mtu)
-			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
-						p_hwfn->hw_info.mtu);
+			rc = ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						      p_hwfn->hw_info.mtu);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update default mtu\n");
 
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update driver state\n");
 	}
 
 	return rc;
-- 
1.7.10.3

* [PATCH 16/61] net/qede/base: add support to read personality via MFW commands
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (14 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 15/61] net/qede/base: prevent device init failure Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 17/61] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
                   ` (45 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to read NIC personality via management FW for non-L2
protocols.
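
The new ECORE_PCI_ETH_RDMA personality means "both RoCE and iWARP".
The helper macros added to ecore.h below overlap as follows (derived
directly from their definitions):

  personality            RDMA?  ROCE?  IWARP?  L2?
  ECORE_PCI_ETH          no     no     no      yes
  ECORE_PCI_ETH_ROCE     yes    yes    no      yes
  ECORE_PCI_ETH_IWARP    yes    no     yes     yes
  ECORE_PCI_ETH_RDMA     yes    yes    yes     yes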

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h       |   16 +++++++++++++-
 drivers/net/qede/base/ecore_dev.c   |   17 +++++----------
 drivers/net/qede/base/ecore_mcp.c   |   41 +++++++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_sriov.c |    1 +
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25c96f8..842a3b5 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -243,7 +243,8 @@ enum ecore_pci_personality {
 	ECORE_PCI_FCOE,
 	ECORE_PCI_ISCSI,
 	ECORE_PCI_ETH_ROCE,
-	ECORE_PCI_IWARP,
+	ECORE_PCI_ETH_IWARP,
+	ECORE_PCI_ETH_RDMA,
 	ECORE_PCI_DEFAULT /* default in shmem */
 };
 
@@ -328,6 +329,19 @@ enum ecore_hw_err_type {
 struct ecore_hw_info {
 	/* PCI personality */
 	enum ecore_pci_personality personality;
+#define ECORE_IS_RDMA_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE ||  \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_ROCE_PERSONALITY(dev)			   \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_IWARP_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_L2_PERSONALITY(dev)		      \
+	((dev)->hw_info.personality == ECORE_PCI_ETH || \
+	 ECORE_IS_RDMA_PERSONALITY(dev))
 
 	/* Resource Allocation scheme results */
 	u32 resc_start[ECORE_MAX_RESC];
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c8e28d7..82a41a3 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -227,9 +227,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	 * don't have a good recycle flow. Non ethernet PFs require only a
 	 * single physical queue.
 	 */
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_ETH)
+	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
 		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
 	else
 		protocol_pqs = 1;
@@ -237,7 +235,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
 	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 		num_pqs++;	/* for RoCE queue */
 		init_rdma_offload_pq = true;
 		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
@@ -267,7 +265,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		qm_info->num_pf_rls = (u8)num_pf_rls;
 	}
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_IWARP) {
+	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
 		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
 		init_rdma_offload_pq = true;
 		init_pure_ack_pq = true;
@@ -343,9 +341,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		struct init_qm_pq_params *params =
 		    &qm_info->qm_pq_params[curr_queue++];
 
-		if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
+		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
 			params->vport_id = vport_id;
 			params->tc_id = i;
 			/* Note: this assumes that if we had a configuration
@@ -620,8 +616,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 		/* EQ */
 		n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
-		if ((p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) ||
-		    (p_hwfn->hw_info.personality == ECORE_PCI_IWARP)) {
+		if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
 			/* Calculate the EQ size
 			 * ---------------------
 			 * Each ICID may generate up to one event at a time i.e.
@@ -644,7 +639,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 *          smaller than RoCE's so we avoid exact
 			 *          calculation.
 			 */
-			if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
 				    ecore_cxt_get_proto_cid_count(
 						p_hwfn,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index d3f0fbd..dc1a5cd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1374,16 +1374,47 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+/* @DPDK */
+/* Old MFW has a global configuration for all PFs regarding RDMA support */
+static void
+ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
+				 enum ecore_pci_personality *p_proto)
+{
+	*p_proto = ECORE_PCI_ETH;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to Legacy capabilities, L2 personality is %08x\n",
+		   (u32)*p_proto);
+}
+
+/* @DPDK */
+static enum _ecore_status_t
+ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      enum ecore_pci_personality *p_proto)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
+		   (u32)*p_proto, resp, param);
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 			  struct public_func *p_info,
+			  struct ecore_ptt *p_ptt,
 			  enum ecore_pci_personality *p_proto)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	switch (p_info->config & FUNC_MF_CFG_PROTOCOL_MASK) {
 	case FUNC_MF_CFG_PROTOCOL_ETHERNET:
-		*p_proto = ECORE_PCI_ETH;
+		if (ecore_mcp_get_shmem_proto_mfw(p_hwfn, p_ptt, p_proto) !=
+		    ECORE_SUCCESS)
+			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
 	default:
 		rc = ECORE_INVAL;
@@ -1404,7 +1435,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	info->pause_on_host = (shmem_info.config &
 			       FUNC_MF_CFG_PAUSE_ON_HOST_RING) ? 1 : 0;
 
-	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, &info->protocol)) {
+	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+				      &info->protocol)) {
 		DP_ERR(p_hwfn, "Unknown personality %08x\n",
 		       (u32)(shmem_info.config & FUNC_MF_CFG_PROTOCOL_MASK));
 		return ECORE_INVAL;
@@ -1560,8 +1592,9 @@ int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 		if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
 			continue;
 
-		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info,
-					      &protocol) != ECORE_SUCCESS)
+		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+					      &protocol) !=
+		    ECORE_SUCCESS)
 			continue;
 
 		if ((1 << ((u32)protocol)) & personalities)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index a6c4b6e..50d8703 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -86,6 +86,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:
-- 
1.7.10.3

* [PATCH 17/61] net/qede/base: allow probe to succeed with minor HW-issues
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (15 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 16/61] net/qede/base: add support to read personality via MFW commands Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 18/61] net/qede/base: remove unneeded step in HW init Rasesh Mody
                   ` (44 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow probe to succeed even in the presence of various 'minor' HW
issues, if the caller requests it.
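
A caller-side sketch of the relaxed-probe flow (hypothetical usage of
the new fields; OSAL_MEMSET/DP_NOTICE are the wrappers the base driver
already uses):

  struct ecore_hw_prepare_params params;
  enum _ecore_status_t rc;

  OSAL_MEMSET(&params, 0, sizeof(params));
  params.personality = ECORE_PCI_ETH;
  params.b_relaxed_probe = true;

  rc = ecore_hw_prepare(p_dev, &params);
  if (rc == ECORE_SUCCESS &&
      params.p_relaxed_res != ECORE_HW_PREPARE_SUCCESS)
          DP_NOTICE(p_dev, false, "Probe passed with result %d\n",
                    params.p_relaxed_res);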

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   71 +++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_dev_api.h |   40 ++++++++++++++++---
 2 files changed, 94 insertions(+), 17 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 82a41a3..99d8f15 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2453,12 +2453,15 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
+static enum _ecore_status_t
+ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      struct ecore_hw_prepare_params *p_params)
 {
 	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
 	struct ecore_mcp_link_params *link;
+	enum _ecore_status_t rc;
 
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
@@ -2466,6 +2469,8 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	/* Verify MCP has initialized it */
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_NVM;
 		return ECORE_INVAL;
 	}
 
@@ -2651,7 +2656,13 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
 			     &p_hwfn->hw_info.device_capabilities);
 
-	return ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
@@ -2805,15 +2816,22 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		  enum ecore_pci_personality personality, bool drv_resc_alloc)
+		  enum ecore_pci_personality personality,
+		  struct ecore_hw_prepare_params *p_params)
 {
+	bool drv_resc_alloc = p_params->drv_resc_alloc;
 	enum _ecore_status_t rc;
 
 	/* Since all information is common, only first hwfns should do this */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_iov_hw_info(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_BAD_IOV;
+			else
+				return rc;
+		}
 	}
 
 	/* TODO In get_hw_info, amoungst others:
@@ -2828,7 +2846,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
 #endif
-	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt);
+	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 #ifndef ASIC_ONLY
@@ -2836,8 +2854,12 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 #endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	if (rc != ECORE_SUCCESS) {
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_IGU;
+		else
+			return rc;
+	}
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) {
@@ -2904,7 +2926,13 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	 * the resources/features depends on them.
 	 * This order is not harmful if not forcing.
 	 */
-	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	rc = ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
@@ -3036,6 +3064,8 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
 		DP_ERR(p_hwfn,
 		       "Reading the ME register returns all Fs; Preventing further chip access\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME;
 		return ECORE_INVAL;
 	}
 
@@ -3045,6 +3075,8 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	rc = ecore_ptt_pool_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to prepare hwfn's hw\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err0;
 	}
 
@@ -3054,8 +3086,12 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	/* First hwfn learns basic information, e.g., number of hwfns */
 	if (!p_hwfn->my_id) {
 		rc = ecore_get_dev_info(p_dev);
-		if (rc != ECORE_SUCCESS)
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+					ECORE_HW_PREPARE_FAILED_DEV;
 			goto err1;
+		}
 	}
 
 	ecore_hw_hwfn_prepare(p_hwfn);
@@ -3064,12 +3100,14 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	rc = ecore_mcp_cmd_init(p_hwfn, p_hwfn->p_main_ptt);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed initializing mcp command\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err1;
 	}
 
 	/* Read the device configuration information from the HW and SHMEM */
 	rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
-			       p_params->personality, p_params->drv_resc_alloc);
+			       p_params->personality, p_params);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to get HW information\n");
 		goto err2;
@@ -3102,6 +3140,8 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	rc = ecore_init_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate the init array\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err2;
 	}
 #ifndef ASIC_ONLY
@@ -3137,6 +3177,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
 
+	if (p_params->b_relaxed_probe)
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
+
 	/* Store the precompiled init data ptrs */
 	if (IS_PF(p_dev))
 		ecore_init_iro_array(p_dev);
@@ -3172,6 +3215,10 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 		 * initiliazed hwfn 0.
 		 */
 		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_FAILED_ENG2;
+
 			if (IS_PF(p_dev)) {
 				ecore_init_free(p_hwfn);
 				ecore_mcp_free(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index e7332ac..74a15ef 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -138,17 +138,47 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
  */
 enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
 
+enum ecore_hw_prepare_result {
+	ECORE_HW_PREPARE_SUCCESS,
+
+	/* FAILED results indicate probe has failed & cleaned up */
+	ECORE_HW_PREPARE_FAILED_ENG2,
+	ECORE_HW_PREPARE_FAILED_ME,
+	ECORE_HW_PREPARE_FAILED_MEM,
+	ECORE_HW_PREPARE_FAILED_DEV,
+	ECORE_HW_PREPARE_FAILED_NVM,
+
+	/* BAD results indicate probe has passed even though something
+	 * went wrong; trying to actually use the device [i.e., hw_init()]
+	 * might have dire repercussions.
+	 */
+	ECORE_HW_PREPARE_BAD_IOV,
+	ECORE_HW_PREPARE_BAD_MCP,
+	ECORE_HW_PREPARE_BAD_IGU,
+};
+
 struct ecore_hw_prepare_params {
-	/* personality to initialize */
+	/* Personality to initialize */
 	int personality;
-	/* force the driver's default resource allocation */
+
+	/* Force the driver's default resource allocation */
 	bool drv_resc_alloc;
-	/* check the reg_fifo after any register access */
+
+	/* Check the reg_fifo after any register access */
 	bool chk_reg_fifo;
-	/* request the MFW to initiate PF FLR */
+
+	/* Request the MFW to initiate PF FLR */
 	bool initiate_pf_flr;
-	/* the OS Epoch time in seconds */
+
+	/* The OS Epoch time in seconds */
 	u32 epoch;
+
+	/* Allow prepare to pass even if some initializations are failing.
+	 * If set, the `p_relaxed_res' field below will be set with the
+	 * outcome, and probe might pass even if there are certain issues.
+	 */
+	bool b_relaxed_probe;
+	enum ecore_hw_prepare_result p_relaxed_res;
 };
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
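
As a usage illustration, a caller opting into the relaxed probe mode
added above might look like the sketch below. The wrapper function,
its choice of personality and its logging are illustrative
assumptions; only the ecore_hw_prepare_params fields and the result
codes come from the patch:

static enum _ecore_status_t qed_probe_relaxed(struct ecore_dev *p_dev)
{
	struct ecore_hw_prepare_params params;
	enum _ecore_status_t rc;

	OSAL_MEMSET(&params, 0, sizeof(params));
	params.personality = ECORE_PCI_ETH;	/* assumed personality */
	params.b_relaxed_probe = true;

	rc = ecore_hw_prepare(p_dev, &params);
	if (rc != ECORE_SUCCESS)
		return rc;	/* a FAILED_* result; probe cleaned up */

	/* A BAD_* result means probe passed despite an issue; hw_init()
	 * may still misbehave, so at least log what went wrong.
	 */
	if (params.p_relaxed_res != ECORE_HW_PREPARE_SUCCESS)
		DP_NOTICE(p_dev, false, "Relaxed probe result: %d\n",
			  params.p_relaxed_res);

	return rc;
}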

* [PATCH 18/61] net/qede/base: remove unneeded step in HW init
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (16 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 17/61] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 19/61] net/qede/base: allow only trusted VFs to be promisc/multi-promisc Rasesh Mody
                   ` (43 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

There is no need to close the NIG OUT_EN registers, so remove that step.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 99d8f15..2b9e700 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1007,18 +1007,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
-	/* Close gate from NIG to BRB/Storm; By default they are open, but
-	 * we close them to prevent NIG from passing data to reset blocks.
-	 * Should have been done in the ENGINE phase, but init-tool lacks
-	 * proper port-pretend capabilities.
-	 */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_pretend(p_hwfn, p_ptt, p_hwfn->port_id ^ 1);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_unpretend(p_hwfn, p_ptt);
-
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_ENGINE, ANY_PHASE_ID, hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 19/61] net/qede/base: allow only trusted VFs to be promisc/multi-promisc
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (17 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 18/61] net/qede/base: remove unneeded step in HW init Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 20/61] net/qede/base: qm initialization revamp Rasesh Mody
                   ` (42 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow only trusted VFs to be promisc/multi-promisc. The reasonable
thing is to key off the VF's 'trusted' indication instead of simply
allowing any VF to become promiscuous; a sketch of the OSAL hook this
enforcement runs through follows the patch.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c    |    8 ++++----
 drivers/net/qede/base/ecore_sriov.c |    2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 1379a1b..d2e1719 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -274,8 +274,8 @@ enum _ecore_status_t
 
 		p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->rx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->rx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 
 	/* Set Tx mode accept flags */
@@ -298,8 +298,8 @@ enum _ecore_status_t
 
 		p_ramrod->tx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->tx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->tx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 50d8703..8d25d52 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2628,7 +2628,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	 */
 	tlvs_accepted = tlvs_mask;
 
-#ifndef LINUX_REMOVE
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2636,7 +2635,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_NOT_SUPPORTED;
 		goto out;
 	}
-#endif
 
 	if (!tlvs_accepted) {
 		if (tlvs_mask)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
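
With the #ifndef LINUX_REMOVE guard gone, OSAL_IOV_VF_VPORT_UPDATE()
is consulted on every VF vport-update, which is where the PF side can
strip promiscuous settings from untrusted VFs. A minimal sketch of
such a hook, assuming a hypothetical per-VF trust flag kept by the PF
driver (pf_vf_is_trusted() is illustrative and not part of the patch):

enum _ecore_status_t
osal_iov_vf_vport_update(struct ecore_hwfn *p_hwfn, u8 vf_id,
			 struct ecore_sp_vport_update_params *p_params,
			 u16 *p_tlvs_accepted)
{
	struct ecore_filter_accept_flags *flags = &p_params->accept_flags;

	if (pf_vf_is_trusted(p_hwfn, vf_id))	/* hypothetical helper */
		return ECORE_SUCCESS;

	/* Untrusted VF - drop unmatched-unicast (promiscuous) acceptance
	 * from the requested Rx/Tx filter modes before the ramrod is sent.
	 */
	if (flags->update_rx_mode_config)
		flags->rx_accept_filter &= ~ECORE_ACCEPT_UCAST_UNMATCHED;
	if (flags->update_tx_mode_config)
		flags->tx_accept_filter &= ~ECORE_ACCEPT_UCAST_UNMATCHED;

	return ECORE_SUCCESS;
}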

* [PATCH 20/61] net/qede/base: qm initialization revamp
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (18 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 19/61] net/qede/base: allow only trusted VFs to be promisc/multi-promisc Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 21/61] net/qede/base: add a printout of the FW, MFW and MBI versions Rasesh Mody
                   ` (41 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch revamps QM (queue manager) initialization: PQ requirements
are now derived from per-personality PQ flags, with a dedicated helper
per PQ type and explicit resource accounting. A worked example of the
new PQ arithmetic follows the patch.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h    |    2 +
 drivers/net/qede/base/ecore.h       |   34 +-
 drivers/net/qede/base/ecore_cxt.c   |   14 +-
 drivers/net/qede/base/ecore_dev.c   |  869 ++++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_hw.c    |   38 --
 drivers/net/qede/base/ecore_l2.c    |   12 +-
 drivers/net/qede/base/ecore_l2.h    |    2 +-
 drivers/net/qede/base/ecore_spq.c   |    9 +-
 drivers/net/qede/base/ecore_sriov.c |   13 +-
 9 files changed, 645 insertions(+), 348 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 5338f27..4089943 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -316,6 +316,8 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *, dma_addr_t *,
 #define OSAL_BUILD_BUG_ON(cond)		nothing
 #define ETH_ALEN			ETHER_ADDR_LEN
 
+#define OSAL_BITMAP_WEIGHT(bitmap, count) 0
+
 #define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 842a3b5..58c97a3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -445,11 +445,13 @@ struct ecore_qm_info {
 	struct init_qm_port_params  *qm_port_params;
 	u16			start_pq;
 	u8			start_vport;
-	u8			pure_lb_pq;
-	u8			offload_pq;
-	u8			pure_ack_pq;
-	u8			ooo_pq;
-	u8			vf_queues_offset;
+	u16			pure_lb_pq;
+	u16			offload_pq;
+	u16			pure_ack_pq;
+	u16			ooo_pq;
+	u16			first_vf_pq;
+	u16			first_mcos_pq;
+	u16			first_rl_pq;
 	u16			num_pqs;
 	u16			num_vf_pqs;
 	u8			num_vports;
@@ -828,6 +830,28 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 			   u8 *mac);
 
+/* Flags for indication of required queues */
+#define PQ_FLAGS_RLS	(1 << 0)
+#define PQ_FLAGS_MCOS	(1 << 1)
+#define PQ_FLAGS_LB	(1 << 2)
+#define PQ_FLAGS_OOO	(1 << 3)
+#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_OFLD	(1 << 5)
+#define PQ_FLAGS_VFS	(1 << 6)
+
+/* physical queue index for cm context initialization */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags);
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc);
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 qpid);
+
+/* amount of resources used in qm init */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
+
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index f310bdb..bf68f86 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1375,18 +1375,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
 }
 
 /* CM PF */
-static enum _ecore_status_t ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
 {
-	union ecore_qm_pq_params pq_params;
-	u16 pq;
-
-	/* XCM pure-LB queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, pq);
-
-	return ECORE_SUCCESS;
+	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET,
+		     ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB));
 }
 
 /* DQ PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2b9e700..e80813b 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -186,282 +186,575 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	}
 }
 
-static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
-					       bool b_sleepable)
+/******************** QM initialization *******************/
+
+/* bitmaps for indicating active traffic classes.
+ * Special case for Arrowhead 4 port
+ */
+/* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
+#define ACTIVE_TCS_BMAP 0x9f
+/* 0..3 actually used, OOO and high priority stuff all use 3 */
+#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
+
+/* determines the physical queue flags for a given PF. */
+static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 {
-	u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue;
-	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct init_qm_port_params *p_qm_port;
-	bool init_rdma_offload_pq = false;
-	bool init_pure_ack_pq = false;
-	bool init_ooo_pq = false;
-	u16 num_pqs, protocol_pqs;
-	u16 num_pf_rls = 0;
-	u16 num_vfs = 0;
-	u32 pf_rl;
-	u8 pf_wfq;
-
-	/* @TMP - saving the existing min/max bw config before resetting the
-	 * qm_info to restore them.
-	 */
-	pf_rl = qm_info->pf_rl;
-	pf_wfq = qm_info->pf_wfq;
+	u32 flags;
 
-#ifdef CONFIG_ECORE_SRIOV
-	if (p_hwfn->p_dev->p_iov_info)
-		num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
-#endif
-	OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
+	/* common flags */
+	flags = PQ_FLAGS_LB;
 
-#ifndef ASIC_ONLY
-	/* @TMP - Don't allocate QM queues for VFs on emulation */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "Emulation - skip configuring QM queues for VFs\n");
-		num_vfs = 0;
+	/* feature flags */
+	if (IS_ECORE_SRIOV(p_hwfn->p_dev))
+		flags |= PQ_FLAGS_VFS;
+
+	/* protocol flags */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_ETH:
+		flags |= PQ_FLAGS_MCOS;
+		break;
+	case ECORE_PCI_FCOE:
+		flags |= PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ISCSI:
+		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_ROCE:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_IWARP:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_ACK | PQ_FLAGS_OOO |
+			 PQ_FLAGS_OFLD;
+		break;
+	default:
+		DP_ERR(p_hwfn, "unknown personality %d\n",
+		       p_hwfn->hw_info.personality);
+		return 0;
 	}
-#endif
+	return flags;
+}
 
-	/* ethernet PFs require a pq per tc. Even if only a subset of the TCs
-	 * active, we want physical queues allocated for all of them, since we
-	 * don't have a good recycle flow. Non ethernet PFs require only a
-	 * single physical queue.
-	 */
-	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
-		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
-	else
-		protocol_pqs = 1;
-
-	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
-	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
-
-	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
-		num_pqs++;	/* for RoCE queue */
-		init_rdma_offload_pq = true;
-		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
-			/* Due to FW assumption that rl==vport, we limit the
-			 * number of rate limiters by the minimum between its
-			 * allocated number and the allocated number of vports.
-			 * Another limitation is the number of supported qps
-			 * with rate limiters in FW.
-			 */
-			num_pf_rls =
-			    (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
-					     RESC_NUM(p_hwfn, ECORE_VPORT));
+/* Getters for resource amounts necessary for qm initialization */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->hw_info.num_hw_tc;
+}
 
-			/* we subtract num_vfs because each one requires a rate
-			 * limiter, and one default rate limiter.
-			 */
-			if (num_pf_rls < num_vfs + 1) {
-				DP_ERR(p_hwfn, "No RL for DCQCN");
-				DP_ERR(p_hwfn, "[num_pf_rls %d num_vfs %d]\n",
-				       num_pf_rls, num_vfs);
-				return ECORE_INVAL;
-			}
-			num_pf_rls -= num_vfs + 1;
-		}
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn)
+{
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
+			p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+}
 
-		num_pqs += num_pf_rls;
-		qm_info->num_pf_rls = (u8)num_pf_rls;
-	}
+#define NUM_DEFAULT_RLS 1
 
-	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
-		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
-		init_rdma_offload_pq = true;
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
-		num_pqs += 2;	/* for iSCSI pure-ACK / OOO queue */
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+	/* @DPDK */
+	/* num RLs can't exceed resource amount of rls or vports or the
+	 * dcqcn qps
+	 */
+	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
+				     (u16)RESC_NUM(p_hwfn, ECORE_VPORT));
 
-	/* Sanity checking that setup requires legal number of resources */
-	if (num_pqs > RESC_NUM(p_hwfn, ECORE_PQ)) {
-		DP_ERR(p_hwfn,
-		       "Need too many Physical queues - 0x%04x avail %04x",
-		       num_pqs, RESC_NUM(p_hwfn, ECORE_PQ));
-		return ECORE_INVAL;
+	/* make sure after we reserve the default and VF rls we'll have
+	 * something left
+	 */
+	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) {
+		DP_NOTICE(p_hwfn, false,
+			  "no rate limiters left for PF rate limiting"
+			  " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs);
+		return 0;
 	}
 
-	/* PQs will be arranged as follows: First per-TC PQ, then pure-LB queue,
-	 * then special queues (iSCSI pure-ACK / RoCE), then per-VF PQ.
+	/* subtract rls necessary for VFs and one default one for the PF */
+	num_pf_rls -= num_vfs + NUM_DEFAULT_RLS;
+
+	return num_pf_rls;
+}
+
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	/* all pqs share the same vport (hence the 1 below), except for vfs
+	 * and pf_rl pqs
 	 */
-	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					    b_sleepable ? GFP_KERNEL :
-					    GFP_ATOMIC,
-					    sizeof(struct init_qm_pq_params) *
-					    num_pqs);
-	if (!qm_info->qm_pq_params)
-		goto alloc_err;
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+}
 
-	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					       b_sleepable ? GFP_KERNEL :
-					       GFP_ATOMIC,
-					       sizeof(struct
-						      init_qm_vport_params) *
-					       num_vports);
-	if (!qm_info->qm_vport_params)
-		goto alloc_err;
+/* calc amount of PQs according to the requested flags */
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_MCOS & pq_flags)) *
+		ecore_init_qm_get_num_tcs(p_hwfn) +
+	       (!!(PQ_FLAGS_LB & pq_flags)) +
+	       (!!(PQ_FLAGS_OOO & pq_flags)) +
+	       (!!(PQ_FLAGS_ACK & pq_flags)) +
+	       (!!(PQ_FLAGS_OFLD & pq_flags)) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn);
+}
 
-	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					      b_sleepable ? GFP_KERNEL :
-					      GFP_ATOMIC,
-					      sizeof(struct init_qm_port_params)
-					      * MAX_NUM_PORTS);
-	if (!qm_info->qm_port_params)
-		goto alloc_err;
+/* initialize the top level QM params */
+static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev,
-					b_sleepable ? GFP_KERNEL :
-					GFP_ATOMIC,
-					sizeof(struct ecore_wfq_data) *
-					num_vports);
+	/* pq and vport bases for this PF */
+	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
 
-	if (!qm_info->wfq_data)
-		goto alloc_err;
+	/* rate limiting and weighted fair queueing are always enabled */
+	qm_info->vport_rl_en = 1;
+	qm_info->vport_wfq_en = 1;
 
-	vport_id = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	/* in AH 4 port we have fewer TCs per port */
+	qm_info->max_phys_tcs_per_port =
+		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
+			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+}
 
-	/* First init rate limited queues ( Due to RoCE assumption of
-	 * qpid=rlid )
-	 */
-	for (curr_queue = 0; curr_queue < num_pf_rls; curr_queue++) {
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id++;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-	};
-
-	/* Protocol PQs */
-	for (i = 0; i < protocol_pqs; i++) {
-		struct init_qm_pq_params *params =
-		    &qm_info->qm_pq_params[curr_queue++];
-
-		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
-			params->vport_id = vport_id;
-			params->tc_id = i;
-			/* Note: this assumes that if we had a configuration
-			 * with N tcs and subsequently another configuration
-			 * With Fewer TCs, the in flight traffic (in QM queues,
-			 * in FW, from driver to FW) will still trickle out and
-			 * not get "stuck" in the QM. This is determined by the
-			 * NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ. Unused TCs are
-			 * supposed to be cleared in this map, allowing traffic
-			 * to flush out. If this is not the case, we would need
-			 * to set the TC of unused queues to 0, and reconfigure
-			 * QM every time num of TCs changes. Unused queues in
-			 * this context would mean those intended for TCs where
-			 * tc_id > hw_info.num_active_tcs.
-			 */
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		} else {
-			params->vport_id = vport_id;
-			params->tc_id = p_hwfn->hw_info.offload_tc;
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		}
-	}
+/* initialize qm vport params */
+static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 i;
 
-	/* Then init pure-LB PQ */
-	qm_info->pure_lb_pq = curr_queue;
-	qm_info->qm_pq_params[curr_queue].vport_id =
-	    (u8)RESC_START(p_hwfn, ECORE_VPORT);
-	qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
-	qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-	curr_queue++;
-
-	qm_info->offload_pq = 0;	/* Already initialized for iSCSI/FCoE */
-	if (init_rdma_offload_pq) {
-		qm_info->offload_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_pure_ack_pq) {
-		qm_info->pure_ack_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_ooo_pq) {
-		qm_info->ooo_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id = DCBX_ISCSI_OOO_TC;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	/* Then init per-VF PQs */
-	vf_offset = curr_queue;
-	for (i = 0; i < num_vfs; i++) {
-		/* First vport is used by the PF */
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
-		/* @@@TBD VF Multi-cos */
-		qm_info->qm_pq_params[curr_queue].tc_id = 0;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-		curr_queue++;
-	};
-
-	qm_info->vf_queues_offset = vf_offset;
-	qm_info->num_pqs = num_pqs;
-	qm_info->num_vports = num_vports;
+	/* all vports participate in weighted fair queueing */
+	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
+		qm_info->qm_vport_params[i].vport_wfq = 1;
+}
 
+/* initialize qm port params */
+static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
+{
 	/* Initialize qm port parameters */
-	num_ports = p_hwfn->p_dev->num_ports_in_engines;
+	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engines;
+
+	/* indicate how ooo and high pri traffic is dealt with */
+	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
+		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP;
+
 	for (i = 0; i < num_ports; i++) {
-		p_qm_port = &qm_info->qm_port_params[i];
+		struct init_qm_port_params *p_qm_port =
+			&p_hwfn->qm_info.qm_port_params[i];
+
 		p_qm_port->active = 1;
-		/* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
-		 * be in place
-		 */
-		if (num_ports == 4)
-			p_qm_port->active_phys_tcs = 0xf;
-		else
-			p_qm_port->active_phys_tcs = 0x9f;
+		p_qm_port->active_phys_tcs = active_phys_tcs;
 		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
+}
 
-	if (ECORE_IS_AH(p_hwfn->p_dev) && (num_ports == 4))
-		qm_info->max_phys_tcs_per_port = NUM_PHYS_TCS_4PORT_K2;
-	else
-		qm_info->max_phys_tcs_per_port = NUM_OF_PHYS_TCS;
+/* Reset the params which must be reset for qm init. QM init may be called as
+ * a result of flows other than driver load (e.g. dcbx renegotiation). Other
+ * params may be affected by the init but would simply recalculate to the same
+ * values. The allocations made for QM init, ports, vports, pqs and vfqs are not
+ * affected as these amounts stay the same.
+ */
+static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->num_pqs = 0;
+	qm_info->num_vports = 0;
+	qm_info->num_pf_rls = 0;
+	qm_info->num_vf_pqs = 0;
+	qm_info->first_vf_pq = 0;
+	qm_info->first_mcos_pq = 0;
+	qm_info->first_rl_pq = 0;
+}
+
+static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	qm_info->num_vports++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+}
+
+/* initialize a single pq and manage qm_info resources accounting.
+ * The pq_init_flags param determines whether the PQ is rate limited
+ * (for VF or PF)
+ * and whether a new vport is allocated to the pq or not (i.e. vport will be
+ * shared)
+ */
+
+/* flags for pq init */
+#define PQ_INIT_SHARE_VPORT	(1 << 0)
+#define PQ_INIT_PF_RL		(1 << 1)
+#define PQ_INIT_VF_RL		(1 << 2)
+
+/* defines for pq init */
+#define PQ_INIT_DEFAULT_WRR_GROUP	1
+#define PQ_INIT_DEFAULT_TC		0
+#define PQ_INIT_OFLD_TC			(p_hwfn->hw_info.offload_tc)
+
+static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
+			     struct ecore_qm_info *qm_info,
+			     u8 tc, u32 pq_init_flags)
+{
+	u16 pq_idx = qm_info->num_pqs, max_pq =
+					ecore_init_qm_get_num_pqs(p_hwfn);
+
+	if (pq_idx > max_pq)
+		DP_ERR(p_hwfn,
+		       "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+
+	/* init pq params */
+	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport +
+						 qm_info->num_vports;
+	qm_info->qm_pq_params[pq_idx].tc_id = tc;
+	qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP;
+	qm_info->qm_pq_params[pq_idx].rl_valid =
+		(pq_init_flags & PQ_INIT_PF_RL ||
+		 pq_init_flags & PQ_INIT_VF_RL);
+
+	/* qm params accounting */
+	qm_info->num_pqs++;
+	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
+		qm_info->num_vports++;
+
+	if (pq_init_flags & PQ_INIT_PF_RL)
+		qm_info->num_pf_rls++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+
+	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn))
+		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d,"
+		       " qm_init_get_num_pf_rls() %d\n",
+		       qm_info->num_pf_rls,
+		       ecore_init_qm_get_num_pf_rls(p_hwfn));
+}
+
+/* get pq index according to PQ_FLAGS */
+static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
+					     u32 pq_flags)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	/* Can't have multiple flags set here */
+	if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags,
+				sizeof(pq_flags)) > 1)
+		goto err;
+
+	switch (pq_flags) {
+	case PQ_FLAGS_RLS:
+		return &qm_info->first_rl_pq;
+	case PQ_FLAGS_MCOS:
+		return &qm_info->first_mcos_pq;
+	case PQ_FLAGS_LB:
+		return &qm_info->pure_lb_pq;
+	case PQ_FLAGS_OOO:
+		return &qm_info->ooo_pq;
+	case PQ_FLAGS_ACK:
+		return &qm_info->pure_ack_pq;
+	case PQ_FLAGS_OFLD:
+		return &qm_info->offload_pq;
+	case PQ_FLAGS_VFS:
+		return &qm_info->first_vf_pq;
+	default:
+		goto err;
+	}
+
+err:
+	DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
+	return OSAL_NULL;
+}
+
+/* save pq index in qm info */
+static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn,
+				  u32 pq_flags, u16 pq_val)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	*base_pq_idx = p_hwfn->qm_info.start_pq + pq_val;
+}
+
+/* get tx pq index, with the PQ TX base already set (ready for context init) */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	return *base_pq_idx + CM_TX_PQ_BASE;
+}
+
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
+{
+	u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn);
+
+	if (tc > max_tc)
+		DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+}
+
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (vf > max_vf)
+		DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+}
+
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 rl)
+{
+	u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
+
+	if (rl > max_rl)
+		DP_ERR(p_hwfn, "rl %d must be smaller than %d\n", rl, max_rl);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + rl;
+}
+
+/* Functions for creating specific types of pqs */
+static void ecore_init_qm_lb_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LB))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LB, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PURE_LB_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OOO))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
+			 PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_ACK))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OFLD))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 tc_idx;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_MCOS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_MCOS, qm_info->num_pqs);
+	for (tc_idx = 0; tc_idx < ecore_init_qm_get_num_tcs(p_hwfn); tc_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs);
 
 	qm_info->num_vf_pqs = num_vfs;
-	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC,
+				 PQ_INIT_VF_RL);
+}
 
-	for (i = 0; i < qm_info->num_vports; i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn);
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->vport_rl_en = 1;
-	qm_info->vport_wfq_en = 1;
-	qm_info->pf_rl = pf_rl;
-	qm_info->pf_wfq = pf_wfq;
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs);
+	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC,
+				 PQ_INIT_PF_RL);
+}
+
+static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
+{
+	/* rate limited pqs, must come first (FW assumption) */
+	ecore_init_qm_rl_pqs(p_hwfn);
+
+	/* pqs for multi cos */
+	ecore_init_qm_mcos_pqs(p_hwfn);
+
+	/* pure loopback pq */
+	ecore_init_qm_lb_pq(p_hwfn);
+
+	/* out of order pq */
+	ecore_init_qm_ooo_pq(p_hwfn);
+
+	/* pure ack pq */
+	ecore_init_qm_pure_ack_pq(p_hwfn);
+
+	/* pq for offloaded protocol */
+	ecore_init_qm_offload_pq(p_hwfn);
+
+	/* done sharing vports */
+	ecore_init_qm_advance_vport(p_hwfn);
+
+	/* pqs for vfs */
+	ecore_init_qm_vf_pqs(p_hwfn);
+}
+
+/* compare values of getters against resources amounts */
+static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn)
+{
+	if (ecore_init_qm_get_num_vports(p_hwfn) >
+	    RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n");
+		return ECORE_INVAL;
+	}
+
+	if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) {
+		DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n");
+		return ECORE_INVAL;
+	}
 
 	return ECORE_SUCCESS;
+}
 
- alloc_err:
-	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
-	ecore_qm_info_free(p_hwfn);
-	return ECORE_NOMEM;
+/*
+ * Function for verbose printing of the qm initialization results
+ */
+static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	struct init_qm_vport_params *vport;
+	struct init_qm_port_params *port;
+	struct init_qm_pq_params *pq;
+	int i, tc;
+
+	/* top level params */
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "qm init top level params: start_pq %d, start_vport %d,"
+		   " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n",
+		   qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq,
+		   qm_info->offload_pq, qm_info->pure_ack_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d,"
+		   " num_vports %d, max_phys_tcs_per_port %d\n",
+		   qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs,
+		   qm_info->num_vf_pqs, qm_info->num_vports,
+		   qm_info->max_phys_tcs_per_port);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d,"
+		   " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n",
+		   qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en,
+		   qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl,
+		   qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn));
+
+	/* port table */
+	for (i = 0; i < p_hwfn->p_dev->num_ports_in_engines; i++) {
+		port = &qm_info->qm_port_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "port idx %d, active %d, active_phys_tcs %d,"
+			   " num_pbf_cmd_lines %d, num_btb_blocks %d,"
+			   " reserved %d\n",
+			   i, port->active, port->active_phys_tcs,
+			   port->num_pbf_cmd_lines, port->num_btb_blocks,
+			   port->reserved);
+	}
+
+	/* vport table */
+	for (i = 0; i < qm_info->num_vports; i++) {
+		vport = &qm_info->qm_vport_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "vport idx %d, vport_rl %d, wfq %d,"
+			   " first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->vport_rl,
+			   vport->vport_wfq);
+		for (tc = 0; tc < NUM_OF_TCS; tc++)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
+				   vport->first_tx_pq_id[tc]);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n");
+	}
+
+	/* pq table */
+	for (i = 0; i < qm_info->num_pqs; i++) {
+		pq = &qm_info->qm_pq_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "pq idx %d, vport_id %d, tc %d, wrr_grp %d,"
+			   " rl_valid %d\n",
+			   qm_info->start_pq + i, pq->vport_id, pq->tc_id,
+			   pq->wrr_group, pq->rl_valid);
+	}
+}
+
+static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn)
+{
+	/* reset params required for init run */
+	ecore_init_qm_reset_params(p_hwfn);
+
+	/* init QM top level params */
+	ecore_init_qm_params(p_hwfn);
+
+	/* init QM port params */
+	ecore_init_qm_port_params(p_hwfn);
+
+	/* init QM vport params */
+	ecore_init_qm_vport_params(p_hwfn);
+
+	/* init QM physical queue params */
+	ecore_init_qm_pq_params(p_hwfn);
+
+	/* display all that init */
+	ecore_dp_init_qm_params(p_hwfn);
 }
 
 /* This function reconfigures the QM pf on the fly.
  * For this purpose we:
  * 1. reconfigure the QM database
- * 2. set new values to runtime arrat
+ * 2. set new values to runtime array
  * 3. send an sdm_qm_cmd through the rbc interface to stop the QM
  * 4. activate init tool in QM_PF stage
  * 5. send an sdm_qm_cmd through rbc interface to release the QM
@@ -470,20 +763,11 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	bool b_rc;
 	enum _ecore_status_t rc;
-
-	/* qm_info is allocated in ecore_init_qm_info() which is already called
-	 * from ecore_resc_alloc() or previous call of ecore_qm_reconf().
-	 * The allocated size may change each init, so we free it before next
-	 * allocation.
-	 */
-	ecore_qm_info_free(p_hwfn);
+	bool b_rc;
 
 	/* initialize ecore's qm data structure */
-	rc = ecore_init_qm_info(p_hwfn, false);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
 	OSAL_SPIN_LOCK(&qm_lock);
@@ -516,6 +800,48 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	enum _ecore_status_t rc;
+
+	rc = ecore_init_qm_sanity(p_hwfn);
+	if (rc != ECORE_SUCCESS)
+		goto alloc_err;
+
+	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					    sizeof(struct init_qm_pq_params) *
+					    ecore_init_qm_get_num_pqs(p_hwfn));
+	if (!qm_info->qm_pq_params)
+		goto alloc_err;
+
+	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				       sizeof(struct init_qm_vport_params) *
+				       ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->qm_vport_params)
+		goto alloc_err;
+
+	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				      sizeof(struct init_qm_port_params) *
+				      p_hwfn->p_dev->num_ports_in_engines);
+	if (!qm_info->qm_port_params)
+		goto alloc_err;
+
+	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					sizeof(struct ecore_wfq_data) *
+					ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->wfq_data)
+		goto alloc_err;
+
+	return ECORE_SUCCESS;
+
+alloc_err:
+	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
+	ecore_qm_info_free(p_hwfn);
+	return ECORE_NOMEM;
+}
+/******************** End QM initialization ***************/
+
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
 	struct ecore_consq *p_consq;
@@ -580,11 +906,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		/* Prepare and process QM requirements */
-		rc = ecore_init_qm_info(p_hwfn, true);
+		rc = ecore_alloc_qm_data(p_hwfn);
 		if (rc)
 			goto alloc_err;
 
+		/* init qm info */
+		ecore_init_qm_info(p_hwfn);
+
 		/* Compute the ILT client partition */
 		rc = ecore_cxt_cfg_ilt_compute(p_hwfn);
 		if (rc)
@@ -626,18 +954,18 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 * worst case:
 			 * - Core - according to SPQ.
 			 * - RoCE - per QP there are a couple of ICIDs, one
-			 *          responder and one requester, each can
-			 *          generate an EQE => n_eqes_qp = 2 * n_qp.
-			 *          Each CQ can generate an EQE. There are 2 CQs
-			 *          per QP => n_eqes_cq = 2 * n_qp.
-			 *          Hence the RoCE total is 4 * n_qp or
-			 *          2 * num_cons.
+			 *	  responder and one requester, each can
+			 *	  generate an EQE => n_eqes_qp = 2 * n_qp.
+			 *	  Each CQ can generate an EQE. There are 2 CQs
+			 *	  per QP => n_eqes_cq = 2 * n_qp.
+			 *	  Hence the RoCE total is 4 * n_qp or
+			 *	  2 * num_cons.
 			 * - ENet - There can be up to two events per VF. One
-			 *          for VF-PF channel and another for VF FLR
-			 *          initial cleanup. The number of VFs is
-			 *          bounded by MAX_NUM_VFS_BB, and is much
-			 *          smaller than RoCE's so we avoid exact
-			 *          calculation.
+			 *	  for VF-PF channel and another for VF FLR
+			 *	  initial cleanup. The number of VFs is
+			 *	  bounded by MAX_NUM_VFS_BB, and is much
+			 *	  smaller than RoCE's so we avoid exact
+			 *	  calculation.
 			 */
 			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
@@ -691,7 +1019,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for dmae_info structure\n");
+				  "Failed to allocate memory for dmae_info"
+				  " structure\n");
 			goto alloc_err;
 		}
 
@@ -713,9 +1042,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 	return ECORE_SUCCESS;
 
- alloc_no_mem:
+alloc_no_mem:
 	rc = ECORE_NOMEM;
- alloc_err:
+alloc_err:
 	ecore_resc_free(p_dev);
 	return rc;
 }
@@ -2361,7 +2690,7 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 			*p_resc_start = dflt_resc_start;
 		}
 	}
- out:
+out:
 	return ECORE_SUCCESS;
 }
 
@@ -3147,13 +3476,13 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 #endif
 
 	return rc;
- err2:
+err2:
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_iov_free_hw_info(p_dev);
 	ecore_mcp_free(p_hwfn);
- err1:
+err1:
 	ecore_hw_hwfn_free(p_hwfn);
- err0:
+err0:
 	return rc;
 }
 
@@ -3317,7 +3646,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 	if (!p_chain->pbl.external)
 		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
 				       p_chain->pbl.p_phys_table, pbl_size);
- out:
+out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3529,7 +3858,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 
 	return ECORE_SUCCESS;
 
- nomem:
+nomem:
 	ecore_chain_free(p_dev, p_chain);
 	return rc;
 }
@@ -3964,7 +4293,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
@@ -4008,7 +4337,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 22da415..2c47f6b 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -906,44 +906,6 @@ enum _ecore_status_t
 	return rc;
 }
 
-u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
-		    enum protocol_type proto,
-		    union ecore_qm_pq_params *p_params)
-{
-	u16 pq_id = 0;
-
-	if ((proto == PROTOCOLID_CORE ||
-	     proto == PROTOCOLID_ETH) && !p_params) {
-		DP_NOTICE(p_hwfn, true,
-			  "Protocol %d received NULL PQ params\n", proto);
-		return 0;
-	}
-
-	switch (proto) {
-	case PROTOCOLID_CORE:
-		if (p_params->core.tc == LB_TC)
-			pq_id = p_hwfn->qm_info.pure_lb_pq;
-		else if (p_params->core.tc == PKT_LB_TC)
-			pq_id = p_hwfn->qm_info.ooo_pq;
-		else
-			pq_id = p_hwfn->qm_info.offload_pq;
-		break;
-	case PROTOCOLID_ETH:
-		pq_id = p_params->eth.tc;
-		/* TODO - multi-CoS for VFs? */
-		if (p_params->eth.is_vf)
-			pq_id += p_hwfn->qm_info.vf_queues_offset +
-			    p_params->eth.vf_id;
-		break;
-	default:
-		pq_id = 0;
-	}
-
-	pq_id = CM_TX_PQ_BASE + pq_id + RESC_START(p_hwfn, ECORE_PQ);
-
-	return pq_id;
-}
-
 void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
 			 enum ecore_hw_err_type err_type)
 {
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index d2e1719..0220d19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -834,13 +834,13 @@ enum _ecore_status_t
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params)
+			      u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
 	struct ecore_hw_cid_data *p_tx_cid;
-	u16 pq_id, abs_tx_qzone_id = 0;
+	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 abs_vport_id;
 
@@ -882,7 +882,6 @@ enum _ecore_status_t
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
 
-	pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
 	p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
@@ -898,7 +897,6 @@ enum _ecore_status_t
 			    void OSAL_IOMEM * *pp_doorbell)
 {
 	struct ecore_hw_cid_data *p_tx_cid;
-	union ecore_qm_pq_params pq_params;
 	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
@@ -918,9 +916,6 @@ enum _ecore_status_t
 
 	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
 	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-
-	pq_params.eth.tc = tc;
 
 	/* Allocate a CID for the queue */
 	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
@@ -944,7 +939,8 @@ enum _ecore_status_t
 					   p_params,
 					   pbl_addr,
 					   pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_mcos(p_hwfn,
+								    tc));
 
 	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
 	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 9c1bd38..b598eda 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -81,7 +81,7 @@ enum _ecore_status_t
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params);
+			      u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 066f3fb..fa2bce3 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -173,11 +173,10 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	u16 pq;
 	struct ecore_cxt_info cxt_info;
 	struct core_conn_context *p_cxt;
-	union ecore_qm_pq_params pq_params;
 	enum _ecore_status_t rc;
+	u16 physical_q;
 
 	cxt_info.iid = p_spq->cid;
 
@@ -206,10 +205,8 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(pq);
+	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
+	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
 	p_cxt->xstorm_st_context.spq_base_lo =
 	    DMA_LO_LE(p_spq->chain.p_phys_addr);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 8d25d52..8134d90 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -634,8 +634,8 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
-				bool b_fail_malicious)
+static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
+				       bool b_fail_malicious)
 {
 	/* Check PF supports sriov */
 	if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
@@ -2105,15 +2105,9 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	union ecore_qm_pq_params pq_params;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
 
-	/* Prepare the parameters which would choose the right PQ */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.eth.is_vf = 1;
-	pq_params.eth.vf_id = vf->relative_vf_id;
-
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
@@ -2134,7 +2128,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 					   &params,
 					   req->pbl_addr,
 					   req->pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_vf(p_hwfn,
+							vf->relative_vf_id));
 
 	if (rc)
 		status = PFVF_STATUS_FAILURE;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
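
To make the new accounting concrete, take a hypothetical Ethernet PF
(personality ECORE_PCI_ETH) with 4 TCs and 8 VFs. ecore_get_pq_flags()
yields PQ_FLAGS_LB | PQ_FLAGS_MCOS | PQ_FLAGS_VFS, so the getters
compute:

  num_pqs    = 4 (MCOS) + 1 (pure LB) + 8 (VFs)        = 13
  num_vports = 1 (PF, shared by the MCOS/LB PQs) + 8   = 9

ecore_init_qm_pq_params() lays the PQs out in a fixed order (RL, MCOS,
LB, OOO, ACK, OFLD, VF), so relative to start_pq the MCOS PQs occupy
slots 0..3, the pure-LB PQ slot 4 and the VF PQs slots 5..12; e.g.
ecore_get_cm_pq_idx_vf(p_hwfn, vf) resolves to
first_vf_pq + CM_TX_PQ_BASE + vf.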

* [PATCH 21/61] net/qede/base: add a printout of the FW, MFW and MBI versions
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (19 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 20/61] net/qede/base: qm initialization revamp Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 22/61] net/qede/base: check active VF queues before stopping Rasesh Mody
                   ` (40 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a printout of the FW, Management FW and MBI versions.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_if.h   |    9 ++++++++-
 drivers/net/qede/qede_main.c |   14 ++++++--------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 18404fb..1e27428 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -30,12 +30,19 @@ struct qed_dev_info {
 
 	/* MFW version */
 	uint32_t mfw_rev;
+#define QED_MFW_VERSION_0_MASK		0x000000FF
+#define QED_MFW_VERSION_0_OFFSET	0
+#define QED_MFW_VERSION_1_MASK		0x0000FF00
+#define QED_MFW_VERSION_1_OFFSET	8
+#define QED_MFW_VERSION_2_MASK		0x00FF0000
+#define QED_MFW_VERSION_2_OFFSET	16
+#define QED_MFW_VERSION_3_MASK		0xFF000000
+#define QED_MFW_VERSION_3_OFFSET	24
 
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
-	/* To be added... */
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e76346e..1d4f336 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -327,6 +327,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
 
@@ -337,13 +339,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		dev_info->fw_eng = FW_ENGINEERING_VERSION;
 		dev_info->mf_mode = edev->mf_mode;
 		dev_info->tx_switching = false;
-	} else {
-		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
-					&dev_info->fw_minor, &dev_info->fw_rev,
-					&dev_info->fw_eng);
-	}
 
-	if (IS_PF(edev)) {
 		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
 		if (ptt) {
 			ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
@@ -361,12 +357,14 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 			ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
 		}
 	} else {
+		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
+					&dev_info->fw_minor, &dev_info->fw_rev,
+					&dev_info->fw_eng);
+
 		ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
 				      &dev_info->mfw_rev, NULL);
 	}
 
-	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-
 	return 0;
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
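
For reference, the packed mfw_rev word can be rendered as the usual
dotted version string using the new mask/offset macros. A minimal
sketch, assuming the highest byte is the major version (the helper
itself is illustrative; only the QED_MFW_VERSION_* macros come from
the patch):

#include <stdio.h>
#include <stdint.h>
#include "qede_if.h"	/* QED_MFW_VERSION_* */

static void qed_mfw_ver_str(uint32_t mfw_rev, char *buf, size_t len)
{
	uint32_t maj = (mfw_rev & QED_MFW_VERSION_3_MASK) >>
		       QED_MFW_VERSION_3_OFFSET;
	uint32_t min = (mfw_rev & QED_MFW_VERSION_2_MASK) >>
		       QED_MFW_VERSION_2_OFFSET;
	uint32_t rev = (mfw_rev & QED_MFW_VERSION_1_MASK) >>
		       QED_MFW_VERSION_1_OFFSET;
	uint32_t eng = (mfw_rev & QED_MFW_VERSION_0_MASK) >>
		       QED_MFW_VERSION_0_OFFSET;

	snprintf(buf, len, "%u.%u.%u.%u", maj, min, rev, eng);
}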

* [PATCH 22/61] net/qede/base: check active VF queues before stopping
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (20 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 21/61] net/qede/base: add a printout of the FW, MFW and MBI versions Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 23/61] net/qede/base: set the drv_type before sending load request Rasesh Mody
                   ` (39 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure VF queues are closed before stopping the vport.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   37 ++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 8134d90..ce14460 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -232,6 +232,30 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].rxq_active)
+			return true;
+
+	return false;
+}
+
+static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (p_vf->vf_queues[i].txq_active)
+			return true;
+
+	return false;
+}
+
 /* TODO - this is linux crc32; Need a way to ifdef it out for linux */
 u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
 {
@@ -1367,8 +1391,10 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 
 	p_vf->num_active_rxqs = 0;
 
-	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++)
+	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
 		p_vf->vf_queues[i].rxq_active = 0;
+		p_vf->vf_queues[i].txq_active = 0;
+	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
 	OSAL_MEMSET(&p_vf->acquire, 0, sizeof(p_vf->acquire));
@@ -1945,6 +1971,15 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->vport_instance--;
 	vf->spoof_chk = false;
 
+	if ((ecore_iov_validate_active_rxq(p_hwfn, vf)) ||
+	    (ecore_iov_validate_active_txq(p_hwfn, vf))) {
+		vf->b_malicious = true;
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] - considered malicious;"
+			  " Unable to stop RX/TX queues\n",
+			  vf->abs_vf_id);
+	}
+
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 23/61] net/qede/base: set the drv_type before sending load request
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (21 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 22/61] net/qede/base: check active VF queues before stopping Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 24/61] net/qede/base: prevent driver load with invalid resources Rasesh Mody
                   ` (38 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Set the drv_type before sending LOAD_REQ, and remove ver_str, which
is not used by the MFW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    3 +--
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 drivers/net/qede/qede_ethdev.c    |    2 +-
 drivers/net/qede/qede_if.h        |    3 +--
 drivers/net/qede/qede_main.c      |   10 ++++------
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 58c97a3..b8c8bfd 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -30,7 +30,6 @@
 
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
-#define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
 #include "../qede_logs.h" /* @DPDK */
 
@@ -706,7 +705,7 @@ struct ecore_dev {
 
 	int				pcie_width;
 	int				pcie_speed;
-	u8				ver_str[NAME_SIZE]; /* @DPDK */
+
 	/* Add MF related configuration */
 	u8				mcp_rev;
 	u8				boot_mode;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index dc1a5cd..c5cc827 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -525,7 +525,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
@@ -539,8 +538,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
 			  p_dev->drv_type;
-	OSAL_MEMCPY(&union_data.ver_str, p_dev->ver_str, MCP_DRV_VER_STR_SIZE);
-	mb_params.p_data_src = &union_data;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c372181..d52e1be 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2175,7 +2175,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
-	adapter->ops->common->set_id(edev, edev->name, QEDE_PMD_VERSION);
+	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
 		adapter->dev_info.num_mac_filters =
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1e27428..0a1f7db 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -116,8 +116,7 @@ struct qed_common_ops {
 		     struct rte_pci_device *pci_dev,
 		     enum qed_protocol protocol,
 		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
-	void (*set_id)(struct ecore_dev *edev,
-		char name[], const char ver_str[]);
+	void (*set_name)(struct ecore_dev *edev, char name[]);
 	enum _ecore_status_t
 		(*chain_alloc)(struct ecore_dev *edev,
 			       enum ecore_chain_use_mode
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 1d4f336..a932c5f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -50,7 +50,9 @@ static void qed_init_pci(struct ecore_dev *edev, struct rte_pci_device *pci_dev)
 	int rc;
 
 	ecore_init_struct(edev);
+	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 	qdev->protocol = protocol;
+
 	if (is_vf)
 		edev->b_is_vf = true;
 
@@ -420,9 +422,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	return 0;
 }
 
-static void
-qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
-	   const char ver_str[NAME_SIZE])
+static void qed_set_name(struct ecore_dev *edev, char name[NAME_SIZE])
 {
 	int i;
 
@@ -430,8 +430,6 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	for_each_hwfn(edev, i) {
 		snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
 	}
-	memcpy(edev->ver_str, ver_str, NAME_SIZE);
-	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 }
 
 static uint32_t
@@ -714,7 +712,7 @@ static int qed_get_sb_info(struct ecore_dev *edev, struct ecore_sb_info *sb,
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
 	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
-	INIT_STRUCT_FIELD(set_id, &qed_set_id),
+	INIT_STRUCT_FIELD(set_name, &qed_set_name),
 	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
 	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
 	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 24/61] net/qede/base: prevent driver load with invalid resources
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (22 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 23/61] net/qede/base: set the drv_type before sending load request Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 25/61] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
                   ` (37 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent storage drivers from attempting to load with invalid resources.
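
The core of the change is clamping the storage CQ feature counts to what
the status blocks and command queues can actually back; a standalone
sketch with made-up resource counts:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t min_u32(uint32_t a, uint32_t b)
    {
            return a < b ? a : b;
    }

    int main(void)
    {
            uint32_t num_sbs = 64;       /* hypothetical ECORE_SB count */
            uint32_t num_cmdqs_cqs = 48; /* hypothetical ECORE_CMDQS_CQS */

            /* Neither FCoE nor iSCSI may get more CQs than either
             * backing resource provides.
             */
            uint32_t fcoe_cq = min_u32(num_sbs, num_cmdqs_cqs);
            uint32_t iscsi_cq = min_u32(num_sbs, num_cmdqs_cqs);

            printf("#FCOE_CQ=%u #ISCSI_CQ=%u\n", fcoe_cq, iscsi_cq);
            return 0;
    }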

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e80813b..35574d4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2445,13 +2445,19 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 			   FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 			   sb_cnt_info.sb_iov_cnt);
 
+	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #SBS=%d num_features=%d\n",
+		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
 		   (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
-		   RESC_NUM(p_hwfn, ECORE_SB),
-		   num_features);
+		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
+		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
+		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
 static enum resource_id_enum
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 25/61] net/qede/base: add interfaces for MFW TLV request processing
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (23 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 24/61] net/qede/base: prevent driver load with invalid resources Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 26/61] net/qede/base: fix to set pointers to NULL after freeing Rasesh Mody
                   ` (36 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new base driver interfaces for Management FW TLV request processing.
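
Every TLV value in the new structures is paired with a *_set flag so the
ecore client can report exactly which fields it filled in. A trimmed,
hypothetical sketch of how a client callback would use that pattern:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Two-field stand-in for ecore_mfw_tlv_generic */
    struct tlv_generic {
            uint16_t tx_descr_size;
            bool tx_descr_size_set;
            uint64_t rx_frames;
            bool rx_frames_set;
    };

    /* Zero the struct, fill only the values the client knows, and flag
     * each one; the TLV encoder can then skip anything left unset.
     */
    static void client_fill_generic_tlvs(struct tlv_generic *t)
    {
            memset(t, 0, sizeof(*t));

            t->tx_descr_size = 512;
            t->tx_descr_size_set = true;

            t->rx_frames = 1000000;
            t->rx_frames_set = true;
    }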

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    6 +
 drivers/net/qede/base/ecore_mcp_api.h |  301 +++++++++++++++++++++++++++++++++
 2 files changed, 307 insertions(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index c5cc827..e4fa872 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2503,3 +2503,9 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 1be22dd..8cad43d 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -232,6 +232,295 @@ struct ecore_mba_vers {
 	u32 mba_vers[ECORE_MAX_NUM_OF_ROMIMG];
 };
 
+enum ecore_mfw_tlv_type {
+	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x4,	/* iSCSI protocol TLVs */
+};
+
+struct ecore_mfw_tlv_generic {
+	u16 feat_flags;
+	bool feat_flags_set;
+	u64 local_mac;
+	bool local_mac_set;
+	u64 additional_mac1;
+	bool additional_mac1_set;
+	u64 additional_mac2;
+	bool additional_mac2_set;
+	u16 lso_maxoff_size;
+	bool lso_maxoff_size_set;
+	u16 lso_minseg_size;
+	bool lso_minseg_size_set;
+	u8 prom_mode;
+	bool prom_mode_set;
+	u16 tx_descr_size;
+	bool tx_descr_size_set;
+	u16 rx_descr_size;
+	bool rx_descr_size_set;
+	u16 netq_count;
+	bool netq_count_set;
+	u16 flex_vlan;
+	bool flex_vlan_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u32 tcp4_offloads;
+	bool tcp4_offloads_set;
+	u32 tcp6_offloads;
+	bool tcp6_offloads_set;
+	u16 tx_descr_qdepth;
+	bool tx_descr_qdepth_set;
+	u16 rx_descr_qdepth;
+	bool rx_descr_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u8 iov_offload;
+	bool iov_offload_set;
+	u8 txqs_empty;
+	bool txqs_empty_set;
+	u8 rxqs_empty;
+	bool rxqs_empty_set;
+	u8 num_txqs_full;
+	bool num_txqs_full_set;
+	u8 num_rxqs_full;
+	bool num_rxqs_full_set;
+};
+
+struct ecore_mfw_tlv_fcoe {
+	u8 scsi_timeout;
+	bool scsi_timeout_set;
+	u32 rt_tov;
+	bool rt_tov_set;
+	u32 ra_tov;
+	bool ra_tov_set;
+	u32 ed_tov;
+	bool ed_tov_set;
+	u32 cr_tov;
+	bool cr_tov_set;
+	u8 boot_type;
+	bool boot_type_set;
+	u8 npiv_state;
+	bool npiv_state_set;
+	u32 num_npiv_ids;
+	bool num_npiv_ids_set;
+	u8 switch_name[8];
+	bool switch_name_set;
+	u16 switch_portnum;
+	bool switch_portnum_set;
+	u8 switch_portid[3];
+	bool switch_portid_set;
+	u8 vendor_name[8];
+	bool vendor_name_set;
+	u8 switch_model[8];
+	bool switch_model_set;
+	u8 switch_fw_version[8];
+	bool switch_fw_version_set;
+	u8 qos_pri;
+	bool qos_pri_set;
+	u8 port_alias[3];
+	bool port_alias_set;
+	u8 port_state;
+	bool port_state_set;
+	u16 fip_tx_descr_size;
+	bool fip_tx_descr_size_set;
+	u16 fip_rx_descr_size;
+	bool fip_rx_descr_size_set;
+	u16 link_failures;
+	bool link_failures_set;
+	u8 fcoe_boot_progress;
+	bool fcoe_boot_progress_set;
+	u64 rx_bcast;
+	bool rx_bcast_set;
+	u64 tx_bcast;
+	bool tx_bcast_set;
+	u16 fcoe_txq_depth;
+	bool fcoe_txq_depth_set;
+	u16 fcoe_rxq_depth;
+	bool fcoe_rxq_depth_set;
+	u64 fcoe_rx_frames;
+	bool fcoe_rx_frames_set;
+	u64 fcoe_rx_bytes;
+	bool fcoe_rx_bytes_set;
+	u64 fcoe_tx_frames;
+	bool fcoe_tx_frames_set;
+	u64 fcoe_tx_bytes;
+	bool fcoe_tx_bytes_set;
+	u16 crc_count;
+	bool crc_count_set;
+	u32 crc_err_src_fcid[5];
+	bool crc_err_src_fcid_set[5];
+	u8 crc_err_tstamp[5][14];
+	bool crc_err_tstamp_set[5];
+	u16 losync_err;
+	bool losync_err_set;
+	u16 losig_err;
+	bool losig_err_set;
+	u16 primtive_err;
+	bool primtive_err_set;
+	u16 disparity_err;
+	bool disparity_err_set;
+	u16 code_violation_err;
+	bool code_violation_err_set;
+	u32 flogi_param[4];
+	bool flogi_param_set[4];
+	u8 flogi_tstamp[14];
+	bool flogi_tstamp_set;
+	u32 flogi_acc_param[4];
+	bool flogi_acc_param_set[4];
+	u8 flogi_acc_tstamp[14];
+	bool flogi_acc_tstamp_set;
+	u32 flogi_rjt;
+	bool flogi_rjt_set;
+	u8 flogi_rjt_tstamp[14];
+	bool flogi_rjt_tstamp_set;
+	u32 fdiscs;
+	bool fdiscs_set;
+	u8 fdisc_acc;
+	bool fdisc_acc_set;
+	u8 fdisc_rjt;
+	bool fdisc_rjt_set;
+	u8 plogi;
+	bool plogi_set;
+	u8 plogi_acc;
+	bool plogi_acc_set;
+	u8 plogi_rjt;
+	bool plogi_rjt_set;
+	u32 plogi_dst_fcid[5];
+	bool plogi_dst_fcid_set[5];
+	u8 plogi_tstamp[5][14];
+	bool plogi_tstamp_set[5];
+	u32 plogi_acc_src_fcid[5];
+	bool plogi_acc_src_fcid_set[5];
+	u8 plogi_acc_tstamp[5][14];
+	bool plogi_acc_tstamp_set[5];
+	u8 tx_plogos;
+	bool tx_plogos_set;
+	u8 plogo_acc;
+	bool plogo_acc_set;
+	u8 plogo_rjt;
+	bool plogo_rjt_set;
+	u32 plogo_src_fcid[5];
+	bool plogo_src_fcid_set[5];
+	u8 plogo_tstamp[5][14];
+	bool plogo_tstamp_set[5];
+	u8 rx_logos;
+	bool rx_logos_set;
+	u8 tx_accs;
+	bool tx_accs_set;
+	u8 tx_prlis;
+	bool tx_prlis_set;
+	u8 rx_accs;
+	bool rx_accs_set;
+	u8 tx_abts;
+	bool tx_abts_set;
+	u8 rx_abts_acc;
+	bool rx_abts_acc_set;
+	u8 rx_abts_rjt;
+	bool rx_abts_rjt_set;
+	u32 abts_dst_fcid[5];
+	bool abts_dst_fcid_set[5];
+	u8 abts_tstamp[5][14];
+	bool abts_tstamp_set[5];
+	u8 rx_rscn;
+	bool rx_rscn_set;
+	u32 rx_rscn_nport[4];
+	bool rx_rscn_nport_set[4];
+	u8 tx_lun_rst;
+	bool tx_lun_rst_set;
+	u8 abort_task_sets;
+	bool abort_task_sets_set;
+	u8 tx_tprlos;
+	bool tx_tprlos_set;
+	u8 tx_nos;
+	bool tx_nos_set;
+	u8 rx_nos;
+	bool rx_nos_set;
+	u8 ols;
+	bool ols_set;
+	u8 lr;
+	bool lr_set;
+	u8 llr;
+	bool llr_set;
+	u8 tx_lip;
+	bool tx_lip_set;
+	u8 rx_lip;
+	bool rx_lip_set;
+	u8 eofa;
+	bool eofa_set;
+	u8 eofni;
+	bool eofni_set;
+	u8 scsi_chks;
+	bool scsi_chks_set;
+	u8 scsi_cond_met;
+	bool scsi_cond_met_set;
+	u8 scsi_busy;
+	bool scsi_busy_set;
+	u8 scsi_inter;
+	bool scsi_inter_set;
+	u8 scsi_inter_cond_met;
+	bool scsi_inter_cond_met_set;
+	u8 scsi_rsv_conflicts;
+	bool scsi_rsv_conflicts_set;
+	u8 scsi_tsk_full;
+	bool scsi_tsk_full_set;
+	u8 scsi_aca_active;
+	bool scsi_aca_active_set;
+	u8 scsi_tsk_abort;
+	bool scsi_tsk_abort_set;
+	u32 scsi_rx_chk[5];
+	bool scsi_rx_chk_set[5];
+	u8 scsi_chk_tstamp[5][14];
+	bool scsi_chk_tstamp_set[5];
+};
+
+struct ecore_mfw_tlv_iscsi {
+	u8 target_llmnr;
+	bool target_llmnr_set;
+	u8 header_digest;
+	bool header_digest_set;
+	u8 data_digest;
+	bool data_digest_set;
+	u8 auth_method;
+	bool auth_method_set;
+	u16 boot_taget_portal;
+	bool boot_taget_portal_set;
+	u16 frame_size;
+	bool frame_size_set;
+	u16 tx_desc_size;
+	bool tx_desc_size_set;
+	u16 rx_desc_size;
+	bool rx_desc_size_set;
+	u8 boot_progress;
+	bool boot_progress_set;
+	u16 tx_desc_qdepth;
+	bool tx_desc_qdepth_set;
+	u16 rx_desc_qdepth;
+	bool rx_desc_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u32 cpcp_spcp_map;
+	bool cpcp_spcp_map_set;
+};
+
+union ecore_mfw_tlv_data {
+	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_fcoe fcoe;
+	struct ecore_mfw_tlv_iscsi iscsi;
+};
+
 /**
  * @brief - returns the link params of the hw function
  *
@@ -820,4 +1109,16 @@ enum _ecore_status_t
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Process the TLV request from the MFW, i.e., get the required TLV
+ *          info from the ecore client and send it to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt);
+
 #endif
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 26/61] net/qede/base: fix to set pointers to NULL after freeing
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (24 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 25/61] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 27/61] net/qede/base: L2 handler changes Rasesh Mody
                   ` (35 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Set pointers to NULL after the memory they reference is freed.
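
The pattern applied throughout is the classic free-then-NULL guard, which
turns an accidental double free into a harmless no-op. A plain libc
sketch (the driver itself goes through its OSAL_FREE/OSAL_VFREE wrappers):

    #include <stdlib.h>

    #define FREE_AND_NULL(p)        \
            do {                    \
                    free(p);        \
                    (p) = NULL;     \
            } while (0)

    int main(void)
    {
            char *buf = malloc(64);

            FREE_AND_NULL(buf);
            FREE_AND_NULL(buf); /* now safe: free(NULL) is a no-op */
            return 0;
    }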

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c      |   45 +++++++-----------------
 drivers/net/qede/base/ecore_init_ops.c |    4 +++
 drivers/net/qede/base/ecore_int.c      |    4 +++
 drivers/net/qede/base/ecore_spq.c      |   60 ++++++++++++++++++--------------
 drivers/net/qede/base/ecore_spq.h      |   35 +++++++------------
 drivers/net/qede/base/ecore_sriov.c    |    1 +
 6 files changed, 67 insertions(+), 82 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 35574d4..3591381 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -173,12 +173,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
-		ecore_eq_free(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_free(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_free(p_hwfn);
+		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
-#ifdef CONFIG_ECORE_LL2
-		ecore_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -844,11 +841,6 @@ static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
 
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
-	struct ecore_consq *p_consq;
-	struct ecore_eq *p_eq;
-#ifdef	CONFIG_ECORE_LL2
-	struct ecore_ll2_info *p_ll2_info;
-#endif
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
@@ -996,24 +988,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			goto alloc_no_mem;
 		}
 
-		p_eq = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
-		if (!p_eq)
-			goto alloc_no_mem;
-		p_hwfn->p_eq = p_eq;
+		rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
+		if (rc)
+			goto alloc_err;
 
-		p_consq = ecore_consq_alloc(p_hwfn);
-		if (!p_consq)
-			goto alloc_no_mem;
-		p_hwfn->p_consq = p_consq;
-
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2) {
-			p_ll2_info = ecore_ll2_alloc(p_hwfn);
-			if (!p_ll2_info)
-				goto alloc_no_mem;
-			p_hwfn->p_ll2_info = p_ll2_info;
-		}
-#endif
+		rc = ecore_consq_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
 
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
@@ -1061,8 +1042,8 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_cxt_mngr_setup(p_hwfn);
 		ecore_spq_setup(p_hwfn);
-		ecore_eq_setup(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_setup(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_setup(p_hwfn);
+		ecore_consq_setup(p_hwfn);
 
 		/* Read shadow of current MFW mailbox */
 		ecore_mcp_read_mb(p_hwfn, p_hwfn->p_main_ptt);
@@ -1073,10 +1054,6 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2)
-			ecore_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 	}
 }
 
@@ -2370,6 +2347,7 @@ static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
 	ecore_ptt_pool_free(p_hwfn);
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->hw_info.p_igu_info);
+	p_hwfn->hw_info.p_igu_info = OSAL_NULL;
 }
 
 /* Setup bar access */
@@ -3654,6 +3632,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				       p_chain->pbl.p_phys_table, pbl_size);
 out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
+	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 }
 
 void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index b907a95..3d0273b 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -115,6 +115,7 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn)
 					sizeof(u32) * RUNTIME_ARRAY_SIZE);
 	if (!rt_data->init_val) {
 		OSAL_FREE(p_hwfn->p_dev, rt_data->b_valid);
+		rt_data->b_valid = OSAL_NULL;
 		return ECORE_NOMEM;
 	}
 
@@ -124,7 +125,9 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn)
 void ecore_init_free(struct ecore_hwfn *p_hwfn)
 {
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->rt_data.init_val);
+	p_hwfn->rt_data.init_val = OSAL_NULL;
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->rt_data.b_valid);
+	p_hwfn->rt_data.b_valid = OSAL_NULL;
 }
 
 static enum _ecore_status_t ecore_init_array_dmae(struct ecore_hwfn *p_hwfn,
@@ -506,6 +509,7 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 	}
 #ifdef CONFIG_ECORE_ZIPPED_FW
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->unzip_buf);
+	p_hwfn->unzip_buf = OSAL_NULL;
 #endif
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e5a4359..ffcae46 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1229,7 +1229,9 @@ static void ecore_int_sb_attn_free(struct ecore_hwfn *p_hwfn)
 				       p_sb->sb_phys,
 				       SB_ATTN_ALIGNED_SIZE(p_hwfn));
 	}
+
 	OSAL_FREE(p_hwfn->p_dev, p_sb);
+	p_hwfn->p_sb_attn = OSAL_NULL;
 }
 
 static void ecore_int_sb_attn_setup(struct ecore_hwfn *p_hwfn,
@@ -1593,6 +1595,7 @@ static void ecore_int_sp_sb_free(struct ecore_hwfn *p_hwfn)
 	}
 
 	OSAL_FREE(p_hwfn->p_dev, p_sb);
+	p_hwfn->p_sp_sb = OSAL_NULL;
 }
 
 static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
@@ -2126,6 +2129,7 @@ static enum _ecore_status_t ecore_int_sp_dpc_alloc(struct ecore_hwfn *p_hwfn)
 static void ecore_int_sp_dpc_free(struct ecore_hwfn *p_hwfn)
 {
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->sp_dpc);
+	p_hwfn->sp_dpc = OSAL_NULL;
 }
 
 enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index fa2bce3..23ed772 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -355,7 +355,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 {
 	struct ecore_eq *p_eq;
 
@@ -364,7 +364,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	if (!p_eq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_eq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain*/
@@ -374,7 +374,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      num_elem,
 			      sizeof(union event_ring_element),
-			      &p_eq->chain, OSAL_NULL)) {
+			      &p_eq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain\n");
 		goto eq_allocate_fail;
 	}
@@ -383,25 +383,28 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	ecore_int_register_cb(p_hwfn, ecore_eq_completion,
 			      p_eq, &p_eq->eq_sb_index, &p_eq->p_fw_cons);
 
-	return p_eq;
+	p_hwfn->p_eq = p_eq;
+	return ECORE_SUCCESS;
 
 eq_allocate_fail:
-	ecore_eq_free(p_hwfn, p_eq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_eq);
+	return ECORE_NOMEM;
 }
 
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_eq->chain);
+	ecore_chain_reset(&p_hwfn->p_eq->chain);
 }
 
-void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_eq)
+	if (!p_hwfn->p_eq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_eq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_eq);
-	p_eq = OSAL_NULL;
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_eq->chain);
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_eq);
+	p_hwfn->p_eq = OSAL_NULL;
 }
 
 /***************************************************************************
@@ -554,7 +557,9 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn)
 
 	ecore_chain_free(p_hwfn->p_dev, &p_spq->chain);
 	OSAL_SPIN_LOCK_DEALLOC(&p_spq->lock);
+
 	OSAL_FREE(p_hwfn->p_dev, p_spq);
+	p_hwfn->p_spq = OSAL_NULL;
 }
 
 enum _ecore_status_t
@@ -944,7 +949,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_consq *p_consq;
 
@@ -954,7 +959,7 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_consq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_consq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain */
@@ -964,28 +969,31 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      ECORE_CHAIN_PAGE_SIZE / 0x80,
 			      0x80,
-			      &p_consq->chain, OSAL_NULL)) {
+			      &p_consq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate consq chain");
 		goto consq_allocate_fail;
 	}
 
-	return p_consq;
+	p_hwfn->p_consq = p_consq;
+	return ECORE_SUCCESS;
 
 consq_allocate_fail:
-	ecore_consq_free(p_hwfn, p_consq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_consq);
+	return ECORE_NOMEM;
 }
 
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_consq->chain);
+	ecore_chain_reset(&p_hwfn->p_consq->chain);
 }
 
-void ecore_consq_free(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_consq)
+	if (!p_hwfn->p_consq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_consq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_consq);
-	p_consq = OSAL_NULL;
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain);
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq);
+	p_hwfn->p_consq = OSAL_NULL;
 }
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 717ede3..e2468b7 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -194,28 +194,23 @@ void ecore_spq_return_entry(struct ecore_hwfn		*p_hwfn,
  * @param p_hwfn
  * @param num_elem number of elements in the eq
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn	*p_hwfn,
-				 u16			num_elem);
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn	*p_hwfn, u16 num_elem);
 
 /**
- * @brief ecore_eq_setup - Reset the SPQ to its start state.
+ * @brief ecore_eq_setup - Reset the EQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_eq   *p_eq);
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_eq_deallocate - deallocates the given EQ struct.
+ * @brief ecore_eq_free - deallocates the given EQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_eq   *p_eq);
+void ecore_eq_free(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_eq_prod_update - update the FW with default EQ producer
@@ -261,32 +256,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn	*p_hwfn,
 u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_alloc - Allocates & initializes an ConsQ
- *        struct
+ * @brief ecore_consq_alloc - Allocates & initializes a ConsQ struct
  *
  * @param p_hwfn
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn	*p_hwfn);
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_setup - Reset the ConsQ to its start
- *        state.
+ * @brief ecore_consq_setup - Reset the ConsQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_consq   *p_consq);
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_consq_free - deallocates the given ConsQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_consq   *p_consq);
+void ecore_consq_free(struct ecore_hwfn *p_hwfn);
 
 #endif /* __ECORE_SPQ_H__ */
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index ce14460..87ffa34 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -573,6 +573,7 @@ void ecore_iov_free(struct ecore_hwfn *p_hwfn)
 	if (IS_PF_SRIOV_ALLOC(p_hwfn)) {
 		ecore_iov_free_vfdb(p_hwfn);
 		OSAL_FREE(p_hwfn->p_dev, p_hwfn->pf_iov_info);
+		p_hwfn->pf_iov_info = OSAL_NULL;
 	}
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 27/61] net/qede/base: L2 handler changes
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (25 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 26/61] net/qede/base: fix to set pointers to NULL after freeing Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 28/61] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
                   ` (34 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

L2 handler changes:

This change removes the queue-id/qzone difference for Tx queues.

It does so mainly by:

a. No longer deriving VF queues from the SBs they're using. Instead,
the ecore-client needs to maintain those and choose the values to be
used by the VF when initializing it.

b. Eliminating the HW-cid array in the hw-function. To do that, all
the Rx/Tx functionality becomes handle-based - when a queue is started,
the caller gets back a (void *) handle, which it later passes to ecore
for the various queue-related operations [update, stop]. See the
caller-side sketch below.
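
A caller-side sketch of the handle-based flow, with heavily simplified,
hypothetical signatures (the real start functions also take the queue
start-params and return producer/doorbell addresses):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical opaque queue context, standing in for ecore_queue_cid */
    struct queue_cid {
            int cid;
    };

    static void *queue_start(int queue_id)
    {
            struct queue_cid *p_cid = malloc(sizeof(*p_cid));

            if (p_cid == NULL)
                    return NULL;
            p_cid->cid = queue_id;
            return p_cid; /* handed back as an opaque handle */
    }

    static void queue_stop(void *handle)
    {
            struct queue_cid *p_cid = handle;

            printf("stopping cid %d\n", p_cid->cid);
            free(p_cid);
    }

    int main(void)
    {
            /* Keep the handle, not the queue-id, for later operations */
            void *rxq = queue_start(3);

            if (rxq != NULL)
                    queue_stop(rxq);
            return 0;
    }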

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 -
 drivers/net/qede/base/ecore_dev.c     |   39 ---
 drivers/net/qede/base/ecore_int.c     |   24 --
 drivers/net/qede/base/ecore_int.h     |   10 -
 drivers/net/qede/base/ecore_iov_api.h |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  526 ++++++++++++++++++---------------
 drivers/net/qede/base/ecore_l2.h      |   84 +++---
 drivers/net/qede/base/ecore_l2_api.h  |  108 ++++---
 drivers/net/qede/base/ecore_sriov.c   |  262 ++++++++++------
 drivers/net/qede/base/ecore_sriov.h   |    4 +-
 drivers/net/qede/base/ecore_vf.c      |  119 +++++---
 drivers/net/qede/base/ecore_vf.h      |   55 ++--
 drivers/net/qede/qede_eth_if.c        |   50 ++--
 drivers/net/qede/qede_eth_if.h        |   22 +-
 drivers/net/qede/qede_rxtx.c          |   42 +--
 drivers/net/qede/qede_rxtx.h          |    2 +
 16 files changed, 723 insertions(+), 661 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b8c8bfd..de0f49a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -394,16 +394,6 @@ struct ecore_hw_info {
 	u16 mtu;
 };
 
-struct ecore_hw_cid_data {
-	u32	cid;
-	bool	b_cid_allocated;
-	u8	vfid; /* 1-based; 0 signals this is for a PF */
-
-	/* Additional identifiers */
-	u16	opaque_fid;
-	u8	vport_id;
-};
-
 /* maximun size of read/write commands (HW limit) */
 #define DMAE_MAX_RW_SIZE	0x2000
 
@@ -566,9 +556,6 @@ struct ecore_hwfn {
 	struct ecore_mcp_info		*mcp_info;
 	struct ecore_dcbx_info		*p_dcbx_info;
 
-	struct ecore_hw_cid_data	*p_tx_cids;
-	struct ecore_hw_cid_data	*p_rx_cids;
-
 	struct ecore_dmae_info		dmae_info;
 
 	/* QM init */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 3591381..168ada8 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -161,15 +161,6 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
-		OSAL_FREE(p_dev, p_hwfn->p_tx_cids);
-		p_hwfn->p_tx_cids = OSAL_NULL;
-		OSAL_FREE(p_dev, p_hwfn->p_rx_cids);
-		p_hwfn->p_rx_cids = OSAL_NULL;
-	}
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
@@ -852,36 +843,6 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	if (!p_dev->fw_data)
 		return ECORE_NOMEM;
 
-	/* Allocate Memory for the Queue->CID mapping */
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		u32 num_tx_conns = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-		int tx_size, rx_size;
-
-		/* @@@TMP - resc management, change to actual required size */
-		if (p_hwfn->pf_params.eth_pf_params.num_cons > num_tx_conns)
-			num_tx_conns = p_hwfn->pf_params.eth_pf_params.num_cons;
-		tx_size = sizeof(struct ecore_hw_cid_data) * num_tx_conns;
-		rx_size = sizeof(struct ecore_hw_cid_data) *
-		    RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-
-		p_hwfn->p_tx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						tx_size);
-		if (!p_hwfn->p_tx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Tx Cids\n");
-			goto alloc_no_mem;
-		}
-
-		p_hwfn->p_rx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						rx_size);
-		if (!p_hwfn->p_rx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Rx Cids\n");
-			goto alloc_no_mem;
-		}
-	}
-
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		u32 n_eqes, num_cons;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index ffcae46..66c4731 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2186,30 +2186,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 	p_sb_cnt_info->sb_free_blk = info->free_blks;
 }
 
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
-{
-	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
-
-	/* Determine origin of SB id */
-	if ((sb_id >= p_info->igu_base_sb) &&
-	    (sb_id < p_info->igu_base_sb + p_info->igu_sb_cnt)) {
-		return sb_id - p_info->igu_base_sb;
-	} else if ((sb_id >= p_info->igu_base_sb_iov) &&
-		   (sb_id < p_info->igu_base_sb_iov +
-			    p_info->igu_sb_cnt_iov)) {
-		/* We want the first VF queue to be adjacent to the
-		 * last PF queue. Since L2 queues can be partial to
-		 * SBs, we'll use the feature instead.
-		 */
-		return sb_id - p_info->igu_base_sb_iov +
-		       FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
-	} else {
-		DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
-			  sb_id);
-		return 0;
-	}
-}
-
 void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
 {
 	int i;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 45358b9..0c8929e 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -172,16 +172,6 @@ enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn	*p_hwfn,
 void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
 /**
- * @brief - Returns an Rx queue index appropriate for usage with given SB.
- *
- * @param p_hwfn
- * @param sb_id - absolute index of SB
- *
- * @return index of Rx queue
- */
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
-
-/**
  * @brief - Enable Interrupt & Attention for hw function
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 9775360..b8dc47b 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -88,6 +88,23 @@ struct ecore_public_vf_info {
 	u16 forced_vlan;
 };
 
+struct ecore_iov_vf_init_params {
+	u16 rel_vf_id;
+
+	/* Number of requested Queues; Currently, don't support different
+	 * number of Rx/Tx queues.
+	 */
+	/* TODO - remove this limitation */
+	u16 num_queues;
+
+	/* Allow the client to choose which qzones to use for Rx/Tx,
+	 * and which queue_base to use for Tx queues on a per-queue basis.
+	 * Notice values should be relative to the PF resources.
+	 */
+	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+};
+
 #ifdef CONFIG_ECORE_SW_CHANNEL
 /* This is SW channel related only... */
 enum mbx_state {
@@ -175,15 +192,14 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
  *
  * @param p_hwfn
  * @param p_ptt
- * @param rel_vf_id
- * @param num_rx_queues
+ * @param p_params
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id,
-					      u16 num_rx_queues);
+					      struct ecore_iov_vf_init_params
+						     *p_params);
 
 /**
  * @brief ecore_iov_process_mbx_req - process a request received
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 0220d19..352620a 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,6 +29,120 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid)
+{
+	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
+	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+}
+
+/* The internal variant is only meant to be called directly by PFs
+ * initializing CIDs for their VFs.
+ */
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params)
+{
+	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
+	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	if (p_cid == OSAL_NULL)
+		return OSAL_NULL;
+	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
+
+	p_cid->opaque_fid = opaque_fid;
+	p_cid->cid = cid;
+	p_cid->vf_qid = vf_qid;
+	p_cid->rel = *p_params;
+
+	/* Don't try calculating the absolute indices for VFs */
+	if (IS_VF(p_hwfn->p_dev)) {
+		p_cid->abs = p_cid->rel;
+		goto out;
+	}
+
+	/* Calculate the engine-absolute indices of the resources.
+	 * This guarantees they will be valid later on.
+	 * In some cases [SBs] we already have the right values.
+	 */
+	rc = ecore_fw_vport(p_hwfn, p_cid->rel.vport_id, &p_cid->abs.vport_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	rc = ecore_fw_l2_queue(p_hwfn, p_cid->rel.queue_id,
+			       &p_cid->abs.queue_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	/* In case of a PF configuring its VF's queues, the stats-id is already
+	 * absolute [since there's a single index that's suitable per-VF].
+	 */
+	if (b_is_same) {
+		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
+				    &p_cid->abs.stats_id);
+		if (rc != ECORE_SUCCESS)
+			goto fail;
+	} else {
+		p_cid->abs.stats_id = p_cid->rel.stats_id;
+	}
+
+	/* SBs relevant information was already provided as absolute */
+	p_cid->abs.sb = p_cid->rel.sb;
+	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
+
+	/* This is tricky - we're actually interested in whether this is a PF
+	 * entry meant for the VF.
+	 */
+	if (!b_is_same)
+		p_cid->is_vf = true;
+out:
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   p_cid->opaque_fid, p_cid->cid,
+		   p_cid->rel.vport_id, p_cid->abs.vport_id,
+		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.stats_id, p_cid->abs.stats_id,
+		   p_cid->abs.sb, p_cid->abs.sb_idx);
+
+	return p_cid;
+
+fail:
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+	return OSAL_NULL;
+}
+
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+		       u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params)
+{
+	struct ecore_queue_cid *p_cid;
+	u32 cid = 0;
+
+	/* Get a unique firmware CID for this queue, in case it's a PF.
+	 * VF's don't need a CID as the queue configuration will be done
+	 * by PF.
+	 */
+	if (IS_PF(p_hwfn->p_dev)) {
+		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					  &cid) != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+			return OSAL_NULL;
+		}
+	}
+
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, cid);
+
+	return p_cid;
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -558,57 +672,28 @@ enum _ecore_status_t
 	return 0;
 }
 
-static void ecore_sp_release_queue_cid(struct ecore_hwfn *p_hwfn,
-				       struct ecore_hw_cid_data *p_cid_data)
-{
-	if (!p_cid_data->b_cid_allocated)
-		return;
-
-	ecore_cxt_release_cid(p_hwfn, p_cid_data->cid);
-	p_cid_data->b_cid_allocated = false;
-}
-
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod)
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size)
 {
 	struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 abs_rx_q_id = 0;
-	u8 abs_vport_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
-	/* Store information for the stop */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	p_rx_cid->cid = cid;
-	p_rx_cid->opaque_fid = opaque_fid;
-	p_rx_cid->vport_id = p_params->vport_id;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_rx_q_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		   opaque_fid, cid, p_params->queue_id,
-		   p_params->vport_id, p_params->sb);
+		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
+		   p_cid->abs.vport_id, p_cid->abs.sb);
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -619,11 +704,11 @@ enum _ecore_status_t
 
 	p_ramrod = &p_ent->ramrod.rx_queue_start;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->vport_id = abs_vport_id;
-	p_ramrod->stats_counter_id = p_params->stats_id;
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 	p_ramrod->complete_cqe_flg = 0;
 	p_ramrod->complete_event_flg = 1;
 
@@ -633,92 +718,88 @@ enum _ecore_status_t
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_params->vf_qid || b_use_zone_a_prod) {
-		p_ramrod->vf_rx_prod_index = (u8)p_params->vf_qid;
+	if (p_cid->is_vf) {
+		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
-			   b_use_zone_a_prod ? " [legacy]" : "",
-			   p_params->vf_qid);
-		p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
+			   !!p_cid->b_legacy_vf ? " [legacy]" : "",
+			   p_cid->vf_qid);
+		p_ramrod->vf_rx_prod_use_zone_a = !!p_cid->b_legacy_vf;
 	}
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u16 bd_max_bytes,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod)
+			    void OSAL_IOMEM * *pp_producer)
 {
-	struct ecore_hw_cid_data *p_rx_cid;
 	u32 init_prod_val = 0;
-	u16 abs_l2_queue = 0;
-	u8 abs_stats_id = 0;
-	enum _ecore_status_t rc;
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_rxq_start(p_hwfn,
-					     (u8)p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     bd_max_bytes,
-					     bd_chain_phys_addr,
-					     cqe_pbl_addr,
-					     cqe_pbl_size, pp_prod);
-	}
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_l2_queue);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	    GTT_BAR0_MAP_REG_MSDM_RAM +
-	    MSTORM_ETH_PF_PRODS_OFFSET(abs_l2_queue);
+	*pp_producer = (u8 OSAL_IOMEM *)
+		       p_hwfn->regview +
+		       GTT_BAR0_MAP_REG_MSDM_RAM +
+		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
+	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
+					  bd_max_bytes,
+					  bd_chain_phys_addr,
+					  cqe_pbl_addr, cqe_pbl_size);
+}
+
+enum _ecore_status_t
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
 	/* Allocate a CID for the queue */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-				   &p_rx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_rx_cid->b_cid_allocated = true;
-	p_params->stats_id = abs_stats_id;
-	p_params->vf_qid = 0;
-
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_rx_cid->cid,
-					   p_params,
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_start(p_hwfn, p_cid,
+						 bd_max_bytes,
+						 bd_chain_phys_addr,
+						 cqe_pbl_addr, cqe_pbl_size,
+						 &p_ret_params->p_prod);
+	else
+		rc = ecore_vf_pf_rxq_start(p_hwfn, p_cid,
 					   bd_max_bytes,
 					   bd_chain_phys_addr,
 					   cqe_pbl_addr,
 					   cqe_pbl_size,
-					   false);
+					   &p_ret_params->p_prod);
 
+	/* Provide the caller with an opaque handle on success */
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handles,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
@@ -728,14 +809,14 @@ enum _ecore_status_t
 	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 qid, abs_rx_q_id = 0;
+	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_rxqs_update(p_hwfn,
-					       rx_queue_id,
+					       (struct ecore_queue_cid **)
+					       pp_rxq_handles,
 					       num_rxqs,
 					       complete_cqe_flg,
 					       complete_event_flg);
@@ -745,12 +826,11 @@ enum _ecore_status_t
 	init_data.p_comp_data = p_comp_data;
 
 	for (i = 0; i < num_rxqs; i++) {
-		qid = rx_queue_id + i;
-		p_rx_cid = &p_hwfn->p_rx_cids[qid];
+		p_cid = ((struct ecore_queue_cid **)pp_rxq_handles)[i];
 
 		/* Get SPQ entry */
-		init_data.cid = p_rx_cid->cid;
-		init_data.opaque_fid = p_rx_cid->opaque_fid;
+		init_data.cid = p_cid->cid;
+		init_data.opaque_fid = p_cid->opaque_fid;
 
 		rc = ecore_sp_init_request(p_hwfn, &p_ent,
 					   ETH_RAMROD_RX_QUEUE_UPDATE,
@@ -759,41 +839,34 @@ enum _ecore_status_t
 			return rc;
 
 		p_ramrod = &p_ent->ramrod.rx_queue_update;
+		p_ramrod->vport_id = p_cid->abs.vport_id;
 
-		ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-		ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
-		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 		p_ramrod->complete_cqe_flg = complete_cqe_flg;
 		p_ramrod->complete_event_flg = complete_event_flg;
 
 		rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-		if (rc)
+		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
 
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only, bool cqe_completion)
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   bool b_eq_completion_only,
+			   bool b_cqe_completion)
 {
-	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
 	struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	u16 abs_rx_q_id = 0;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
-					    cqe_completion);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_rx_cid->cid;
-	init_data.opaque_fid = p_rx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -803,64 +876,54 @@ enum _ecore_status_t
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.rx_queue_stop;
-
-	ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-	ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) &&
-				      !eq_completion_only) || cqe_completion;
-	p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) ||
-	    eq_completion_only;
+	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+				     b_cqe_completion;
+	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
 
-	ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+enum _ecore_status_t ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_rxq,
+					     bool eq_completion_only,
+					     bool cqe_completion)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_rxq;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_stop(p_hwfn, p_cid,
+						eq_completion_only,
+						cqe_completion);
+	else
+		rc = ecore_vf_pf_rxq_stop(p_hwfn, p_cid, cqe_completion);
 
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id)
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_tx_cid;
-	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
-	u8 abs_vport_id;
-
-	/* Store information for the stop */
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	p_tx_cid->cid = cid;
-	p_tx_cid->opaque_fid = opaque_fid;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->qzone_id, &abs_tx_qzone_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -870,14 +933,14 @@ enum _ecore_status_t
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.tx_queue_start;
-	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->stats_counter_id = p_params->stats_id;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 
-	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
-	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
+	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
+	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
@@ -887,90 +950,72 @@ enum _ecore_status_t
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
+			    dma_addr_t pbl_addr, u16 pbl_size,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
-	struct ecore_hw_cid_data *p_tx_cid;
-	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_txq_start(p_hwfn,
-					     p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     pbl_addr,
-					     pbl_size,
-					     pp_doorbell);
-	}
-
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
+	/* TODO - set tc in the pq_params for multi-cos */
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
+					pbl_addr, pbl_size,
+					ecore_get_cm_pq_idx_mcos(p_hwfn, tc));
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
+	/* Provide the caller with the necessary return values */
+	*pp_doorbell = (u8 OSAL_IOMEM *)
+		       p_hwfn->doorbells +
+		       DB_ADDR(p_cid->cid, DQ_DEMS_LEGACY);
 
-	/* Allocate a CID for the queue */
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_tx_cid->b_cid_allocated = true;
+	return ECORE_SUCCESS;
+}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		    opaque_fid, p_tx_cid->cid, p_params->queue_id,
-		    p_params->vport_id, p_params->sb);
+enum _ecore_status_t
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr, u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
 
-	p_params->stats_id = abs_stats_id;
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_INVAL;
 
-	/* TODO - set tc in the pq_params for multi-cos */
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_tx_cid->cid,
-					   p_params,
-					   pbl_addr,
-					   pbl_size,
-					   ecore_get_cm_pq_idx_mcos(p_hwfn,
-								    tc));
-
-	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_start(p_hwfn, p_cid, tc,
+						 pbl_addr, pbl_size,
+						 &p_ret_params->p_doorbell);
+	else
+		rc = ecore_vf_pf_txq_start(p_hwfn, p_cid,
+					   pbl_addr, pbl_size,
+					   &p_ret_params->p_doorbell);
 
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn)
-{
-	return ECORE_NOTIMPL;
-}
-
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id)
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid)
 {
-	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_tx_cid->cid;
-	init_data.opaque_fid = p_tx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -979,11 +1024,22 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_stop(p_hwfn, p_cid);
+	else
+		rc = ecore_vf_pf_txq_stop(p_hwfn, p_cid);
 
-	ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
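
The net effect of the L2 handler rework is visible from the caller's side:
the start calls now return an opaque handle through a ret-params struct,
and that handle is the only thing needed for the matching stop call. A
minimal PF-side sketch of the new Tx flow (illustration only; the wrapper
name is hypothetical and error handling is trimmed):

/* Usage sketch, not part of the patch. Assumes the ecore headers
 * changed above; the queue-cid stays internal to ecore, callers keep
 * only the opaque p_handle and the doorbell address.
 */
static enum _ecore_status_t
example_txq_start_stop(struct ecore_hwfn *p_hwfn,
		       struct ecore_queue_start_common_params *p_params,
		       dma_addr_t pbl_addr, u16 pbl_size)
{
	struct ecore_txq_start_ret_params ret_params;
	enum _ecore_status_t rc;

	OSAL_MEMSET(&ret_params, 0, sizeof(ret_params));

	/* Start fills the ret-params instead of exposing the CID */
	rc = ecore_eth_tx_queue_start(p_hwfn, p_hwfn->hw_info.opaque_fid,
				      p_params, 0 /* tc */,
				      pbl_addr, pbl_size, &ret_params);
	if (rc != ECORE_SUCCESS)
		return rc;

	/* ret_params.p_doorbell is used on the Tx fast-path;
	 * ret_params.p_handle is all that is needed to stop the queue.
	 */
	return ecore_eth_tx_queue_stop(p_hwfn, ret_params.p_handle);
}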
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index b598eda..c136389 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,59 +15,66 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
-/**
- * @brief ecore_sp_eth_tx_queue_update -
- *
- * This ramrod updates a TX queue. It is used for setting the active
- * state of the queue.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn);
+struct ecore_queue_cid {
+	/* 'Relative' is a relative term ;-). Usually the indices [not counting
+	 * SBs] would be PF-relative, but there are some cases where that isn't
+	 * the case - specifically for a PF configuring its VF indices it's
+	 * possible some fields [E.g., stats-id] in 'rel' would already be abs.
+	 */
+	struct ecore_queue_start_common_params rel;
+	struct ecore_queue_start_common_params abs;
+	u32 cid;
+	u16 opaque_fid;
+
+	/* VFs queues are mapped differently, so we need to know the
+	 * relative queue associated with them [0-based].
+	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
+	 * and not on the VF itself.
+	 */
+	bool is_vf;
+	u8 vf_qid;
+
+	/* Legacy VFs might have Rx producer located elsewhere */
+	bool b_legacy_vf;
+};
+
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid);
+
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params);
 
 /**
- * @brief - Starts an Rx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts an Rx queue, where queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id, stats_id, sb, sb_idx, vf_qid]
-	  stats_id is absolute packed in p_params.
+ * @param p_cid
  * @param bd_max_bytes
  * @param bd_chain_phys_addr
  * @param cqe_pbl_addr
  * @param cqe_pbl_size
- * @param b_use_zone_a_prod - support legacy VF producers
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod);
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size);
 
 /**
- * @brief - Starts a Tx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts a Tx queue, where queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id,stats_id, sb, sb_idx, vf_qid]
+ * @param p_cid
  * @param pbl_addr
  * @param pbl_size
  * @param p_pq_params - parameters for choosing the PQ for this Tx queue
@@ -75,13 +82,10 @@ enum _ecore_status_t
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id);
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 8f7b614..af316d3 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -28,22 +28,26 @@ enum ecore_rss_caps {
 #endif
 
 struct ecore_queue_start_common_params {
-	/* Rx/Tx queue relative id to keep obtained cid in corresponding array
-	 * RX - upper-bounded by number of FW-queues
-	 */
-	u16 queue_id;
+	/* Should always be relative to entity sending this. */
 	u8 vport_id;
+	u16 queue_id;
 
-	/* q_zone_id is relative, may be different from queue id
-	 * currently used by Tx-only, upper-bounded by number of FW-queues
-	 */
-	u16 qzone_id;
-
-	/* stats_id is relative or absolute depends on function */
+	/* Relative, but relevant only for PFs */
 	u8 stats_id;
+
+	/* These are always absolute */
 	u16 sb;
-	u16 sb_idx;
-	u16 vf_qid;
+	u8 sb_idx;
+};
+
+struct ecore_rxq_start_ret_params {
+	void OSAL_IOMEM *p_prod;
+	void *p_handle;
+};
+
+struct ecore_txq_start_ret_params {
+	void OSAL_IOMEM *p_doorbell;
+	void *p_handle;
 };
 
 struct ecore_rss_params {
@@ -167,42 +171,37 @@ enum _ecore_status_t
 	struct ecore_spq_comp_cb	 *p_comp_data);
 
 /**
- * @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
+ * @brief ecore_eth_rx_queue_start - RX Queue Start Ramrod
  *
  * This ramrod initializes an RX Queue for a VPort. An Assert is generated if
  * the VPort ID is not currently initialized.
  *
  * @param p_hwfn
  * @param opaque_fid
- * @p_params			[stats_id is relative, packed in p_params]
+ * @p_params			Inputs; Relative for PF [SB being an exception]
  * @param bd_max_bytes		Maximum bytes that can be placed on a BD
  * @param bd_chain_phys_addr	Physical address of BDs for receive.
  * @param cqe_pbl_addr		Physical address of the CQE PBL Table.
  * @param cqe_pbl_size		Size of the CQE PBL Table
- * @param pp_prod		Pointer to place producer's
- *                              address for the Rx Q (May be
- *				NULL).
+ * @param p_ret_params		Pointer to a struct to be filled with outputs.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u16 bd_max_bytes,
-			    dma_addr_t bd_chain_phys_addr,
-			    dma_addr_t cqe_pbl_addr,
-			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod);
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_rx_queue_stop -
- *
- * This ramrod closes an RX queue. It sends RX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_rx_queue_stop - This ramrod closes an Rx queue
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
+ * @param p_rxq			Handle of queue to close
  * @param eq_completion_only	If True completion will be on
  *				EQe, if False completion will be
  *				on EQe if p_hwfn opaque
@@ -213,13 +212,13 @@ enum _ecore_status_t
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only,
-			   bool cqe_completion);
+ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			void *p_rxq,
+			bool eq_completion_only,
+			bool cqe_completion);
 
 /**
- * @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
+ * @brief - TX Queue Start Ramrod
  *
  * This ramrod initializes a TX Queue for a VPort. An Assert is generated if
  * the VPort is not currently initialized.
@@ -230,34 +229,29 @@ enum _ecore_status_t
  * @param tc			traffic class to use with this L2 txq
  * @param pbl_addr		address of the pbl array
  * @param pbl_size		number of entries in pbl
- * @param pp_doorbell		Pointer to place doorbell pointer (May be NULL).
- *				This address should be used with the
- *				DIRECT_REG_WR macro.
+ * @param p_ret_params		Pointer to fill the return parameters in.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
-			    void OSAL_IOMEM * *pp_doorbell);
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr,
+			 u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_tx_queue_stop -
- *
- * This ramrod closes a TX queue. It sends TX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_tx_queue_stop - closes a Tx queue
  *
  * @param p_hwfn
- * @param tx_queue_id		TX Queue ID
+ * @param p_txq - handle to Tx queue needed to be closed
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id);
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_txq);
 
 enum ecore_tpa_mode	{
 	ECORE_TPA_MODE_NONE,
@@ -389,19 +383,19 @@ enum _ecore_status_t
  * @note Final phase API.
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
- * @param num_rxqs              Allow to update multiple rx
- *				queues, from rx_queue_id to
- *				(rx_queue_id + num_rxqs)
+ * @param pp_rxq_handlers	An array of queue handlers to be updated.
 * @param num_rxqs              Number of queues to update.
  * @param complete_cqe_flg	Post completion to the CQE Ring if set
  * @param complete_event_flg	Post completion to the Event Ring if set
+ * @param comp_mode
+ * @param p_comp_data
  *
  * @return enum _ecore_status_t
  */
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handlers,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 87ffa34..7a20d56 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -238,7 +238,7 @@ static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].rxq_active)
+		if (p_vf->vf_queues[i].p_rx_cid)
 			return true;
 
 	return false;
@@ -250,7 +250,7 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].txq_active)
+		if (p_vf->vf_queues[i].p_tx_cid)
 			return true;
 
 	return false;
@@ -956,17 +956,19 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
-enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id, u16 num_rx_queues)
+enum _ecore_status_t
+ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 struct ecore_iov_vf_init_params *p_params)
 {
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
+	u16 qid, num_irqs;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cids;
 	u8 i;
 
-	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false);
 	if (!vf) {
 		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
 		return ECORE_UNKNOWN_ERROR;
@@ -974,22 +976,52 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	if (vf->b_init) {
 		DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
-			  rel_vf_id);
+			  p_params->rel_vf_id);
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested queue_id */
+	for (i = 0; i < p_params->num_queues; i++) {
+		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
+		u16 max_vf_qzone = min_vf_qzone +
+				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;
+
+		qid = p_params->req_rx_queue[i];
+		if (qid < min_vf_qzone || qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0x%04x,...,0x%04x] available\n",
+				  qid, p_params->rel_vf_id,
+				  min_vf_qzone, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		qid = p_params->req_tx_queue[i];
+		if (qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Tx qid [%04x] for VF[%d]: max qid 0x%04x\n",
+				  qid, p_params->rel_vf_id, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		/* If client *really* wants, Tx qid can be shared with PF */
+		if (qid < min_vf_qzone)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] is using PF qid [0x%04x] for Txq[0x%02x]\n",
+				   p_params->rel_vf_id, qid, i);
+	}
+
 	/* Limit number of queues according to number of CIDs */
 	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d] - requesting to initialize for 0x%04x queues"
 		   " [0x%04x CIDs available]\n",
-		   vf->relative_vf_id, num_rx_queues, (u16)cids);
-	num_rx_queues = OSAL_MIN_T(u16, num_rx_queues, ((u16)cids));
+		   vf->relative_vf_id, p_params->num_queues, (u16)cids);
+	num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids));
 
 	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
 							       p_ptt,
 							       vf,
-							       num_rx_queues);
+							       num_irqs);
 	if (num_of_vf_available_chains == 0) {
 		DP_ERR(p_hwfn, "no available igu sbs\n");
 		return ECORE_NOMEM;
@@ -1000,26 +1032,19 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		u16 queue_id = ecore_int_queue_id_from_sb_id(p_hwfn,
-							     vf->igu_sbs[i]);
+		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
 
-		if (queue_id > RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF[%d] will require utilizing of"
-				  " out-of-bounds queues - %04x\n",
-				  vf->relative_vf_id, queue_id);
-			/* TODO - cleanup the already allocate SBs */
-			return ECORE_INVAL;
-		}
+		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
+		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
 		/* CIDs are per-VF, so no problem having them 0-based. */
-		vf->vf_queues[i].fw_rx_qid = queue_id;
-		vf->vf_queues[i].fw_tx_qid = queue_id;
-		vf->vf_queues[i].fw_cid = i;
+		p_queue->fw_cid = i;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
-			   vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   vf->relative_vf_id, i, vf->igu_sbs[i],
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
+			   p_queue->fw_cid);
 	}
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
@@ -1393,8 +1418,19 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		p_vf->vf_queues[i].rxq_active = 0;
-		p_vf->vf_queues[i].txq_active = 0;
+		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+
+		if (p_queue->p_rx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_rx_cid);
+			p_queue->p_rx_cid = OSAL_NULL;
+		}
+
+		if (p_queue->p_tx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_tx_cid);
+			p_queue->p_tx_cid = OSAL_NULL;
+		}
 	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
@@ -1832,14 +1868,14 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			u16 qid;
+			struct ecore_queue_cid *p_cid;
 
-			if (!p_vf->vf_queues[i].rxq_active)
+			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			if (p_cid == OSAL_NULL)
 				continue;
 
-			qid = p_vf->vf_queues[i].fw_rx_qid;
-
-			rc = ecore_sp_eth_rx_queues_update(p_hwfn, qid,
+			rc = ecore_sp_eth_rx_queues_update(p_hwfn,
+							   (void **)&p_cid,
 						   1, 0, 1,
 						   ECORE_SPQ_MODE_EBLOCK,
 						   OSAL_NULL);
@@ -1847,7 +1883,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 				DP_NOTICE(p_hwfn, true,
 					  "Failed to send Rx update"
 					  " for queue[0x%04x]\n",
-					  qid);
+					  p_cid->rel.queue_id);
 				return rc;
 			}
 		}
@@ -2041,6 +2077,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	bool b_legacy_vf = false;
 	enum _ecore_status_t rc;
@@ -2051,14 +2088,24 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->rx_qid];
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.queue_id = (u8)vf->vf_queues[req->rx_qid].fw_rx_qid;
-	params.vf_qid = req->rx_qid;
+	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
+	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->rx_qid,
+						    &params);
+	if (p_queue->p_rx_cid == OSAL_NULL)
+		goto out;
+
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
@@ -2070,27 +2117,27 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
+	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
-					   vf->vf_queues[req->rx_qid].fw_cid,
-					   &params,
-					   req->bd_max_bytes,
-					   req->rxq_addr,
-					   req->cqe_pbl_addr,
-					   req->cqe_pbl_size,
-					   b_legacy_vf);
 
-	if (rc) {
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
+					p_queue->p_rx_cid,
+					req->bd_max_bytes,
+					req->rxq_addr,
+					req->cqe_pbl_addr,
+					req->cqe_pbl_size);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
+		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
+		p_queue->p_rx_cid = OSAL_NULL;
 	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->rx_qid].rxq_active = true;
 		vf->num_active_rxqs++;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf,
-					status, b_legacy_vf);
+	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status,
+					b_legacy_vf);
 }
 
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
@@ -2141,8 +2188,10 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
+	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
@@ -2151,27 +2200,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->tx_qid];
+
+	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   vf->opaque_fid,
-					   vf->vf_queues[req->tx_qid].fw_cid,
-					   &params,
-					   req->pbl_addr,
-					   req->pbl_size,
-					   ecore_get_cm_pq_idx_vf(p_hwfn,
-							vf->relative_vf_id));
+	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->tx_qid,
+						    &params);
+	if (p_queue->p_tx_cid == OSAL_NULL)
+		goto out;
 
-	if (rc)
+	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
+				    vf->relative_vf_id);
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+					req->pbl_addr, req->pbl_size, pq);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-	else {
+		ecore_eth_queue_cid_release(p_hwfn,
+					    p_queue->p_tx_cid);
+		p_queue->p_tx_cid = OSAL_NULL;
+	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->tx_qid].txq_active = true;
 	}
 
 out:
@@ -2184,6 +2240,7 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
+	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int qid;
 
@@ -2191,16 +2248,18 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		if (vf->vf_queues[qid].rxq_active) {
-			rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_rx_qid, false,
-							cqe_completion);
+		p_queue = &vf->vf_queues[qid];
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].rxq_active = false;
+		if (!p_queue->p_rx_cid)
+			continue;
+
+		rc = ecore_eth_rx_queue_stop(p_hwfn,
+					     p_queue->p_rx_cid,
+					     false, cqe_completion);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2212,21 +2271,23 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_q_info *p_queue;
 	int qid;
 
 	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
-		if (vf->vf_queues[qid].txq_active) {
-			rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_tx_qid);
+		p_queue = &vf->vf_queues[qid];
+		if (!p_queue->p_tx_cid)
+			continue;
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].txq_active = false;
+		rc = ecore_eth_tx_queue_stop(p_hwfn,
+					     p_queue->p_tx_cid);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_queue->p_tx_cid = OSAL_NULL;
 	}
 	return rc;
 }
@@ -2282,10 +2343,11 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
 {
+	struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct vfpf_update_rxq_tlv *req;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
 	u16 qid;
@@ -2296,30 +2358,38 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
+	/* Validate inputs */
+	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
+	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
+		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
+		goto out;
+	}
+
 	for (i = 0; i < req->num_rxqs; i++) {
 		qid = req->rx_qid + i;
 
-		if (!vf->vf_queues[qid].rxq_active) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF rx_qid = %d isn`t active!\n", qid);
-			status = PFVF_STATUS_FAILURE;
-			break;
+		if (!vf->vf_queues[qid].p_rx_cid) {
+			DP_INFO(p_hwfn,
+				"VF[%d] rx_qid = %d isn't active!\n",
+				vf->relative_vf_id, qid);
+			goto out;
 		}
 
-		rc = ecore_sp_eth_rx_queues_update(p_hwfn,
-						   vf->vf_queues[qid].fw_rx_qid,
-						   1,
-						   complete_cqe_flg,
-						   complete_event_flg,
-						   ECORE_SPQ_MODE_EBLOCK,
-						   OSAL_NULL);
-
-		if (rc) {
-			status = PFVF_STATUS_FAILURE;
-			break;
-		}
+		handlers[i] = vf->vf_queues[qid].p_rx_cid;
 	}
 
+	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
+					   req->num_rxqs,
+					   complete_cqe_flg,
+					   complete_event_flg,
+					   ECORE_SPQ_MODE_EBLOCK,
+					   OSAL_NULL);
+	if (rc)
+		goto out;
+
+	status = PFVF_STATUS_SUCCESS;
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
 			       length, status);
 }
@@ -2548,7 +2618,7 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 				  "rss_ind_table[%d] = %d,"
 				  " rxq is out of range\n",
 				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].rxq_active)
+		else if (!vf->vf_queues[q_idx].p_rx_cid)
 			DP_NOTICE(p_hwfn, true,
 				  "rss_ind_table[%d] = %d, rxq is not active\n",
 				  i, q_idx);
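
The per-queue sanity check added to ecore_iov_init_hw_for_vf() above
reduces to a simple window test on the requested qid. A standalone
restatement (the helper name is hypothetical; it uses the same
FEAT_NUM() values as the patch):

/* Sketch, not part of the patch: Rx qids must fall inside the VF L2
 * qzone [min_vf_qzone, max_vf_qzone]; Tx qids may additionally reuse
 * the PF zone below it, but never exceed the maximum.
 */
static bool example_vf_qid_ok(struct ecore_hwfn *p_hwfn, u16 qid, bool b_is_rx)
{
	u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
	u16 max_vf_qzone = min_vf_qzone +
			   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;

	if (qid > max_vf_qzone)
		return false;

	/* Rx must stay inside the VF zone; Tx may share the PF zone */
	return b_is_rx ? (qid >= min_vf_qzone) : true;
}
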
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e9ccc79..d32f931 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -64,10 +64,10 @@ struct ecore_iov_vf_mbx {
 
 struct ecore_vf_q_info {
 	u16 fw_rx_qid;
+	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
+	struct ecore_queue_cid *p_tx_cid;
 	u8 fw_cid;
-	u8 rxq_active;
-	u8 txq_active;
 };
 
 enum vf_state {
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index c12cbcf..d1c6691 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,19 +451,19 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
-enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_qid,
-					   u16 sb,
-					   u8 sb_index,
-					   u16 bd_max_bytes,
-					   dma_addr_t bd_chain_phys_addr,
-					   dma_addr_t cqe_pbl_addr,
-					   u16 cqe_pbl_size,
-					   void OSAL_IOMEM **pp_prod)
+enum _ecore_status_t
+ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      u16 bd_max_bytes,
+		      dma_addr_t bd_chain_phys_addr,
+		      dma_addr_t cqe_pbl_addr,
+		      u16 cqe_pbl_size,
+		      void OSAL_IOMEM **pp_prod)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_rxq_tlv *req;
+	u16 rx_qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -473,19 +473,20 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	req->cqe_pbl_addr = cqe_pbl_addr;
 	req->cqe_pbl_size = cqe_pbl_size;
 	req->rxq_addr = bd_chain_phys_addr;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 	req->bd_max_bytes = bd_max_bytes;
 	req->stat_id = -1; /* Keep initialized, for future compatibility */
 
 	/* If PF is legacy, we'll need to calculate producers ourselves
 	 * as well as clean them.
 	 */
-	if (pp_prod && p_iov->b_pre_fp_hsi) {
+	if (p_iov->b_pre_fp_hsi) {
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)
+			   p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
@@ -510,7 +511,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Learn the address of the producer from the response */
-	if (pp_prod && !p_iov->b_pre_fp_hsi) {
+	if (!p_iov->b_pre_fp_hsi) {
 		u32 init_prod_val = 0;
 
 		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview + resp->offset;
@@ -534,7 +535,8 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 }
 
 enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
-					  u16 rx_qid, bool cqe_completion)
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_rxqs_tlv *req;
@@ -544,7 +546,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
 
-	req->rx_qid = rx_qid;
+	req->rx_qid = p_cid->rel.queue_id;
 	req->num_rxqs = 1;
 	req->cqe_completion = cqe_completion;
 
@@ -569,29 +571,28 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell)
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_txq_tlv *req;
+	u16 qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
 
-	req->tx_qid = tx_queue_id;
+	req->tx_qid = qid;
 
 	/* Tx */
 	req->pbl_addr = pbl_addr;
 	req->pbl_size = pbl_size;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -608,32 +609,30 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
-	if (pp_doorbell) {
-		/* Modern PFs provide the actual offsets, while legacy
-		 * provided only the queue id.
-		 */
-		if (!p_iov->b_pre_fp_hsi) {
-			*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						       resp->offset;
-		} else {
-			u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
-
+	/* Modern PFs provide the actual offsets, while legacy
+	 * provided only the queue id.
+	 */
+	if (!p_iov->b_pre_fp_hsi) {
 		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-				DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
-		}
+						resp->offset;
+	} else {
+		u8 cid = p_iov->acquire_resp.resc.cid[qid];
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
-			   tx_queue_id, *pp_doorbell, resp->offset);
+		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
+						DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 	}
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
+		   qid, *pp_doorbell, resp->offset);
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_txqs_tlv *req;
@@ -643,7 +642,7 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
 
-	req->tx_qid = tx_qid;
+	req->tx_qid = p_cid->rel.queue_id;
 	req->num_txqs = 1;
 
 	/* add list termination tlv */
@@ -668,20 +667,36 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
 }
 
 enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
-					     u16 rx_queue_id,
+					     struct ecore_queue_cid **pp_cid,
 					     u8 num_rxqs,
-					     u8 comp_cqe_flg, u8 comp_event_flg)
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
 	struct vfpf_update_rxq_tlv *req;
 	enum _ecore_status_t rc;
 
+	/* TODO - API is limited to assuming contiguous regions of queues,
+	 * but VF queues might not fulfill this requirement.
+	 * Need to consider whether we need new TLVs for this, or whether
+	 * simply doing it iteratively is good enough.
+	 */
+	if (!num_rxqs)
+		return ECORE_INVAL;
+
+again:
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
 
-	req->rx_qid = rx_queue_id;
-	req->num_rxqs = num_rxqs;
+	/* Find the length of the current contiguous range of queues beginning
+	 * at first queue's index.
+	 */
+	req->rx_qid = (*pp_cid)->rel.queue_id;
+	for (req->num_rxqs = 1; req->num_rxqs < num_rxqs; req->num_rxqs++)
+		if (pp_cid[req->num_rxqs]->rel.queue_id !=
+		    req->rx_qid + req->num_rxqs)
+			break;
 
 	if (comp_cqe_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
@@ -702,9 +717,17 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
+	/* Make sure we're done with all the queues */
+	if (req->num_rxqs < num_rxqs) {
+		num_rxqs -= req->num_rxqs;
+		pp_cid += req->num_rxqs;
+		/* TODO - should we give a non-locked variant instead? */
+		ecore_vf_pf_req_end(p_hwfn, rc);
+		goto again;
+	}
+
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
-
 	return rc;
 }
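
The again: loop in ecore_vf_pf_rxqs_update() coalesces queue-cids whose
relative queue-ids are consecutive into a single CHANNEL_TLV_UPDATE_RXQ
request, then re-issues the mailbox request for the next run. The
range-splitting part in isolation (a sketch; the helper name is
hypothetical and the actual mailbox request is elided):

/* Sketch, not part of the patch: split an arbitrary list of queue-cids
 * into runs of consecutive relative queue-ids, one request per run.
 */
static void example_split_ranges(struct ecore_queue_cid **pp_cid, u8 num_rxqs)
{
	while (num_rxqs) {
		u16 first = pp_cid[0]->rel.queue_id;
		u8 len;

		/* extend the run while queue-ids stay consecutive */
		for (len = 1; len < num_rxqs; len++)
			if (pp_cid[len]->rel.queue_id != first + len)
				break;

		/* one CHANNEL_TLV_UPDATE_RXQ request would cover relative
		 * queue-ids [first, first + len - 1] here
		 */
		pp_cid += len;
		num_rxqs -= len;
	}
}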
 
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 6077d60..1afd667 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -53,10 +53,7 @@ struct ecore_vf_iov {
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param cid			- zero based within the VF
- * @param rx_queue_id		- zero based within the VF
- * @param sb			- VF status block for this queue
- * @param sb_index		- Index within the status block
+ * @param p_cid			- Only relative fields are relevant
  * @param bd_max_bytes		- maximum number of bytes per bd
  * @param bd_chain_phys_addr	- physical address of bd chain
  * @param cqe_pbl_addr		- physical address of pbl
@@ -67,9 +64,7 @@ struct ecore_vf_iov {
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
+					   struct ecore_queue_cid *p_cid,
 					   u16 bd_max_bytes,
 					   dma_addr_t bd_chain_phys_addr,
 					   dma_addr_t cqe_pbl_addr,
@@ -81,46 +76,44 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
  *        PF.
  *
  * @param p_hwfn
- * @param tx_queue_id		- zero based within the VF
- * @param sb			- status block for this queue
- * @param sb_index		- index within the status block
+ * @param p_cid
  * @param bd_chain_phys_addr	- physical address of tx chain
  * @param pp_doorbell		- pointer to address to which to
 *				write the doorbell.
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell);
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell);
 
 /**
  * @brief VF - stop the RX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param rx_qid
+ * @param p_cid
  * @param cqe_completion
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			rx_qid,
-					  bool			cqe_completion);
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion);
 
 /**
  * @brief VF - stop the TX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param tx_qid
+ * @param p_cid
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			tx_qid);
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid);
+
+/* TODO - fix all the !SRIOV prototypes */
 
 #ifndef LINUX_REMOVE
 /**
@@ -128,20 +121,18 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
  *        PF
  *
  * @param p_hwfn
- * @param rx_queue_id
+ * @param pp_cid - list of queue-cids which we want to update
  * @param num_rxqs
- * @param init_sge_ring
  * @param comp_cqe_flg
  * @param comp_event_flg
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxqs_update(
-			struct ecore_hwfn	*p_hwfn,
-			u16			rx_queue_id,
-			u8			num_rxqs,
-			u8			comp_cqe_flg,
-			u8			comp_event_flg);
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     struct ecore_queue_cid **pp_cid,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg);
 #endif
 
 /**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index d0f6e87..936dd15 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -148,7 +148,8 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 	      uint16_t bd_max_bytes,
 	      dma_addr_t bd_chain_phys_addr,
 	      dma_addr_t cqe_pbl_addr,
-	      uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod)
+	      uint16_t cqe_pbl_size,
+	      struct ecore_rxq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -159,12 +160,14 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_rx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 bd_max_bytes,
-					 bd_chain_phys_addr,
-					 cqe_pbl_addr, cqe_pbl_size, pp_prod);
+	rc = ecore_eth_rx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params,
+				      bd_max_bytes,
+				      bd_chain_phys_addr,
+				      cqe_pbl_addr,
+				      cqe_pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start RXQ#%d\n", p_params->queue_id);
@@ -180,19 +183,17 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 }
 
 static int
-qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
+qed_stop_rxq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	int rc, hwfn_index;
 	struct ecore_hwfn *p_hwfn;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-					params->rx_queue_id / edev->num_hwfns,
-					params->eq_completion_only, false);
+	rc = ecore_eth_rx_queue_stop(p_hwfn, handle, false, false);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop RXQ#%d\n", params->rx_queue_id);
+		DP_ERR(edev, "Failed to stop RXQ#%02x\n", rss_id);
 		return rc;
 	}
 
@@ -204,7 +205,8 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 	      uint8_t rss_num,
 	      struct ecore_queue_start_common_params *p_params,
 	      dma_addr_t pbl_addr,
-	      uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell)
+	      uint16_t pbl_size,
+	      struct ecore_txq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -213,14 +215,13 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 	p_hwfn = &edev->hwfns[hwfn_index];
 
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
-	p_params->qzone_id = p_params->queue_id;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_tx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 0 /* tc */,
-					 pbl_addr, pbl_size, pp_doorbell);
+	rc = ecore_eth_tx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params, 0 /* tc */,
+				      pbl_addr, pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start TXQ#%d\n", p_params->queue_id);
@@ -236,18 +237,17 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 }
 
 static int
-qed_stop_txq(struct ecore_dev *edev, struct qed_stop_txq_params *params)
+qed_stop_txq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-					params->tx_queue_id / edev->num_hwfns);
+	rc = ecore_eth_tx_queue_stop(p_hwfn, handle);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop TXQ#%d\n", params->tx_queue_id);
+		DP_ERR(edev, "Failed to stop TXQ#%02x\n", rss_id);
 		return rc;
 	}
 
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 37b1b74..12dd828 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -47,13 +47,6 @@ struct qed_dev_eth_info {
 	bool is_legacy;
 };
 
-struct qed_stop_rxq_params {
-	uint8_t rss_id;
-	uint8_t rx_queue_id;
-	uint8_t vport_id;
-	bool eq_completion_only;
-};
-
 struct qed_update_vport_params {
 	uint8_t vport_id;
 	uint8_t update_vport_active_flg;
@@ -78,11 +71,6 @@ struct qed_start_vport_params {
 	bool clear_stats;
 };
 
-struct qed_stop_txq_params {
-	uint8_t rss_id;
-	uint8_t tx_queue_id;
-};
-
 struct qed_eth_ops {
 	const struct qed_common_ops *common;
 
@@ -103,19 +91,21 @@ struct qed_eth_ops {
 			  uint16_t bd_max_bytes,
 			  dma_addr_t bd_chain_phys_addr,
 			  dma_addr_t cqe_pbl_addr,
-			  uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod);
+			  uint16_t cqe_pbl_size,
+			  struct ecore_rxq_start_ret_params *ret_params);
 
 	int (*q_rx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_rxq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*q_tx_start)(struct ecore_dev *edev,
 			  uint8_t rss_num,
 			  struct ecore_queue_start_common_params *p_params,
 			  dma_addr_t pbl_addr,
-			  uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell);
+			  uint16_t pbl_size,
+			  struct ecore_txq_start_ret_params *ret_params);
 
 	int (*q_tx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_txq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*eth_cqe_completion)(struct ecore_dev *edev,
 				  uint8_t rss_id,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01ea9b4..85134fb 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -527,11 +527,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	for_each_queue(i) {
 		fp = &qdev->fp_array[i];
 		if (fp->type & QEDE_FASTPATH_RX) {
+			struct ecore_rxq_start_ret_params ret_params;
+
 			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
 								rx_comp_ring);
 			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
 								rx_comp_ring);
 
+			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
 			q_params.queue_id = i;
 			q_params.vport_id = 0;
@@ -545,13 +548,17 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 					   fp->rxq->rx_bd_ring.p_phys_addr,
 					   p_phys_table,
 					   page_cnt,
-					   &fp->rxq->hw_rxq_prod_addr);
+					   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start rxq #%d failed %d\n",
 				       fp->rxq->queue_id, rc);
 				return rc;
 			}
 
+			/* Use the return parameters */
+			fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
+			fp->rxq->handle = ret_params.p_handle;
+
 			fp->rxq->hw_cons_ptr =
 					&fp->sb_info->sb_virt->pi_array[RX_PI];
 
@@ -561,6 +568,8 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (!(fp->type & QEDE_FASTPATH_TX))
 			continue;
 		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct ecore_txq_start_ret_params ret_params;
+
 			txq = fp->txqs[tc];
 			txq_index = tc * QEDE_RSS_COUNT(qdev) + i;
 
@@ -568,6 +577,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
 
 			memset(&q_params, 0, sizeof(q_params));
+			memset(&ret_params, 0, sizeof(ret_params));
 			q_params.queue_id = txq->queue_id;
 			q_params.vport_id = 0;
 			q_params.sb = fp->sb_info->igu_sb_id;
@@ -576,13 +586,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			rc = qdev->ops->q_tx_start(edev, i, &q_params,
 						   p_phys_table,
 						   page_cnt, /* **pp_doorbell */
-						   &txq->doorbell_addr);
+						   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start txq %u failed %d\n",
 				       txq_index, rc);
 				return rc;
 			}
 
+			txq->doorbell_addr = ret_params.p_doorbell;
+			txq->handle = ret_params.p_handle;
+
 			txq->hw_cons_ptr =
 			    &fp->sb_info->sb_virt->pi_array[TX_PI(tc)];
 			SET_FIELD(txq->tx_db.data.params,
@@ -1399,6 +1412,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp;
 	int rc, tc, i;
 
 	/* Disable the vport */
@@ -1420,7 +1434,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Flush Tx queues. If needed, request drain from MCP */
 	for_each_queue(i) {
-		struct qede_fastpath *fp = &qdev->fp_array[i];
+		fp = &qdev->fp_array[i];
 
 		if (fp->type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
@@ -1435,23 +1449,17 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Stop all Queues in reverse order */
 	for (i = QEDE_QUEUE_CNT(qdev) - 1; i >= 0; i--) {
-		struct qed_stop_rxq_params rx_params;
+		fp = &qdev->fp_array[i];
 
 		/* Stop the Tx Queue(s) */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
-				struct qed_stop_txq_params tx_params;
-				u8 val;
-
-				tx_params.rss_id = i;
-				val = qdev->fp_array[i].txqs[tc]->queue_id;
-				tx_params.tx_queue_id = val;
-
+				struct qede_tx_queue *txq = fp->txqs[tc];
 				DP_INFO(edev, "Stopping tx queues\n");
-				rc = qdev->ops->q_tx_stop(edev, &tx_params);
+				rc = qdev->ops->q_tx_stop(edev, i, txq->handle);
 				if (rc) {
 					DP_ERR(edev, "Failed to stop TXQ #%d\n",
-					       tx_params.tx_queue_id);
+					       i);
 					return rc;
 				}
 			}
@@ -1459,14 +1467,8 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 		/* Stop the Rx Queue */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_RX) {
-			memset(&rx_params, 0, sizeof(rx_params));
-			rx_params.rss_id = i;
-			rx_params.rx_queue_id = qdev->fp_array[i].rxq->queue_id;
-			rx_params.eq_completion_only = 1;
-
 			DP_INFO(edev, "Stopping rx queues\n");
-
-			rc = qdev->ops->q_rx_stop(edev, &rx_params);
+			rc = qdev->ops->q_rx_stop(edev, i, fp->rxq->handle);
 			if (rc) {
 				DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
 				return rc;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 9a393e9..17a2f0c 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -156,6 +156,7 @@ struct qede_rx_queue {
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 /*
@@ -187,6 +188,7 @@ struct qede_tx_queue {
 	uint64_t xmit_pkts;
 	bool is_legacy;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 struct qede_fastpath {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 28/61] net/qede/base: add support for handling TLV request from MFW
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (26 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 27/61] net/qede/base: L2 handler changes Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 29/61] net/qede/base: optimize cache-line access Rasesh Mody
                   ` (33 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for handling the TLV request from the Management FW (MFW).

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    3 +
 drivers/net/qede/base/ecore_mcp.c     |    6 -
 drivers/net/qede/base/ecore_mcp.h     |    8 +
 drivers/net/qede/base/ecore_mcp_api.h |   44 +-
 drivers/net/qede/base/ecore_mng_tlv.c | 1536 +++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_if.h            |   21 +
 6 files changed, 1591 insertions(+), 27 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 4089943..2430cad 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -415,5 +415,8 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 	qede_get_mcp_proto_stats(dev, type, stats)
 
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
+#define OSAL_MFW_TLV_REQ(p_hwfn) (0)
+#define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
+
 
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index e4fa872..c5cc827 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2503,9 +2503,3 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
-
-enum _ecore_status_t
-ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	return ECORE_SUCCESS;
-}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index d77b5df..0708923 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -70,6 +70,14 @@ struct ecore_mcp_mb_params {
 	u32 mcp_param;
 };
 
+struct ecore_drv_tlv_hdr {
+	u8 tlv_type;	/* According to the enum below */
+	u8 tlv_length;	/* In dwords - not including this header */
+	u8 tlv_reserved;
+#define ECORE_DRV_TLV_FLAGS_CHANGED 0x01
+	u8 tlv_flags;
+};
+
 /**
  * @brief Initialize the interface with the MCP
  *
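
Given the header above, a TLV buffer is a sequence of dword-aligned
entries: one dword of header followed by tlv_length dwords of payload.
A minimal sketch of walking such a buffer (illustration only; buffer
origin and termination handling are assumptions, and the function name
is hypothetical):

/* Sketch, not part of the patch. Note tlv_length is in dwords and
 * excludes the one-dword header itself.
 */
static void example_walk_tlvs(u8 *buf, u32 size)
{
	u32 offset = 0;

	while (offset + sizeof(struct ecore_drv_tlv_hdr) <= size) {
		struct ecore_drv_tlv_hdr *tlv =
			(struct ecore_drv_tlv_hdr *)&buf[offset];

		if (!tlv->tlv_length)
			break;

		/* dispatch on tlv->tlv_type here; a filled entry would
		 * set ECORE_DRV_TLV_FLAGS_CHANGED in tlv_flags
		 */

		/* advance one dword of header plus tlv_length dwords */
		offset += sizeof(u32) * (1 + tlv->tlv_length);
	}
}
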
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 8cad43d..190c135 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -233,9 +233,11 @@ struct ecore_mba_vers {
 };
 
 enum ecore_mfw_tlv_type {
-	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
-	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
-	ECORE_MFW_TLV_ISCSI = 0x4,	/* SCSI protocol TLVs */
+	ECORE_MFW_TLV_GENERIC = 0x1, /* Core driver TLVs */
+	ECORE_MFW_TLV_ETH = 0x2, /* L2 driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x4, /* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x8, /* iSCSI protocol TLVs */
+	ECORE_MFW_TLV_MAX = 0x16,
 };
 
 struct ecore_mfw_tlv_generic {
@@ -247,6 +249,21 @@ struct ecore_mfw_tlv_generic {
 	bool additional_mac1_set;
 	u64 additional_mac2;
 	bool additional_mac2_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+};
+
+struct ecore_mfw_tlv_eth {
 	u16 lso_maxoff_size;
 	bool lso_maxoff_size_set;
 	u16 lso_minseg_size;
@@ -259,12 +276,6 @@ struct ecore_mfw_tlv_generic {
 	bool rx_descr_size_set;
 	u16 netq_count;
 	bool netq_count_set;
-	u16 flex_vlan;
-	bool flex_vlan_set;
-	u8 drv_state;
-	bool drv_state_set;
-	u8 pxe_progress;
-	bool pxe_progress_set;
 	u32 tcp4_offloads;
 	bool tcp4_offloads_set;
 	u32 tcp6_offloads;
@@ -273,14 +284,6 @@ struct ecore_mfw_tlv_generic {
 	bool tx_descr_qdepth_set;
 	u16 rx_descr_qdepth;
 	bool rx_descr_qdepth_set;
-	u64 rx_frames;
-	bool rx_frames_set;
-	u64 rx_bytes;
-	bool rx_bytes_set;
-	u64 tx_frames;
-	bool tx_frames_set;
-	u64 tx_bytes;
-	bool tx_bytes_set;
 	u8 iov_offload;
 	bool iov_offload_set;
 	u8 txqs_empty;
@@ -446,8 +449,8 @@ struct ecore_mfw_tlv_fcoe {
 	bool ols_set;
 	u8 lr;
 	bool lr_set;
-	u8 llr;
-	bool llrt;
+	u8 lrr;
+	bool lrr_set;
 	u8 tx_lip;
 	bool tx_lip_set;
 	u8 rx_lip;
@@ -511,12 +514,11 @@ struct ecore_mfw_tlv_iscsi {
 	bool tx_frames_set;
 	u64 tx_bytes;
 	bool tx_bytes_set;
-	u32 cpcp_spcp_map;
-	bool cpcp_spcp_map_set;
 };
 
 union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_eth eth;
 	struct ecore_mfw_tlv_fcoe fcoe;
 	struct ecore_mfw_tlv_iscsi iscsi;
 };
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
new file mode 100644
index 0000000..0065d12
--- /dev/null
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -0,0 +1,1536 @@
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_status.h"
+#include "ecore_mcp.h"
+#include "ecore_hw.h"
+#include "reg_addr.h"
+
+#define TLV_TYPE(p)	(p[0])
+#define TLV_LENGTH(p)	(p[1])
+#define TLV_FLAGS(p)	(p[3])
+
+static enum _ecore_status_t
+ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
+{
+	switch (tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+	case DRV_TLV_OS_DRIVER_STATES:
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+	case DRV_TLV_RX_BYTES_RECEIVED:
+	case DRV_TLV_TX_FRAMES_SENT:
+	case DRV_TLV_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_GENERIC;
+		break;
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+	case DRV_TLV_PROMISCUOUS_MODE:
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_IOV_OFFLOAD:
+	case DRV_TLV_TX_QUEUES_EMPTY:
+	case DRV_TLV_RX_QUEUES_EMPTY:
+	case DRV_TLV_TX_QUEUES_FULL:
+	case DRV_TLV_RX_QUEUES_FULL:
+		*tlv_group |= ECORE_MFW_TLV_ETH;
+		break;
+	case DRV_TLV_SCSI_TO:
+	case DRV_TLV_R_T_TOV:
+	case DRV_TLV_R_A_TOV:
+	case DRV_TLV_E_D_TOV:
+	case DRV_TLV_CR_TOV:
+	case DRV_TLV_BOOT_TYPE:
+	case DRV_TLV_NPIV_STATE:
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+	case DRV_TLV_SWITCH_NAME:
+	case DRV_TLV_SWITCH_PORT_NUM:
+	case DRV_TLV_SWITCH_PORT_ID:
+	case DRV_TLV_VENDOR_NAME:
+	case DRV_TLV_SWITCH_MODEL:
+	case DRV_TLV_SWITCH_FW_VER:
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+	case DRV_TLV_PORT_ALIAS:
+	case DRV_TLV_PORT_STATE:
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_LINK_FAILURE_COUNT:
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+	case DRV_TLV_CRC_ERROR_COUNT:
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_RJT:
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+	case DRV_TLV_FDISCS_SENT_COUNT:
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_SENT_COUNT:
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+	case DRV_TLV_LOGOS_ISSUED:
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+	case DRV_TLV_LOGOS_RECEIVED:
+	case DRV_TLV_ACCS_ISSUED:
+	case DRV_TLV_PRLIS_ISSUED:
+	case DRV_TLV_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_SENT_COUNT:
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+	case DRV_TLV_RSCNS_RECEIVED:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+	case DRV_TLV_LUN_RESETS_ISSUED:
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+	case DRV_TLV_TPRLOS_SENT:
+	case DRV_TLV_NOS_SENT_COUNT:
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+	case DRV_TLV_OLS_COUNT:
+	case DRV_TLV_LR_COUNT:
+	case DRV_TLV_LRR_COUNT:
+	case DRV_TLV_LIP_SENT_COUNT:
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+	case DRV_TLV_EOFA_COUNT:
+	case DRV_TLV_EOFNI_COUNT:
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		*tlv_group |= ECORE_MFW_TLV_FCOE;
+		break;
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_AUTHENTICATION_METHOD:
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+	case DRV_TLV_MAX_FRAME_SIZE:
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_ISCSI;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
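+/* Each per-group getter below returns the size of the driver-provided
+ * value for the given TLV and points *p_tlv_buf at it, or -1 if the
+ * driver did not supply that value.
+ */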
+static int
+ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_generic *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+		if (p_drv_buf->feat_flags_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->feat_flags;
+			return sizeof(p_drv_buf->feat_flags);
+		}
+		break;
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+		if (p_drv_buf->local_mac_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->local_mac;
+			return sizeof(p_drv_buf->local_mac);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+		if (p_drv_buf->additional_mac1_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1;
+			return sizeof(p_drv_buf->additional_mac1);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+		if (p_drv_buf->additional_mac2_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2;
+			return sizeof(p_drv_buf->additional_mac2);
+		}
+		break;
+	case DRV_TLV_OS_DRIVER_STATES:
+		if (p_drv_buf->drv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->drv_state;
+			return sizeof(p_drv_buf->drv_state);
+		}
+		break;
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+		if (p_drv_buf->pxe_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress;
+			return sizeof(p_drv_buf->pxe_progress);
+		}
+		break;
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_eth *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+		if (p_drv_buf->lso_maxoff_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size;
+			return sizeof(p_drv_buf->lso_maxoff_size);
+		}
+		break;
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+		if (p_drv_buf->lso_minseg_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size;
+			return sizeof(p_drv_buf->lso_minseg_size);
+		}
+		break;
+	case DRV_TLV_PROMISCUOUS_MODE:
+		if (p_drv_buf->prom_mode_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->prom_mode;
+			return sizeof(p_drv_buf->prom_mode);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size;
+			return sizeof(p_drv_buf->tx_descr_size);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size;
+			return sizeof(p_drv_buf->rx_descr_size);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+		if (p_drv_buf->netq_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->netq_count;
+			return sizeof(p_drv_buf->netq_count);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+		if (p_drv_buf->tcp4_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads;
+			return sizeof(p_drv_buf->tcp4_offloads);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+		if (p_drv_buf->tcp6_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads;
+			return sizeof(p_drv_buf->tcp6_offloads);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth;
+			return sizeof(p_drv_buf->tx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth;
+			return sizeof(p_drv_buf->rx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_IOV_OFFLOAD:
+		if (p_drv_buf->iov_offload_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->iov_offload;
+			return sizeof(p_drv_buf->iov_offload);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_EMPTY:
+		if (p_drv_buf->txqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty;
+			return sizeof(p_drv_buf->txqs_empty);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_EMPTY:
+		if (p_drv_buf->rxqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty;
+			return sizeof(p_drv_buf->rxqs_empty);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_FULL:
+		if (p_drv_buf->num_txqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full;
+			return sizeof(p_drv_buf->num_txqs_full);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_FULL:
+		if (p_drv_buf->num_rxqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full;
+			return sizeof(p_drv_buf->num_rxqs_full);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			     struct ecore_mfw_tlv_fcoe *p_drv_buf,
+			     u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_SCSI_TO:
+		if (p_drv_buf->scsi_timeout_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout;
+			return sizeof(p_drv_buf->scsi_timeout);
+		}
+		break;
+	case DRV_TLV_R_T_TOV:
+		if (p_drv_buf->rt_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rt_tov;
+			return sizeof(p_drv_buf->rt_tov);
+		}
+		break;
+	case DRV_TLV_R_A_TOV:
+		if (p_drv_buf->ra_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ra_tov;
+			return sizeof(p_drv_buf->ra_tov);
+		}
+		break;
+	case DRV_TLV_E_D_TOV:
+		if (p_drv_buf->ed_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ed_tov;
+			return sizeof(p_drv_buf->ed_tov);
+		}
+		break;
+	case DRV_TLV_CR_TOV:
+		if (p_drv_buf->cr_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->cr_tov;
+			return sizeof(p_drv_buf->cr_tov);
+		}
+		break;
+	case DRV_TLV_BOOT_TYPE:
+		if (p_drv_buf->boot_type_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_type;
+			return sizeof(p_drv_buf->boot_type);
+		}
+		break;
+	case DRV_TLV_NPIV_STATE:
+		if (p_drv_buf->npiv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->npiv_state;
+			return sizeof(p_drv_buf->npiv_state);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+		if (p_drv_buf->num_npiv_ids_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids;
+			return sizeof(p_drv_buf->num_npiv_ids);
+		}
+		break;
+	case DRV_TLV_SWITCH_NAME:
+		if (p_drv_buf->switch_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_name;
+			return sizeof(p_drv_buf->switch_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_NUM:
+		if (p_drv_buf->switch_portnum_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum;
+			return sizeof(p_drv_buf->switch_portnum);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_ID:
+		if (p_drv_buf->switch_portid_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portid;
+			return sizeof(p_drv_buf->switch_portid);
+		}
+		break;
+	case DRV_TLV_VENDOR_NAME:
+		if (p_drv_buf->vendor_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->vendor_name;
+			return sizeof(p_drv_buf->vendor_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_MODEL:
+		if (p_drv_buf->switch_model_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_model;
+			return sizeof(p_drv_buf->switch_model);
+		}
+		break;
+	case DRV_TLV_SWITCH_FW_VER:
+		if (p_drv_buf->switch_fw_version_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version;
+			return sizeof(p_drv_buf->switch_fw_version);
+		}
+		break;
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+		if (p_drv_buf->qos_pri_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->qos_pri;
+			return sizeof(p_drv_buf->qos_pri);
+		}
+		break;
+	case DRV_TLV_PORT_ALIAS:
+		if (p_drv_buf->port_alias_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_alias;
+			return sizeof(p_drv_buf->port_alias);
+		}
+		break;
+	case DRV_TLV_PORT_STATE:
+		if (p_drv_buf->port_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_state;
+			return sizeof(p_drv_buf->port_state);
+		}
+		break;
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size;
+			return sizeof(p_drv_buf->fip_tx_descr_size);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size;
+			return sizeof(p_drv_buf->fip_rx_descr_size);
+		}
+		break;
+	case DRV_TLV_LINK_FAILURE_COUNT:
+		if (p_drv_buf->link_failures_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->link_failures;
+			return sizeof(p_drv_buf->link_failures);
+		}
+		break;
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+		if (p_drv_buf->fcoe_boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress;
+			return sizeof(p_drv_buf->fcoe_boot_progress);
+		}
+		break;
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+		if (p_drv_buf->rx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast;
+			return sizeof(p_drv_buf->rx_bcast);
+		}
+		break;
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+		if (p_drv_buf->tx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast;
+			return sizeof(p_drv_buf->tx_bcast);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_txq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth;
+			return sizeof(p_drv_buf->fcoe_txq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_rxq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth;
+			return sizeof(p_drv_buf->fcoe_rxq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames;
+			return sizeof(p_drv_buf->fcoe_rx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes;
+			return sizeof(p_drv_buf->fcoe_rx_bytes);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+		if (p_drv_buf->fcoe_tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames;
+			return sizeof(p_drv_buf->fcoe_tx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+		if (p_drv_buf->fcoe_tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes;
+			return sizeof(p_drv_buf->fcoe_tx_bytes);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_COUNT:
+		if (p_drv_buf->crc_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_count;
+			return sizeof(p_drv_buf->crc_count);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0];
+			return sizeof(p_drv_buf->crc_err_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1];
+			return sizeof(p_drv_buf->crc_err_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2];
+			return sizeof(p_drv_buf->crc_err_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3];
+			return sizeof(p_drv_buf->crc_err_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4];
+			return sizeof(p_drv_buf->crc_err_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0];
+			return sizeof(p_drv_buf->crc_err_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1];
+			return sizeof(p_drv_buf->crc_err_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2];
+			return sizeof(p_drv_buf->crc_err_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3];
+			return sizeof(p_drv_buf->crc_err_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4];
+			return sizeof(p_drv_buf->crc_err_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+		if (p_drv_buf->losync_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losync_err;
+			return sizeof(p_drv_buf->losync_err);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+		if (p_drv_buf->losig_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losig_err;
+			return sizeof(p_drv_buf->losig_err);
+		}
+		break;
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+		if (p_drv_buf->primtive_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->primtive_err;
+			return sizeof(p_drv_buf->primtive_err);
+		}
+		break;
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+		if (p_drv_buf->disparity_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->disparity_err;
+			return sizeof(p_drv_buf->disparity_err);
+		}
+		break;
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+		if (p_drv_buf->code_violation_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err;
+			return sizeof(p_drv_buf->code_violation_err);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0];
+			return sizeof(p_drv_buf->flogi_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1];
+			return sizeof(p_drv_buf->flogi_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2];
+			return sizeof(p_drv_buf->flogi_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3];
+			return sizeof(p_drv_buf->flogi_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+		if (p_drv_buf->flogi_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp;
+			return sizeof(p_drv_buf->flogi_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_acc_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0];
+			return sizeof(p_drv_buf->flogi_acc_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_acc_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1];
+			return sizeof(p_drv_buf->flogi_acc_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_acc_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2];
+			return sizeof(p_drv_buf->flogi_acc_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_acc_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3];
+			return sizeof(p_drv_buf->flogi_acc_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+		if (p_drv_buf->flogi_acc_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp;
+			return sizeof(p_drv_buf->flogi_acc_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT:
+		if (p_drv_buf->flogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt;
+			return sizeof(p_drv_buf->flogi_rjt);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+		if (p_drv_buf->flogi_rjt_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp;
+			return sizeof(p_drv_buf->flogi_rjt_tstamp);
+		}
+		break;
+	case DRV_TLV_FDISCS_SENT_COUNT:
+		if (p_drv_buf->fdiscs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdiscs;
+			return sizeof(p_drv_buf->fdiscs);
+		}
+		break;
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+		if (p_drv_buf->fdisc_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc;
+			return sizeof(p_drv_buf->fdisc_acc);
+		}
+		break;
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+		if (p_drv_buf->fdisc_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt;
+			return sizeof(p_drv_buf->fdisc_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_SENT_COUNT:
+		if (p_drv_buf->plogi_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi;
+			return sizeof(p_drv_buf->plogi);
+		}
+		break;
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+		if (p_drv_buf->plogi_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc;
+			return sizeof(p_drv_buf->plogi_acc);
+		}
+		break;
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+		if (p_drv_buf->plogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt;
+			return sizeof(p_drv_buf->plogi_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0];
+			return sizeof(p_drv_buf->plogi_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1];
+			return sizeof(p_drv_buf->plogi_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2];
+			return sizeof(p_drv_buf->plogi_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3];
+			return sizeof(p_drv_buf->plogi_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4];
+			return sizeof(p_drv_buf->plogi_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0];
+			return sizeof(p_drv_buf->plogi_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1];
+			return sizeof(p_drv_buf->plogi_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2];
+			return sizeof(p_drv_buf->plogi_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3];
+			return sizeof(p_drv_buf->plogi_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4];
+			return sizeof(p_drv_buf->plogi_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_ISSUED:
+		if (p_drv_buf->tx_plogos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos;
+			return sizeof(p_drv_buf->tx_plogos);
+		}
+		break;
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+		if (p_drv_buf->plogo_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc;
+			return sizeof(p_drv_buf->plogo_acc);
+		}
+		break;
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+		if (p_drv_buf->plogo_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt;
+			return sizeof(p_drv_buf->plogo_rjt);
+		}
+		break;
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0];
+			return sizeof(p_drv_buf->plogo_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1];
+			return sizeof(p_drv_buf->plogo_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2];
+			return sizeof(p_drv_buf->plogo_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3];
+			return sizeof(p_drv_buf->plogo_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4];
+			return sizeof(p_drv_buf->plogo_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0];
+			return sizeof(p_drv_buf->plogo_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1];
+			return sizeof(p_drv_buf->plogo_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2];
+			return sizeof(p_drv_buf->plogo_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3];
+			return sizeof(p_drv_buf->plogo_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4];
+			return sizeof(p_drv_buf->plogo_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_RECEIVED:
+		if (p_drv_buf->rx_logos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_logos;
+			return sizeof(p_drv_buf->rx_logos);
+		}
+		break;
+	case DRV_TLV_ACCS_ISSUED:
+		if (p_drv_buf->tx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_accs;
+			return sizeof(p_drv_buf->tx_accs);
+		}
+		break;
+	case DRV_TLV_PRLIS_ISSUED:
+		if (p_drv_buf->tx_prlis_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis;
+			return sizeof(p_drv_buf->tx_prlis);
+		}
+		break;
+	case DRV_TLV_ACCS_RECEIVED:
+		if (p_drv_buf->rx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_accs;
+			return sizeof(p_drv_buf->rx_accs);
+		}
+		break;
+	case DRV_TLV_ABTS_SENT_COUNT:
+		if (p_drv_buf->tx_abts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_abts;
+			return sizeof(p_drv_buf->tx_abts);
+		}
+		break;
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+		if (p_drv_buf->rx_abts_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc;
+			return sizeof(p_drv_buf->rx_abts_acc);
+		}
+		break;
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+		if (p_drv_buf->rx_abts_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt;
+			return sizeof(p_drv_buf->rx_abts_rjt);
+		}
+		break;
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0];
+			return sizeof(p_drv_buf->abts_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1];
+			return sizeof(p_drv_buf->abts_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2];
+			return sizeof(p_drv_buf->abts_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3];
+			return sizeof(p_drv_buf->abts_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4];
+			return sizeof(p_drv_buf->abts_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0];
+			return sizeof(p_drv_buf->abts_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1];
+			return sizeof(p_drv_buf->abts_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2];
+			return sizeof(p_drv_buf->abts_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3];
+			return sizeof(p_drv_buf->abts_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4];
+			return sizeof(p_drv_buf->abts_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_RSCNS_RECEIVED:
+		if (p_drv_buf->rx_rscn_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn;
+			return sizeof(p_drv_buf->rx_rscn);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+		if (p_drv_buf->rx_rscn_nport_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0];
+			return sizeof(p_drv_buf->rx_rscn_nport[0]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+		if (p_drv_buf->rx_rscn_nport_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1];
+			return sizeof(p_drv_buf->rx_rscn_nport[1]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+		if (p_drv_buf->rx_rscn_nport_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2];
+			return sizeof(p_drv_buf->rx_rscn_nport[2]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+		if (p_drv_buf->rx_rscn_nport_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3];
+			return sizeof(p_drv_buf->rx_rscn_nport[3]);
+		}
+		break;
+	case DRV_TLV_LUN_RESETS_ISSUED:
+		if (p_drv_buf->tx_lun_rst_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst;
+			return sizeof(p_drv_buf->tx_lun_rst);
+		}
+		break;
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+		if (p_drv_buf->abort_task_sets_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets;
+			return sizeof(p_drv_buf->abort_task_sets);
+		}
+		break;
+	case DRV_TLV_TPRLOS_SENT:
+		if (p_drv_buf->tx_tprlos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos;
+			return sizeof(p_drv_buf->tx_tprlos);
+		}
+		break;
+	case DRV_TLV_NOS_SENT_COUNT:
+		if (p_drv_buf->tx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_nos;
+			return sizeof(p_drv_buf->tx_nos);
+		}
+		break;
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+		if (p_drv_buf->rx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_nos;
+			return sizeof(p_drv_buf->rx_nos);
+		}
+		break;
+	case DRV_TLV_OLS_COUNT:
+		if (p_drv_buf->ols_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ols;
+			return sizeof(p_drv_buf->ols);
+		}
+		break;
+	case DRV_TLV_LR_COUNT:
+		if (p_drv_buf->lr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lr;
+			return sizeof(p_drv_buf->lr);
+		}
+		break;
+	case DRV_TLV_LRR_COUNT:
+		if (p_drv_buf->lrr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lrr;
+			return sizeof(p_drv_buf->lrr);
+		}
+		break;
+	case DRV_TLV_LIP_SENT_COUNT:
+		if (p_drv_buf->tx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lip;
+			return sizeof(p_drv_buf->tx_lip);
+		}
+		break;
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+		if (p_drv_buf->rx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_lip;
+			return sizeof(p_drv_buf->rx_lip);
+		}
+		break;
+	case DRV_TLV_EOFA_COUNT:
+		if (p_drv_buf->eofa_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofa;
+			return sizeof(p_drv_buf->eofa);
+		}
+		break;
+	case DRV_TLV_EOFNI_COUNT:
+		if (p_drv_buf->eofni_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofni;
+			return sizeof(p_drv_buf->eofni);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+		if (p_drv_buf->scsi_chks_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks;
+			return sizeof(p_drv_buf->scsi_chks);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met;
+			return sizeof(p_drv_buf->scsi_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+		if (p_drv_buf->scsi_busy_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy;
+			return sizeof(p_drv_buf->scsi_busy);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+		if (p_drv_buf->scsi_inter_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter;
+			return sizeof(p_drv_buf->scsi_inter);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_inter_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met;
+			return sizeof(p_drv_buf->scsi_inter_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+		if (p_drv_buf->scsi_rsv_conflicts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
+			return sizeof(p_drv_buf->scsi_rsv_conflicts);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+		if (p_drv_buf->scsi_tsk_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full;
+			return sizeof(p_drv_buf->scsi_tsk_full);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+		if (p_drv_buf->scsi_aca_active_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active;
+			return sizeof(p_drv_buf->scsi_aca_active);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+		if (p_drv_buf->scsi_tsk_abort_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort;
+			return sizeof(p_drv_buf->scsi_tsk_abort);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0];
+			return sizeof(p_drv_buf->scsi_rx_chk[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1];
+			return sizeof(p_drv_buf->scsi_rx_chk[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2];
+			return sizeof(p_drv_buf->scsi_rx_chk[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3];
+			return sizeof(p_drv_buf->scsi_rx_chk[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4];
+			return sizeof(p_drv_buf->scsi_rx_chk[4]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[4]);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			      struct ecore_mfw_tlv_iscsi *p_drv_buf,
+			      u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+		if (p_drv_buf->target_llmnr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr;
+			return sizeof(p_drv_buf->target_llmnr);
+		}
+		break;
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->header_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->header_digest;
+			return sizeof(p_drv_buf->header_digest);
+		}
+		break;
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->data_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->data_digest;
+			return sizeof(p_drv_buf->data_digest);
+		}
+		break;
+	case DRV_TLV_AUTHENTICATION_METHOD:
+		if (p_drv_buf->auth_method_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->auth_method;
+			return sizeof(p_drv_buf->auth_method);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+		if (p_drv_buf->boot_taget_portal_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal;
+			return sizeof(p_drv_buf->boot_taget_portal);
+		}
+		break;
+	case DRV_TLV_MAX_FRAME_SIZE:
+		if (p_drv_buf->frame_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->frame_size;
+			return sizeof(p_drv_buf->frame_size);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size;
+			return sizeof(p_drv_buf->tx_desc_size);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size;
+			return sizeof(p_drv_buf->rx_desc_size);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+		if (p_drv_buf->boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_progress;
+			return sizeof(p_drv_buf->boot_progress);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth;
+			return sizeof(p_drv_buf->tx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth;
+			return sizeof(p_drv_buf->rx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
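+/* Walk the request buffer and, for every TLV that belongs to the given
+ * group, copy in the value supplied by the driver and mark the entry
+ * as changed.
+ */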
+static enum _ecore_status_t
+ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt, u8 *p_mfw_buf, u32 size)
+{
+	union ecore_mfw_tlv_data *p_tlv_data;
+	struct ecore_drv_tlv_hdr tlv;
+	u8 *p_tlv_ptr = OSAL_NULL, *p_temp;
+	u32 offset;
+	int len;
+
+	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	if (!p_tlv_data)
+		return ECORE_NOMEM;
+
+	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
+	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
+		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+		return ECORE_INVAL;
+	}
+
+	offset = 0;
+	OSAL_MEMSET(&tlv, 0, sizeof(tlv));
+	while (offset < size) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		tlv.tlv_flags = TLV_FLAGS(p_temp);
+		DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n",
+			tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags);
+
+		offset += sizeof(tlv);
+		if (tlv_group == ECORE_MFW_TLV_GENERIC)
+			len = ecore_mfw_get_gen_tlv_value(&tlv,
+					&p_tlv_data->generic, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_ETH)
+			len = ecore_mfw_get_eth_tlv_value(&tlv,
+					&p_tlv_data->eth, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_FCOE)
+			len = ecore_mfw_get_fcoe_tlv_value(&tlv,
+					&p_tlv_data->fcoe, &p_tlv_ptr);
+		else
+			len = ecore_mfw_get_iscsi_tlv_value(&tlv,
+					&p_tlv_data->iscsi, &p_tlv_ptr);
+
+		if (len > 0) {
+			OSAL_WARN(len > 4 * tlv.tlv_length,
+				  "Incorrect MFW TLV length");
+			len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length);
+			tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
+			/* TODO: Endianness handling? */
+			/* Write the updated header back at its own offset */
+			OSAL_MEMCPY(p_mfw_buf + offset - sizeof(tlv),
+				    &tlv, sizeof(tlv));
+			OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len);
+		}
+
+		offset += sizeof(u32) * tlv.tlv_length;
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+
+	return ECORE_SUCCESS;
+}
+
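+/* Handle a TLV request from the MFW: read the request out of the global
+ * shared-memory section, update it with the driver's values per TLV
+ * group, write it back, and ack the MFW with DRV_MSG_CODE_GET_TLV_DONE.
+ */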
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	u32 addr, size, offset, resp, param, val;
+	u8 tlv_group = 0, id, *p_mfw_buf = OSAL_NULL, *p_temp;
+	u32 global_offsize, global_addr;
+	enum _ecore_status_t rc;
+	struct ecore_drv_tlv_hdr tlv;
+
+	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	addr = global_addr + OFFSETOF(struct public_global, data_ptr);
+	addr = ecore_rd(p_hwfn, p_ptt, addr);
+	size = ecore_rd(p_hwfn, p_ptt, global_addr +
+			OFFSETOF(struct public_global, data_size));
+
+	if (!size) {
+		DP_NOTICE(p_hwfn, false, "Invalid TLV req size = %d\n", size);
+		goto drv_done;
+	}
+
+	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	if (!p_mfw_buf) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate memory for p_mfw_buf\n");
+		goto drv_done;
+	}
+
+	/* Read the TLV request to local buffer */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		val = ecore_rd(p_hwfn, p_ptt, addr + offset);
+		OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32));
+	}
+
+	/* Parse the headers to enumerate the requested TLV groups */
+	for (offset = 0; offset < size;
+	     offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group))
+			goto drv_done;
+	}
+
+	/* Update the TLV values in the local buffer */
+	for (id = ECORE_MFW_TLV_GENERIC; id < ECORE_MFW_TLV_MAX; id <<= 1) {
+		if (tlv_group & id) {
+			if (ecore_mfw_update_tlvs(id, p_hwfn, p_ptt, p_mfw_buf,
+						  size))
+				goto drv_done;
+		}
+	}
+
+	/* Write the TLV data to shared memory */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32));
+		ecore_wr(p_hwfn, p_ptt, addr + offset, val);
+	}
+
+drv_done:
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_TLV_DONE, 0, &resp,
+			   &param);
+
+	OSAL_VFREE(p_hwfn->p_dev, p_mfw_buf);
+
+	return rc;
+}
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 0a1f7db..bfd96d6 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -96,8 +96,29 @@ struct qed_slowpath_params {
 
 #define ILT_PAGE_SIZE_TCFC 0x8000	/* 32KB */
 
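+/* Ethernet TLV values that the driver reports back through the
+ * get_tlv_data callback.
+ */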
+struct qed_eth_tlvs {
+	u16 feat_flags;
+	u8 mac[3][ETH_ALEN];
+	u16 lso_maxoff;
+	u16 lso_minseg;
+	bool prom_mode;
+	u16 num_txqs;
+	u16 num_rxqs;
+	u16 num_netqs;
+	u16 flex_vlan;
+	u32 tcp4_offloads;
+	u32 tcp6_offloads;
+	u16 tx_avg_qdepth;
+	u16 rx_avg_qdepth;
+	u8 txqs_empty;
+	u8 rxqs_empty;
+	u8 num_txqs_full;
+	u8 num_rxqs_full;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
+	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
 };
 
 struct qed_selftest_ops {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 29/61] net/qede/base: optimize cache-line access
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (27 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 28/61] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 30/61] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
                   ` (32 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Optimize cache-line access in ecore_chain -
rearrange the fields so that those needed on the fastpath
[mostly produce/consume and their derivatives] sit in the first cache
line, and the rest in the second.

This holds for both PBL and NEXT_PTR kinds of chains.
Advancing a page in a SINGLE_PAGE chain would still touch the second
cache line as well, but only the SPQ uses that flavour, so it isn't
considered fastpath.
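
Not part of the patch - just a minimal, self-contained C sketch of the
idea, using a simplified stand-in struct rather than the real
ecore_chain and assuming a 64-byte cache line: keep the members that
every produce()/consume() touches at the front of the struct, and prove
the split holds at build time with offsetof().

  #include <stddef.h>
  #include <stdint.h>

  #define CACHE_LINE 64

  struct chain_sketch {
          /* fastpath - read/written on every produce()/consume() */
          void *p_prod_elem;
          void *p_cons_elem;
          uint32_t prod_idx;
          uint32_t cons_idx;
          /* slowpath - only touched at alloc/free time */
          void *p_virt_table;
          uint64_t p_phys_table;
  };

  /* All fastpath members must land inside the first cache line. */
  _Static_assert(offsetof(struct chain_sketch, cons_idx) +
                 sizeof(uint32_t) <= CACHE_LINE,
                 "fastpath fields spill out of the first cache line");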

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_chain.h       |  143 ++++++++++++++++-------------
 drivers/net/qede/base/ecore_dev.c         |   14 +--
 drivers/net/qede/base/ecore_sp_commands.c |    4 +-
 3 files changed, 89 insertions(+), 72 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 61e39b5..ba272a9 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -59,25 +59,6 @@ struct ecore_chain_ext_pbl {
 	void *p_pbl_virt;
 };
 
-struct ecore_chain_pbl {
-	/* Base address of a pre-allocated buffer for pbl */
-	dma_addr_t p_phys_table;
-	void *p_virt_table;
-
-	/* Table for keeping the virtual addresses of the chain pages,
-	 * respectively to the physical addresses in the pbl table.
-	 */
-	void **pp_virt_addr_tbl;
-
-	/* Index to current used page by producer/consumer */
-	union {
-		struct ecore_chain_pbl_u16 pbl16;
-		struct ecore_chain_pbl_u32 pbl32;
-	} u;
-
-	bool external;
-};
-
 struct ecore_chain_u16 {
 	/* Cyclic index of next element to produce/consme */
 	u16 prod_idx;
@@ -91,40 +72,75 @@ struct ecore_chain_u32 {
 };
 
 struct ecore_chain {
-	/* Address of first page of the chain */
-	void *p_virt_addr;
-	dma_addr_t p_phys_addr;
-
+	/* fastpath portion of the chain - required for commands such
+	 * as produce / consume.
+	 */
 	/* Point to next element to produce/consume */
 	void *p_prod_elem;
 	void *p_cons_elem;
 
-	enum ecore_chain_mode mode;
-	enum ecore_chain_use_mode intended_use;
+	/* Fastpath portions of the PBL [if exists] */
+
+	struct {
+		/* Table for keeping the virtual addresses of the chain pages,
+		 * respectively to the physical addresses in the pbl table.
+		 */
+		void		**pp_virt_addr_tbl;
+
+		union {
+			struct ecore_chain_pbl_u16	u16;
+			struct ecore_chain_pbl_u32	u32;
+		} c;
+	} pbl;
 
-	enum ecore_chain_cnt_type cnt_type;
 	union {
 		struct ecore_chain_u16 chain16;
 		struct ecore_chain_u32 chain32;
 	} u;
 
-	u32 page_cnt;
+	/* Capacity counts only usable elements */
+	u32				capacity;
+	u32				page_cnt;
 
-	/* Number of elements - capacity is for usable elements only,
-	 * while size will contain total number of elements [for entire chain].
+	/* A u8 would suffice for mode, but it would save us a lot of headaches
+	 * on castings & defaults.
 	 */
-	u32 capacity;
-	u32 size;
+	enum ecore_chain_mode		mode;
 
 	/* Elements information for fast calculations */
 	u16 elem_per_page;
 	u16 elem_per_page_mask;
-	u16 elem_unusable;
-	u16 usable_per_page;
 	u16 elem_size;
 	u16 next_page_mask;
+	u16 usable_per_page;
+	u8 elem_unusable;
 
-	struct ecore_chain_pbl pbl;
+	u8				cnt_type;
+
+	/* Slowpath of the chain - required for initialization and destruction,
+	 * but isn't involved in regular functionality.
+	 */
+
+	/* Base address of a pre-allocated buffer for pbl */
+	struct {
+		dma_addr_t		p_phys_table;
+		void			*p_virt_table;
+	} pbl_sp;
+
+	/* Address of first page of the chain - the address is required
+	 * for fastpath operation [consume/produce] but only for the SINGLE
+	 * flavour which isn't considered fastpath [== SPQ].
+	 */
+	void				*p_virt_addr;
+	dma_addr_t			p_phys_addr;
+
+	/* Total number of elements [for entire chain] */
+	u32				size;
+
+	u8				intended_use;
+
+	/* TBD - do we really need this? Couldn't find usage for it */
+	bool				b_external_pbl;
 
 	void *dp_ctx;
 };
@@ -135,8 +151,8 @@ struct ecore_chain {
 
 #define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	  ((mode == ECORE_CHAIN_MODE_NEXT_PTR) ?		\
-	   (1 + ((sizeof(struct ecore_chain_next) - 1) /		\
-	   (elem_size))) : 0)
+	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /	\
+		     (elem_size))) : 0)
 
 #define USABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	((u32)(ELEMS_PER_PAGE(elem_size) -			\
@@ -245,7 +261,7 @@ u16 ecore_chain_get_usable_per_page(struct ecore_chain *p_chain)
 }
 
 static OSAL_INLINE
-u16 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
+u8 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
 {
 	return p_chain->elem_unusable;
 }
@@ -263,7 +279,7 @@ static OSAL_INLINE u32 ecore_chain_get_page_cnt(struct ecore_chain *p_chain)
 static OSAL_INLINE
 dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
 {
-	return p_chain->pbl.p_phys_table;
+	return p_chain->pbl_sp.p_phys_table;
 }
 
 /**
@@ -288,9 +304,9 @@ dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
 		p_next = (struct ecore_chain_next *)(*p_next_elem);
 		*p_next_elem = p_next->next_virt;
 		if (is_chain_u16(p_chain))
-			*(u16 *)idx_to_inc += p_chain->elem_unusable;
+			*(u16 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		else
-			*(u32 *)idx_to_inc += p_chain->elem_unusable;
+			*(u32 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		break;
 	case ECORE_CHAIN_MODE_SINGLE:
 		*p_next_elem = p_chain->p_virt_addr;
@@ -391,7 +407,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -400,7 +416,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -465,7 +481,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -474,7 +490,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -518,25 +534,26 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.u.pbl16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.u.pbl16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.u.pbl32.prod_page_idx = reset_val;
-			p_chain->pbl.u.pbl32.cons_page_idx = reset_val;
+			p_chain->pbl.c.u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.u32.cons_page_idx = reset_val;
 		}
 	}
 
 	switch (p_chain->intended_use) {
-	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
-	case ECORE_CHAIN_USE_TO_PRODUCE:
-			/* Do nothing */
-			break;
-
 	case ECORE_CHAIN_USE_TO_CONSUME:
-			/* produce empty elements */
-			for (i = 0; i < p_chain->capacity; i++)
+		/* produce empty elements */
+		for (i = 0; i < p_chain->capacity; i++)
 			ecore_chain_recycle_consumed(p_chain);
-			break;
+		break;
+
+	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
+	case ECORE_CHAIN_USE_TO_PRODUCE:
+	default:
+		/* Do nothing */
+		break;
 	}
 }
 
@@ -563,9 +580,9 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 	p_chain->p_virt_addr = OSAL_NULL;
 	p_chain->p_phys_addr = 0;
 	p_chain->elem_size = elem_size;
-	p_chain->intended_use = intended_use;
+	p_chain->intended_use = (u8)intended_use;
 	p_chain->mode = mode;
-	p_chain->cnt_type = cnt_type;
+	p_chain->cnt_type = (u8)cnt_type;
 
 	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
 	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
@@ -577,9 +594,9 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 	p_chain->page_cnt = page_cnt;
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;
-	p_chain->pbl.external = false;
-	p_chain->pbl.p_phys_table = 0;
-	p_chain->pbl.p_virt_table = OSAL_NULL;
+	p_chain->b_external_pbl = false;
+	p_chain->pbl_sp.p_phys_table = 0;
+	p_chain->pbl_sp.p_virt_table = OSAL_NULL;
 	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 
 	p_chain->dp_ctx = dp_ctx;
@@ -623,8 +640,8 @@ static OSAL_INLINE void ecore_chain_init_pbl_mem(struct ecore_chain *p_chain,
 						 dma_addr_t p_phys_pbl,
 						 void **pp_virt_addr_tbl)
 {
-	p_chain->pbl.p_phys_table = p_phys_pbl;
-	p_chain->pbl.p_virt_table = p_virt_pbl;
+	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
+	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
 	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 168ada8..4d52e94 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3566,13 +3566,13 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				 struct ecore_chain *p_chain)
 {
 	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
-	u8 *p_pbl_virt = (u8 *)p_chain->pbl.p_virt_table;
+	u8 *p_pbl_virt = (u8 *)p_chain->pbl_sp.p_virt_table;
 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
 
 	if (!pp_virt_addr_tbl)
 		return;
 
-	if (!p_chain->pbl.p_virt_table)
+	if (!p_pbl_virt)
 		goto out;
 
 	for (i = 0; i < page_cnt; i++) {
@@ -3588,10 +3588,10 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 
 	pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
 
-	if (!p_chain->pbl.external)
-		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
-				       p_chain->pbl.p_phys_table, pbl_size);
-out:
+	if (!p_chain->b_external_pbl)
+		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl_sp.p_virt_table,
+				       p_chain->pbl_sp.p_phys_table, pbl_size);
+ out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 }
@@ -3724,7 +3724,7 @@ void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
 	} else {
 		p_pbl_virt = ext_pbl->p_pbl_virt;
 		p_pbl_phys = ext_pbl->p_pbl_phys;
-		p_chain->pbl.external = true;
+		p_chain->b_external_pbl = true;
 	}
 
 	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 23ebab7..b831970 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -379,11 +379,11 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	/* Place EQ address in RAMROD */
 	DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
-		       p_hwfn->p_eq->chain.pbl.p_phys_table);
+		       p_hwfn->p_eq->chain.pbl_sp.p_phys_table);
 	page_cnt = (u8)ecore_chain_get_page_cnt(&p_hwfn->p_eq->chain);
 	p_ramrod->event_ring_num_pages = page_cnt;
 	DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
-		       p_hwfn->p_consq->chain.pbl.p_phys_table);
+		       p_hwfn->p_consq->chain.pbl_sp.p_phys_table);
 
 	ecore_tunn_set_pf_start_params(p_hwfn, p_tunn,
 				       &p_ramrod->tunnel_config);
-- 
1.7.10.3

* [PATCH 30/61] net/qede/base: infrastructure changes for VF tunnelling
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (28 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 29/61] net/qede/base: optimize cache-line access Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 31/61] net/qede/base: revise tunnel APIs/structs Rasesh Mody
                   ` (31 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make the infrastructure changes required for VF tunnelling: add an
OSAL_BIT() helper, cache the tunnel configuration in a new per-device
struct ecore_tunnel_info (replacing the bare tunn_mode bitmask in
struct ecore_dev), and report VXLAN/GRE/GENEVE enablement to the qede
PMD through qed_dev_info.
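
For context, a minimal standalone sketch of the check that
qed_fill_dev_info() performs against the cached tunnel state after
this patch; the types and enum values below are illustrative stand-ins,
not the driver's headers:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define OSAL_BIT(nr)            (1UL << (nr))

  enum { MODE_VXLAN_TUNN = 0 };     /* stand-in for ECORE_MODE_VXLAN_TUNN */
  enum { TUNN_CLSS_MAC_VLAN = 0 };  /* stand-in for ECORE_TUNN_CLSS_MAC_VLAN */

  struct tunnel_info {              /* stand-in for ecore_tunnel_info */
          unsigned long tunn_mode;
          uint8_t tunn_clss_vxlan;
  };

  int main(void)
  {
          struct tunnel_info tun = {
                  .tunn_mode = OSAL_BIT(MODE_VXLAN_TUNN),
                  .tunn_clss_vxlan = TUNN_CLSS_MAC_VLAN,
          };
          bool vxlan_enable = false;

          /* same test qed_fill_dev_info() uses to fill qed_dev_info */
          if ((tun.tunn_mode & OSAL_BIT(MODE_VXLAN_TUNN)) &&
              tun.tunn_clss_vxlan == TUNN_CLSS_MAC_VLAN)
                  vxlan_enable = true;

          printf("vxlan_enable=%d\n", vxlan_enable);
          return 0;
  }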

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore.h             |   14 ++++-
 drivers/net/qede/base/ecore_sp_commands.c |   87 +++++++++++++++++++----------
 drivers/net/qede/qede_if.h                |    5 ++
 drivers/net/qede/qede_main.c              |   18 ++++++
 5 files changed, 93 insertions(+), 34 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 2430cad..902c500 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -288,7 +288,8 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *, dma_addr_t *,
 #define OSAL_WMB(dev)			rte_wmb()
 #define OSAL_DMA_SYNC(dev, addr, length, is_post) nothing
 
-#define OSAL_BITS_PER_BYTE		(8)
+#define OSAL_BIT(nr)            (1UL << (nr))
+#define OSAL_BITS_PER_BYTE	(8)
 #define OSAL_BITS_PER_UL	(sizeof(unsigned long) * OSAL_BITS_PER_BYTE)
 #define OSAL_BITS_PER_UL_MASK		(OSAL_BITS_PER_UL - 1)
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index de0f49a..5c12c1e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -470,6 +470,17 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
+struct ecore_tunnel_info {
+	u8		tunn_clss_vxlan;
+	u8		tunn_clss_l2geneve;
+	u8		tunn_clss_ipgeneve;
+	u8		tunn_clss_l2gre;
+	u8		tunn_clss_ipgre;
+	unsigned long	tunn_mode;
+	u16		port_vxlan_udp_port;
+	u16		port_geneve_udp_port;
+};
+
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
@@ -724,8 +735,7 @@ struct ecore_dev {
 	/* SRIOV */
 	struct ecore_hw_sriov_info	*p_iov_info;
 #define IS_ECORE_SRIOV(p_dev)		(!!(p_dev)->p_iov_info)
-	unsigned long			tunn_mode;
-
+	struct ecore_tunnel_info	tunnel;
 	bool				b_is_vf;
 
 	u32				drv_type;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index b831970..f5860a0 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -111,8 +111,9 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long cached_tunn_mode = p_hwfn->p_dev->tunn_mode;
 	unsigned long update_mask = p_src->tunn_mode_update_mask;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	unsigned long cached_tunn_mode = p_tun->tunn_mode;
 	unsigned long tunn_mode = p_src->tunn_mode;
 	unsigned long new_tunn_mode = 0;
 
@@ -149,9 +150,10 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
 	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
@@ -178,33 +180,39 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode = p_src->tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
+	p_tun->tunn_mode = p_src->tunn_mode;
+
 	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
 	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -215,21 +223,24 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
@@ -269,33 +280,37 @@ static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 			       struct ecore_tunn_start_params *p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	if (!p_src)
 		return;
 
-	tunn_mode = p_src->tunn_mode;
+	p_tun->tunn_mode = p_src->tunn_mode;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -306,21 +321,24 @@ static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
@@ -420,9 +438,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
 	if (p_tunn) {
+		if (p_tunn->update_vxlan_udp_port)
+			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						  p_tunn->vxlan_udp_port);
+
+		if (p_tunn->update_geneve_udp_port)
+			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						   p_tunn->geneve_udp_port);
+
 		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
 				       p_tunn->tunn_mode);
-		p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 	}
 
 	return rc;
@@ -529,12 +554,12 @@ enum _ecore_status_t
 	if (p_tunn->update_vxlan_udp_port)
 		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					  p_tunn->vxlan_udp_port);
+
 	if (p_tunn->update_geneve_udp_port)
 		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					   p_tunn->geneve_udp_port);
 
 	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
-	p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 
 	return rc;
 }
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index bfd96d6..baa8476 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -43,6 +43,11 @@ struct qed_dev_info {
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
+
+	/* Out param for qede */
+	bool vxlan_enable;
+	bool gre_enable;
+	bool geneve_enable;
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a932c5f..e7195b4 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -325,8 +325,26 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 {
 	struct ecore_ptt *ptt = NULL;
+	struct ecore_tunnel_info *tun = &edev->tunnel;
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
+	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->vxlan_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
+	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->gre_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
+	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->geneve_enable = true;
+
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
 	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-- 
1.7.10.3

* [PATCH 31/61] net/qede/base: revise tunnel APIs/structs
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (29 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 30/61] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 32/61] net/qede/base: add tunnelling support for VFs Rasesh Mody
                   ` (30 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Revise the tunnel APIs/structs:
 - Unite the tunnel start and update params in a single struct,
   "ecore_tunnel_info".
 - Remove A0 chip tunnelling support.
 - Track per-tunnel-type info and drop the bitmasks (see the sketch
   below).
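
A minimal sketch of the new update semantics, mirroring
ecore_set_pf_update_tunn_mode() from the diff below (stand-in types,
not the driver's headers): a cached per-tunnel entry is only
overwritten when the request flags it for update.

  #include <stdbool.h>
  #include <stdint.h>

  struct tunn_update_type {      /* stand-in for ecore_tunn_update_type */
          bool b_update_mode;    /* request touches this mode */
          bool b_mode_enabled;   /* requested / cached on-off state */
          uint8_t tun_cls;       /* classification (ECORE_TUNN_CLSS_*) */
  };

  static void update_one(struct tunn_update_type *cached,
                         const struct tunn_update_type *req,
                         bool b_pf_start)
  {
          /* as in ecore_set_pf_update_tunn_mode(): copy only on request,
           * or unconditionally while starting the PF
           */
          if (req->b_update_mode || b_pf_start)
                  cached->b_mode_enabled = req->b_mode_enabled;
  }

  int main(void)
  {
          struct tunn_update_type cached = { false, true, 0 };
          struct tunn_update_type req = { false, false, 0 };

          update_one(&cached, &req, false);  /* no-op: not flagged */
          req.b_update_mode = true;
          update_one(&cached, &req, false);  /* disables the cached mode */
          return cached.b_mode_enabled;      /* exits 0 */
  }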

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h             |   57 ++---
 drivers/net/qede/base/ecore_dev.c         |    2 +-
 drivers/net/qede/base/ecore_dev_api.h     |    2 +-
 drivers/net/qede/base/ecore_sp_api.h      |   19 ++
 drivers/net/qede/base/ecore_sp_commands.c |  384 +++++++++++++----------------
 drivers/net/qede/base/ecore_sp_commands.h |   23 +-
 drivers/net/qede/qede_ethdev.c            |   20 +-
 drivers/net/qede/qede_if.h                |   16 ++
 drivers/net/qede/qede_main.c              |   18 +-
 9 files changed, 248 insertions(+), 293 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 5c12c1e..f86f7ca 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -204,33 +204,29 @@ enum ecore_tunn_clss {
 	MAX_ECORE_TUNN_CLSS,
 };
 
-struct ecore_tunn_start_params {
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_type {
+	bool b_update_mode;
+	bool b_mode_enabled;
+	enum ecore_tunn_clss tun_cls;
 };
 
-struct ecore_tunn_update_params {
-	unsigned long tunn_mode_update_mask;
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_rx_pf_clss;
-	u8	update_tx_pf_clss;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_udp_port {
+	bool b_update_port;
+	u16 port;
+};
+
+struct ecore_tunnel_info {
+	struct ecore_tunn_update_type vxlan;
+	struct ecore_tunn_update_type l2_geneve;
+	struct ecore_tunn_update_type ip_geneve;
+	struct ecore_tunn_update_type l2_gre;
+	struct ecore_tunn_update_type ip_gre;
+
+	struct ecore_tunn_update_udp_port vxlan_port;
+	struct ecore_tunn_update_udp_port geneve_port;
+
+	bool b_update_rx_cls;
+	bool b_update_tx_cls;
 };
 
 /* The PCI personality is not quite synonymous to protocol ID:
@@ -470,17 +466,6 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
-struct ecore_tunnel_info {
-	u8		tunn_clss_vxlan;
-	u8		tunn_clss_l2geneve;
-	u8		tunn_clss_ipgeneve;
-	u8		tunn_clss_l2gre;
-	u8		tunn_clss_ipgre;
-	unsigned long	tunn_mode;
-	u16		port_vxlan_udp_port;
-	u16		port_geneve_udp_port;
-};
-
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 4d52e94..c80b2cb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1702,7 +1702,7 @@ enum ECORE_ROCE_EDPM_MODE {
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
-		 struct ecore_tunn_start_params *p_tunn,
+		 struct ecore_tunnel_info *p_tunn,
 		 int hw_mode,
 		 bool b_hw_start,
 		 enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 74a15ef..356c5e4 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -59,7 +59,7 @@ void ecore_init_dp(struct ecore_dev *p_dev,
 
 struct ecore_hw_init_params {
 	/* tunnelling parameters */
-	struct ecore_tunn_start_params *p_tunn;
+	struct ecore_tunnel_info *p_tunn;
 	bool b_hw_start;
 	/* interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index a4cb507..c8e564f 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -41,5 +41,24 @@ struct ecore_spq_comp_cb {
  */
 enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
 					      struct eth_slow_path_rx_cqe *cqe);
+/**
+ * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
+ *					update  Ramrod
+ *
+ * This ramrod is sent to update a tunneling configuration
+ * for a physical function (PF).
+ *
+ * @param p_hwfn
+ * @param p_tunn - pf update tunneling parameters
+ * @param comp_mode - completion mode
+ * @param p_comp_data - callback function
+ *
+ * @return enum _ecore_status_t
+ */
 
+enum _ecore_status_t
+ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_tunnel_info *p_tunn,
+			    enum spq_mode comp_mode,
+			    struct ecore_spq_comp_cb *p_comp_data);
 #endif
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index f5860a0..4cacce8 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -88,7 +88,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
+static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type)
 {
 	switch (type) {
 	case ECORE_TUNN_CLSS_MAC_VLAN:
@@ -107,242 +107,207 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 }
 
 static void
-ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun,
+			      struct ecore_tunnel_info *p_src,
+			      bool b_pf_start)
 {
-	unsigned long update_mask = p_src->tunn_mode_update_mask;
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	unsigned long cached_tunn_mode = p_tun->tunn_mode;
-	unsigned long tunn_mode = p_src->tunn_mode;
-	unsigned long new_tunn_mode = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	}
-
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		p_src->tunn_mode = new_tunn_mode;
-		return;
-	}
+	if (p_src->vxlan.b_update_mode || b_pf_start)
+		p_tun->vxlan.b_mode_enabled = p_src->vxlan.b_mode_enabled;
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
+	if (p_src->l2_gre.b_update_mode || b_pf_start)
+		p_tun->l2_gre.b_mode_enabled = p_src->l2_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->ip_gre.b_update_mode || b_pf_start)
+		p_tun->ip_gre.b_mode_enabled = p_src->ip_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->l2_geneve.b_update_mode || b_pf_start)
+		p_tun->l2_geneve.b_mode_enabled =
+				p_src->l2_geneve.b_mode_enabled;
 
-	p_src->tunn_mode = new_tunn_mode;
+	if (p_src->ip_geneve.b_update_mode || b_pf_start)
+		p_tun->ip_geneve.b_mode_enabled =
+				p_src->ip_geneve.b_mode_enabled;
 }
 
-static void
-ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
+				    struct ecore_tunnel_info *p_src)
 {
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
-	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
-	p_tun->tunn_mode = p_src->tunn_mode;
-
-	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
-	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
-
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
+	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+
+	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
+	p_tun->vxlan.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
+	p_tun->l2_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
+	p_tun->ip_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
+	p_tun->l2_geneve.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
+	p_tun->ip_geneve.tun_cls = type;
+}
+
+static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
+				 struct ecore_tunnel_info *p_src)
+{
+	p_tun->geneve_port.b_update_port = p_src->geneve_port.b_update_port;
+	p_tun->vxlan_port.b_update_port = p_src->vxlan_port.b_update_port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
+	if (p_src->geneve_port.b_update_port)
+		p_tun->geneve_port.port = p_src->geneve_port.port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
+	if (p_src->vxlan_port.b_update_port)
+		p_tun->vxlan_port.port = p_src->vxlan_port.port;
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
+static void
+__ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+				struct ecore_tunn_update_type *tun_type)
+{
+	*p_tunn_cls = tun_type->tun_cls;
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		return;
-	}
+	if (tun_type->b_mode_enabled)
+		*p_enable_tx_clas = 1;
+}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
+static void
+ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+			      struct ecore_tunn_update_type *tun_type,
+			      u8 *p_update_port, __le16 *p_port,
+			      struct ecore_tunn_update_udp_port *p_udp_port)
+{
+	__ecore_set_ramrod_tunnel_param(p_tunn_cls, p_enable_tx_clas,
+					tun_type);
+	if (p_udp_port->b_update_port) {
+		*p_update_port = 1;
+		*p_port = OSAL_CPU_TO_LE16(p_udp_port->port);
 	}
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+static void
+ecore_tunn_set_pf_update_params(struct ecore_hwfn		*p_hwfn,
+				struct ecore_tunnel_info *p_src,
+				struct pf_update_tunnel_config	*p_tunn_cfg)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, false);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
+
+	p_tunn_cfg->update_rx_pf_clss = p_tun->b_update_rx_cls;
+	p_tunn_cfg->update_tx_pf_clss = p_tun->b_update_tx_cls;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   unsigned long tunn_mode)
+				   struct ecore_tunnel_info *p_tun)
 {
-	u8 l2gre_enable = 0, ipgre_enable = 0, vxlan_enable = 0;
-	u8 l2geneve_enable = 0, ipgeneve_enable = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-		l2gre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-		ipgre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-		vxlan_enable = 1;
+	ecore_set_gre_enable(p_hwfn, p_ptt, p_tun->l2_gre.b_mode_enabled,
+			     p_tun->ip_gre.b_mode_enabled);
+	ecore_set_vxlan_enable(p_hwfn, p_ptt, p_tun->vxlan.b_mode_enabled);
 
-	ecore_set_gre_enable(p_hwfn, p_ptt, l2gre_enable, ipgre_enable);
-	ecore_set_vxlan_enable(p_hwfn, p_ptt, vxlan_enable);
+	ecore_set_geneve_enable(p_hwfn, p_ptt, p_tun->l2_geneve.b_mode_enabled,
+				p_tun->ip_geneve.b_mode_enabled);
+}
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev))
+static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
+					struct ecore_tunnel_info *p_tunn)
+{
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel hw config is not supported\n");
 		return;
+	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-		l2geneve_enable = 1;
+	if (p_tunn->vxlan_port.b_update_port)
+		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					  p_tunn->vxlan_port.port);
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-		ipgeneve_enable = 1;
+	if (p_tunn->geneve_port.b_update_port)
+		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					   p_tunn->geneve_port.port);
 
-	ecore_set_geneve_enable(p_hwfn, p_ptt, l2geneve_enable,
-				ipgeneve_enable);
+	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn);
 }
 
 static void
 ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
-			       struct ecore_tunn_start_params *p_src,
+			       struct ecore_tunnel_info		*p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
 	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	enum tunnel_clss type;
-
-	if (!p_src)
-		return;
-
-	p_tun->tunn_mode = p_src->tunn_mode;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf start config is not supported\n");
 		return;
 	}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+	if (!p_src)
+		return;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, true);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch)
 {
@@ -437,18 +402,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
-	if (p_tunn) {
-		if (p_tunn->update_vxlan_udp_port)
-			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						  p_tunn->vxlan_udp_port);
-
-		if (p_tunn->update_geneve_udp_port)
-			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						   p_tunn->geneve_udp_port);
-
-		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
-				       p_tunn->tunn_mode);
-	}
+	if (p_tunn)
+		ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
@@ -523,7 +478,7 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
+			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data)
 {
@@ -531,6 +486,15 @@ enum _ecore_status_t
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf update config is not supported\n");
+		return rc;
+	}
+
+	if (!p_tunn)
+		return ECORE_INVAL;
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = ecore_spq_get_cid(p_hwfn);
@@ -551,15 +515,7 @@ enum _ecore_status_t
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_tunn->update_vxlan_udp_port)
-		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					  p_tunn->vxlan_udp_port);
-
-	if (p_tunn->update_geneve_udp_port)
-		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					   p_tunn->geneve_udp_port);
-
-	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+	ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 66c9a69..33e31e4 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -68,32 +68,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  */
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch);
 
 /**
- * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
- *					update  Ramrod
- *
- * This ramrod is sent to update a tunneling configuration
- * for a physical function (PF).
- *
- * @param p_hwfn
- * @param p_tunn - pf update tunneling parameters
- * @param comp_mode - completion mode
- * @param p_comp_data - callback function
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t
-ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
-			    enum spq_mode comp_mode,
-			    struct ecore_spq_comp_cb *p_comp_data);
-
-/**
  * @brief ecore_sp_pf_update - PF Function Update Ramrod
  *
  * This ramrod updates function-related parameters. Every parameter can be
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index d52e1be..4ef93d4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,10 +335,10 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct ecore_tunn_update_params *params,
-				     uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
+				    uint8_t clss, uint64_t mode, uint64_t mask)
 {
-	memset(params, 0, sizeof(struct ecore_tunn_update_params));
+	memset(params, 0, sizeof(struct qed_tunn_update_params));
 	params->tunn_mode = mode;
 	params->tunn_mode_update_mask = mask;
 	params->update_tx_pf_clss = 1;
@@ -1707,7 +1707,8 @@ int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
@@ -1720,7 +1721,7 @@ int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 					QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &params,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
@@ -1817,7 +1818,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1872,7 +1874,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				&params, ECORE_SPQ_MODE_CB, NULL);
+				p_tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
 					params.tunn_clss_vxlan);
@@ -1906,8 +1908,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 						(1 << ECORE_MODE_VXLAN_TUNN));
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-					&params, ECORE_SPQ_MODE_CB, NULL);
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index baa8476..09b6912 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -121,6 +121,22 @@ struct qed_eth_tlvs {
 	u8 num_rxqs_full;
 };
 
+struct qed_tunn_update_params {
+	unsigned long   tunn_mode_update_mask;
+	unsigned long   tunn_mode;
+	u16             vxlan_udp_port;
+	u16             geneve_udp_port;
+	u8              update_rx_pf_clss;
+	u8              update_tx_pf_clss;
+	u8              update_vxlan_udp_port;
+	u8              update_geneve_udp_port;
+	u8              tunn_clss_vxlan;
+	u8              tunn_clss_l2geneve;
+	u8              tunn_clss_ipgeneve;
+	u8              tunn_clss_l2gre;
+	u8              tunn_clss_ipgre;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
 	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e7195b4..5c79055 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -329,20 +329,18 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
-	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->vxlan.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->vxlan.b_mode_enabled)
 		dev_info->vxlan_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
-	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_gre.b_mode_enabled && tun->ip_gre.b_mode_enabled &&
+	    tun->l2_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->gre_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
-	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_geneve.b_mode_enabled && tun->ip_geneve.b_mode_enabled &&
+	    tun->l2_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->geneve_enable = true;
 
 	dev_info->num_hwfns = edev->num_hwfns;
-- 
1.7.10.3

* [PATCH 32/61] net/qede/base: add tunnelling support for VFs
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (30 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 31/61] net/qede/base: revise tunnel APIs/structs Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 33/61] net/qede/base: formatting changes Rasesh Mody
                   ` (29 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add tunnelling support for VFs: a VF can now request tunnel
configuration changes from the PF over a new
CHANNEL_TLV_UPDATE_TUNN_PARAM mailbox message; the PF validates the
request, applies it through the usual PF update ramrod and returns the
resulting configuration in its response.
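
A minimal sketch of the request encoding, mirroring
__ecore_vf_prep_tunn_req_tlv() on the VF side and
__ecore_iov_pf_update_tun_param() on the PF side from the diff below
(stand-in types, not the driver's headers):

  #include <stdbool.h>
  #include <stdint.h>

  struct tunn_req {                       /* stand-in for the new TLV fields */
          uint16_t tun_mode_update_mask;  /* which modes the VF wants changed */
          uint16_t tunn_mode;             /* requested on/off state per mode */
  };

  enum { MODE_VXLAN_TUNN = 0 };  /* stand-in for ECORE_MODE_VXLAN_TUNN */

  /* VF side: advertise a mode bit only when asking to change it */
  static void vf_prep(struct tunn_req *req, unsigned int mask,
                      bool b_update_mode, bool b_mode_enabled)
  {
          if (b_update_mode) {
                  req->tun_mode_update_mask |= (1 << mask);
                  if (b_mode_enabled)
                          req->tunn_mode |= (1 << mask);
          }
  }

  /* PF side: honour a mode bit only when flagged in the update mask */
  static bool pf_wants_update(const struct tunn_req *req, unsigned int mask,
                              bool *enabled)
  {
          if (!(req->tun_mode_update_mask & (1 << mask)))
                  return false;
          *enabled = !!(req->tunn_mode & (1 << mask));
          return true;
  }

  int main(void)
  {
          struct tunn_req req = { 0, 0 };
          bool enabled = false;

          vf_prep(&req, MODE_VXLAN_TUNN, true, true);
          return pf_wants_update(&req, MODE_VXLAN_TUNN, &enabled) &&
                 enabled ? 0 : 1;  /* exits 0 */
  }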

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore_dev.c         |   15 ++-
 drivers/net/qede/base/ecore_sp_commands.c |   15 ++-
 drivers/net/qede/base/ecore_sriov.c       |  144 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c          |  154 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.h          |    5 +
 drivers/net/qede/base/ecore_vfpf_if.h     |   40 ++++++++
 drivers/net/qede/qede_ethdev.c            |   49 +++++----
 8 files changed, 390 insertions(+), 35 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 902c500..246cc6c 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -418,6 +418,5 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
 #define OSAL_MFW_TLV_REQ(p_hwfn) (0)
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
-
-
+#define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c80b2cb..dfb95bb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1882,6 +1882,19 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 		    p_hwfn->mcp_info->mfw_mb_length);
 }
 
+enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
+				    struct ecore_hw_init_params *p_params)
+{
+	if (p_params->p_tunn) {
+		ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn);
+		ecore_vf_pf_tunnel_param_update(p_hwfn, p_params->p_tunn);
+	}
+
+	p_hwfn->b_int_enabled = 1;
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
@@ -1914,7 +1927,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		}
 
 		if (IS_VF(p_dev)) {
-			p_hwfn->b_int_enabled = 1;
+			ecore_vf_start(p_hwfn, p_params);
 			continue;
 		}
 
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 4cacce8..8fd64d7 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -22,6 +22,7 @@
 #include "ecore_hw.h"
 #include "ecore_dcbx.h"
 #include "ecore_sriov.h"
+#include "ecore_vf.h"
 
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
@@ -137,16 +138,17 @@ static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
 
+	/* @DPDK - typecast tunnel class */
 	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
-	p_tun->vxlan.tun_cls = type;
+	p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
-	p_tun->l2_gre.tun_cls = type;
+	p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
-	p_tun->ip_gre.tun_cls = type;
+	p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
-	p_tun->l2_geneve.tun_cls = type;
+	p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
-	p_tun->ip_geneve.tun_cls = type;
+	p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type;
 }
 
 static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
@@ -486,6 +488,9 @@ enum _ecore_status_t
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_tunnel_param_update(p_hwfn, p_tunn);
+
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
 		DP_NOTICE(p_hwfn, true,
 			  "A0 chip: tunnel pf update config is not supported\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7a20d56..e7c120b 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -51,6 +51,7 @@
 	"CHANNEL_TLV_VPORT_UPDATE_RSS",
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
+	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -2140,6 +2141,146 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 					b_legacy_vf);
 }
 
+static void
+ecore_iov_pf_update_tun_response(struct pfvf_update_tunn_param_tlv *p_resp,
+				 struct ecore_tunnel_info *p_tun,
+				 u16 tunn_feature_mask)
+{
+	p_resp->tunn_feature_mask = tunn_feature_mask;
+	p_resp->vxlan_mode = p_tun->vxlan.b_mode_enabled;
+	p_resp->l2geneve_mode = p_tun->l2_geneve.b_mode_enabled;
+	p_resp->ipgeneve_mode = p_tun->ip_geneve.b_mode_enabled;
+	p_resp->l2gre_mode = p_tun->l2_gre.b_mode_enabled;
+	p_resp->ipgre_mode = p_tun->l2_gre.b_mode_enabled;
+	p_resp->vxlan_clss = p_tun->vxlan.tun_cls;
+	p_resp->l2gre_clss = p_tun->l2_gre.tun_cls;
+	p_resp->ipgre_clss = p_tun->ip_gre.tun_cls;
+	p_resp->l2geneve_clss = p_tun->l2_geneve.tun_cls;
+	p_resp->ipgeneve_clss = p_tun->ip_geneve.tun_cls;
+	p_resp->geneve_udp_port = p_tun->geneve_port.port;
+	p_resp->vxlan_udp_port = p_tun->vxlan_port.port;
+}
+
+static void
+__ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+				struct ecore_tunn_update_type *p_tun,
+				enum ecore_tunn_mode mask, u8 tun_cls)
+{
+	if (p_req->tun_mode_update_mask & (1 << mask)) {
+		p_tun->b_update_mode = true;
+
+		if (p_req->tunn_mode & (1 << mask))
+			p_tun->b_mode_enabled = true;
+	}
+
+	p_tun->tun_cls = tun_cls;
+}
+
+static void
+ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+			      struct ecore_tunn_update_type *p_tun,
+			      struct ecore_tunn_update_udp_port *p_port,
+			      enum ecore_tunn_mode mask,
+			      u8 tun_cls, u8 update_port, u16 port)
+{
+	if (update_port) {
+		p_port->b_update_port = true;
+		p_port->port = port;
+	}
+
+	__ecore_iov_pf_update_tun_param(p_req, p_tun, mask, tun_cls);
+}
+
+static bool
+ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req)
+{
+	bool b_update_requested = false;
+
+	if (p_req->tun_mode_update_mask || p_req->update_tun_cls ||
+	    p_req->update_geneve_port || p_req->update_vxlan_port)
+		b_update_requested = true;
+
+	return b_update_requested;
+}
+
+static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       struct ecore_vf_info *p_vf)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u8 status = PFVF_STATUS_SUCCESS;
+	bool b_update_required = false;
+	struct ecore_tunnel_info tunn;
+	u16 tunn_feature_mask = 0;
+
+	mbx->offset = (u8 *)mbx->reply_virt;
+
+	OSAL_MEM_ZERO(&tunn, sizeof(tunn));
+	p_req = &mbx->req_virt->tunn_param_update;
+
+	if (!ecore_iov_pf_validate_tunn_param(p_req)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No tunnel update requested by VF\n");
+		status = PFVF_STATUS_FAILURE;
+		goto send_resp;
+	}
+
+	tunn.b_update_rx_cls = p_req->update_tun_cls;
+	tunn.b_update_tx_cls = p_req->update_tun_cls;
+
+	ecore_iov_pf_update_tun_param(p_req, &tunn.vxlan, &tunn.vxlan_port,
+				      ECORE_MODE_VXLAN_TUNN, p_req->vxlan_clss,
+				      p_req->update_vxlan_port,
+				      p_req->vxlan_port);
+	ecore_iov_pf_update_tun_param(p_req, &tunn.l2_geneve, &tunn.geneve_port,
+				      ECORE_MODE_L2GENEVE_TUNN,
+				      p_req->l2geneve_clss,
+				      p_req->update_geneve_port,
+				      p_req->geneve_port);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_geneve,
+					ECORE_MODE_IPGENEVE_TUNN,
+					p_req->ipgeneve_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.l2_gre,
+					ECORE_MODE_L2GRE_TUNN,
+					p_req->l2gre_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_gre,
+					ECORE_MODE_IPGRE_TUNN,
+					p_req->ipgre_clss);
+
+	/* If PF modifies VF's req then it should
+	 * still return an error in case of partial configuration
+	 * or modified configuration as opposed to requested one.
+	 */
+	rc = OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, &tunn_feature_mask,
+						 &b_update_required, &tunn);
+
+	if (rc != ECORE_SUCCESS)
+		status = PFVF_STATUS_FAILURE;
+
+	/* If ECORE client is willing to update anything ? */
+	if (b_update_required) {
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+						 ECORE_SPQ_MODE_EBLOCK,
+						 OSAL_NULL);
+		if (rc != ECORE_SUCCESS)
+			status = PFVF_STATUS_FAILURE;
+	}
+
+send_resp:
+	p_resp = ecore_add_tlv(p_hwfn, &mbx->offset,
+			       CHANNEL_TLV_UPDATE_TUNN_PARAM, sizeof(*p_resp));
+
+	ecore_iov_pf_update_tun_response(p_resp, p_tun, tunn_feature_mask);
+	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
+}
+
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
@@ -3408,6 +3549,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_RELEASE:
 			ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
+			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index d1c6691..2845d2e 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,6 +451,160 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+__ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			     struct ecore_tunn_update_type *p_src,
+			     enum ecore_tunn_mode mask, u8 *p_cls)
+{
+	if (p_src->b_update_mode) {
+		p_req->tun_mode_update_mask |= (1 << mask);
+
+		if (p_src->b_mode_enabled)
+			p_req->tunn_mode |= (1 << mask);
+	}
+
+	*p_cls = p_src->tun_cls;
+}
+
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			   struct ecore_tunn_update_type *p_src,
+			   enum ecore_tunn_mode mask, u8 *p_cls,
+			   struct ecore_tunn_update_udp_port *p_port,
+			   u8 *p_update_port, u16 *p_udp_port)
+{
+	if (p_port->b_update_port) {
+		*p_update_port = 1;
+		*p_udp_port = p_port->port;
+	}
+
+	__ecore_vf_prep_tunn_req_tlv(p_req, p_src, mask, p_cls);
+}
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun)
+{
+	if (p_tun->vxlan.b_mode_enabled)
+		p_tun->vxlan.b_update_mode = true;
+	if (p_tun->l2_geneve.b_mode_enabled)
+		p_tun->l2_geneve.b_update_mode = true;
+	if (p_tun->ip_geneve.b_mode_enabled)
+		p_tun->ip_geneve.b_update_mode = true;
+	if (p_tun->l2_gre.b_mode_enabled)
+		p_tun->l2_gre.b_update_mode = true;
+	if (p_tun->ip_gre.b_mode_enabled)
+		p_tun->ip_gre.b_update_mode = true;
+
+	p_tun->b_update_rx_cls = true;
+	p_tun->b_update_tx_cls = true;
+}
+
+static void
+__ecore_vf_update_tunn_param(struct ecore_tunn_update_type *p_tun,
+			     u16 feature_mask, u8 tunn_mode, u8 tunn_cls,
+			     enum ecore_tunn_mode val)
+{
+	if (feature_mask & (1 << val)) {
+		p_tun->b_mode_enabled = tunn_mode;
+		p_tun->tun_cls = tunn_cls;
+	} else {
+		p_tun->b_mode_enabled = false;
+	}
+}
+
+static void
+ecore_vf_update_tunn_param(struct ecore_hwfn *p_hwfn,
+			   struct ecore_tunnel_info *p_tun,
+			   struct pfvf_update_tunn_param_tlv *p_resp)
+{
+	/* Update mode and classes provided by PF */
+	u16 feat_mask = p_resp->tunn_feature_mask;
+
+	__ecore_vf_update_tunn_param(&p_tun->vxlan, feat_mask,
+				     p_resp->vxlan_mode, p_resp->vxlan_clss,
+				     ECORE_MODE_VXLAN_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_geneve, feat_mask,
+				     p_resp->l2geneve_mode,
+				     p_resp->l2geneve_clss,
+				     ECORE_MODE_L2GENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_geneve, feat_mask,
+				     p_resp->ipgeneve_mode,
+				     p_resp->ipgeneve_clss,
+				     ECORE_MODE_IPGENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_gre, feat_mask,
+				     p_resp->l2gre_mode, p_resp->l2gre_clss,
+				     ECORE_MODE_L2GRE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_gre, feat_mask,
+				     p_resp->ipgre_mode, p_resp->ipgre_clss,
+				     ECORE_MODE_IPGRE_TUNN);
+	p_tun->geneve_port.port = p_resp->geneve_udp_port;
+	p_tun->vxlan_port.port = p_resp->vxlan_udp_port;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "tunn mode: vxlan=0x%x, l2geneve=0x%x, ipgeneve=0x%x, l2gre=0x%x, ipgre=0x%x",
+		   p_tun->vxlan.b_mode_enabled, p_tun->l2_geneve.b_mode_enabled,
+		   p_tun->ip_geneve.b_mode_enabled,
+		   p_tun->l2_gre.b_mode_enabled,
+		   p_tun->ip_gre.b_mode_enabled);
+}
+
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_src)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc;
+
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_TUNN_PARAM,
+				 sizeof(*p_req));
+
+	if (p_src->b_update_rx_cls && p_src->b_update_tx_cls)
+		p_req->update_tun_cls = 1;
+
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->vxlan, ECORE_MODE_VXLAN_TUNN,
+				   &p_req->vxlan_clss, &p_src->vxlan_port,
+				   &p_req->update_vxlan_port,
+				   &p_req->vxlan_port);
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve,
+				   ECORE_MODE_L2GENEVE_TUNN,
+				   &p_req->l2geneve_clss, &p_src->geneve_port,
+				   &p_req->update_geneve_port,
+				   &p_req->geneve_port);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_geneve,
+				     ECORE_MODE_IPGENEVE_TUNN,
+				     &p_req->ipgeneve_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_gre,
+				     ECORE_MODE_L2GRE_TUNN, &p_req->l2gre_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_gre,
+				     ECORE_MODE_IPGRE_TUNN, &p_req->ipgre_clss);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->tunn_param_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+
+	if (rc)
+		goto exit;
+
+	if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to update tunnel parameters\n");
+		rc = ECORE_INVAL;
+	}
+
+	ecore_vf_update_tunn_param(p_hwfn, p_tun, p_resp);
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 enum _ecore_status_t
 ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		      struct ecore_queue_cid *p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 1afd667..0d67054 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -258,5 +258,10 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			      struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
 
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_tunn);
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 149d092..82ed4f5 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -416,6 +416,43 @@ struct vfpf_ucast_filter_tlv {
 	u16			padding[3];
 };
 
+/* tunnel update param tlv */
+struct vfpf_update_tunn_param_tlv {
+	struct vfpf_first_tlv   first_tlv;
+
+	u8			tun_mode_update_mask;
+	u8			tunn_mode;
+	u8			update_tun_cls;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u8			update_geneve_port;
+	u8			update_vxlan_port;
+	u16			geneve_port;
+	u16			vxlan_port;
+	u8			padding[2];
+};
+
+struct pfvf_update_tunn_param_tlv {
+	struct pfvf_tlv hdr;
+
+	u16			tunn_feature_mask;
+	u8			vxlan_mode;
+	u8			l2geneve_mode;
+	u8			ipgeneve_mode;
+	u8			l2gre_mode;
+	u8			ipgre_mode;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u16			vxlan_udp_port;
+	u16			geneve_udp_port;
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
@@ -431,6 +468,7 @@ struct tlv_buffer_size {
 	struct vfpf_vport_start_tlv		start_vport;
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
+	struct vfpf_update_tunn_param_tlv	tunn_param_update;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -439,6 +477,7 @@ struct tlv_buffer_size {
 	struct pfvf_acquire_resp_tlv		acquire_resp;
 	struct tlv_buffer_size			tlv_buf_size;
 	struct pfvf_start_queue_resp_tlv	queue_start;
+	struct pfvf_update_tunn_param_tlv	tunn_param_resp;
 };
 
 /* This is a structure which is allocated in the VF, which the PF may update
@@ -552,6 +591,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_RSS,
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+	CHANNEL_TLV_UPDATE_TUNN_PARAM,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4ef93d4..257e5b2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,15 +335,15 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
-				    uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct ecore_tunnel_info *p_tunn,
+				    uint8_t clss, bool mode, bool mask)
 {
-	memset(params, 0, sizeof(struct qed_tunn_update_params));
-	params->tunn_mode = mode;
-	params->tunn_mode_update_mask = mask;
-	params->update_tx_pf_clss = 1;
-	params->update_rx_pf_clss = 1;
-	params->tunn_clss_vxlan = clss;
+	memset(p_tunn, 0, sizeof(struct ecore_tunnel_info));
+	p_tunn->vxlan.b_update_mode = mode;
+	p_tunn->vxlan.b_mode_enabled = mask;
+	p_tunn->b_update_rx_cls = true;
+	p_tunn->b_update_tx_cls = true;
+	p_tunn->vxlan.tun_cls = clss;
 }
 
 static int
@@ -1707,25 +1707,24 @@ int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn; /* @DPDK */
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	memset(&params, 0, sizeof(params));
+	memset(&tunn, 0, sizeof(tunn));
 	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
-		params.update_vxlan_udp_port = 1;
-		params.vxlan_udp_port = (add) ? tunnel_udp->udp_port :
-					QEDE_VXLAN_DEF_PORT;
+		tunn.vxlan_port.b_update_port = true;
+		tunn.vxlan_port.port = (add) ? tunnel_udp->udp_port :
+						  QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
-					params.vxlan_udp_port);
+				       tunn.vxlan_port.port);
 				return rc;
 			}
 		}
@@ -1818,8 +1817,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1868,16 +1866,14 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		qdev->vxlan_filter_type = filter_type;
 
 		DP_INFO(edev, "Enabling VXLAN tunneling\n");
-		qede_set_cmn_tunn_param(&params, clss,
-					(1 << ECORE_MODE_VXLAN_TUNN),
-					(1 << ECORE_MODE_VXLAN_TUNN));
+		qede_set_cmn_tunn_param(&tunn, clss, true, true);
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				p_tunn, ECORE_SPQ_MODE_CB, NULL);
+				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
-					params.tunn_clss_vxlan);
+				       tunn.vxlan.tun_cls);
 			}
 		}
 		qdev->num_tunn_filters++; /* Filter added successfully */
@@ -1904,16 +1900,15 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 			DP_INFO(edev, "Disabling VXLAN tunneling\n");
 
 			/* Use 0 as tunnel mode */
-			qede_set_cmn_tunn_param(&params, clss, 0,
-						(1 << ECORE_MODE_VXLAN_TUNN));
+			qede_set_cmn_tunn_param(&tunn, clss, false, true);
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
-						params.tunn_clss_vxlan);
+						tunn.vxlan.tun_cls);
 					break;
 				}
 			}
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 33/61] net/qede/base: formatting changes
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (31 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 32/61] net/qede/base: add tunnelling support for VFs Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 34/61] net/qede/base: prevent transmitter stuck condition Rasesh Mody
                   ` (28 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |   14 +--
 drivers/net/qede/base/mcp_public.h |  176 ++++++++++++++++++------------------
 2 files changed, 96 insertions(+), 94 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index f86f7ca..479a991 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -157,8 +157,8 @@ enum DP_MODULE {
 	ECORE_MSG_CXT		= 0x800000,
 	ECORE_MSG_LL2		= 0x1000000,
 	ECORE_MSG_ILT		= 0x2000000,
-	ECORE_MSG_RDMA          = 0x4000000,
-	ECORE_MSG_DEBUG         = 0x8000000,
+	ECORE_MSG_RDMA		= 0x4000000,
+	ECORE_MSG_DEBUG		= 0x8000000,
 	/* to be added...up to 0x8000000 */
 };
 #endif
@@ -480,7 +480,7 @@ struct ecore_hwfn {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	bool				first_on_engine;
 	bool				hw_init_done;
@@ -535,8 +535,8 @@ struct ecore_hwfn {
 	u32				rdma_prs_search_reg;
 
 	/* Array of sb_info of all status blocks */
-	struct ecore_sb_info            *sbs_info[MAX_SB_PER_PF_MIMD];
-	u16                             num_sbs;
+	struct ecore_sb_info		*sbs_info[MAX_SB_PER_PF_MIMD];
+	u16				num_sbs;
 
 	struct ecore_cxt_mngr		*p_cxt_mngr;
 
@@ -608,7 +608,7 @@ struct ecore_dev {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	u8				type;
 #define ECORE_DEV_TYPE_BB	(0 << 0)
@@ -816,7 +816,7 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 #define PQ_FLAGS_MCOS	(1 << 1)
 #define PQ_FLAGS_LB	(1 << 2)
 #define PQ_FLAGS_OOO	(1 << 3)
-#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_ACK	(1 << 4)
 #define PQ_FLAGS_OFLD	(1 << 5)
 #define PQ_FLAGS_VFS	(1 << 6)
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 969dd5a..28909fb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -586,14 +586,14 @@ struct public_port {
 	u32 link_status;
 #define LINK_STATUS_LINK_UP				0x00000001
 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001e
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD			(1 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD			(2 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_10G			(3 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_20G			(4 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_40G			(5 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_50G			(6 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_100G			(7 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_25G			(8 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(1 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(2 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_10G		(3 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_20G		(4 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_40G		(5 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_50G		(6 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_100G		(7 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_25G		(8 << 1)
 #define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
 #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
 #define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
@@ -607,10 +607,10 @@ struct public_port {
 #define LINK_STATUS_LINK_PARTNER_100G_CAPABLE		0x00008000
 #define LINK_STATUS_LINK_PARTNER_25G_CAPABLE		0x00010000
 #define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
-#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE		(0 << 18)
-#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE		(1 << 18)
-#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE		(2 << 18)
-#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE			(3 << 18)
+#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0 << 18)
+#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1 << 18)
+#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2 << 18)
+#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3 << 18)
 #define LINK_STATUS_SFP_TX_FAULT			0x00100000
 #define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00200000
 #define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00400000
@@ -619,9 +619,9 @@ struct public_port {
 #define LINK_STATUS_MAC_REMOTE_FAULT			0x02000000
 #define LINK_STATUS_UNSUPPORTED_SPD_REQ			0x04000000
 #define LINK_STATUS_FEC_MODE_MASK			0x38000000
-#define LINK_STATUS_FEC_MODE_NONE				(0 << 27)
-#define LINK_STATUS_FEC_MODE_FIRECODE_CL74			(1 << 27)
-#define LINK_STATUS_FEC_MODE_RS_CL91				(2 << 27)
+#define LINK_STATUS_FEC_MODE_NONE			(0 << 27)
+#define LINK_STATUS_FEC_MODE_FIRECODE_CL74		(1 << 27)
+#define LINK_STATUS_FEC_MODE_RS_CL91			(2 << 27)
 #define LINK_STATUS_EXT_PHY_LINK_UP			0x40000000
 
 	u32 link_status1;
@@ -762,23 +762,23 @@ struct public_port {
 	 *          When 1'b1 those bits contains a value times 16 microseconds.
 	 */
 	u32 eee_status;
-	#define EEE_TIMER_MASK		0x000fffff
-	#define EEE_ADV_STATUS_MASK	0x00f00000
-		#define EEE_1G_ADV	(1 << 1)
-		#define EEE_10G_ADV	(1 << 2)
-	#define EEE_ADV_STATUS_SHIFT	20
-	#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
-	#define EEE_LP_ADV_STATUS_SHIFT	24
-	#define EEE_REQUESTED_BIT	0x10000000
-	#define EEE_LPI_REQUESTED_BIT	0x20000000
-	#define EEE_ACTIVE_BIT		0x40000000
-	#define EEE_TIME_OUTPUT_BIT	0x80000000
+#define EEE_TIMER_MASK		0x000fffff
+#define EEE_ADV_STATUS_MASK	0x00f00000
+#define EEE_1G_ADV	(1 << 1)
+#define EEE_10G_ADV	(1 << 2)
+#define EEE_ADV_STATUS_SHIFT	20
+#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
+#define EEE_LP_ADV_STATUS_SHIFT	24
+#define EEE_REQUESTED_BIT	0x10000000
+#define EEE_LPI_REQUESTED_BIT	0x20000000
+#define EEE_ACTIVE_BIT		0x40000000
+#define EEE_TIME_OUTPUT_BIT	0x80000000
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
-	#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
-	#define EEE_REMOTE_TW_TX_SHIFT	0
-	#define EEE_REMOTE_TW_RX_MASK	0xffff0000
-	#define EEE_REMOTE_TW_RX_SHIFT	16
+#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
+#define EEE_REMOTE_TW_TX_SHIFT	0
+#define EEE_REMOTE_TW_RX_MASK	0xffff0000
+#define EEE_REMOTE_TW_RX_SHIFT	16
 };
 
 /**************************************/
@@ -1157,15 +1157,17 @@ struct public_drv_mb {
  * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
  */
 #define DRV_MSG_CODE_GET_VMAC                   0x00120000
-	#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
-	#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
-	#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
+#define DRV_MSG_CODE_VMAC_TYPE_SHIFT            4
+#define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
+#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
+#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
+#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
 /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
 #define DRV_MSG_CODE_GET_STATS                  0x00130000
-	#define DRV_MSG_CODE_STATS_TYPE_LAN             1
-	#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
-	#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
-	#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
+#define DRV_MSG_CODE_STATS_TYPE_LAN             1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
 /* Host shall provide buffer and size for MFW  */
 #define DRV_MSG_CODE_PMD_DIAG_DUMP		0x00140000
 /* Host shall provide buffer and size for MFW  */
@@ -1193,8 +1195,8 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_MASK_PARITIES		0x001a0000
 /* param[0] - Simulate fan failure,  param[1] - simulate over temp. */
 #define DRV_MSG_CODE_INDUCE_FAILURE		0x001b0000
-	#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
-	#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
+#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
+#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_READ			0x001c0000
 /* Param: [0:15] - gpio number, [16:31] - gpio value */
@@ -1215,50 +1217,50 @@ struct public_drv_mb {
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
-	/* request resource ownership with default aging */
-	#define RESOURCE_OPCODE_REQ			1
-	/* request resource ownership without aging */
-	#define RESOURCE_OPCODE_REQ_WO_AGING		2
-	/* request resource ownership with specific aging timer (in seconds) */
-	#define RESOURCE_OPCODE_REQ_W_AGING		3
-	#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
-	/* force resource release */
-	#define RESOURCE_OPCODE_FORCE_RELEASE		5
-	/* resource is free and granted to requester */
-	#define RESOURCE_OPCODE_GNT			1
-	/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
-	 * 16 = MFW, 17 = diag over serial
-	 */
-	#define RESOURCE_OPCODE_BUSY			2
-	/* indicate release request was acknowledged */
-	#define RESOURCE_OPCODE_RELEASED		3
-	/* indicate release request was previously received by other owner */
-	#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
-	/* indicate wrong owner during release */
-	#define RESOURCE_OPCODE_WRONG_OWNER		5
-	#define RESOURCE_OPCODE_UNKNOWN_CMD		255
-	/* dedicate resource 0 for dump */
-	#define RESOURCE_DUMP				0
+/* request resource ownership with default aging */
+#define RESOURCE_OPCODE_REQ			1
+/* request resource ownership without aging */
+#define RESOURCE_OPCODE_REQ_WO_AGING		2
+/* request resource ownership with specific aging timer (in seconds) */
+#define RESOURCE_OPCODE_REQ_W_AGING		3
+#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
+/* force resource release */
+#define RESOURCE_OPCODE_FORCE_RELEASE		5
+/* resource is free and granted to requester */
+#define RESOURCE_OPCODE_GNT			1
+/* resource is busy, param[7:0] indicates the owner as follows:
+ * 0-15 = PF0-15, 16 = MFW, 17 = diag over serial
+ */
+#define RESOURCE_OPCODE_BUSY			2
+/* indicate release request was acknowledged */
+#define RESOURCE_OPCODE_RELEASED		3
+/* indicate release request was previously received by other owner */
+#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
+/* indicate wrong owner during release */
+#define RESOURCE_OPCODE_WRONG_OWNER		5
+#define RESOURCE_OPCODE_UNKNOWN_CMD		255
+/* dedicate resource 0 for dump */
+#define RESOURCE_DUMP				0
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-	#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
-	/* acknowledge reception of error indication */
-	#define DRV_MSG_CODE_MDUMP_ACK			0x01
-	/* set epoc and personality as follow: drv_data[3:0] - epoch,
-	 * drv_data[7:4] - personality
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
-	/* trigger crash dump procedure */
-	#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
-	/* Request valid logs and config words */
-	#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
-	/* Set triggers mask. drv_mb_param should indicate (bitwise) which
-	 * trigger enabled
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
-	/* Clear all logs */
-	#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
+/* acknowledge reception of error indication */
+#define DRV_MSG_CODE_MDUMP_ACK			0x01
+/* set epoch and personality as follows: drv_data[3:0] - epoch,
+ * drv_data[7:4] - personality
+ */
+#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
+/* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
+/* Request valid logs and config words */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
+/* Set the triggers mask. drv_mb_param should indicate (bitwise) which
+ * triggers are enabled
+ */
+#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
+/* Clear all logs */
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
 #define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_INFO			0x00270000
@@ -1266,12 +1268,12 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
 /* Value should be placed in union */
 #define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-	#define DRV_MB_PARAM_ADDR_SHIFT			0
-	#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
-	#define DRV_MB_PARAM_DEVAD_SHIFT		16
-	#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-	#define DRV_MB_PARAM_PORT_SHIFT			21
-	#define DRV_MB_PARAM_PORT_MASK			0x00600000
+#define DRV_MB_PARAM_ADDR_SHIFT			0
+#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
+#define DRV_MB_PARAM_DEVAD_SHIFT		16
+#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
+#define DRV_MB_PARAM_PORT_SHIFT			21
+#define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
@@ -1510,7 +1512,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
 
-/* mdump related response codes */
+	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
 #define FW_MSG_CODE_MDUMP_ALLOC_FAILED		0x00020000
 #define FW_MSG_CODE_MDUMP_INVALID_CMD		0x00030000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 34/61] net/qede/base: prevent transmitter stuck condition
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (32 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 33/61] net/qede/base: formatting changes Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 35/61] net/qede/base: add mask/shift defines for resource command Rasesh Mody
                   ` (27 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Set the OOO TC properly to prevent a transmitter-stuck condition
due to credit underruns.

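For reference, a condensed sketch of the selection logic introduced
below (the helper name and stand-alone form are hypothetical; the
defines and the ECORE_MFW_GET_FIELD() helper are the base driver's):

static u8 resolve_ooo_tc(u32 ets_flags, bool four_port)
{
	/* Prefer the TC the MFW advertises via the DCBX ETS flags */
	u8 ooo_tc = ECORE_MFW_GET_FIELD(ets_flags, DCBX_OOO_TC);

	/* Fall back to the deprecated fixed TCs when nothing was
	 * advertised: 3 on AH 4-port, 4 otherwise
	 */
	if (!ooo_tc)
		ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC
				   : DCBX_TCP_OOO_TC;

	return ooo_tc;
}
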
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    4 +---
 drivers/net/qede/base/ecore_dcbx.c |    6 ++----
 drivers/net/qede/base/ecore_dev.c  |   19 ++++++++++++++-----
 drivers/net/qede/base/mcp_public.h |   12 ++++++++----
 4 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 479a991..c9b1b5a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -358,9 +358,6 @@ struct ecore_hw_info {
 
 	u8 num_active_tc;
 
-	/* Traffic class used for tcp out of order traffic */
-	u8 ooo_tc;
-
 	/* The traffic class used by PF for it's offloaded protocol */
 	u8 offload_tc;
 
@@ -441,6 +438,7 @@ struct ecore_qm_info {
 	u16			num_vf_pqs;
 	u8			num_vports;
 	u8			max_phys_tcs_per_port;
+	u8			ooo_tc;
 	bool			pf_rl_en;
 	bool			pf_wfq_en;
 	bool			vport_rl_en;
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index ca3aece..e82946a 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -129,11 +129,8 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality) {
+	if (p_hwfn->hw_info.personality == personality)
 		p_hwfn->hw_info.offload_tc = tc;
-		if (personality == ECORE_PCI_ISCSI)
-			p_hwfn->hw_info.ooo_tc = DCBX_ISCSI_OOO_TC;
-	}
 }
 
 /* Update app protocol data and hw_info fields with the TLV info */
@@ -317,6 +314,7 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 
 	p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
 						    DCBX_ETS_MAX_TCS);
+	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
 	data.pf_id = p_hwfn->rel_pf_id;
 	data.dcbx_enabled = !!dcbx_version;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index dfb95bb..704bd8f 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -297,6 +297,7 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
 static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	bool four_port;
 
 	/* pq and vport bases for this PF */
 	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
@@ -306,10 +307,19 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 	qm_info->vport_rl_en = 1;
 	qm_info->vport_wfq_en = 1;
 
+	/* TC config is different for AH 4 port */
+	four_port = p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2;
+
 	/* in AH 4 port we have fewer TCs per port */
-	qm_info->max_phys_tcs_per_port =
-		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
-			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 :
+						     NUM_OF_PHYS_TCS;
+
+	/* unless the MFW indicated otherwise, ooo_tc should be 3 for AH
+	 * 4-port and 4 for all other configurations
+	 */
+	if (!qm_info->ooo_tc)
+		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
+					      DCBX_TCP_OOO_TC;
 }
 
 /* initialize qm vport params */
@@ -538,8 +548,7 @@ static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
-			 PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_pq(p_hwfn, qm_info, qm_info->ooo_tc, PQ_INIT_SHARE_VPORT);
 }
 
 static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 28909fb..bd34557 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -294,16 +294,20 @@ struct dcbx_ets_feature {
 #define DCBX_ETS_CBS_SHIFT                      3
 #define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
 #define DCBX_ETS_MAX_TCS_SHIFT                  4
-#define DCBX_ISCSI_OOO_TC_MASK			0x00000f00
-#define DCBX_ISCSI_OOO_TC_SHIFT                 8
+#define DCBX_OOO_TC_MASK                        0x00000f00
+#define DCBX_OOO_TC_SHIFT                       8
 /* Entries in the tc table are organized such that the leftmost is pri 0 and
  * the rightmost is pri 7
  */
 
 	u32  pri_tc_tbl[1];
-#define DCBX_ISCSI_OOO_TC			(4)
+/* Fixed TCP OOO TC usage is deprecated and used only for driver backward
+ * compatibility
+ */
+#define DCBX_TCP_OOO_TC				(4)
+#define DCBX_TCP_OOO_K2_4PORT_TC		(3)
 
-#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_ISCSI_OOO_TC + 1)
+#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_TCP_OOO_TC + 1)
 #define DCBX_CEE_STRICT_PRIORITY		0xf
/* Entries in the tc table are organized such that the leftmost is pri 0 and
 * the rightmost is pri 7
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 35/61] net/qede/base: add mask/shift defines for resource command
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (33 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 34/61] net/qede/base: prevent transmitter stuck condition Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 36/61] net/qede/base: add API for using MFW resource lock Rasesh Mody
                   ` (26 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several mask/shift defines for the resource mailbox command.

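For illustration only (the helper below is hypothetical, not part of
the patch), a request param for DRV_MSG_CODE_RESOURCE_CMD would be
composed from the new defines like so:

static u32 resc_cmd_req_param(u8 resc_num, u8 opcode, u8 age)
{
	u32 param = 0;

	/* Param[4:0] - resource number, Param[7:5] - opcode,
	 * Param[15:8] - aging timeout in seconds
	 */
	param |= ((u32)resc_num << RESOURCE_CMD_REQ_RESC_SHIFT) &
		 RESOURCE_CMD_REQ_RESC_MASK;
	param |= ((u32)opcode << RESOURCE_CMD_REQ_OPCODE_SHIFT) &
		 RESOURCE_CMD_REQ_OPCODE_MASK;
	param |= ((u32)age << RESOURCE_CMD_REQ_AGE_SHIFT) &
		 RESOURCE_CMD_REQ_AGE_MASK;

	return param;
}

The next patch in the series wraps exactly this pattern in a generic
ECORE_MFW_SET_FIELD() macro.
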
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index bd34557..1b1ecd2 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1217,10 +1217,16 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_TIMESTAMP                  0x00210000
 /* This is an empty mailbox; just return OK */
 #define DRV_MSG_CODE_EMPTY_MB			0x00220000
+
 /* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
+
+#define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
+#define RESOURCE_CMD_REQ_RESC_SHIFT		0
+#define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
+#define RESOURCE_CMD_REQ_OPCODE_SHIFT		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
 /* request resource ownership without aging */
@@ -1230,6 +1236,13 @@ struct public_drv_mb {
 #define RESOURCE_OPCODE_RELEASE			4 /* release resource */
 /* force resource release */
 #define RESOURCE_OPCODE_FORCE_RELEASE		5
+#define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
+#define RESOURCE_CMD_REQ_AGE_SHIFT		8
+
+#define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
+#define RESOURCE_CMD_RSP_OWNER_SHIFT		0
+#define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
+#define RESOURCE_CMD_RSP_OPCODE_SHIFT		8
 /* resource is free and granted to requester */
 #define RESOURCE_OPCODE_GNT			1
 /* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
@@ -1243,8 +1256,10 @@ struct public_drv_mb {
 /* indicate wrong owner during release */
 #define RESOURCE_OPCODE_WRONG_OWNER		5
 #define RESOURCE_OPCODE_UNKNOWN_CMD		255
+
 /* dedicate resource 0 for dump */
 #define RESOURCE_DUMP				0
+
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 36/61] net/qede/base: add API for using MFW resource lock
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (34 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 35/61] net/qede/base: add mask/shift defines for resource command Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 37/61] net/qede/base: remove clock slowdown option Rasesh Mody
                   ` (25 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a base driver API for using the Management FW (MFW) resource lock.

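A minimal caller sketch (the function is hypothetical and error
handling is trimmed) of the lock/unlock pair added below, using the
dump resource as an example:

static enum _ecore_status_t example_with_mfw_lock(struct ecore_hwfn *p_hwfn,
						  struct ecore_ptt *p_ptt)
{
	bool granted = false, released = false;
	enum _ecore_status_t rc;
	u8 owner;

	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, RESOURCE_DUMP,
				 ECORE_MCP_RESC_LOCK_TO_DEFAULT,
				 &granted, &owner);
	if (rc != ECORE_SUCCESS)
		return rc;
	if (!granted)	/* 'owner' identifies the current holder */
		return ECORE_BUSY;

	/* ... section serialized across PFs by the MFW ... */

	return ecore_mcp_resc_unlock(p_hwfn, p_ptt, RESOURCE_DUMP,
				     false /* don't force */, &released);
}
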
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    9 +++
 drivers/net/qede/base/ecore_dcbx.h |    3 -
 drivers/net/qede/base/ecore_mcp.c  |  143 ++++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_mcp.h  |   41 +++++++++++
 4 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index c9b1b5a..acf2244 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -86,6 +86,15 @@ enum ecore_nvm_cmd {
 	(((value) >> (name##_SHIFT)) & name##_MASK)
 #endif
 
+#define ECORE_MFW_GET_FIELD(name, field)				\
+	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
+#define ECORE_MFW_SET_FIELD(name, field, value)				\
+do {									\
+	(name) &= ~((field ## _MASK) << (field ## _SHIFT));		\
+	(name) |= (((value) << (field ## _SHIFT)) & (field ## _MASK));	\
+} while (0)
+
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 {
 	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 2ce4465..0830014 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -17,9 +17,6 @@
 #include "ecore_hsi_common.h"
 #include "ecore_dcbx_api.h"
 
-#define ECORE_MFW_GET_FIELD(name, field) \
-	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
-
 struct ecore_dcbx_info {
 	struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index c5cc827..73cf4db 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2503,3 +2503,146 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+			   p_mcp_resp, p_mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* A zero response implies that the resource command is not supported */
+	if (!*p_mcp_resp)
+		return ECORE_NOTIMPL;
+
+	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
+		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
+
+		DP_NOTICE(p_hwfn, false,
+			  "The resource command is unknown to the MFW [param 0x%08x, opcode %d]\n",
+			  param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	switch (timeout) {
+	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
+		opcode = RESOURCE_OPCODE_REQ;
+		timeout = 0;
+		break;
+	case ECORE_MCP_RESC_LOCK_TO_NONE:
+		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
+		timeout = 0;
+		break;
+	default:
+		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		break;
+	}
+
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
+		   param, timeout, opcode, resource_num);
+
+	/* Attempt to acquire the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
+		   mcp_param, opcode, *p_owner);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_GNT:
+		*p_granted = true;
+		break;
+	case RESOURCE_OPCODE_BUSY:
+		*p_granted = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource lock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
+		       : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
+		   param, opcode, resource_num);
+
+	/* Attempt to release the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock response: mcp_param 0x%08x [opcode %d]\n",
+		   mcp_param, opcode);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
+		DP_INFO(p_hwfn,
+			"Resource unlock request for an already released resource [resc_num %d]\n",
+			resource_num);
+		/* Fallthrough */
+	case RESOURCE_OPCODE_RELEASED:
+		*p_released = true;
+		break;
+	case RESOURCE_OPCODE_WRONG_OWNER:
+		*p_released = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource unlock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 0708923..7a81516 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -361,4 +361,45 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
+#define ECORE_MCP_RESC_LOCK_TO_NONE	255
+
+/**
+ * @brief Acquires MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num - valid values are 0..31
+ *  @param timeout - lock timeout value in seconds
+ *                   (1..254, '0' - default value, '255' - no timeout).
+ *  @param p_granted - will be filled as true if the resource is free and
+ *                     granted, or false if it is busy.
+ *  @param p_owner - A pointer to a variable to be filled with the resource
+ *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner);
+
+/**
+ * @brief Releases MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num
+ *  @param force - allows releasing a resource even if it belongs to another PF
+ *  @param p_released - will be filled as true if the resource is released (or
+ *			has already been released), and false if the resource is
+ *			acquired by another PF and the `force' flag was not set.
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released);
+
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 37/61] net/qede/base: remove clock slowdown option
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (35 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 36/61] net/qede/base: add API for using MFW resource lock Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 38/61] net/qede/base: add new image types Rasesh Mody
                   ` (24 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove the clock slowdown NVM config option, as it is not supported
on current chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 4202337..4e58835 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -72,10 +72,12 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
 		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
 		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \
+								0x80000000
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \
+								0x0
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1
 	u32 engineering_change[3]; /* 0x4 */
 	u32 manufacturing_id; /* 0x10 */
 	u32 serial_number[4]; /* 0x14 */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 38/61] net/qede/base: add new image types
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (36 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 37/61] net/qede/base: remove clock slowdown option Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 39/61] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
                   ` (23 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new image types - RECOVERY and PK (public key) - towards the
second phase of NVRAM security support.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 1b1ecd2..d3cbc96 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1502,6 +1502,10 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK	0x00400000
 /* MFW reject "mcp reset" command if one of the drivers is up */
 #define FW_MSG_CODE_MCP_RESET_REJECT		0x00600000
+#define FW_MSG_CODE_NVM_FAILED_CALC_HASH	0x00310000
+#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING	0x00320000
+#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY	0x00330000
+
 #define FW_MSG_CODE_PHY_OK			0x00110000
 #define FW_MSG_CODE_PHY_ERROR			0x00120000
 #define FW_MSG_CODE_SET_SECURE_MODE_ERROR	0x00130000
@@ -1530,6 +1534,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE	0x00710000
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
+#define FW_MSG_CODE_RECOVERY_MODE		0x00740000
 
 	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 39/61] net/qede/base: use L2-handles for RSS configuration
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (37 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 38/61] net/qede/base: add new image types Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 40/61] net/qede/base: change valloc to vzalloc Rasesh Mody
                   ` (22 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move the RSS configuration to use L2 queue handles instead of queue ids.

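A hypothetical PMD-side sketch of the new contract: the indirection
table now carries the rx-queue handles returned at queue-start time
instead of absolute queue ids, and the ecore layer derives the
absolute id from each handle internally:

static void fill_rss_ind_table(struct ecore_rss_params *rss,
			       void **rxq_handles, u16 num_rx_queues)
{
	u16 i;

	/* Spread the active rx queues across the whole table */
	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
		rss->rss_ind_table[i] = rxq_handles[i % num_rx_queues];

	rss->update_rss_ind_table = 1;
	rss->rss_table_size_log = 7;	/* 2^7 table entries */
}
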
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c     |   48 ++++++++++++++++++-------
 drivers/net/qede/base/ecore_l2.h     |    2 ++
 drivers/net/qede/base/ecore_l2_api.h |    4 ++-
 drivers/net/qede/base/ecore_sriov.c  |   66 +++++++++++++++++++++-------------
 drivers/net/qede/base/ecore_vf.c     |   13 +++++--
 drivers/net/qede/qede_ethdev.c       |    4 +--
 6 files changed, 95 insertions(+), 42 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 352620a..2635213 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -59,6 +59,7 @@ struct ecore_queue_cid *
 	p_cid->cid = cid;
 	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
+	p_cid->p_owner = p_hwfn;
 
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
@@ -267,10 +268,9 @@ enum _ecore_status_t
 			  struct vport_update_ramrod_data *p_ramrod,
 			  struct ecore_rss_params *p_rss)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct eth_vport_rss_config *p_config;
-	u16 abs_l2_queue = 0;
-	int i;
+	int i, table_size;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (!p_rss) {
 		p_ramrod->common.update_rss_flg = 0;
@@ -324,16 +324,40 @@ enum _ecore_status_t
 		   p_config->capabilities,
 		   p_config->update_rss_ind_table, p_config->update_rss_key);
 
-	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		rc = ecore_fw_l2_queue(p_hwfn,
-				       p_rss->rss_ind_table[i],
-				       &abs_l2_queue);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+	table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE,
+				1 << p_config->tbl_size);
+	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_queue = p_rss->rss_ind_table[i];
 
-		p_config->indirection_table[i] = OSAL_CPU_TO_LE16(abs_l2_queue);
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "i= %d, queue = %d\n",
-			   i, p_config->indirection_table[i]);
+		if (!p_queue)
+			return ECORE_INVAL;
+
+		p_config->indirection_table[i] =
+				OSAL_CPU_TO_LE16(p_queue->abs.queue_id);
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "Configured RSS indirection table [%d entries]:\n",
+		   table_size);
+	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i += 0x10) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "%04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x\n",
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 1]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 2]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 3]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 4]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 5]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 6]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
+			 OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
 	}
 
 	for (i = 0; i < 10; i++)
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index c136389..4b0ccb4 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -36,6 +36,8 @@ struct ecore_queue_cid {
 
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
+
+	struct ecore_hwfn *p_owner;
 };
 
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index af316d3..5a7db76 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -59,7 +59,9 @@ struct ecore_rss_params {
 	u8 update_rss_key;
 	u8 rss_caps;
 	u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
-	u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
+
+	/* Indirection table consists of rx queue handles */
+	void *rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	u32 rss_key[ECORE_RSS_KEY_SIZE];
 };
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index e7c120b..9a2943f 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2707,12 +2707,14 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 			      struct ecore_vf_info *vf,
 			      struct ecore_sp_vport_update_params *p_data,
 			      struct ecore_rss_params *p_rss,
-			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+			      struct ecore_iov_vf_mbx *p_mbx,
+			      u16 *tlvs_mask, u16 *tlvs_accepted)
 {
 	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
-	u16 i, q_idx, max_q_idx;
+	bool b_reject = false;
 	u16 table_size;
+	u16 i, q_idx;
 
 	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
 	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
@@ -2740,36 +2742,38 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 	p_rss->rss_eng_id = vf->relative_vf_id + 1;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
-	OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
-		    sizeof(p_rss->rss_ind_table));
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
 		    sizeof(p_rss->rss_key));
 
 	table_size = OSAL_MIN_T(u16, OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
 				(1 << p_rss_tlv->rss_table_size_log));
 
-	max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
-
 	for (i = 0; i < table_size; i++) {
-		u16 index = vf->vf_queues[0].fw_rx_qid;
+		q_idx = p_rss_tlv->rss_ind_table[i];
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
 
-		q_idx = p_rss->rss_ind_table[i];
-		if (q_idx >= max_q_idx)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d,"
-				  " rxq is out of range\n",
-				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].p_rx_cid)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d, rxq is not active\n",
-				  i, q_idx);
-		else
-			index = vf->vf_queues[q_idx].fw_rx_qid;
-		p_rss->rss_ind_table[i] = index;
+		if (!vf->vf_queues[q_idx].p_rx_cid) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
+
+		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
 	}
 
 	p_data->rss_params = p_rss;
+out:
 	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+	if (!b_reject)
+		*tlvs_accepted |= 1 << ECORE_IOV_VP_UPDATE_RSS;
 }
 
 static void
@@ -2825,11 +2829,11 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  struct ecore_vf_info *vf)
 {
+	struct ecore_rss_params *p_rss_params = OSAL_NULL;
 	struct ecore_sp_vport_update_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct ecore_sge_tpa_params sge_tpa_params;
 	u16 tlvs_mask = 0, tlvs_accepted = 0;
-	struct ecore_rss_params rss_params;
 	u8 status = PFVF_STATUS_SUCCESS;
 	u16 length;
 	enum _ecore_status_t rc;
@@ -2844,6 +2848,12 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
+	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	if (p_rss_params == OSAL_NULL) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.opaque_fid = vf->opaque_fid;
 	params.vport_id = vf->vport_id;
@@ -2857,19 +2867,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
-	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
-				      mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
 					  &sge_tpa_params, mbx, &tlvs_mask);
 
+	tlvs_accepted = tlvs_mask;
+
+	/* Some of the extended TLVs need to be validated first; in that case,
+	 * they can update the mask without updating the accepted mask [so
+	 * that the PF can communicate to the VF it has rejected the request].
+	 */
+	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, p_rss_params,
+				      mbx, &tlvs_mask, &tlvs_accepted);
+
 	/* Just log a message if there is no single extended tlv in buffer.
 	 * When all features of vport update ramrod would be requested by VF
 	 * as extended TLVs in buffer then an error can be returned in response
 	 * if there is no extended TLV present in buffer.
 	 */
-	tlvs_accepted = tlvs_mask;
-
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2897,6 +2912,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_FAILURE;
 
 out:
+	OSAL_VFREE(p_hwfn->p_dev, p_rss_params);
 	length = ecore_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
 						    tlvs_mask, tlvs_accepted);
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 2845d2e..be3bc5f 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1132,6 +1132,7 @@ enum _ecore_status_t
 	if (p_params->rss_params) {
 		struct ecore_rss_params *rss_params = p_params->rss_params;
 		struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+		int i, table_size;
 
 		size = sizeof(struct vfpf_vport_update_rss_tlv);
 		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -1153,8 +1154,16 @@ enum _ecore_status_t
 		p_rss_tlv->rss_enable = rss_params->rss_enable;
 		p_rss_tlv->rss_caps = rss_params->rss_caps;
 		p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
-		OSAL_MEMCPY(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
-			    sizeof(rss_params->rss_ind_table));
+
+		table_size = OSAL_MIN_T(int, T_ETH_INDIRECTION_TABLE_SIZE,
+					1 << p_rss_tlv->rss_table_size_log);
+		for (i = 0; i < table_size; i++) {
+			struct ecore_queue_cid *p_queue;
+
+			p_queue = rss_params->rss_ind_table[i];
+			p_rss_tlv->rss_ind_table[i] = p_queue->rel.queue_id;
+		}
+
 		OSAL_MEMCPY(p_rss_tlv->rss_key, rss_params->rss_key,
 			    sizeof(rss_params->rss_key));
 	}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 257e5b2..6fbd898 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1607,14 +1607,14 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = reta_conf[idx].reta[shift];
-			params.rss_ind_table[i] = entry;
+			params.rss_ind_table[i] = &entry;
 		}
 	}
 
 	/* Fix up RETA for CMT mode device */
 	if (edev->num_hwfns > 1)
 		qdev->rss_enable = qed_update_rss_parm_cmt(edev,
-					&params.rss_ind_table[0]);
+					params.rss_ind_table[0]);
 	params.update_rss_ind_table = 1;
 	params.rss_table_size_log = 7;
 	params.update_rss_config = 1;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 40/61] net/qede/base: change valloc to vzalloc
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (38 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 39/61] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
                   ` (21 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change OSAL_VALLOC() to OSAL_VZALLOC(), which also zeroes the allocated
memory, and drop the now-redundant explicit zeroing at the call sites.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    2 +-
 drivers/net/qede/base/ecore_dev.c     |    3 +--
 drivers/net/qede/base/ecore_l2.c      |    3 +--
 drivers/net/qede/base/ecore_mng_tlv.c |    5 ++---
 drivers/net/qede/base/ecore_sriov.c   |    2 +-
 5 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 246cc6c..f361791 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -89,7 +89,7 @@
 #define OSAL_ALLOC(dev, GFP, size) rte_malloc("qede", size, 0)
 #define OSAL_ZALLOC(dev, GFP, size) rte_zmalloc("qede", size, 0)
 #define OSAL_CALLOC(dev, GFP, num, size) rte_calloc("qede", num, size, 0)
-#define OSAL_VALLOC(dev, size) rte_malloc("qede", size, 0)
+#define OSAL_VZALLOC(dev, size) rte_zmalloc("qede", size, 0)
 #define OSAL_FREE(dev, memory) rte_free((void *)memory)
 #define OSAL_VFREE(dev, memory) OSAL_FREE(dev, memory)
 #define OSAL_MEM_ZERO(mem, size) bzero(mem, size)
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 704bd8f..816d790 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3725,13 +3725,12 @@ void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
 	u32 page_cnt = p_chain->page_cnt, size, i;
 
 	size = page_cnt * sizeof(*pp_virt_addr_tbl);
-	pp_virt_addr_tbl = (void **)OSAL_VALLOC(p_dev, size);
+	pp_virt_addr_tbl = (void **)OSAL_VZALLOC(p_dev, size);
 	if (!pp_virt_addr_tbl) {
 		DP_NOTICE(p_dev, true,
 			  "Failed to allocate memory for the chain virtual addresses table\n");
 		return ECORE_NOMEM;
 	}
-	OSAL_MEM_ZERO(pp_virt_addr_tbl, size);
 
 	/* The allocation of the PBL table is done with its full size, since it
 	 * is expected to be successive.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 2635213..4d26e19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -50,10 +50,9 @@ struct ecore_queue_cid *
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
 	if (p_cid == OSAL_NULL)
 		return OSAL_NULL;
-	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index 0065d12..0bf1be8 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1413,11 +1413,10 @@
 	u32 offset;
 	int len;
 
-	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
 	if (!p_tlv_data)
 		return ECORE_NOMEM;
 
-	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
 	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
 		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
 		return ECORE_INVAL;
@@ -1487,7 +1486,7 @@ enum _ecore_status_t
 		goto drv_done;
 	}
 
-	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size);
 	if (!p_mfw_buf) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed allocate memory for p_mfw_buf\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 9a2943f..af27d02 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2848,7 +2848,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	p_rss_params = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
 	if (p_rss_params == OSAL_NULL) {
 		status = PFVF_STATUS_FAILURE;
 		goto out;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 41/61] net/qede/base: add support for previous driver unload
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (39 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 40/61] net/qede/base: change valloc to vzalloc Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 42/61] net/qede/base: add non-l2 dcbx tlv application support Rasesh Mody
                   ` (20 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a new driver/management FW load request sequence to handle a
previous driver unload, including HSI version negotiation and
forced-load handling.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 ++
 drivers/net/qede/base/ecore_dev.c     |   43 ++--
 drivers/net/qede/base/ecore_dev_api.h |   30 ++-
 drivers/net/qede/base/ecore_mcp.c     |  369 ++++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.h     |   40 ++--
 drivers/net/qede/base/mcp_public.h    |   56 ++++-
 drivers/net/qede/qede_main.c          |    2 +
 7 files changed, 482 insertions(+), 71 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index acf2244..60a8a6b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,6 +28,19 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
+#define ECORE_MAJOR_VERSION		8
+#define ECORE_MINOR_VERSION		18
+#define ECORE_REVISION_VERSION		7
+#define ECORE_ENGINEERING_VERSION	0
+
+#define ECORE_VERSION							\
+	((ECORE_MAJOR_VERSION << 24) | (ECORE_MINOR_VERSION << 16) |	\
+	 (ECORE_REVISION_VERSION << 8) | ECORE_ENGINEERING_VERSION)
+
+#define STORM_FW_VERSION						\
+	((FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) |	\
+	 (FW_REVISION_VERSION << 8) | FW_ENGINEERING_VERSION)
+
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define ECORE_WFQ_UNIT	100
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 816d790..358d1b6 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1907,10 +1907,11 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
+	bool b_default_mtu = true;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1949,17 +1950,25 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		/* @@@TBD need to add here:
-		 * Check for fan failure
-		 * Prev_unload
-		 */
-		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_code);
-		if (rc) {
+		OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
+		load_req_params.drv_role = p_params->is_crash_kernel ?
+					   ECORE_DRV_ROLE_KDUMP :
+					   ECORE_DRV_ROLE_OS;
+		load_req_params.timeout_val = p_params->mfw_timeout_val;
+		load_req_params.avoid_eng_reset = p_params->avoid_eng_reset;
+		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt,
+					&load_req_params);
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_REQ command\n");
+				  "Failed sending a LOAD_REQ command\n");
 			return rc;
 		}
 
+		load_code = load_req_params.load_code;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load request was sent. Load code: 0x%x\n",
+			   load_code);
+
 		/* CQ75580:
 		 * When coming back from hibernate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
@@ -1972,10 +1981,6 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 */
 		ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "Load request was sent. Resp:0x%x, Load code: 0x%x\n",
-			   rc, load_code);
-
 		/* Only relevant for recovery:
 		 * Clear the indication after the LOAD_REQ command is responded
 		 * by the MFW.
@@ -1994,13 +1999,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		case FW_MSG_CODE_DRV_LOAD_ENGINE:
 			rc = ecore_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
 						  p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_PORT:
 			rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt,
 						p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_FUNCTION:
@@ -2012,6 +2017,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 					      p_params->allow_npar_tx_switch);
 			break;
 		default:
+			DP_NOTICE(p_hwfn, false,
+				  "Unexpected load code [0x%08x]", load_code);
 			rc = ECORE_NOTIMPL;
 			break;
 		}
@@ -2027,6 +2034,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				       0, &load_code, &param);
 		if (rc != ECORE_SUCCESS)
 			return rc;
+
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
 				  "Failed sending LOAD_DONE command\n");
@@ -2051,10 +2059,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 	if (IS_PF(p_dev)) {
 		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-		drv_mb_param = (FW_MAJOR_VERSION << 24) |
-			       (FW_MINOR_VERSION << 16) |
-			       (FW_REVISION_VERSION << 8) |
-			       (FW_ENGINEERING_VERSION);
+		drv_mb_param = STORM_FW_VERSION;
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 356c5e4..7e90778 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -58,16 +58,38 @@ void ecore_init_dp(struct ecore_dev *p_dev,
 void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
-	/* tunnelling parameters */
+	/* Tunnelling parameters */
 	struct ecore_tunnel_info *p_tunn;
+
 	bool b_hw_start;
-	/* interrupt mode [msix, inta, etc.] to use */
+
+	/* Interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
-/* npar tx switching to be used for vports configured for tx-switching */
 
+	/* NPAR tx switching to be used for vports configured for tx-switching
+	 */
 	bool allow_npar_tx_switch;
-	/* binary fw data pointer in binary fw file */
+
+	/* Binary fw data pointer in binary fw file */
 	const u8 *bin_fw_data;
+
+	/* Indicates whether the driver is running over a crash kernel.
+	 * As part of the load request, this will be used for providing the
+	 * driver role to the MFW.
+	 * In case of a crash kernel over PDA - this should be set to false.
+	 */
+	bool is_crash_kernel;
+
+	/* The timeout value that the MFW should use when locking the engine for
+	 * the driver load process.
+	 * A value of '0' means the default value, and '255' means no timeout.
+	 */
+	u8 mfw_timeout_val;
+#define ECORE_LOAD_REQ_LOCK_TO_DEFAULT	0
+#define ECORE_LOAD_REQ_LOCK_TO_NONE	255
+
+	/* Avoid engine reset when first PF loads on it */
+	bool avoid_eng_reset;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 73cf4db..11ecac3 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -519,51 +519,368 @@ static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static bool ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role)
+{
+	return (drv_role == DRV_ROLE_OS &&
+		exist_drv_role == DRV_ROLE_PREBOOT) ||
+	       (drv_role == DRV_ROLE_KDUMP && exist_drv_role == DRV_ROLE_OS);
+}
+
+static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0,
+			   &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send cancel load request, rc = %d\n", rc);
+
+	return rc;
+}
+
+#define CONFIG_ECORE_L2_BITMAP_IDX	(0x1 << 0)
+#define CONFIG_ECORE_SRIOV_BITMAP_IDX	(0x1 << 1)
+#define CONFIG_ECORE_ROCE_BITMAP_IDX	(0x1 << 2)
+#define CONFIG_ECORE_IWARP_BITMAP_IDX	(0x1 << 3)
+#define CONFIG_ECORE_FCOE_BITMAP_IDX	(0x1 << 4)
+#define CONFIG_ECORE_ISCSI_BITMAP_IDX	(0x1 << 5)
+#define CONFIG_ECORE_LL2_BITMAP_IDX	(0x1 << 6)
+
+static u32 ecore_get_config_bitmap(void)
+{
+	u32 config_bitmap = 0x0;
+
+#ifdef CONFIG_ECORE_L2
+	config_bitmap |= CONFIG_ECORE_L2_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_SRIOV
+	config_bitmap |= CONFIG_ECORE_SRIOV_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ROCE
+	config_bitmap |= CONFIG_ECORE_ROCE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_IWARP
+	config_bitmap |= CONFIG_ECORE_IWARP_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_FCOE
+	config_bitmap |= CONFIG_ECORE_FCOE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ISCSI
+	config_bitmap |= CONFIG_ECORE_ISCSI_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_LL2
+	config_bitmap |= CONFIG_ECORE_LL2_BITMAP_IDX;
+#endif
+
+	return config_bitmap;
+}
+
+struct ecore_load_req_in_params {
+	u8 hsi_ver;
+#define ECORE_LOAD_REQ_HSI_VER_DEFAULT	0
+#define ECORE_LOAD_REQ_HSI_VER_1	1
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u8 drv_role;
+	u8 timeout_val;
+	u8 force_cmd;
+	bool avoid_eng_reset;
+};
+
+struct ecore_load_req_out_params {
+	u32 load_code;
+	u32 exist_drv_ver_0;
+	u32 exist_drv_ver_1;
+	u32 exist_fw_ver;
+	u8 exist_drv_role;
+	u8 mfw_hsi_ver;
+	bool drv_exists;
+};
+
+static enum _ecore_status_t
+__ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_load_req_in_params *p_in_params,
+		     struct ecore_load_req_out_params *p_out_params)
+{
+	union drv_union_data union_data_src, union_data_dst;
+	struct ecore_mcp_mb_params mb_params;
+	struct load_req_stc *p_load_req;
+	struct load_rsp_stc *p_load_rsp;
+	u32 hsi_ver;
+	enum _ecore_status_t rc;
+
+	p_load_req = &union_data_src.load_req;
+	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
+	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
+	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
+	p_load_req->fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+			    p_in_params->drv_role);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+			    p_in_params->timeout_val);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+			    p_in_params->force_cmd);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+			    p_in_params->avoid_eng_reset);
+
+	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
+		  DRV_ID_MCP_HSI_VER_CURRENT :
+		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_SHIFT);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
+	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
+	mb_params.p_data_src = &union_data_src;
+	mb_params.p_data_dst = &union_data_dst;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+		   mb_params.param,
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
+			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
+			   p_load_req->fw_ver, p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_LOCK_TO),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FLAGS0));
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send load request, rc = %d\n", rc);
+		return rc;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Response: resp 0x%08x\n", mb_params.mcp_resp);
+	p_out_params->load_code = mb_params.mcp_resp;
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		p_load_rsp = &union_data_dst.load_rsp;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
+			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
+			   p_load_rsp->fw_ver, p_load_rsp->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_FLAGS0));
+
+		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
+		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
+		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_role =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+		p_out_params->mfw_hsi_ver =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+		p_out_params->drv_exists =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					    LOAD_RSP_FLAGS0) &
+			LOAD_RSP_FLAGS0_DRV_EXISTS;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
+						   enum ecore_drv_role drv_role,
+						   u8 *p_mfw_drv_role)
+{
+	switch (drv_role) {
+	case ECORE_DRV_ROLE_OS:
+		*p_mfw_drv_role = DRV_ROLE_OS;
+		break;
+	case ECORE_DRV_ROLE_KDUMP:
+		*p_mfw_drv_role = DRV_ROLE_KDUMP;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected driver role %d\n", drv_role);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum ecore_load_req_force {
+	ECORE_LOAD_REQ_FORCE_NONE,
+	ECORE_LOAD_REQ_FORCE_PF,
+	ECORE_LOAD_REQ_FORCE_ALL,
+};
+
+static enum _ecore_status_t
+ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
+			enum ecore_load_req_force force_cmd,
+			u8 *p_mfw_force_cmd)
+{
+	switch (force_cmd) {
+	case ECORE_LOAD_REQ_FORCE_NONE:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_NONE;
+		break;
+	case ECORE_LOAD_REQ_FORCE_PF:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_PF;
+		break;
+	case ECORE_LOAD_REQ_FORCE_ALL:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_ALL;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected force value %d\n", force_cmd);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code)
+					struct ecore_load_req_params *p_params)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	struct ecore_mcp_mb_params mb_params;
+	struct ecore_load_req_out_params out_params;
+	struct ecore_load_req_in_params in_params;
+	u8 mfw_drv_role, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, p_load_code);
+		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
 		return ECORE_SUCCESS;
 	}
 #endif
 
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
-	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
-			  p_dev->drv_type;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
+	in_params.drv_ver_0 = ECORE_VERSION;
+	in_params.drv_ver_1 = ecore_get_config_bitmap();
+	in_params.fw_ver = STORM_FW_VERSION;
+	rc = ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	/* if mcp fails to respond we must abort */
-	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+	in_params.drv_role = mfw_drv_role;
+	in_params.timeout_val = p_params->timeout_val;
+	rc = ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
+				     &mfw_force_cmd);
+	if (rc != ECORE_SUCCESS)
 		return rc;
-	}
 
-	*p_load_code = mb_params.mcp_resp;
+	in_params.force_cmd = mfw_force_cmd;
+	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
-	/* If MFW refused (e.g. other port is in diagnostic mode) we
-	 * must abort. This can happen in the following cases:
-	 * - Other port is in diagnostic mode
-	 * - Previously loaded function on the engine is not compliant with
-	 *   the requester.
-	 * - MFW cannot cope with the requester's DRV_MFW_HSI_VERSION.
-	 *      -
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params, &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* First handle cases where another load request should/might be sent:
+	 * - MFW expects the old interface [HSI version = 1]
+	 * - MFW responds that a force load request is required
 	 */
-	if (!(*p_load_code) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_PDA) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG)) {
-		DP_ERR(p_hwfn, "MCP refused load request, aborting\n");
+	if (out_params.load_code == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		DP_INFO(p_hwfn,
+			"MFW refused a load request due to HSI > 1. Resending with HSI = 1.\n");
+
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_1;
+		OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+		rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+					  &out_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	} else if (out_params.load_code ==
+		   FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE) {
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		if (ecore_mcp_can_force_load(in_params.drv_role,
+					     out_params.exist_drv_role)) {
+			DP_INFO(p_hwfn,
+				"A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Sending a force load request.\n",
+				out_params.exist_drv_role,
+				out_params.exist_fw_ver,
+				out_params.exist_drv_ver_0,
+				out_params.exist_drv_ver_1);
+
+			rc = ecore_get_mfw_force_cmd(p_hwfn,
+						     ECORE_LOAD_REQ_FORCE_ALL,
+						     &mfw_force_cmd);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+
+			in_params.force_cmd = mfw_force_cmd;
+			OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+			rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+						  &out_params);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		} else {
+			DP_NOTICE(p_hwfn, false,
+				  "A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Aborting to avoid disruption of active PFs.\n",
+				  out_params.exist_drv_role,
+				  out_params.exist_fw_ver,
+				  out_params.exist_drv_ver_0,
+				  out_params.exist_drv_ver_1);
+
+			ecore_mcp_cancel_load_req(p_hwfn, p_ptt);
+			return ECORE_BUSY;
+		}
+	}
+
+	/* Now handle the other types of responses.
+	 * The "REFUSED_HSI_1" and "REFUSED_REQUIRES_FORCE" responses are not
+	 * expected here after the additional revised load requests were sent.
+	 */
+	switch (out_params.load_code) {
+	case FW_MSG_CODE_DRV_LOAD_ENGINE:
+	case FW_MSG_CODE_DRV_LOAD_PORT:
+	case FW_MSG_CODE_DRV_LOAD_FUNCTION:
+		if (out_params.mfw_hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+		    out_params.drv_exists) {
+			/* The role and fw/driver version match, but the PF is
+			 * already loaded and has not been unloaded gracefully.
+			 * This is unexpected since a quasi-FLR request was
+			 * previously sent as part of ecore_hw_prepare().
+			 */
+			DP_NOTICE(p_hwfn, false,
+				  "PF is already loaded - shouldn't have got here since a quasi-FLR request was previously sent!\n");
+			return ECORE_INVAL;
+		}
+		break;
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_PDA:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_HSI:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT:
+		DP_NOTICE(p_hwfn, false,
+			  "MFW refused a load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
 		return ECORE_BUSY;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected response to load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
+		break;
 	}
 
+	p_params->load_code = out_params.load_code;
+
 	return ECORE_SUCCESS;
 }
 
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 7a81516..4138a12 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -136,32 +136,36 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn - hw function
  * @param p_ptt - PTT required for register access
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation
- * was successul.
+ * was successful.
  */
 enum _ecore_status_t ecore_issue_pulse(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt);
 
+enum ecore_drv_role {
+	ECORE_DRV_ROLE_OS,
+	ECORE_DRV_ROLE_KDUMP,
+};
+
+struct ecore_load_req_params {
+	enum ecore_drv_role drv_role;
+	u8 timeout_val; /* 1..254, '0' - default value, '255' - no timeout */
+	bool avoid_eng_reset;
+	u32 load_code;
+};
+
 /**
- * @brief Sends a LOAD_REQ to the MFW, and in case operation
- *        succeed, returns whether this PF is the first on the
- *        chip/engine/port or function. This function should be
- *        called when driver is ready to accept MFW events after
- *        Storms initializations are done.
- *
- * @param p_hwfn       - hw function
- * @param p_ptt        - PTT required for register access
- * @param p_load_code  - The MCP response param containing one
- *      of the following:
- *      FW_MSG_CODE_DRV_LOAD_ENGINE
- *      FW_MSG_CODE_DRV_LOAD_PORT
- *      FW_MSG_CODE_DRV_LOAD_FUNCTION
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successul.
- *      ECORE_BUSY - Operation failed
+ * @brief Sends a LOAD_REQ to the MFW, and in case the operation succeeds,
+ *        returns whether this PF is the first on the engine/port or function.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_params
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
  */
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code);
+					struct ecore_load_req_params *p_params);
 
 /**
  * @brief Read the MFW mailbox into Current buffer.
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index d3cbc96..7f94ba1 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -878,9 +878,11 @@ struct public_func {
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
 #define DRV_ID_PDA_COMP_VER_SHIFT	0
 
+#define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
 #define DRV_ID_MCP_HSI_VER_SHIFT	16
-#define DRV_ID_MCP_HSI_VER_CURRENT	(1 << DRV_ID_MCP_HSI_VER_SHIFT)
+#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
+					 DRV_ID_MCP_HSI_VER_SHIFT)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
 #define DRV_ID_DRV_TYPE_SHIFT		24
@@ -1040,8 +1042,47 @@ struct resource_info {
 #define RESOURCE_ELEMENT_STRICT (1 << 0)
 };
 
+#define DRV_ROLE_NONE		0
+#define DRV_ROLE_PREBOOT	1
+#define DRV_ROLE_OS		2
+#define DRV_ROLE_KDUMP		3
+
+struct load_req_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_REQ_ROLE_MASK		0x000000FF
+#define LOAD_REQ_ROLE_SHIFT		0
+#define LOAD_REQ_LOCK_TO_MASK		0x0000FF00
+#define LOAD_REQ_LOCK_TO_SHIFT		8
+#define LOAD_REQ_LOCK_TO_DEFAULT	0
+#define LOAD_REQ_LOCK_TO_NONE		255
+#define LOAD_REQ_FORCE_MASK		0x000F0000
+#define LOAD_REQ_FORCE_SHIFT		16
+#define LOAD_REQ_FORCE_NONE		0
+#define LOAD_REQ_FORCE_PF		1
+#define LOAD_REQ_FORCE_ALL		2
+#define LOAD_REQ_FLAGS0_MASK		0x00F00000
+#define LOAD_REQ_FLAGS0_SHIFT		20
+#define LOAD_REQ_FLAGS0_AVOID_RESET	(0x1 << 0)
+};
+
+struct load_rsp_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_RSP_ROLE_MASK		0x000000FF
+#define LOAD_RSP_ROLE_SHIFT		0
+#define LOAD_RSP_HSI_MASK		0x0000FF00
+#define LOAD_RSP_HSI_SHIFT		8
+#define LOAD_RSP_FLAGS0_MASK		0x000F0000
+#define LOAD_RSP_FLAGS0_SHIFT		16
+#define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
+};
+
 union drv_union_data {
-	u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD];    /* LOAD_REQ */
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
 /* This configuration should be set by the driver for the LINK_SET command. */
@@ -1068,6 +1109,9 @@ struct resource_info {
 	struct bist_nvm_image_att nvm_image_att;
 	struct mdump_config_stc mdump_config;
 	u32 dword;
+
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	/* ... */
 };
 
@@ -1077,6 +1121,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_LOAD_REQ                   0x10000000
 #define DRV_MSG_CODE_LOAD_DONE                  0x11000000
 #define DRV_MSG_CODE_INIT_HW                    0x12000000
+#define DRV_MSG_CODE_CANCEL_LOAD_REQ            0x13000000
 #define DRV_MSG_CODE_UNLOAD_REQ		        0x20000000
 #define DRV_MSG_CODE_UNLOAD_DONE                0x21000000
 #define DRV_MSG_CODE_INIT_PHY			0x22000000
@@ -1448,8 +1493,11 @@ struct public_drv_mb {
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA        0x10200000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10210000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1      0x10210000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG       0x10220000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10230000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT     0x10310000
 #define FW_MSG_CODE_DRV_LOAD_DONE               0x11100000
 #define FW_MSG_CODE_DRV_UNLOAD_ENGINE           0x20110000
 #define FW_MSG_CODE_DRV_UNLOAD_PORT             0x20120000
@@ -1547,7 +1595,7 @@ struct public_drv_mb {
 
 
 	u32 fw_mb_param;
-	/* Resource Allocation params - MFW  version support*/
+/* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 5c79055..326e56f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -276,6 +276,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
 	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
 	hw_init_params.bin_fw_data = data;
+	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
+	hw_init_params.avoid_eng_reset = false;
 	rc = ecore_hw_init(edev, &hw_init_params);
 	if (rc) {
 		DP_ERR(edev, "ecore_hw_init failed\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 42/61] net/qede/base: add non-l2 dcbx tlv application support
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (40 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:56 ` [PATCH 43/61] net/qede/base: update bulletin board with link state during init Rasesh Mody
                   ` (19 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add non-L2 DCBX TLV application support (iWARP).

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dcbx.c     |   30 ++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_dcbx.h     |    1 +
 drivers/net/qede/base/ecore_dcbx_api.h |    4 +++-
 drivers/net/qede/base/ecore_proto_if.h |    3 +++
 4 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index e82946a..e31ce81 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -72,6 +72,23 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
+static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
+				 u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (!p_hwfn->p_dcbx_info->iwarp_port)
+		return false;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_TCP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
+}
+
 static void
 ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
@@ -896,17 +913,18 @@ enum _ecore_status_t
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
 	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					  sizeof(*p_hwfn->p_dcbx_info));
 	if (!p_hwfn->p_dcbx_info) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_dcbx_info'");
-		rc = ECORE_NOMEM;
+		return ECORE_NOMEM;
 	}
 
-	return rc;
+	p_hwfn->p_dcbx_info->iwarp_port =
+		p_hwfn->pf_params.rdma_pf_params.iwarp_port;
+
+	return ECORE_SUCCESS;
 }
 
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn,
@@ -937,9 +955,13 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
 	p_dest->update_eth_dcb_data_mode = update_flag;
+	update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update;
+	p_dest->update_iwarp_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+	p_dcb_data = &p_dest->iwarp_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP);
 }
 
 enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 0830014..eba2d91 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -29,6 +29,7 @@ struct ecore_dcbx_info {
 	struct ecore_dcbx_set set;
 	struct ecore_dcbx_get get;
 	u8 dcbx_cap;
+	u16 iwarp_port;
 };
 
 struct ecore_dcbx_mib_meta_data {
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 3a1712f..2dc7679 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -37,6 +37,7 @@ enum dcbx_protocol_type {
 	DCBX_PROTOCOL_ROCE,
 	DCBX_PROTOCOL_ROCE_V2,
 	DCBX_PROTOCOL_ETH,
+	DCBX_PROTOCOL_IWARP,
 	DCBX_MAX_PROTOCOL_TYPE
 };
 
@@ -191,7 +192,8 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
 	{DCBX_PROTOCOL_ROCE, "ROCE", ECORE_PCI_ETH_ROCE},
 	{DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", ECORE_PCI_ETH_ROCE},
-	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
+	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH},
+	{DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP}
 };
 
 #endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index e252d52..ed24019 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -76,6 +76,9 @@ struct ecore_rdma_pf_params {
 
 	/* Will allocate rate limiters to be used with QPs */
 	u8		enable_dcqcn;
+
+	/* TCP port number used for the iwarp traffic */
+	u16		iwarp_port;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 43/61] net/qede/base: update bulletin board with link state during init
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (41 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 42/61] net/qede/base: add non-l2 dcbx tlv application support Rasesh Mody
@ 2017-02-27  7:56 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 44/61] net/qede/base: add coalescing support for VFs Rasesh Mody
                   ` (18 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:56 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Update the VF bulletin board with the current link state during VF
initialization.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   88 ++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index af27d02..e4da813 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -957,11 +957,51 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->req_autoneg = params->speed.autoneg;
+	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+	p_bulletin->req_forced_speed = params->speed.forced_speed;
+	p_bulletin->req_autoneg_pause = params->pause.autoneg;
+	p_bulletin->req_forced_rx = params->pause.forced_rx;
+	p_bulletin->req_forced_tx = params->pause.forced_tx;
+	p_bulletin->req_loopback = params->loopback_mode;
+
+	p_bulletin->link_up = link->link_up;
+	p_bulletin->speed = link->speed;
+	p_bulletin->full_duplex = link->full_duplex;
+	p_bulletin->autoneg = link->an;
+	p_bulletin->autoneg_complete = link->an_complete;
+	p_bulletin->parallel_detection = link->parallel_detection;
+	p_bulletin->pfc_enabled = link->pfc_enabled;
+	p_bulletin->partner_adv_speed = link->partner_adv_speed;
+	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+	p_bulletin->partner_adv_pause = link->partner_adv_pause;
+	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+	p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 struct ecore_iov_vf_init_params *p_params)
 {
+	struct ecore_mcp_link_capabilities link_caps;
+	struct ecore_mcp_link_params link_params;
+	struct ecore_mcp_link_state link_state;
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
 	u16 qid, num_irqs;
@@ -1048,6 +1088,17 @@ enum _ecore_status_t
 			   p_queue->fw_cid);
 	}
 
+	/* Update the link configuration in bulletin.
+	 */
+	OSAL_MEMCPY(&link_params, ecore_mcp_get_link_params(p_hwfn),
+		    sizeof(link_params));
+	OSAL_MEMCPY(&link_state, ecore_mcp_get_link_state(p_hwfn),
+		    sizeof(link_state));
+	OSAL_MEMCPY(&link_caps, ecore_mcp_get_link_capabilities(p_hwfn),
+		    sizeof(link_caps));
+	ecore_iov_set_link(p_hwfn, p_params->rel_vf_id,
+			   &link_params, &link_state, &link_caps);
+
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
 
 	if (rc == ECORE_SUCCESS) {
@@ -1062,43 +1113,6 @@ enum _ecore_status_t
 	return rc;
 }
 
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps)
-{
-	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
-	struct ecore_bulletin_content *p_bulletin;
-
-	if (!p_vf)
-		return;
-
-	p_bulletin = p_vf->bulletin.p_virt;
-	p_bulletin->req_autoneg = params->speed.autoneg;
-	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
-	p_bulletin->req_forced_speed = params->speed.forced_speed;
-	p_bulletin->req_autoneg_pause = params->pause.autoneg;
-	p_bulletin->req_forced_rx = params->pause.forced_rx;
-	p_bulletin->req_forced_tx = params->pause.forced_tx;
-	p_bulletin->req_loopback = params->loopback_mode;
-
-	p_bulletin->link_up = link->link_up;
-	p_bulletin->speed = link->speed;
-	p_bulletin->full_duplex = link->full_duplex;
-	p_bulletin->autoneg = link->an;
-	p_bulletin->autoneg_complete = link->an_complete;
-	p_bulletin->parallel_detection = link->parallel_detection;
-	p_bulletin->pfc_enabled = link->pfc_enabled;
-	p_bulletin->partner_adv_speed = link->partner_adv_speed;
-	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
-	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
-	p_bulletin->partner_adv_pause = link->partner_adv_pause;
-	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
-
-	p_bulletin->capability_speed = p_caps->speed_capabilities;
-}
-
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 rel_vf_id)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 44/61] net/qede/base: add coalescing support for VFs
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (42 preceding siblings ...)
  2017-02-27  7:56 ` [PATCH 43/61] net/qede/base: update bulletin board with link state during init Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 45/61] net/qede/base: add macro for resource value message Rasesh Mody
                   ` (17 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add coalescing support for VFs.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   83 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_dev_api.h |   43 ++++++-----------
 drivers/net/qede/base/ecore_sriov.c   |   66 +++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_vf.c      |   42 +++++++++++++++++
 drivers/net/qede/base/ecore_vf.h      |   24 ++++++++++
 drivers/net/qede/base/ecore_vfpf_if.h |   10 ++++
 6 files changed, 209 insertions(+), 59 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 358d1b6..8385157 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -30,6 +30,7 @@
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
 #include "ecore_dcbx.h"
+#include "ecore_l2.h"
 
 /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
  * registers involved are not split and thus configuration is a race where
@@ -4206,11 +4207,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 {
 	struct coalescing_timeset *p_coal_timeset;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
-		return ECORE_INVAL;
-	}
-
 	if (p_hwfn->p_dev->int_coalescing_mode != ECORE_COAL_MODE_ENABLE) {
 		DP_NOTICE(p_hwfn, true,
 			  "Coalescing configuration not enabled\n");
@@ -4226,13 +4222,53 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_ptt *p_ptt;
+
+	/* TODO - This configures a single queue's coalescing, but claims
+	 * that all queues abide by the same configuration, for both the
+	 * PF and the VF.
+	 */
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
+						tx_coal, p_cid);
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	}
+
+	if (tx_coal) {
+		rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+	}
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4249,33 +4285,30 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, false);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, false);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_USDM_RAM + USTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_USDM_RAM +
+		  USTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct ustorm_eth_queue_zone), timeset);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4293,23 +4326,17 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, true);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, true);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_XSDM_RAM + XSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_XSDM_RAM +
+		  XSTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct xstorm_eth_queue_zone), timeset);
-	if (rc != ECORE_SUCCESS)
-		goto out;
-
-	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 7e90778..ce764d2 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -570,41 +570,24 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
 					 struct ecore_ptt	*p_ptt,
 					 u16			id,
 					 bool			is_vf);
-
-/**
- * @brief ecore_set_rxq_coalesce - Configure coalesce parameters for an Rx queue
- *    The fact that we can configure coalescing to up to 511, but on varying
- *    accuracy [the bigger the value the less accurate] up to a mistake of 3usec
- *    for the highest values.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
-
 /**
- * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
- *    While the API allows setting coalescing per-qid, all tx queues sharing a
- *    SB should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
+ * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
+ *    Tx queue. The fact that we can configure coalescing to up to 511, but on
+ *    varying accuracy [the bigger the value the less accurate] up to a mistake
+ *    of 3usec for the highest values.
+ *    While the API allows setting coalescing per-qid, all queues sharing a SB
+ *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
  *    otherwise configuration would break.
  *
  * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
+ * @param rx_coal - Rx Coalesce value in micro seconds.
+ * @param tx_coal - TX Coalesce value in micro seconds.
+ * @param p_handle
  *
  * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
+ **/
+enum _ecore_status_t
+ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
+			 u16 tx_coal, void *p_handle);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index e4da813..4951873 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -52,6 +52,7 @@
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
 	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
+	"CHANNEL_TLV_COALESCE_UPDATE",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -1942,6 +1943,8 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	vf->state = VF_ENABLED;
 	start = &mbx->req_virt->start_vport;
 
+	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
 	/* Initialize Status block in CAU */
 	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
 		if (!start->sb_addr[sb_id]) {
@@ -1956,7 +1959,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 				      vf->igu_sbs[sb_id],
 				      vf->abs_vf_id, 1);
 	}
-	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
 
 	vf->mtu = start->mtu;
 	vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
@@ -3229,6 +3231,65 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
 			       length, status);
 }
 
+static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct vfpf_update_coalesce *req;
+	u8 status = PFVF_STATUS_FAILURE;
+	struct ecore_queue_cid *p_cid;
+	u16 rx_coal, tx_coal;
+	u16  qid;
+
+	req = &mbx->req_virt->update_coalesce;
+
+	rx_coal = req->rx_coal;
+	tx_coal = req->tx_coal;
+	qid = req->qid;
+	p_cid = vf->vf_queues[qid].p_rx_cid;
+
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
+			goto out;
+		}
+	}
+	if (tx_coal) {
+		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
+			goto out;
+		}
+	}
+
+	status = PFVF_STATUS_SUCCESS;
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -3582,6 +3643,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
 			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_COALESCE_UPDATE:
+			ecore_iov_vf_pf_set_coalesce(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index be3bc5f..1e3857b 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1425,6 +1425,48 @@ enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal,
+			 struct ecore_queue_cid     *p_cid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_update_coalesce *req;
+	struct pfvf_def_resp_tlv *resp;
+	enum _ecore_status_t rc;
+
+	/* clear mailbox and prep header tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(*req));
+
+	req->rx_coal = rx_coal;
+	req->tx_coal = tx_coal;
+	req->qid = p_cid->rel.queue_id;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Setting coalesce rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   rx_coal, tx_coal, req->qid);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	resp = &p_iov->pf2vf_reply->default_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+	if (rc != ECORE_SUCCESS)
+		goto exit;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		goto exit;
+
+	p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id)
 {
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 0d67054..228bbf0 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -50,6 +50,20 @@ struct ecore_vf_iov {
 enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
 /**
+ * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
+ *	Coalesce value '0' will omit the configuration.
+ *
+ *	@param p_hwfn
+ *	@param rx_coal - coalesce value in micro second for rx queue
+ *	@param tx_coal - coalesce value in micro second for tx queue
+ *	@param qid
+ *
+ **/
+enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      struct ecore_queue_cid *p_cid);
+
+/**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
@@ -263,5 +277,15 @@ enum _ecore_status_t
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
+
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 82ed4f5..e0b63bf 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -457,6 +457,14 @@ struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
 
+struct vfpf_update_coalesce {
+	struct vfpf_first_tlv first_tlv;
+	u16 rx_coal;
+	u16 tx_coal;
+	u16 qid;
+	u8 padding[2];
+};
+
 union vfpf_tlvs {
 	struct vfpf_first_tlv			first_tlv;
 	struct vfpf_acquire_tlv			acquire;
@@ -469,6 +477,7 @@ struct tlv_buffer_size {
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
 	struct vfpf_update_tunn_param_tlv	tunn_param_update;
+	struct vfpf_update_coalesce		update_coalesce;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -592,6 +601,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 	CHANNEL_TLV_UPDATE_TUNN_PARAM,
+	CHANNEL_TLV_COALESCE_UPDATE,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
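
As a standalone illustration of the "zero value omits the configuration"
rule that the coalescing TLV added in the patch above follows (the names
and values below are invented stand-ins, not the ecore API):

    /* coal.c - toy model of the zero-means-skip coalesce rule */
    #include <stdio.h>
    #include <stdint.h>

    static int set_rxq_coalesce(uint16_t usecs)
    {
        printf("rx coalesce -> %u usec\n", usecs);
        return 0;
    }

    static int set_txq_coalesce(uint16_t usecs)
    {
        printf("tx coalesce -> %u usec\n", usecs);
        return 0;
    }

    /* Mirrors the PF-side handler: a zero coalesce value leaves that
     * direction unconfigured instead of forcing it to zero. */
    static int set_coalesce(uint16_t rx_coal, uint16_t tx_coal)
    {
        int rc = 0;

        if (rx_coal)
            rc = set_rxq_coalesce(rx_coal);
        if (!rc && tx_coal)
            rc = set_txq_coalesce(tx_coal);
        return rc;
    }

    int main(void)
    {
        return set_coalesce(24, 0);    /* configure Rx only, skip Tx */
    }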

* [PATCH 45/61] net/qede/base: add macro for resource value message
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (43 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 44/61] net/qede/base: add coalescing support for VFs Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 46/61] net/qede/base: add mailbox for resource allocation Rasesh Mody
                   ` (16 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for the resource value message and regroup the NIG drain
define with the other driver message codes

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 7f94ba1..6f0e2f9 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1137,16 +1137,15 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
 #define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	0x28000000
 #define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	0x29000000
+#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	0x31000000
 #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
 #define DRV_MSG_CODE_OV_UPDATE_MTU		0x33000000
-
-#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
-
 /* DRV_MB Param: driver version supp, FW_MB param: MFW version supp,
  * data: struct resource_info
  */
 #define DRV_MSG_GET_RESOURCE_ALLOC_MSG		0x34000000
+#define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
 
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 46/61] net/qede/base: add mailbox for resource allocation
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (44 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 45/61] net/qede/base: add macro for resource value message Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 47/61] net/qede/base: add macro for unsupported command Rasesh Mody
                   ` (15 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the Management FW mailbox command for getting non-L2 resource
allocation information.
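
For illustration, a standalone model of how the reworked
ecore_hw_get_dflt_resc() splits an engine-wide resource across
functions (the constant and function counts below are made up):

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_NUM_L2_QUEUES 128    /* made-up engine-wide total */

    int main(void)
    {
        uint32_t num_funcs = 4;         /* PFs sharing the engine */
        uint32_t enabled_func_idx = 2;  /* this function's index */

        /* Default share: an equal split of the engine-wide total... */
        uint32_t resc_num = MAX_NUM_L2_QUEUES / num_funcs;
        /* ...starting right after the preceding functions' shares
         * (the ECORE_BDQ case instead defaults to zero). */
        uint32_t resc_start = resc_num * enabled_func_idx;

        printf("func %u: %u queues starting at %u\n",
               enabled_func_idx, resc_num, resc_start);
        return 0;
    }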

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    1 +
 drivers/net/qede/base/ecore_dev.c  |   60 ++++++++++++++++++++++++------------
 drivers/net/qede/base/mcp_public.h |    1 +
 3 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 60a8a6b..25b6c4e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -291,6 +291,7 @@ enum ecore_resources {
 	ECORE_LL2_QUEUE,
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
+	ECORE_BDQ,
 	ECORE_MAX_RESC,			/* must be last */
 };
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 8385157..113c326 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2470,6 +2470,9 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 	case ECORE_RDMA_STATS_QUEUE:
 		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
 		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
 	default:
 		break;
 	}
@@ -2477,67 +2480,84 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 	return mfw_res_id;
 }
 
-static u32 ecore_hw_get_dflt_resc_num(struct ecore_hwfn *p_hwfn,
-				      enum ecore_resources res_id)
+static
+enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
+					    enum ecore_resources res_id,
+					    u32 *p_resc_num,
+					    u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	struct ecore_sb_cnt_info sb_cnt_info;
-	u32 dflt_resc_num = 0;
 
 	switch (res_id) {
 	case ECORE_SB:
 		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
 		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-		dflt_resc_num = sb_cnt_info.sb_cnt;
+		*p_resc_num = sb_cnt_info.sb_cnt;
 		break;
 	case ECORE_L2_QUEUE:
-		dflt_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
 				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		dflt_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
+		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
 				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
 		break;
 	case ECORE_PQ:
-		dflt_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
 				 MAX_QM_TX_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_RL:
-		dflt_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
 		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		dflt_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
+		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		dflt_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
+		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
 				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		dflt_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
 		/* @DPDK */
-		dflt_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
 		/* @DPDK */
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
+	case ECORE_BDQ:
+		/* @DPDK */
+		*p_resc_num = 0;
+		break;
+	default:
+		break;
+	}
+
+
+	switch (res_id) {
+	case ECORE_BDQ:
+		if (!*p_resc_num)
+			*p_resc_start = 0;
+		break;
 	default:
+		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
 		break;
 	}
 
-	return dflt_resc_num;
+	return ECORE_SUCCESS;
 }
 
 static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
@@ -2569,6 +2589,8 @@ static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
 		return "RDMA_STATS_QUEUE";
+	case ECORE_BDQ:
+		return "BDQ";
 	default:
 		return "UNKNOWN_RESOURCE";
 	}
@@ -2586,14 +2608,14 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	dflt_resc_num = ecore_hw_get_dflt_resc_num(p_hwfn, res_id);
-	if (!dflt_resc_num) {
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
+				    &dflt_resc_num, &dflt_resc_start);
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
 			res_id, ecore_hw_get_resc_name(res_id));
-		return ECORE_INVAL;
+		return rc;
 	}
-	dflt_resc_start = dflt_resc_num * p_hwfn->enabled_func_idx;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6f0e2f9..333d147 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1025,6 +1025,7 @@ enum resource_id_enum {
 	RESOURCE_NUM_RSS_ENGINES_E	=	14,
 	RESOURCE_LL2_QUEUE_E		=	15,
 	RESOURCE_RDMA_STATS_QUEUE_E	=	16,
+	RESOURCE_BDQ_E			=	17,
 	RESOURCE_MAX_NUM,
 	RESOURCE_NUM_INVALID		=	0xFFFFFFFF
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 47/61] net/qede/base: add macro for unsupported command
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (45 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 46/61] net/qede/base: add mailbox for resource allocation Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 48/61] net/qede/base: Add support to set max values of soft resources Rasesh Mody
                   ` (14 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for an unsupported Management FW command
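
A small standalone model of the check this introduces (the status
value NOTIMPL is invented; only FW_MSG_CODE_UNSUPPORTED, which is
zero, comes from the patch):

    #include <stdio.h>
    #include <stdint.h>

    #define FW_MSG_CODE_UNSUPPORTED 0x00000000
    #define NOTIMPL                 (-1)    /* invented status */

    static int check_mcp_resp(uint32_t mcp_resp)
    {
        /* The MFW leaves the response at zero for commands it does
         * not recognize, so the named constant makes the old
         * "!mcp_resp" test explicit without changing behavior. */
        if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
            return NOTIMPL;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_mcp_resp(0));
        return 0;
    }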

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c  |    6 ++----
 drivers/net/qede/base/mcp_public.h |    1 +
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 11ecac3..ede51a4 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1425,8 +1425,7 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the mdump command is not supported */
-	if (!mcp_resp)
+	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (mcp_resp != FW_MSG_CODE_OK) {
@@ -2833,8 +2832,7 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the resource command is not supported */
-	if (!*p_mcp_resp)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 333d147..fcf9847 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1489,6 +1489,7 @@ struct public_drv_mb {
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
+#define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 48/61] net/qede/base: Add support to set max values of soft resources
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (46 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 47/61] net/qede/base: add macro for unsupported command Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 49/61] net/qede/base: add return code check Rasesh Mody
                   ` (13 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for the new Management FW interface for setting the max
values of "soft" resources.
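
A standalone sketch of the acquire-with-retries loop that this adds in
ecore_mcp_resc_lock() (the lock primitive below is a toy stand-in; the
retry numbers mirror the ECORE_RESC_ALLOC_LOCK_* defaults):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    struct resc_lock_params {
        unsigned retry_num;          /* extra attempts after the first */
        unsigned retry_interval_us;  /* wait between attempts */
        bool granted;
    };

    /* Toy lock: pretend the MFW grants it on the third attempt. */
    static bool try_lock_once(void)
    {
        static int attempts;
        return ++attempts == 3;
    }

    static void resc_lock(struct resc_lock_params *p)
    {
        unsigned retry_cnt = 0;

        do {
            if (retry_cnt)    /* no wait before the first attempt */
                usleep(p->retry_interval_us);
            p->granted = try_lock_once();
            if (p->granted)
                break;
        } while (retry_cnt++ < p->retry_num);
    }

    int main(void)
    {
        struct resc_lock_params p = {
            .retry_num = 10,             /* LOCK_RETRY_CNT */
            .retry_interval_us = 10000,  /* LOCK_RETRY_INTVL_US */
        };

        resc_lock(&p);
        printf("granted: %s\n", p.granted ? "yes" : "no");
        return 0;
    }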

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    2 +
 drivers/net/qede/base/ecore_dev.c |  281 ++++++++++++++++++++++--------------
 drivers/net/qede/base/ecore_mcp.c |  287 +++++++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_mcp.h |  104 ++++++++++----
 4 files changed, 497 insertions(+), 177 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25b6c4e..7379b3f 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -856,4 +856,6 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 113c326..fb245ec 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2427,64 +2427,109 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
-static enum resource_id_enum
-ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 {
-	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
-
 	switch (res_id) {
 	case ECORE_SB:
-		mfw_res_id = RESOURCE_NUM_SB_E;
-		break;
+		return "SB";
 	case ECORE_L2_QUEUE:
-		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
-		break;
+		return "L2_QUEUE";
 	case ECORE_VPORT:
-		mfw_res_id = RESOURCE_NUM_VPORT_E;
-		break;
+		return "VPORT";
 	case ECORE_RSS_ENG:
-		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
-		break;
+		return "RSS_ENG";
 	case ECORE_PQ:
-		mfw_res_id = RESOURCE_NUM_PQ_E;
-		break;
+		return "PQ";
 	case ECORE_RL:
-		mfw_res_id = RESOURCE_NUM_RL_E;
-		break;
+		return "RL";
 	case ECORE_MAC:
+		return "MAC";
 	case ECORE_VLAN:
-		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		mfw_res_id = RESOURCE_VFC_FILTER_E;
-		break;
+		return "VLAN";
+	case ECORE_RDMA_CNQ_RAM:
+		return "RDMA_CNQ_RAM";
 	case ECORE_ILT:
-		mfw_res_id = RESOURCE_ILT_E;
-		break;
+		return "ILT";
 	case ECORE_LL2_QUEUE:
-		mfw_res_id = RESOURCE_LL2_QUEUE_E;
-		break;
-	case ECORE_RDMA_CNQ_RAM:
+		return "LL2_QUEUE";
 	case ECORE_CMDQS_CQS:
-		/* CNQ/CMDQS are the same resource */
-		mfw_res_id = RESOURCE_CQS_E;
-		break;
+		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
-		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
-		break;
+		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
-		mfw_res_id = RESOURCE_BDQ_E;
-		break;
+		return "BDQ";
 	default:
-		break;
+		return "UNKNOWN_RESOURCE";
 	}
+}
 
-	return mfw_res_id;
+static enum _ecore_status_t
+__ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
+			      enum ecore_resources res_id, u32 resc_max_val,
+			      u32 *p_mcp_resp)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_hwfn->p_main_ptt, res_id,
+					resc_max_val, p_mcp_resp);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true,
+			  "MFW response failure for a max value setting of resource %d [%s]\n",
+			  res_id, ecore_hw_get_resc_name(res_id));
+		return rc;
+	}
+
+	if (*p_mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK)
+		DP_INFO(p_hwfn,
+			"Failed to set the max value of resource %d [%s]. mcp_resp = 0x%08x.\n",
+			res_id, ecore_hw_get_resc_name(res_id), *p_mcp_resp);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn)
+{
+	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	u32 resc_max_val, mcp_resp;
+	u8 res_id;
+	enum _ecore_status_t rc;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		/* @DPDK */
+		switch (res_id) {
+		case ECORE_LL2_QUEUE:
+		case ECORE_RDMA_CNQ_RAM:
+		case ECORE_RDMA_STATS_QUEUE:
+		case ECORE_BDQ:
+			resc_max_val = 0;
+			break;
+		default:
+			continue;
+		}
+
+		rc = __ecore_hw_set_soft_resc_size(p_hwfn, res_id,
+						   resc_max_val, &mcp_resp);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* There's no point to continue to the next resource if the
+		 * command is not supported by the MFW.
+		 * We do continue if the command is supported but the resource
+		 * is unknown to the MFW. Such a resource will be later
+		 * configured with the default allocation values.
+		 */
+		if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+			return ECORE_NOTIMPL;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static
 enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    enum ecore_resources res_id,
-					    u32 *p_resc_num,
-					    u32 *p_resc_start)
+					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
@@ -2560,56 +2605,19 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
-{
-	switch (res_id) {
-	case ECORE_SB:
-		return "SB";
-	case ECORE_L2_QUEUE:
-		return "L2_QUEUE";
-	case ECORE_VPORT:
-		return "VPORT";
-	case ECORE_RSS_ENG:
-		return "RSS_ENG";
-	case ECORE_PQ:
-		return "PQ";
-	case ECORE_RL:
-		return "RL";
-	case ECORE_MAC:
-		return "MAC";
-	case ECORE_VLAN:
-		return "VLAN";
-	case ECORE_RDMA_CNQ_RAM:
-		return "RDMA_CNQ_RAM";
-	case ECORE_ILT:
-		return "ILT";
-	case ECORE_LL2_QUEUE:
-		return "LL2_QUEUE";
-	case ECORE_CMDQS_CQS:
-		return "CMDQS_CQS";
-	case ECORE_RDMA_STATS_QUEUE:
-		return "RDMA_STATS_QUEUE";
-	case ECORE_BDQ:
-		return "BDQ";
-	default:
-		return "UNKNOWN_RESOURCE";
-	}
-}
-
-static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
-						   enum ecore_resources res_id,
-						   bool drv_resc_alloc)
+static enum _ecore_status_t
+__ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
+			 bool drv_resc_alloc)
 {
-	u32 dflt_resc_num = 0, dflt_resc_start = 0, mcp_resp, mcp_param;
-	u32 *p_resc_num, *p_resc_start;
-	struct resource_info resc_info;
+	u32 dflt_resc_num = 0, dflt_resc_start = 0;
+	u32 mcp_resp, *p_resc_num, *p_resc_start;
 	enum _ecore_status_t rc;
 
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
-				    &dflt_resc_num, &dflt_resc_start);
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id, &dflt_resc_num,
+				    &dflt_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
@@ -2625,17 +2633,8 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-	OSAL_MEM_ZERO(&resc_info, sizeof(resc_info));
-	resc_info.res_id = ecore_hw_get_mfw_res_id(res_id);
-	if (resc_info.res_id == RESOURCE_NUM_INVALID) {
-		DP_ERR(p_hwfn,
-		       "Failed to match resource %d with MFW resources\n",
-		       res_id);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, &resc_info,
-				     &mcp_resp, &mcp_param);
+	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id,
+				     &mcp_resp, p_resc_num, p_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true,
 			  "MFW response failure for an allocation request for"
@@ -2649,13 +2648,11 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	 * - There is an internal error in the MFW while processing the request
 	 * - The resource ID is unknown to the MFW
 	 */
-	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK &&
-	    mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED) {
-		/* @DPDK */
+	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: No allocation info was received"
-			" [mcp_resp 0x%x]. Applying default values"
-			" [num %d, start %d].\n",
+			"Failed to receive allocation info for resource %d [%s]."
+			" mcp_resp = 0x%x. Applying default values"
+			" [%d,%d].\n",
 			res_id, ecore_hw_get_resc_name(res_id), mcp_resp,
 			dflt_resc_num, dflt_resc_start);
 
@@ -2667,16 +2664,13 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	/* TBD - remove this when revising the handling of the SB resource */
 	if (res_id == ECORE_SB) {
 		/* Excluding the slowpath SB */
-		resc_info.size -= 1;
-		resc_info.offset -= p_hwfn->enabled_func_idx;
+		*p_resc_num -= 1;
+		*p_resc_start -= p_hwfn->enabled_func_idx;
 	}
 
-	*p_resc_num = resc_info.size;
-	*p_resc_start = resc_info.offset;
-
 	if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: MFW allocation [num %d, start %d] differs from default values [num %d, start %d]%s\n",
+			"MFW allocation for resource %d [%s] differs from default values [%d,%d vs. %d,%d]%s\n",
 			res_id, ecore_hw_get_resc_name(res_id), *p_resc_num,
 			*p_resc_start, dflt_resc_num, dflt_resc_start,
 			drv_resc_alloc ? " - Applying default values" : "");
@@ -2689,12 +2683,32 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
+						   bool drv_resc_alloc)
+{
+	enum _ecore_status_t rc;
+	u8 res_id;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		rc = __ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_RESC_ALLOC_LOCK_RETRY_CNT		10
+#define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
+
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 					      bool drv_resc_alloc)
 {
+	struct ecore_resc_unlock_params resc_unlock_params;
+	struct ecore_resc_lock_params resc_lock_params;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
-	enum _ecore_status_t rc;
 	u8 res_id;
+	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
 	u32 *resc_start = p_hwfn->hw_info.resc_start;
 	u32 *resc_num = p_hwfn->hw_info.resc_num;
@@ -2707,10 +2721,61 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	u32 roce_min_ilt_lines = PXP_NUM_ILT_RECORDS_BB / MAX_NUM_PFS_BB;
 #endif
 
-	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
-		rc = ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+	/* Setting the max values of the soft resources and the following
+	 * resources allocation queries should be atomic. Since several PFs can
+	 * run in parallel - a resource lock is needed.
+	 * If either the resource lock or resource set value commands are not
+	 * supported - skip the the max values setting, release the lock if
+	 * needed, and proceed to the queries. Other failures, including a
+	 * failure to acquire the lock, will cause this function to fail.
+	 * Old drivers that don't acquire the lock can run in parallel, and
+	 * their allocation values won't be affected by the updated max values.
+	 */
+	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
+	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
+	resc_lock_params.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
+	resc_lock_params.sleep_b4_retry = true;
+	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
+	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+
+	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt, &resc_lock_params);
+	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+		return rc;
+	} else if (rc == ECORE_NOTIMPL) {
+		DP_INFO(p_hwfn,
+			"Skip the max values setting of the soft resources since the resource lock is not supported by the MFW\n");
+	} else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to acquire the resource lock for the resource allocation commands\n");
+		return ECORE_BUSY;
+	} else {
+		rc = ecore_hw_set_soft_resc_size(p_hwfn);
+		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to set the max values of the soft resources\n");
+			goto unlock_and_exit;
+		} else if (rc == ECORE_NOTIMPL) {
+			DP_INFO(p_hwfn,
+				"Skip the max values setting of the soft resources since it is not supported by the MFW\n");
+			rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+						   &resc_unlock_params);
+			if (rc != ECORE_SUCCESS)
+				DP_INFO(p_hwfn,
+					"Failed to release the resource lock for the resource allocation commands\n");
+		}
+	}
+
+	rc = ecore_hw_set_resc_info(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS)
+		goto unlock_and_exit;
+
+	if (resc_lock_params.b_granted && !resc_unlock_params.b_released) {
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+					   &resc_unlock_params);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			DP_INFO(p_hwfn,
+				"Failed to release the resource lock for the resource allocation commands\n");
 	}
 
 #ifndef ASIC_ONLY
@@ -2763,6 +2828,10 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			   RESC_START(p_hwfn, res_id));
 
 	return ECORE_SUCCESS;
+
+unlock_and_exit:
+	ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt, &resc_unlock_params);
+	return rc;
 }
 
 static enum _ecore_status_t
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index ede51a4..46f2fd0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2769,7 +2769,60 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 			     0, &rsp, (u32 *)num_events);
 }
 
-#define ECORE_RESC_ALLOC_VERSION_MAJOR	1
+static enum resource_id_enum
+ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
+{
+	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
+
+	switch (res_id) {
+	case ECORE_SB:
+		mfw_res_id = RESOURCE_NUM_SB_E;
+		break;
+	case ECORE_L2_QUEUE:
+		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
+		break;
+	case ECORE_VPORT:
+		mfw_res_id = RESOURCE_NUM_VPORT_E;
+		break;
+	case ECORE_RSS_ENG:
+		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
+		break;
+	case ECORE_PQ:
+		mfw_res_id = RESOURCE_NUM_PQ_E;
+		break;
+	case ECORE_RL:
+		mfw_res_id = RESOURCE_NUM_RL_E;
+		break;
+	case ECORE_MAC:
+	case ECORE_VLAN:
+		/* Each VFC resource can accommodate both a MAC and a VLAN */
+		mfw_res_id = RESOURCE_VFC_FILTER_E;
+		break;
+	case ECORE_ILT:
+		mfw_res_id = RESOURCE_ILT_E;
+		break;
+	case ECORE_LL2_QUEUE:
+		mfw_res_id = RESOURCE_LL2_QUEUE_E;
+		break;
+	case ECORE_RDMA_CNQ_RAM:
+	case ECORE_CMDQS_CQS:
+		/* CNQ/CMDQS are the same resource */
+		mfw_res_id = RESOURCE_CQS_E;
+		break;
+	case ECORE_RDMA_STATS_QUEUE:
+		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
+		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
+	default:
+		break;
+	}
+
+	return mfw_res_id;
+}
+
+#define ECORE_RESC_ALLOC_VERSION_MAJOR	2
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
 #define ECORE_RESC_ALLOC_VERSION				\
 	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
@@ -2777,36 +2830,146 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
 	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
 
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param)
+struct ecore_resc_alloc_in_params {
+	u32 cmd;
+	enum ecore_resources res_id;
+	u32 resc_max_val;
+};
+
+struct ecore_resc_alloc_out_params {
+	u32 mcp_resp;
+	u32 mcp_param;
+	u32 resc_num;
+	u32 resc_start;
+	u32 vf_resc_num;
+	u32 vf_resc_start;
+	u32 flags;
+};
+
+static enum _ecore_status_t
+ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      struct ecore_resc_alloc_in_params *p_in_params,
+			      struct ecore_resc_alloc_out_params *p_out_params)
 {
+	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
+	p_mfw_resc_info = &union_data.resource;
+	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+
+	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+		DP_ERR(p_hwfn,
+		       "Failed to match resource %d [%s] with the MFW resources\n",
+		       p_in_params->res_id,
+		       ecore_hw_get_resc_name(p_in_params->res_id));
+		return ECORE_INVAL;
+	}
+
+	switch (p_in_params->cmd) {
+	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
+		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		/* Fallthrough */
+	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected resource alloc command [0x%08x]\n",
+		       p_in_params->cmd);
+		return ECORE_INVAL;
+	}
+
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	OSAL_MEMCPY(&union_data.resource, p_resc_info, sizeof(*p_resc_info));
 	mb_params.p_data_src = &union_data;
 	mb_params.p_data_dst = &union_data;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
+		   p_in_params->cmd, p_in_params->res_id,
+		   ecore_hw_get_resc_name(p_in_params->res_id),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_in_params->resc_max_val);
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	*p_mcp_param = mb_params.mcp_param;
-
-	OSAL_MEMCPY(p_resc_info, &union_data.resource, sizeof(*p_resc_info));
+	p_out_params->mcp_resp = mb_params.mcp_resp;
+	p_out_params->mcp_param = mb_params.mcp_param;
+	p_out_params->resc_num = p_mfw_resc_info->size;
+	p_out_params->resc_start = p_mfw_resc_info->offset;
+	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
+	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
+	p_out_params->flags = p_mfw_resc_info->flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x,"
-		   " offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
-		   *p_mcp_param, p_resc_info->res_id, p_resc_info->size,
-		   p_resc_info->offset, p_resc_info->vf_size,
-		   p_resc_info->vf_offset, p_resc_info->flags);
+		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_out_params->resc_num, p_out_params->resc_start,
+		   p_out_params->vf_resc_num, p_out_params->vf_resc_start,
+		   p_out_params->flags);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_SET_RESOURCE_VALUE_MSG;
+	in_params.res_id = res_id;
+	in_params.resc_max_val = resc_max_val;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	in_params.res_id = res_id;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	if (*p_mcp_resp == FW_MSG_CODE_RESOURCE_ALLOC_OK) {
+		*p_resc_num = out_params.resc_num;
+		*p_resc_start = out_params.resc_start;
+	}
 
 	return ECORE_SUCCESS;
 }
@@ -2832,8 +2995,11 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The resource command is unsupported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
 		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
@@ -2847,36 +3013,35 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner)
+enum _ecore_status_t
+__ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_lock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	switch (timeout) {
+	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
 		break;
 	}
 
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
-		   param, timeout, opcode, resource_num);
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
+		   param, p_params->timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2885,19 +3050,20 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Analyze the response */
-	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	p_params->owner = ECORE_MFW_GET_FIELD(mcp_param,
+					     RESOURCE_CMD_RSP_OWNER);
 	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
-		   mcp_param, opcode, *p_owner);
+		   mcp_param, opcode, p_params->owner);
 
 	switch (opcode) {
 	case RESOURCE_OPCODE_GNT:
-		*p_granted = true;
+		p_params->b_granted = true;
 		break;
 	case RESOURCE_OPCODE_BUSY:
-		*p_granted = false;
+		p_params->b_granted = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
@@ -2909,23 +3075,54 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released)
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params)
+{
+	u32 retry_cnt = 0;
+	enum _ecore_status_t rc;
+
+	do {
+		/* No need for an interval before the first iteration */
+		if (retry_cnt) {
+			if (p_params->sleep_b4_retry) {
+				u16 retry_interval_in_ms =
+					DIV_ROUND_UP(p_params->retry_interval,
+						     1000);
+
+				OSAL_MSLEEP(retry_interval_in_ms);
+			} else {
+				OSAL_UDELAY(p_params->retry_interval);
+			}
+		}
+
+		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		if (p_params->b_granted)
+			break;
+	} while (retry_cnt++ < p_params->retry_num);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
-		       : RESOURCE_OPCODE_RELEASE;
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	opcode = p_params->b_force ? RESOURCE_OPCODE_FORCE_RELEASE
+				   : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
-		   param, opcode, resource_num);
+		   "Resource unlock request: param 0x%08x [opcode %d, resource %d]\n",
+		   param, opcode, p_params->resource);
 
 	/* Attempt to release the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2943,14 +3140,14 @@ enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
 	switch (opcode) {
 	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
 		DP_INFO(p_hwfn,
-			"Resource unlock request for an already released resource [resc_num %d]\n",
-			resource_num);
+			"Resource unlock request for an already released resource [%d]\n",
+			p_params->resource);
 		/* Fallthrough */
 	case RESOURCE_OPCODE_RELEASED:
-		*p_released = true;
+		p_params->b_released = true;
 		break;
 	case RESOURCE_OPCODE_WRONG_OWNER:
-		*p_released = false;
+		p_params->b_released = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 4138a12..f5dac9d 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -11,6 +11,7 @@
 
 #include "bcm_osal.h"
 #include "mcp_public.h"
+#include "ecore.h"
 #include "ecore_mcp_api.h"
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
@@ -339,20 +340,37 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt);
 
 /**
+ * @brief - Sets the MFW's max value for the given resource
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param res_id
+ *  @param resc_max_val
+ *  @param p_mcp_resp
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp);
+
+/**
  * @brief - Gets the MFW allocation info for the given resource
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param p_resc_info
+ *  @param res_id
  *  @param p_mcp_resp
- *  @param p_mcp_param
+ *  @param p_resc_num
+ *  @param p_resc_start
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param);
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start);
 
 /**
  * @brief - Initiates PF FLR
@@ -365,45 +383,79 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_MIN_VAL	RESOURCE_DUMP /* 0 */
+#define ECORE_MCP_RESC_LOCK_MAX_VAL	31
+
+enum ecore_resc_lock {
+	ECORE_RESC_LOCK_DBG_DUMP = ECORE_MCP_RESC_LOCK_MIN_VAL,
+	/* Locks that the MFW is aware of should be added here downwards */
+
+	/* Ecore only locks should be added here upwards */
+	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL
+};
+
+struct ecore_resc_lock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Lock timeout value in seconds [default, none or 1..254] */
+	u8 timeout;
 #define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
 #define ECORE_MCP_RESC_LOCK_TO_NONE	255
 
+	/* Number of times to retry locking */
+	u8 retry_num;
+
+	/* The interval in usec between retries */
+	u16 retry_interval;
+
+	/* Use sleep or delay between retries */
+	bool sleep_b4_retry;
+
+	/* Will be set as true if the resource is free and granted */
+	bool b_granted;
+
+	/* Will be filled with the resource owner.
+	 * [0..15 = PF0-15, 16 = MFW, 17 = diag over serial]
+	 */
+	u8 owner;
+};
+
 /**
  * @brief Acquires MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num - valid values are 0..31
- *  @param timeout - lock timeout value in seconds
- *                   (1..254, '0' - default value, '255' - no timeout).
- *  @param p_granted - will be filled as true if the resource is free and
- *                     granted, or false if it is busy.
- *  @param p_owner - A pointer to a variable to be filled with the resource
- *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner);
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params);
+
+struct ecore_resc_unlock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Allow releasing a resource even if it belongs to another PF */
+	bool b_force;
+
+	/* Will be set as true if the resource is released */
+	bool b_released;
+};
 
 /**
  * @brief Releases MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num
- *  @param force -  allows to release a reeource even if belongs to another PF
- *  @param p_released - will be filled as true if the resource is released (or
- *			has been already released), and false if the resource is
- *			acquired by another PF and the `force' flag was not set.
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released);
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params);
 
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 49/61] net/qede/base: add return code check
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (47 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 48/61] net/qede/base: Add support to set max values of soft resources Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 50/61] net/qede/base: zero out MFW mailbox data Rasesh Mody
                   ` (12 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a check of the return code of ecore_mcp_cmd_and_union() in
ecore_mcp_send_protocol_stats()

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 46f2fd0..9e56065 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1238,6 +1238,7 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	u32 hsi_param;
+	enum _ecore_status_t rc;
 
 	switch (type) {
 	case MFW_DRV_MSG_GET_LAN_STATS:
@@ -1256,7 +1257,9 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	mb_params.param = hsi_param;
 	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
 	mb_params.p_data_src = &union_data;
-	ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
 }
 
 static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 50/61] net/qede/base: zero out MFW mailbox data
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (48 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 49/61] net/qede/base: add return code check Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 51/61] net/qede/base: move code bits Rasesh Mody
                   ` (11 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Zero the whole union data of the Management FW mailbox before copying
the actual union member
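
A standalone model of the new copy discipline in
ecore_mcp_cmd_and_union() (the union and sizes below are illustrative,
not the real MFW layout):

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative stand-in for the MFW mailbox union. */
    union mbox_data {
        uint8_t  raw[32];
        uint32_t words[8];
    };

    /* Mirrors the patched flow: reject oversized requests, then zero
     * the full union so the unused tail never carries stale bytes to
     * the firmware, then copy only the caller-provided size. */
    static int fill_mbox(union mbox_data *dst, const void *src,
                         size_t src_size)
    {
        if (src_size > sizeof(*dst))
            return -1;    /* matches the new invalid-size path */

        memset(dst, 0, sizeof(*dst));
        if (src && src_size)
            memcpy(dst, src, src_size);
        return 0;
    }

    int main(void)
    {
        union mbox_data mbox;
        uint32_t stats[2] = { 1, 2 };

        return fill_mbox(&mbox, stats, sizeof(stats));
    }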

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    4 +-
 drivers/net/qede/base/ecore_mcp.c |  296 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_mcp.h |   19 ++-
 3 files changed, 181 insertions(+), 138 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index fb245ec..7baa1b0 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2317,9 +2317,7 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
 			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
 		}
 
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_DONE,
-				   0, &unload_resp, &unload_param);
+		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn,
 				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 9e56065..a2ff6c2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -365,6 +365,7 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct ecore_mcp_mb_params *p_mb_params)
 {
+	union drv_union_data union_data;
 	u32 union_data_addr;
 	enum _ecore_status_t rc;
 
@@ -374,6 +375,15 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
+	if (p_mb_params->data_src_size > sizeof(union_data) ||
+	    p_mb_params->data_dst_size > sizeof(union_data)) {
+		DP_ERR(p_hwfn,
+		       "The provided size is larger than the union data size [src_size %u, dst_size %u, union_data_size %zu]\n",
+		       p_mb_params->data_src_size, p_mb_params->data_dst_size,
+		       sizeof(union_data));
+		return ECORE_INVAL;
+	}
+
 	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
 			  OFFSETOF(struct public_drv_mb, union_data);
 
@@ -384,19 +394,21 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_mb_params->p_data_src != OSAL_NULL)
-		ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
-				p_mb_params->p_data_src,
-				sizeof(*p_mb_params->p_data_src));
+	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
+	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
+		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
+			    p_mb_params->data_src_size);
+	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
+			sizeof(union_data));
 
 	rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
 			      p_mb_params->param, &p_mb_params->mcp_resp,
 			      &p_mb_params->mcp_param);
 
-	if (p_mb_params->p_data_dst != OSAL_NULL)
+	if (p_mb_params->p_data_dst != OSAL_NULL &&
+	    p_mb_params->data_dst_size)
 		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
-				  union_data_addr,
-				  sizeof(*p_mb_params->p_data_dst));
+				  union_data_addr, p_mb_params->data_dst_size);
 
 	ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
 
@@ -444,14 +456,13 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 i_txn_size, u32 *i_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = i_buf;
+	mb_params.data_src_size = (u8)i_txn_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -471,13 +482,17 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_txn_size, u32 *o_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = raw_data;
+
+	/* Use the maximal value since the actual one is part of the response */
+	mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN;
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -486,7 +501,7 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_param = mb_params.mcp_param;
 
 	*o_txn_size = *o_mcp_param;
-	OSAL_MEMCPY(o_buf, (u32 *)&union_data.raw_data, *o_txn_size);
+	OSAL_MEMCPY(o_buf, raw_data, *o_txn_size);
 
 	return ECORE_SUCCESS;
 }
@@ -606,26 +621,23 @@ struct ecore_load_req_out_params {
 		     struct ecore_load_req_in_params *p_in_params,
 		     struct ecore_load_req_out_params *p_out_params)
 {
-	union drv_union_data union_data_src, union_data_dst;
 	struct ecore_mcp_mb_params mb_params;
-	struct load_req_stc *p_load_req;
-	struct load_rsp_stc *p_load_rsp;
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	u32 hsi_ver;
 	enum _ecore_status_t rc;
 
-	p_load_req = &union_data_src.load_req;
-	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
-	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
-	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
-	p_load_req->fw_ver = p_in_params->fw_ver;
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+	OSAL_MEM_ZERO(&load_req, sizeof(load_req));
+	load_req.drv_ver_0 = p_in_params->drv_ver_0;
+	load_req.drv_ver_1 = p_in_params->drv_ver_1;
+	load_req.fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_ROLE,
 			    p_in_params->drv_role);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
 			    p_in_params->timeout_val);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
-			    p_in_params->force_cmd);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
-			    p_in_params->avoid_eng_reset);
+
+	/* @DPDK */
+	load_req.misc0 |= LOAD_REQ_FORCE_NONE;
 
 	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
 		  DRV_ID_MCP_HSI_VER_CURRENT :
@@ -634,8 +646,10 @@ struct ecore_load_req_out_params {
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
-	mb_params.p_data_src = &union_data_src;
-	mb_params.p_data_dst = &union_data_dst;
+	mb_params.p_data_src = &load_req;
+	mb_params.data_src_size = sizeof(load_req);
+	mb_params.p_data_dst = &load_rsp;
+	mb_params.data_dst_size = sizeof(load_rsp);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
@@ -648,15 +662,13 @@ struct ecore_load_req_out_params {
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
-			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
-			   p_load_req->fw_ver, p_load_req->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   load_req.drv_ver_0, load_req.drv_ver_1,
+			   load_req.fw_ver, load_req.misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_LOCK_TO),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_FORCE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_FLAGS0));
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
@@ -672,28 +684,24 @@ struct ecore_load_req_out_params {
 
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
 	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
-		p_load_rsp = &union_data_dst.load_rsp;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
-			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
-			   p_load_rsp->fw_ver, p_load_rsp->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_HSI),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+			   load_rsp.drv_ver_0, load_rsp.drv_ver_1,
+			   load_rsp.fw_ver, load_rsp.misc0,
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0,
 					       LOAD_RSP_FLAGS0));
 
-		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
-		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
-		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_ver_0 = load_rsp.drv_ver_0;
+		p_out_params->exist_drv_ver_1 = load_rsp.drv_ver_1;
+		p_out_params->exist_fw_ver = load_rsp.fw_ver;
 		p_out_params->exist_drv_role =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
 		p_out_params->mfw_hsi_ver =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
 		p_out_params->drv_exists =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					    LOAD_RSP_FLAGS0) &
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
 			LOAD_RSP_FLAGS0_DRV_EXISTS;
 	}
 
@@ -884,6 +892,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac wol_mac;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE;
+
+	return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+}
+
 static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
@@ -925,7 +945,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
 				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 	int i;
 
@@ -936,8 +955,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
-	OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = vfs_to_ack;
+	mb_params.data_src_size = VF_MAX_STATIC / 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1123,8 +1142,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
-	struct eth_phy_cfg *p_phy_cfg;
+	struct eth_phy_cfg phy_cfg;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cmd;
 
@@ -1134,30 +1152,30 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Set the shmem configuration according to params */
-	p_phy_cfg = &union_data.drv_phy_cfg;
-	OSAL_MEMSET(p_phy_cfg, 0, sizeof(*p_phy_cfg));
+	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
 	cmd = b_up ? DRV_MSG_CODE_INIT_PHY : DRV_MSG_CODE_LINK_RESET;
 	if (!params->speed.autoneg)
-		p_phy_cfg->speed = params->speed.forced_speed;
-	p_phy_cfg->pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
-	p_phy_cfg->pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
-	p_phy_cfg->pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
-	p_phy_cfg->adv_speed = params->speed.advertised_speeds;
-	p_phy_cfg->loopback_mode = params->loopback_mode;
+		phy_cfg.speed = params->speed.forced_speed;
+	phy_cfg.pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
+	phy_cfg.pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
+	phy_cfg.pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
+	phy_cfg.adv_speed = params->speed.advertised_speeds;
+	phy_cfg.loopback_mode = params->loopback_mode;
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Configuring Link: Speed 0x%08x, Pause 0x%08x,"
 			   " adv_speed 0x%08x, loopback 0x%08x\n",
-			   p_phy_cfg->speed, p_phy_cfg->pause,
-			   p_phy_cfg->adv_speed, p_phy_cfg->loopback_mode);
+			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
+			   phy_cfg.loopback_mode);
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &phy_cfg;
+	mb_params.data_src_size = sizeof(phy_cfg);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
@@ -1236,7 +1254,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	enum ecore_mcp_protocol_type stats_type;
 	union ecore_mcp_protocol_stats stats;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 hsi_param;
 	enum _ecore_status_t rc;
 
@@ -1255,8 +1272,8 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_STATS;
 	mb_params.param = hsi_param;
-	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &stats;
+	mb_params.data_src_size = sizeof(stats);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
@@ -1354,28 +1371,38 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
 	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
 }
 
+struct ecore_mdump_cmd_params {
+	u32 cmd;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
+	u32 mcp_resp;
+};
+
 static enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		    u32 mdump_cmd, union drv_union_data *p_data_src,
-		    union drv_union_data *p_data_dst, u32 *p_mcp_resp)
+		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_MDUMP_CMD;
-	mb_params.param = mdump_cmd;
-	mb_params.p_data_src = p_data_src;
-	mb_params.p_data_dst = p_data_dst;
+	mb_params.param = p_mdump_cmd_params->cmd;
+	mb_params.p_data_src = p_mdump_cmd_params->p_data_src;
+	mb_params.data_src_size = p_mdump_cmd_params->data_src_size;
+	mb_params.p_data_dst = p_mdump_cmd_params->p_data_dst;
+	mb_params.data_dst_size = p_mdump_cmd_params->data_dst_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	if (*p_mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
+	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_NOTICE(p_hwfn, false,
 			  "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
-			  mdump_cmd);
+			  p_mdump_cmd_params->cmd);
 		rc = ECORE_INVAL;
 	}
 
@@ -1385,62 +1412,68 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t ecore_mcp_mdump_ack(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_ACK;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_ACK,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 epoch)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	OSAL_MEMCPY(&union_data.raw_data, &epoch, sizeof(epoch));
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_SET_VALUES;
+	mdump_cmd_params.p_data_src = &epoch;
+	mdump_cmd_params.data_src_size = sizeof(epoch);
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_SET_VALUES,
-				   &union_data, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	p_hwfn->p_dev->mdump_en = true;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_TRIGGER,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static enum _ecore_status_t
 ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct mdump_config_stc *p_mdump_config)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_GET_CONFIG,
-				 OSAL_NULL, &union_data, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GET_CONFIG;
+	mdump_cmd_params.p_data_dst = p_mdump_config;
+	mdump_cmd_params.data_dst_size = sizeof(*p_mdump_config);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The mdump command is not supported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
-	if (mcp_resp != FW_MSG_CODE_OK) {
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
-			  mcp_resp);
+			  mdump_cmd_params.mcp_resp);
 		rc = ECORE_UNKNOWN_ERROR;
 	}
 
-	OSAL_MEMCPY(p_mdump_config, &union_data.mdump_config,
-		    sizeof(*p_mdump_config));
-
 	return rc;
 }
 
@@ -1490,10 +1523,12 @@ enum _ecore_status_t
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_CLEAR_LOGS,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
@@ -2002,9 +2037,8 @@ enum _ecore_status_t
 ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mcp_drv_version *p_ver)
 {
-	struct drv_version_stc *p_drv_version;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct drv_version_stc drv_version;
 	u32 num_words, i;
 	void *p_name;
 	OSAL_BE32 val;
@@ -2015,19 +2049,20 @@ enum _ecore_status_t
 		return ECORE_SUCCESS;
 #endif
 
-	p_drv_version = &union_data.drv_version;
-	p_drv_version->version = p_ver->version;
+	OSAL_MEM_ZERO(&drv_version, sizeof(drv_version));
+	drv_version.version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
 		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
-		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+		*(u32 *)&drv_version.name[i * sizeof(u32)] = val;
 	}
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_SET_VERSION;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &drv_version;
+	mb_params.data_src_size = sizeof(drv_version);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
@@ -2696,28 +2731,25 @@ enum _ecore_status_t
 			       struct ecore_temperature_info *p_temp_info)
 {
 	struct ecore_temperature_sensor *p_temp_sensor;
-	struct temperature_status_stc *p_mfw_temp_info;
+	struct temperature_status_stc mfw_temp_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 val;
 	enum _ecore_status_t rc;
 	u8 i;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = &mfw_temp_info;
+	mb_params.data_dst_size = sizeof(mfw_temp_info);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_mfw_temp_info = &union_data.temp_info;
-
 	OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
-	p_temp_info->num_sensors = OSAL_MIN_T(u32,
-					      p_mfw_temp_info->num_of_sensors,
+	p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
 					      ECORE_MAX_NUM_OF_SENSORS);
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
-		val = p_mfw_temp_info->sensor[i];
+		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
 		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
 						 SENSOR_LOCATION_SHIFT;
@@ -2855,16 +2887,14 @@ struct ecore_resc_alloc_out_params {
 			      struct ecore_resc_alloc_in_params *p_in_params,
 			      struct ecore_resc_alloc_out_params *p_out_params)
 {
-	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct resource_info mfw_resc_info;
 	enum _ecore_status_t rc;
 
-	p_mfw_resc_info = &union_data.resource;
-	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+	OSAL_MEM_ZERO(&mfw_resc_info, sizeof(mfw_resc_info));
 
-	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
-	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+	mfw_resc_info.res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (mfw_resc_info.res_id == RESOURCE_NUM_INVALID) {
 		DP_ERR(p_hwfn,
 		       "Failed to match resource %d [%s] with the MFW resources\n",
 		       p_in_params->res_id,
@@ -2874,7 +2904,7 @@ struct ecore_resc_alloc_out_params {
 
 	switch (p_in_params->cmd) {
 	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
-		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		mfw_resc_info.size = p_in_params->resc_max_val;
 		/* Fallthrough */
 	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
 		break;
@@ -2887,8 +2917,10 @@ struct ecore_resc_alloc_out_params {
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	mb_params.p_data_src = &union_data;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_src = &mfw_resc_info;
+	mb_params.data_src_size = sizeof(mfw_resc_info);
+	mb_params.p_data_dst = mb_params.p_data_src;
+	mb_params.data_dst_size = mb_params.data_src_size;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
@@ -2906,11 +2938,11 @@ struct ecore_resc_alloc_out_params {
 
 	p_out_params->mcp_resp = mb_params.mcp_resp;
 	p_out_params->mcp_param = mb_params.mcp_param;
-	p_out_params->resc_num = p_mfw_resc_info->size;
-	p_out_params->resc_start = p_mfw_resc_info->offset;
-	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
-	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
-	p_out_params->flags = p_mfw_resc_info->flags;
+	p_out_params->resc_num = mfw_resc_info.size;
+	p_out_params->resc_start = mfw_resc_info.offset;
+	p_out_params->vf_resc_num = mfw_resc_info.vf_size;
+	p_out_params->vf_resc_start = mfw_resc_info.vf_offset;
+	p_out_params->flags = mfw_resc_info.flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index f5dac9d..350d8a2 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -65,8 +65,10 @@ struct ecore_mcp_info {
 struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
-	union drv_union_data *p_data_src;
-	union drv_union_data *p_data_dst;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
 };
@@ -159,7 +161,7 @@ struct ecore_load_req_params {
  *        returns whether this PF is the first on the engine/port or function.
  *
  * @param p_hwfn
- * @param p_pt
+ * @param p_ptt
  * @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
@@ -169,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_DONE message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt);
+
+/**
  * @brief Read the MFW mailbox into Current buffer.
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
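
With ecore_mcp_mb_params now carrying a raw buffer pointer plus an
explicit size for each direction, callers no longer stage data through
union drv_union_data. A minimal sketch of the resulting calling
pattern, assuming ecore_mcp_cmd_and_union() remains the file-internal
dispatch helper in ecore_mcp.c; the wrapper itself is hypothetical:

  /* Hypothetical wrapper showing the generic-buffer mailbox pattern */
  static enum _ecore_status_t
  ecore_mcp_send_buf_sketch(struct ecore_hwfn *p_hwfn,
                            struct ecore_ptt *p_ptt, u32 cmd,
                            void *p_buf, u8 buf_size, u32 *p_resp)
  {
          struct ecore_mcp_mb_params mb_params;
          enum _ecore_status_t rc;

          OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
          mb_params.cmd = cmd;
          mb_params.p_data_src = p_buf;       /* any struct, no union */
          mb_params.data_src_size = buf_size; /* copy size is explicit */
          rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
          if (rc != ECORE_SUCCESS)
                  return rc;

          *p_resp = mb_params.mcp_resp;
          return ECORE_SUCCESS;
  }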

* [PATCH 51/61] net/qede/base: move code bits
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (49 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 50/61] net/qede/base: zero out MFW mailbox data Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 52/61] net/qede/base: add PF parameter Rasesh Mody
                   ` (10 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_vf.h |   41 +++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 228bbf0..f471388 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -38,17 +38,15 @@ struct ecore_vf_iov {
 	bool b_pre_fp_hsi;
 };
 
-#ifdef CONFIG_ECORE_SRIOV
-/**
- * @brief hw preparation for VF
- * sends ACQUIRE message
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 /**
  * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
  *	Coalesce value '0' will omit the configuration.
@@ -56,13 +54,24 @@ struct ecore_vf_iov {
  *	@param p_hwfn
  *	@param rx_coal - coalesce value in micro second for rx queue
  *	@param tx_coal - coalesce value in micro second for tx queue
- *	@param qid
+ *	@param queue_cid
  *
  **/
 enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 rx_coal, u16 tx_coal,
 					      struct ecore_queue_cid *p_cid);
 
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief hw preparation for VF
+ *	sends ACQUIRE message
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
@@ -277,15 +286,5 @@ enum _ecore_status_t
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
-
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
-
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 52/61] net/qede/base: add PF parameter
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (50 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 51/61] net/qede/base: move code bits Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 53/61] net/qede/base: allow PMD to control vport-id and rss-eng-id Rasesh Mody
                   ` (9 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a common enum to pf_params for selecting the RDMA protocol
(RoCE or iWARP).

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
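For illustration, a minimal sketch of how a protocol driver could pick
the RDMA flavor when filling its PF parameters; the rdma_pf_params
member name and the surrounding setup are assumptions, not part of
this patch:

  struct ecore_pf_params pf_params;

  OSAL_MEM_ZERO(&pf_params, sizeof(pf_params));
  /* Request RoCE explicitly instead of relying on the default */
  pf_params.rdma_pf_params.rdma_protocol = ECORE_RDMA_PROTOCOL_ROCE;
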
 drivers/net/qede/base/ecore_cxt.c      |    1 +
 drivers/net/qede/base/ecore_proto_if.h |    7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index bf68f86..837a19f 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -19,6 +19,7 @@
 #include "ecore_hw.h"
 #include "ecore_dev_api.h"
 #include "ecore_sriov.h"
+#include "ecore_mcp.h"
 
 /* Max number of connection types in HW (DQ/CDU etc.) */
 #define MAX_CONN_TYPES		PROTOCOLID_COMMON
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index ed24019..0ac153f 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -63,6 +63,12 @@ struct ecore_iscsi_pf_params {
 	u8		bdq_pbl_num_entries[2];
 };
 
+enum ecore_rdma_protocol {
+	ECORE_RDMA_PROTOCOL_DEFAULT,
+	ECORE_RDMA_PROTOCOL_ROCE,
+	ECORE_RDMA_PROTOCOL_IWARP,
+};
+
 struct ecore_rdma_pf_params {
 	/* Supplied to ECORE during resource allocation (may affect the ILT and
 	 * the doorbell BAR).
@@ -79,6 +85,7 @@ struct ecore_rdma_pf_params {
 
 	/* TCP port number used for the iwarp traffic */
 	u16		iwarp_port;
+	enum ecore_rdma_protocol rdma_protocol;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 53/61] net/qede/base: allow PMD to control vport-id and rss-eng-id
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (51 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 52/61] net/qede/base: add PF parameter Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 54/61] net/qede/base: add udp ports in bulletin board message Rasesh Mody
                   ` (8 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Let the PMD control the vport-id and rss-eng-id of a given VF during
initialization.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
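A minimal sketch of a PF driver using the new fields when initializing
a VF; taking ecore_iov_init_hw_for_vf() as the consumer of these
params is an assumption here. Non-zero IDs avoid the vport0/RSS-eng0
warnings added below, since those resources are normally kept by the
PF itself:

  struct ecore_iov_vf_init_params init_params;
  enum _ecore_status_t rc;

  OSAL_MEM_ZERO(&init_params, sizeof(init_params));
  init_params.rel_vf_id = rel_vf_id;
  init_params.num_queues = num_queues;
  /* Skip vport0/RSS-eng0, which the PF uses for itself */
  init_params.vport_id = rel_vf_id + 1;
  init_params.rss_eng_id = rel_vf_id + 1;

  rc = ecore_iov_init_hw_for_vf(p_hwfn, p_ptt, &init_params);
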
 drivers/net/qede/base/ecore_iov_api.h |   15 ++++-------
 drivers/net/qede/base/ecore_sriov.c   |   46 +++++++++++++++++++++------------
 drivers/net/qede/base/ecore_sriov.h   |    2 +-
 3 files changed, 35 insertions(+), 28 deletions(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index b8dc47b..6a0fc5a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -103,6 +103,11 @@ struct ecore_iov_vf_init_params {
 	 */
 	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+
+	u8 vport_id;
+
+	/* Should be set in case RSS is going to be used for VF */
+	u8 rss_eng_id;
 };
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
@@ -417,16 +422,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 				  u16 *opaque_fid);
 
 /**
- * @brief Get VFs VPORT id.
- *
- * @param p_hwfn
- * @param vfid
- * @param vport id
- */
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vport_id);
-
-/**
  * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
  *        and configures FW/HW to support the configuration.
  *        Setting of pvid 0 would clear the feature.
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 4951873..939ace5 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -426,8 +426,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		return;
 	}
 
-	p_iov_info->base_vport_id = 1;	/* @@@TBD resource allocation */
-
 	for (idx = 0; idx < p_iov->total_vfs; idx++) {
 		struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
 		u32 concrete;
@@ -456,8 +454,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
 		    (vf->abs_vf_id << 8);
-		/* @@TBD MichalK - add base vport_id of VFs to equation */
-		vf->vport_id = p_iov_info->base_vport_id + idx;
 
 		vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 		vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -1022,6 +1018,34 @@ enum _ecore_status_t
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested vport/rss */
+	if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n",
+			  p_params->rel_vf_id, p_params->vport_id);
+		return ECORE_INVAL;
+	}
+
+	if ((p_params->num_queues > 1) &&
+	    (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n",
+			  p_params->rel_vf_id, p_params->rss_eng_id);
+		return ECORE_INVAL;
+	}
+
+	/* TODO - remove this once we get confidence of change */
+	if (!p_params->vport_id) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses vport0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	if ((!p_params->rss_eng_id) && (p_params->num_queues > 1)) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses RSS_eng0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	vf->vport_id = p_params->vport_id;
+	vf->rss_eng_id = p_params->rss_eng_id;
+
 	/* Perform sanity checking on the requested queue_id */
 	for (i = 0; i < p_params->num_queues; i++) {
 		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
@@ -2755,7 +2779,7 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 		VFPF_UPDATE_RSS_KEY_FLAG);
 
 	p_rss->rss_enable = p_rss_tlv->rss_enable;
-	p_rss->rss_eng_id = vf->relative_vf_id + 1;
+	p_rss->rss_eng_id = vf->rss_eng_id;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
@@ -3977,18 +4001,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 	*opaque_fid = vf_info->opaque_fid;
 }
 
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vort_id)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	*p_vort_id = vf_info->vport_id;
-}
-
 void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 					u16 pvid, int vfid)
 {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index d32f931..66e9271 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -111,6 +111,7 @@ struct ecore_vf_info {
 	u16			mtu;
 
 	u8			vport_id;
+	u8			rss_eng_id;
 	u8			relative_vf_id;
 	u8			abs_vf_id;
 #define ECORE_VF_ABS_ID(p_hwfn, p_vf)	(ECORE_PATH_ID(p_hwfn) ? \
@@ -155,7 +156,6 @@ struct ecore_pf_iov {
 	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
-	u16			base_vport_id;
 
 #ifndef REMOVE_DBG
 	/* This doesn't serve anything functionally, but it makes windows
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 54/61] net/qede/base: add udp ports in bulletin board message
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (52 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 53/61] net/qede/base: allow PMD to control vport-id and rss-eng-id Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 55/61] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
                   ` (7 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the VXLAN and GENEVE UDP ports to the bulletin board message, so
that a VF can learn the tunnel ports configured on its PF.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
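On the VF side, the ports can then be read back from the bulletin
shadow through the accessor added below; a minimal usage sketch, where
the PMD hook is hypothetical:

  u16 vxlan_port = 0, geneve_port = 0;

  ecore_vf_bulletin_get_udp_ports(p_hwfn, &vxlan_port, &geneve_port);
  if (vxlan_port)
          /* hypothetical hook: reprogram VF Rx tunnel classification */
          qede_pmd_set_vxlan_port(vxlan_port);
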
 drivers/net/qede/base/ecore_iov_api.h |    2 ++
 drivers/net/qede/base/ecore_sriov.c   |   33 +++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c      |   12 ++++++++++++
 drivers/net/qede/base/ecore_vf_api.h  |    2 ++
 drivers/net/qede/base/ecore_vfpf_if.h |    5 ++++-
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 6a0fc5a..870c57e 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -716,6 +716,8 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
+				      u16 vxlan_port, u16 geneve_port);
 #endif /* CONFIG_ECORE_SRIOV */
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 939ace5..dc01c6d 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2256,6 +2256,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	bool b_update_required = false;
 	struct ecore_tunnel_info tunn;
 	u16 tunn_feature_mask = 0;
+	int i;
 
 	mbx->offset = (u8 *)mbx->reply_virt;
 
@@ -2303,11 +2304,20 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 
 	/* If ECORE client is willing to update anything ? */
 	if (b_update_required) {
+		u16 geneve_port;
+
 		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						 ECORE_SPQ_MODE_EBLOCK,
 						 OSAL_NULL);
 		if (rc != ECORE_SUCCESS)
 			status = PFVF_STATUS_FAILURE;
+
+		geneve_port = p_tun->geneve_port.port;
+		ecore_for_each_vf(p_hwfn, i) {
+			ecore_iov_bulletin_set_udp_ports(p_hwfn, i,
+							 p_tun->vxlan_port.port,
+							 geneve_port);
+		}
 	}
 
 send_resp:
@@ -4031,6 +4041,29 @@ void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
 }
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
+				      int vfid, u16 vxlan_port, u16 geneve_port)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set udp ports, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	if (vf_info->b_malicious) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can not set udp ports to malicious VF [%d]\n",
+			   vfid);
+		return;
+	}
+
+	vf_info->bulletin.p_virt->vxlan_udp_port = vxlan_port;
+	vf_info->bulletin.p_virt->geneve_udp_port = geneve_port;
+}
+
 bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 1e3857b..c6743ed 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1653,6 +1653,18 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 	return true;
 }
 
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port,
+				     u16 *p_geneve_port)
+{
+	struct ecore_bulletin_content *p_bulletin;
+
+	p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+
+	*p_vxlan_port = p_bulletin->vxlan_udp_port;
+	*p_geneve_port = p_bulletin->geneve_udp_port;
+}
+
 bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
 {
 	struct ecore_bulletin_content *bulletin;
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 77b93ff..a6e5f32 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -152,5 +152,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 			     u16 *fw_minor,
 			     u16 *fw_rev,
 			     u16 *fw_eng);
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port, u16 *p_geneve_port);
 #endif
 #endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e0b63bf..6618442 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -554,9 +554,12 @@ struct ecore_bulletin_content {
 	u8 pfc_enabled;
 	u8 partner_tx_flow_ctrl_en;
 	u8 partner_rx_flow_ctrl_en;
+
 	u8 partner_adv_pause;
 	u8 sfp_tx_fault;
-	u8 padding4[6];
+	u16 vxlan_udp_port;
+	u16 geneve_udp_port;
+	u8 padding4[2];
 
 	u32 speed;
 	u32 partner_adv_speed;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 55/61] net/qede/base: prevent DMAE transactions during recovery
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (53 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 54/61] net/qede/base: add udp ports in bulletin board message Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 56/61] net/qede/base: add multi-Txq support on same queue-zone for VFs Rasesh Mody
                   ` (6 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent DMA engine (DMAE) transactions during the recovery phase.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
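The guard keys off the existing recov_in_prog flag; once a recovery
flow raises it, DMAE requests complete as silent no-ops. A sketch of
the assumed usage (the surrounding recovery handler is hypothetical):

  /* Hypothetical recovery entry point */
  p_hwfn->p_dev->recov_in_prog = true;

  /* From here on, DMAE-based copies log at ECORE_MSG_HW verbosity and
   * return ECORE_SUCCESS without touching the DMA engine, so teardown
   * flows that rely on them can still complete cleanly.
   */
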
 drivers/net/qede/base/ecore_hw.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 2c47f6b..9e65ddf 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -774,6 +774,17 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
 	enum _ecore_status_t ecore_status = ECORE_SUCCESS;
 	u32 offset = 0;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
+			   src_addr, src_type, dst_addr, dst_type,
+			   size_in_dwords);
+		/* Return success to let the flow to be completed successfully
+		 * w/o any error handling.
+		 */
+		return ECORE_SUCCESS;
+	}
+
 	ecore_dmae_opcode(p_hwfn,
 			  (src_type == ECORE_DMAE_ADDRESS_GRC),
 			  (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 56/61] net/qede/base: add multi-Txq support on same queue-zone for VFs
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (54 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 55/61] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 57/61] net/qede/base: fix race cond between MFW attentions and PF stop Rasesh Mody
                   ` (5 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

A step toward multi-Txq support on the same queue-zone for VFs.

This change takes care of:

 - VFs assume a single CID per queue, where queue X receives CID X.
   Switch to a model similar to that of the PF, i.e., use different
   CIDs for Rx/Tx and a mapping to acquire/release them (see the
   sketch below). Each VF will currently have 32 CIDs available
   [for its possible 16 Rx and 16 Tx queues].

 - To retain the same interface for PFs/VFs when initializing queues,
   the base driver has to retain a unique number for each queue that
   would be communicated in some extended TLV [the current TLV
   interface allows the PF to send only the queue-id]. The new TLV
   isn't part of the current change, but the base driver now starts
   adding such unique keys internally to queue_cids. This also forces
   us to start having alloc/setup/free stages for L2 [we've refrained
   from doing so until now].
   The limit would be no more than 64 queues per qzone [this could be
   changed if needed, but hopefully no one needs so many queues].

 - In IOV, add infrastructure for up to 64 qids per qzone, although
   for the moment '0' is hard-coded for Rx and '1' for Tx [since the
   VF still isn't communicating via the new TLV which index to
   associate with a given queue in its queue-zone].

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
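A minimal sketch of the extended acquire/release pair this adds, with
a PF taking an L2 CID from the private map of VF 5 (PF callers pass
ECORE_CXT_PF_CID instead of a VF index):

  u32 cid;
  enum _ecore_status_t rc;

  rc = _ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &cid, 5);
  if (rc == ECORE_SUCCESS)
          _ecore_cxt_release_cid(p_hwfn, cid, 5);
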
 drivers/net/qede/base/ecore.h         |    4 +
 drivers/net/qede/base/ecore_cxt.c     |  230 +++++++++++++++-----
 drivers/net/qede/base/ecore_cxt.h     |   53 ++++-
 drivers/net/qede/base/ecore_cxt_api.h |   13 --
 drivers/net/qede/base/ecore_dev.c     |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  248 ++++++++++++++++++---
 drivers/net/qede/base/ecore_l2.h      |   46 +++-
 drivers/net/qede/base/ecore_sriov.c   |  387 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_sriov.h   |   17 +-
 drivers/net/qede/base/ecore_vf.c      |    6 +
 drivers/net/qede/base/ecore_vf_api.h  |    9 +
 11 files changed, 794 insertions(+), 243 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 7379b3f..fab8193 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -200,6 +200,7 @@ enum DP_MODULE {
 struct ecore_dma_mem;
 struct ecore_sb_sp_info;
 struct ecore_ll2_info;
+struct ecore_l2_info;
 struct ecore_igu_info;
 struct ecore_mcp_info;
 struct ecore_dcbx_info;
@@ -598,6 +599,9 @@ struct ecore_hwfn {
 	/* If one of the following is set then EDPM shouldn't be used */
 	u8				dcbx_no_edpm;
 	u8				db_bar_no_edpm;
+
+	/* L2-related */
+	struct ecore_l2_info		*p_l2_info;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 837a19f..b3d939a 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -8,6 +8,7 @@
 
 #include "bcm_osal.h"
 #include "reg_addr.h"
+#include "common_hsi.h"
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
 #include "ecore_rt_defs.h"
@@ -101,7 +102,6 @@ struct ecore_tid_seg {
 
 struct ecore_conn_type_cfg {
 	u32 cid_count;
-	u32 cid_start;
 	u32 cids_per_vf;
 	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
 };
@@ -197,6 +197,9 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
+	/* TBD - do we want this allocated to reserve space? */
+	struct ecore_cid_acquired_map
+		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1016,44 +1019,75 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
+
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			OSAL_FREE(p_hwfn->p_dev,
+				  p_mngr->acquired_vf[type][vf].cid_map);
+			p_mngr->acquired_vf[type][vf].max_count = 0;
+			p_mngr->acquired_vf[type][vf].start_cid = 0;
+		}
 	}
 }
 
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+			   u32 cid_start, u32 cid_count,
+			   struct ecore_cid_acquired_map *p_map)
+{
+	u32 size;
+
+	if (!cid_count)
+		return ECORE_SUCCESS;
+
+	size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_count, BITS_PER_MAP_WORD);
+	p_map->cid_map = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
+	if (p_map->cid_map == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	p_map->max_count = cid_count;
+	p_map->start_cid = cid_start;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Type %08x start: %08x count %08x\n",
+		   type, p_map->start_cid, p_map->max_count);
+
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 start_cid = 0;
-	u32 type;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 size;
-
-		if (cid_cnt == 0)
-			continue;
+		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
+		struct ecore_cid_acquired_map *p_map;
 
-		size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD);
-		p_mngr->acquired[type].cid_map = OSAL_ZALLOC(p_hwfn->p_dev,
-							     GFP_KERNEL, size);
-		if (!p_mngr->acquired[type].cid_map)
+		/* Handle PF maps */
+		p_map = &p_mngr->acquired[type];
+		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					       p_cfg->cid_count, p_map))
 			goto cid_map_fail;
 
-		p_mngr->acquired[type].max_count = cid_cnt;
-		p_mngr->acquired[type].start_cid = start_cid;
-
-		p_hwfn->p_cxt_mngr->conn_cfg[type].cid_start = start_cid;
+		/* Handle VF maps */
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			if (ecore_cid_map_alloc_single(p_hwfn, type,
+						       vf_start_cid,
+						       p_cfg->cids_per_vf,
+						       p_map))
+				goto cid_map_fail;
+		}
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
-			   "Type %08x start: %08x count %08x\n",
-			   type, p_mngr->acquired[type].start_cid,
-			   p_mngr->acquired[type].max_count);
-		start_cid += cid_cnt;
+		start_cid += p_cfg->cid_count;
+		vf_start_cid += p_cfg->cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
@@ -1174,18 +1208,34 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
 	int type;
+	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 i;
+		u32 vf;
+
+		p_cfg = &p_mngr->conn_cfg[type];
+		if (p_cfg->cid_count) {
+			p_map = &p_mngr->acquired[type];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 
-		if (cid_cnt == 0)
+		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (i = 0; i < DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD); i++)
-			p_mngr->acquired[type].cid_map[i] = 0;
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 	}
 }
 
@@ -1726,93 +1776,150 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
 	ecore_prs_init_pf(p_hwfn);
 }
 
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
-					   enum protocol_type type, u32 *p_cid)
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
 	u32 rel_cid;
 
-	if (type >= MAX_CONN_TYPES || !p_mngr->acquired[type].cid_map) {
+	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_mngr->acquired[type].cid_map,
-					   p_mngr->acquired[type].max_count);
+	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Determine the right map to take this CID from */
+	if (vfid == ECORE_CXT_PF_CID)
+		p_map = &p_mngr->acquired[type];
+	else
+		p_map = &p_mngr->acquired_vf[type][vfid];
 
-	if (rel_cid >= p_mngr->acquired[type].max_count) {
+	if (p_map->cid_map == OSAL_NULL) {
+		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
+		return ECORE_INVAL;
+	}
+
+	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_map->cid_map,
+					   p_map->max_count);
+
+	if (rel_cid >= p_map->max_count) {
 		DP_NOTICE(p_hwfn, false, "no CID available for protocol %d\n",
 			  type);
 		return ECORE_NORESOURCES;
 	}
 
-	OSAL_SET_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	OSAL_SET_BIT(rel_cid, p_map->cid_map);
 
-	*p_cid = rel_cid + p_mngr->acquired[type].start_cid;
+	*p_cid = rel_cid + p_map->start_cid;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Acquired cid 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   *p_cid, rel_cid, vfid, type);
 
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid)
+{
+	return _ecore_cxt_acquire_cid(p_hwfn, type, p_cid, ECORE_CXT_PF_CID);
+}
+
 static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
-					u32 cid, enum protocol_type *p_type)
+					u32 cid, u8 vfid,
+					enum protocol_type *p_type,
+					struct ecore_cid_acquired_map **pp_map)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	struct ecore_cid_acquired_map *p_map;
-	enum protocol_type p;
 	u32 rel_cid;
 
 	/* Iterate over protocols and find matching cid range */
-	for (p = 0; p < MAX_CONN_TYPES; p++) {
-		p_map = &p_mngr->acquired[p];
+	for (*p_type = 0; *p_type < MAX_CONN_TYPES; (*p_type)++) {
+		if (vfid == ECORE_CXT_PF_CID)
+			*pp_map = &p_mngr->acquired[*p_type];
+		else
+			*pp_map = &p_mngr->acquired_vf[*p_type][vfid];
 
-		if (!p_map->cid_map)
+		if (!((*pp_map)->cid_map))
 			continue;
-		if (cid >= p_map->start_cid &&
-		    cid < p_map->start_cid + p_map->max_count) {
+		if (cid >= (*pp_map)->start_cid &&
+		    cid < (*pp_map)->start_cid + (*pp_map)->max_count) {
 			break;
 		}
 	}
-	*p_type = p;
-
-	if (p == MAX_CONN_TYPES) {
-		DP_NOTICE(p_hwfn, true, "Invalid CID %d", cid);
-		return false;
+	if (*p_type == MAX_CONN_TYPES) {
+		DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid);
+		goto fail;
 	}
-	rel_cid = cid - p_map->start_cid;
-	if (!OSAL_TEST_BIT(rel_cid, p_map->cid_map)) {
-		DP_NOTICE(p_hwfn, true, "CID %d not acquired", cid);
-		return false;
+
+	rel_cid = cid - (*pp_map)->start_cid;
+	if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) {
+		DP_NOTICE(p_hwfn, true,
+			  "CID %d [vfid %02x] not acquired", cid, vfid);
+		goto fail;
 	}
+
 	return true;
+fail:
+	*p_type = MAX_CONN_TYPES;
+	*pp_map = OSAL_NULL;
+	return false;
 }
 
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
 	u32 rel_cid;
 
+	if (vfid != ECORE_CXT_PF_CID && vfid > COMMON_MAX_NUM_VFS) {
+		DP_NOTICE(p_hwfn, true,
+			  "Trying to return incorrect CID belonging to VF %02x\n",
+			  vfid);
+		return;
+	}
+
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, vfid,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return;
 
-	rel_cid = cid - p_mngr->acquired[type].start_cid;
-	OSAL_CLEAR_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	rel_cid = cid - p_map->start_cid;
+	OSAL_CLEAR_BIT(rel_cid, p_map->cid_map);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Released CID 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   cid, rel_cid, vfid, type);
+}
+
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+{
+	_ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
 }
 
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	u32 conn_cxt_size, hw_p_size, cxts_per_p, line;
 	enum protocol_type type;
 	bool b_acquired;
 
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid,
+						 ECORE_CXT_PF_CID,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return ECORE_INVAL;
@@ -1868,9 +1975,14 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			struct ecore_eth_pf_params *p_params =
 			    &p_hwfn->pf_params.eth_pf_params;
 
+			/* TODO - we probably want to add VF number to the PF
+			 * params;
+			 * As of now, allocates 16 * 2 per-VF [to retain regular
+			 * functionality].
+			 */
 			ecore_cxt_set_proto_cid_count(p_hwfn,
 				PROTOCOLID_ETH,
-				p_params->num_cons, 1);	/* FIXME VF count... */
+				p_params->num_cons, 32);
 
 			break;
 		}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 5379d7b..1128051 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -130,14 +130,53 @@ void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
+#define ECORE_CXT_PF_CID (0xff)
+
+/**
+ * @brief ecore_cxt_release - Release a cid
+ *
+ * @param p_hwfn
+ * @param cid
+ */
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
+
 /**
-* @brief ecore_cxt_release - Release a cid
-*
-* @param p_hwfn
-* @param cid
-*/
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
-			   u32 cid);
+ * @brief ecore_cxt_release - Release a cid belonging to a vf-queue
+ *
+ * @param p_hwfn
+ * @param cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ */
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
+			    u32 cid, u8 vfid);
+
+/**
+ * @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid);
+
+/**
+ * @brief _ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *                             for a vf-queue
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid);
 
 /**
  * @brief ecore_cxt_get_tid_mem_info - function checks if the
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 6a50412..f154e0d 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -26,19 +26,6 @@ struct ecore_tid_mem {
 };
 
 /**
-* @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
-*
-* @param p_hwfn
-* @param type
-* @param p_cid
-*
-* @return enum _ecore_status_t
-*/
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn  *p_hwfn,
-					   enum protocol_type type,
-					   u32 *p_cid);
-
-/**
 * @brief ecoreo_cid_get_cxt_info - Returns the context info for a specific cid
 *
 *
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7baa1b0..0f60010 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -150,8 +150,11 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_free(&p_dev->hwfns[i]);
 		return;
+	}
 
 	OSAL_FREE(p_dev, p_dev->fw_data);
 	p_dev->fw_data = OSAL_NULL;
@@ -169,6 +172,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
 		ecore_iov_free(p_hwfn);
+		ecore_l2_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
@@ -845,8 +849,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i) {
+			rc = ecore_l2_alloc(&p_dev->hwfns[i]);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		}
 		return rc;
+	}
 
 	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 				     sizeof(*p_dev->fw_data));
@@ -967,6 +977,10 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_l2_alloc(p_hwfn);
+		if (rc != ECORE_SUCCESS)
+			goto alloc_err;
+
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
@@ -1005,8 +1019,11 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_setup(&p_dev->hwfns[i]);
 		return;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1024,6 +1041,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
+		ecore_l2_setup(p_hwfn);
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
 	}
 }
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 4d26e19..adb5e47 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,24 +29,172 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+struct ecore_l2_info {
+	u32 queues;
+	unsigned long **pp_qid_usage;
+
+	/* The lock is meant to synchronize access to the qid usage */
+	osal_mutex_t lock;
+};
+
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_l2_info *p_l2_info;
+	unsigned long **pp_qids;
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return ECORE_SUCCESS;
+
+	p_l2_info = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_l2_info));
+	if (!p_l2_info)
+		return ECORE_NOMEM;
+	p_hwfn->p_l2_info = p_l2_info;
+
+	if (IS_PF(p_hwfn->p_dev)) {
+		p_l2_info->queues = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
+	} else {
+		u8 rx = 0, tx = 0;
+
+		ecore_vf_get_num_rxqs(p_hwfn, &rx);
+		ecore_vf_get_num_txqs(p_hwfn, &tx);
+
+		p_l2_info->queues = (u32)OSAL_MAX_T(u8, rx, tx);
+	}
+
+	pp_qids = OSAL_VZALLOC(p_hwfn->p_dev,
+			       sizeof(unsigned long *) *
+			       p_l2_info->queues);
+	if (pp_qids == OSAL_NULL)
+		return ECORE_NOMEM;
+	p_l2_info->pp_qid_usage = pp_qids;
+
+	for (i = 0; i < p_l2_info->queues; i++) {
+		pp_qids[i] = OSAL_VZALLOC(p_hwfn->p_dev,
+					  MAX_QUEUES_PER_QZONE / 8);
+		if (pp_qids[i] == OSAL_NULL)
+			return ECORE_NOMEM;
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_MUTEX_ALLOC(p_hwfn, &p_l2_info->lock);
+#endif
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn)
+{
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	OSAL_MUTEX_INIT(&p_hwfn->p_l2_info->lock);
+}
+
+void ecore_l2_free(struct ecore_hwfn *p_hwfn)
+{
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	if (p_hwfn->p_l2_info == OSAL_NULL)
+		return;
+
+	if (p_hwfn->p_l2_info->pp_qid_usage == OSAL_NULL)
+		goto out_l2_info;
+
+	/* Free until hit first uninitialized entry */
+	for (i = 0; i < p_hwfn->p_l2_info->queues; i++) {
+		if (p_hwfn->p_l2_info->pp_qid_usage[i] == OSAL_NULL)
+			break;
+		OSAL_VFREE(p_hwfn->p_dev,
+			   p_hwfn->p_l2_info->pp_qid_usage[i]);
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	/* Lock is last to initialize, if everything else was */
+	if (i == p_hwfn->p_l2_info->queues)
+		OSAL_MUTEX_DEALLOC(&p_hwfn->p_l2_info->lock);
+#endif
+
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info->pp_qid_usage);
+
+out_l2_info:
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info);
+	p_hwfn->p_l2_info = OSAL_NULL;
+}
+
+/* TODO - we'll need locking around these... */
+static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	struct ecore_l2_info *p_l2_info = p_hwfn->p_l2_info;
+	u16 queue_id = p_cid->rel.queue_id;
+	bool b_rc = true;
+	u8 first;
+
+	OSAL_MUTEX_ACQUIRE(&p_l2_info->lock);
+
+	if (queue_id > p_l2_info->queues) {
+		DP_NOTICE(p_hwfn, true,
+			  "Requested to increase usage for qzone %04x out of %08x\n",
+			  queue_id, p_l2_info->queues);
+		b_rc = false;
+		goto out;
+	}
+
+	first = (u8)OSAL_FIND_FIRST_ZERO_BIT(p_l2_info->pp_qid_usage[queue_id],
+					     MAX_QUEUES_PER_QZONE);
+	if (first >= MAX_QUEUES_PER_QZONE) {
+		b_rc = false;
+		goto out;
+	}
+
+	OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
+	p_cid->qid_usage_idx = first;
+
+out:
+	OSAL_MUTEX_RELEASE(&p_l2_info->lock);
+	return b_rc;
+}
+
+static void ecore_eth_queue_qid_usage_del(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_l2_info->lock);
+
+	OSAL_CLEAR_BIT(p_cid->qid_usage_idx,
+		       p_hwfn->p_l2_info->pp_qid_usage[p_cid->rel.queue_id]);
+
+	OSAL_MUTEX_RELEASE(&p_hwfn->p_l2_info->lock);
+}
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid)
 {
+	/* For VF-queues, stuff is a bit complicated as:
+	 *  - They always maintain the qid_usage on their own.
+	 *  - In legacy mode, they also maintain their CIDs.
+	 */
+
 	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
-	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	if (IS_PF(p_hwfn->p_dev) && !p_cid->b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, p_cid->cid, p_cid->vfid);
+	if (!p_cid->b_legacy_vf)
+		ecore_eth_queue_qid_usage_del(p_hwfn, p_cid);
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
 /* The internal is only meant to be directly called by PFs initializeing CIDs
  * for their VFs.
  */
-struct ecore_queue_cid *
+static struct ecore_queue_cid *
 _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params)
+			u16 opaque_fid, u32 cid,
+			struct ecore_queue_start_common_params *p_params,
+			struct ecore_queue_cid_vf_params *p_vf_params)
 {
-	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
@@ -56,13 +204,22 @@ struct ecore_queue_cid *
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
-	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
 	p_cid->p_owner = p_hwfn;
 
+	/* Fill-in bits related to VFs' queues if information was provided */
+	if (p_vf_params != OSAL_NULL) {
+		p_cid->vfid = p_vf_params->vfid;
+		p_cid->vf_qid = p_vf_params->vf_qid;
+		p_cid->b_legacy_vf = p_vf_params->b_legacy;
+	} else {
+		p_cid->vfid = ECORE_QUEUE_CID_PF;
+	}
+
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
 		p_cid->abs = p_cid->rel;
+
 		goto out;
 	}
 
@@ -82,7 +239,7 @@ struct ecore_queue_cid *
 	/* In case of a PF configuring its VF's queues, the stats-id is already
 	 * absolute [since there's a single index that's suitable per-VF].
 	 */
-	if (b_is_same) {
+	if (p_cid->vfid == ECORE_QUEUE_CID_PF) {
 		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
 				    &p_cid->abs.stats_id);
 		if (rc != ECORE_SUCCESS)
@@ -95,17 +252,23 @@ struct ecore_queue_cid *
 	p_cid->abs.sb = p_cid->rel.sb;
 	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
 
-	/* This is tricky - we're actually interested in whehter this is a PF
-	 * entry meant for the VF.
-	 */
-	if (!b_is_same)
-		p_cid->is_vf = true;
 out:
+	/* VF-images have provided the qid_usage_idx on their own.
+	 * Otherwise, we need to allocate a unique one.
+	 */
+	if (!p_vf_params) {
+		if (!ecore_eth_queue_qid_usage_add(p_hwfn, p_cid))
+			goto fail;
+	} else {
+		p_cid->qid_usage_idx = p_vf_params->qid_usage_idx;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x.%02x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
 		   p_cid->opaque_fid, p_cid->cid,
 		   p_cid->rel.vport_id, p_cid->abs.vport_id,
-		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.queue_id,	p_cid->qid_usage_idx,
+		   p_cid->abs.queue_id,
 		   p_cid->rel.stats_id, p_cid->abs.stats_id,
 		   p_cid->abs.sb, p_cid->abs.sb_idx);
 
@@ -116,33 +279,56 @@ struct ecore_queue_cid *
 	return OSAL_NULL;
 }
 
-static struct ecore_queue_cid *
-ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-		       u16 opaque_fid,
-		       struct ecore_queue_start_common_params *p_params)
+struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params)
 {
 	struct ecore_queue_cid *p_cid;
+	u8 vfid = ECORE_CXT_PF_CID;
+	bool b_legacy_vf = false;
 	u32 cid = 0;
 
+	/* In case of legacy VFs, The CID can be derived from the additional
+	 * VF parameters - the VF assumes queue X uses CID X, so we can simply
+	 * use the vf_qid for this purpose as well.
+	 */
+	if (p_vf_params) {
+		vfid = p_vf_params->vfid;
+
+		if (p_vf_params->b_legacy) {
+			b_legacy_vf = true;
+			cid = p_vf_params->vf_qid;
+		}
+	}
+
 	/* Get a unique firmware CID for this queue, in case it's a PF.
 	 * VF's don't need a CID as the queue configuration will be done
 	 * by PF.
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
-		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-					  &cid) != ECORE_SUCCESS) {
+	if (IS_PF(p_hwfn->p_dev) && !b_legacy_vf) {
+		if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					   &cid, vfid) != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
 			return OSAL_NULL;
 		}
 	}
 
-	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
-	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, cid);
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid,
+					p_params, p_vf_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev) && !b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, cid, vfid);
 
 	return p_cid;
 }
 
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			  struct ecore_queue_start_common_params *p_params)
+{
+	return ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params, OSAL_NULL);
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -741,7 +927,7 @@ enum _ecore_status_t
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_cid->is_vf) {
+	if (p_cid->vfid != ECORE_QUEUE_CID_PF) {
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
@@ -793,7 +979,7 @@ enum _ecore_status_t
 	enum _ecore_status_t rc;
 
 	/* Allocate a CID for the queue */
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_NOMEM;
 
@@ -905,9 +1091,11 @@ enum _ecore_status_t
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+	p_ramrod->complete_cqe_flg = ((p_cid->vfid == ECORE_QUEUE_CID_PF) &&
+				      !b_eq_completion_only) ||
 				     b_cqe_completion;
-	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
+	p_ramrod->complete_event_flg = (p_cid->vfid != ECORE_QUEUE_CID_PF) ||
+				       b_eq_completion_only;
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
@@ -1007,7 +1195,7 @@ enum _ecore_status_t
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_INVAL;
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 4b0ccb4..3f86eac 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,6 +15,34 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
+#define MAX_QUEUES_PER_QZONE	(sizeof(unsigned long) * 8)
+#define ECORE_QUEUE_CID_PF	(0xff)
+
+/* Additional parameters required for initialization of the queue_cid,
+ * relevant only for a PF initializing one for its VFs.
+ */
+struct ecore_queue_cid_vf_params {
+	/* Should match the VF's relative index */
+	u8 vfid;
+
+	/* 0-based queue index. Should reflect the relative qzone the
+	 * VF thinks is associated with it [in its range].
+	 */
+	u8 vf_qid;
+
+	/* Indicates a VF is legacy, making it differ in several things:
+	 *  - Producers would be placed in a different place.
+	 *  - Makes assumptions regarding the CIDs.
+	 */
+	bool b_legacy;
+
+	/* For VFs, this index arrives via TLV to differentiate between
+	 * different queues opened on the same qzone, and is passed
+	 * [where the PF would have allocated it internally for its own].
+	 */
+	u8 qid_usage_idx;
+};
+
 struct ecore_queue_cid {
 	/* 'Relative' is a relative term ;-). Usually the indices [not counting
 	 * SBs] would be PF-relative, but there are some cases where that isn't
@@ -31,22 +59,32 @@ struct ecore_queue_cid {
 	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
 	 * and not on the VF itself.
 	 */
-	bool is_vf;
+	u8 vfid;
 	u8 vf_qid;
 
+	/* We need an additional index to differentiate between queues opened
+	 * for the same queue-zone, as VFs would have to communicate the info
+	 * to the PF [otherwise PF has no way to differentiate].
+	 */
+	u8 qid_usage_idx;
+
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
 
 	struct ecore_hwfn *p_owner;
 };
 
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn);
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn);
+void ecore_l2_free(struct ecore_hwfn *p_hwfn);
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid);
 
 struct ecore_queue_cid *
-_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params);
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index dc01c6d..557644a 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -192,28 +192,90 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 	return vf;
 }
 
+static struct ecore_queue_cid *
+ecore_iov_get_vf_rx_queue_cid(struct ecore_hwfn *p_hwfn,
+			      struct ecore_vf_info *p_vf,
+			      struct ecore_vf_queue *p_queue)
+{
+	int i;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		if (p_queue->cids[i].p_cid &&
+		    !p_queue->cids[i].b_is_tx)
+			return p_queue->cids[i].p_cid;
+	}
+
+	return OSAL_NULL;
+}
+
+enum ecore_iov_validate_q_mode {
+	ECORE_IOV_VALIDATE_Q_NA,
+	ECORE_IOV_VALIDATE_Q_ENABLE,
+	ECORE_IOV_VALIDATE_Q_DISABLE,
+};
+
+static bool ecore_iov_validate_queue_mode(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf,
+					  u16 qid,
+					  enum ecore_iov_validate_q_mode mode,
+					  bool b_is_tx)
+{
+	int i;
+
+	if (mode == ECORE_IOV_VALIDATE_Q_NA)
+		return true;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		struct ecore_vf_queue_cid *p_qcid;
+
+		p_qcid = &p_vf->vf_queues[qid].cids[i];
+
+		if (p_qcid->p_cid == OSAL_NULL)
+			continue;
+
+		if (p_qcid->b_is_tx != b_is_tx)
+			continue;
+
+		/* Found. It's enabled. */
+		return (mode == ECORE_IOV_VALIDATE_Q_ENABLE);
+	}
+
+	/* In case we haven't found any valid cid, then it's disabled */
+	return (mode == ECORE_IOV_VALIDATE_Q_DISABLE);
+}
+
 static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 rx_qid)
+				   u16 rx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (rx_qid >= p_vf->num_rxqs)
+	if (rx_qid >= p_vf->num_rxqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Rx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
-	return rx_qid < p_vf->num_rxqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, rx_qid,
+					     mode, false);
 }
 
 static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 tx_qid)
+				   u16 tx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (tx_qid >= p_vf->num_txqs)
+	if (tx_qid >= p_vf->num_txqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Tx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
-	return tx_qid < p_vf->num_txqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, tx_qid,
+					     mode, true);
 }
 
 static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
@@ -234,13 +296,16 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+/* Is there at least 1 queue open? */
 static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_rx_cid)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  false))
 			return true;
 
 	return false;
@@ -251,8 +316,10 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 {
 	u8 i;
 
-	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_tx_cid)
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  true))
 			return true;
 
 	return false;
@@ -1098,19 +1165,15 @@ enum _ecore_status_t
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[i];
 
 		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
 		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
-		/* CIDs are per-VF, so no problem having them 0-based. */
-		p_queue->fw_cid = i;
-
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]\n",
 			   vf->relative_vf_id, i, vf->igu_sbs[i],
-			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
-			   p_queue->fw_cid);
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid);
 	}
 
 	/* Update the link configuration in bulletin.
@@ -1446,7 +1509,7 @@ struct ecore_public_vf_info
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
-	u32 i;
+	u32 i, j;
 	p_vf->vf_bulletin = 0;
 	p_vf->vport_instance = 0;
 	p_vf->configured_features = 0;
@@ -1458,18 +1521,15 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 
-		if (p_queue->p_rx_cid) {
-			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_rx_cid);
-			p_queue->p_rx_cid = OSAL_NULL;
-		}
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (!p_queue->cids[j].p_cid)
+				continue;
 
-		if (p_queue->p_tx_cid) {
 			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_tx_cid);
-			p_queue->p_tx_cid = OSAL_NULL;
+						    p_queue->cids[j].p_cid);
+			p_queue->cids[j].p_cid = OSAL_NULL;
 		}
 	}
 
@@ -1484,7 +1544,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 					struct vf_pf_resc_request *p_req,
 					struct pf_vf_resc *p_resp)
 {
-	int i;
+	u8 i;
 
 	/* Queue related information */
 	p_resp->num_rxqs = p_vf->num_rxqs;
@@ -1505,7 +1565,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_resp->num_rxqs; i++) {
 		ecore_fw_l2_queue(p_hwfn, p_vf->vf_queues[i].fw_rx_qid,
 				  (u16 *)&p_resp->hw_qid[i]);
-		p_resp->cid[i] = p_vf->vf_queues[i].fw_cid;
+		p_resp->cid[i] = i;
 	}
 
 	/* Filter related information */
@@ -1908,9 +1968,12 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			struct ecore_queue_cid *p_cid;
+			struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
+			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
-			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			/* There can be at most 1 Rx queue per qzone. Find it */
+			p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, p_vf,
+							      p_queue);
 			if (p_cid == OSAL_NULL)
 				continue;
 
@@ -2116,19 +2179,32 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
+	struct ecore_queue_cid *p_cid;
 	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid,
+				    ECORE_IOV_VALIDATE_Q_DISABLE) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Legacy VFs made assumptions on the CID their queues connected to,
+	 * assuming queue X used CID X.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->rx_qid];
 
@@ -2139,39 +2215,42 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->rx_qid,
-						    &params);
-	if (p_queue->p_rx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '0' for Rx.
+	 */
+	qid_usage_idx = 0;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->rx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
-	else
+	if (!b_legacy_vf)
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
-	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-
-	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
-					p_queue->p_rx_cid,
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
 					req->bd_max_bytes,
 					req->rxq_addr,
 					req->cqe_pbl_addr,
 					req->cqe_pbl_size);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
-		p_queue->p_rx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = false;
 		status = PFVF_STATUS_SUCCESS;
 		vf->num_active_rxqs++;
 	}
@@ -2334,6 +2413,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
+					    u32 cid,
 					    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
@@ -2362,12 +2442,8 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
-	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
-		u16 qid = mbx->req_virt->start_txq.tx_qid;
-
-		p_tlv->offset = DB_ADDR_VF(p_vf->vf_queues[qid].fw_cid,
-					   DQ_DEMS_LEGACY);
-	}
+	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy)
+		p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
 }
@@ -2377,20 +2453,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
+	struct ecore_queue_cid *p_cid;
+	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
+	u32 cid = 0;
 	enum _ecore_status_t rc;
 	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
+	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* In case this is a legacy VF - need to know to use the right cids.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->tx_qid];
 
@@ -2400,29 +2490,42 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->tx_qid,
-						    &params);
-	if (p_queue->p_tx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '1' for Tx.
+	 */
+	qid_usage_idx = 1;
+
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->tx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
 				    vf->relative_vf_id);
-	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
 					req->pbl_addr, req->pbl_size, pq);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn,
-					    p_queue->p_tx_cid);
-		p_queue->p_tx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
 		status = PFVF_STATUS_SUCCESS;
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = true;
+		cid = p_cid->cid;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf, status);
+	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf,
+					cid, status);
 }
 
 static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -2431,26 +2534,38 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
-	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int qid;
+	int qid, i;
 
+	/* TODO - improve validation [wrap around] */
 	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		p_queue = &vf->vf_queues[qid];
-
-		if (!p_queue->p_rx_cid)
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+		struct ecore_queue_cid **pp_cid = OSAL_NULL;
+
+		/* There can be at most a single Rx per qzone. Find it */
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid &&
+			    !p_queue->cids[i].b_is_tx) {
+				pp_cid = &p_queue->cids[i].p_cid;
+				break;
+			}
+		}
+		if (pp_cid == OSAL_NULL) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Ignoring VF[%02x] request of closing Rx queue %04x - closed\n",
+				   vf->relative_vf_id, qid);
 			continue;
+		}
 
-		rc = ecore_eth_rx_queue_stop(p_hwfn,
-					     p_queue->p_rx_cid,
+		rc = ecore_eth_rx_queue_stop(p_hwfn, *pp_cid,
 					     false, cqe_completion);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
+		*pp_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2462,24 +2577,33 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_vf_q_info *p_queue;
-	int qid;
+	struct ecore_vf_queue *p_queue;
+	int qid, j;
 
-	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+	if (!ecore_iov_validate_txq(p_hwfn, vf, txq_id,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
+	    !ecore_iov_validate_txq(p_hwfn, vf, txq_id + num_txqs,
+				    ECORE_IOV_VALIDATE_Q_NA))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
 		p_queue = &vf->vf_queues[qid];
-		if (!p_queue->p_tx_cid)
-			continue;
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (p_queue->cids[j].p_cid == OSAL_NULL)
+				continue;
 
-		rc = ecore_eth_tx_queue_stop(p_hwfn,
-					     p_queue->p_tx_cid);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+			if (!p_queue->cids[j].b_is_tx)
+				continue;
+
+			rc = ecore_eth_tx_queue_stop(p_hwfn,
+						     p_queue->cids[j].p_cid);
+			if (rc != ECORE_SUCCESS)
+				return rc;
 
-		p_queue->p_tx_cid = OSAL_NULL;
+			p_queue->cids[j].p_cid = OSAL_NULL;
+		}
 	}
+
 	return rc;
 }
 
@@ -2541,33 +2665,32 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
-	u16 qid;
 	enum _ecore_status_t rc;
-	u8 i;
+	u16 i;
 
 	req = &mbx->req_virt->update_rxq;
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
-	/* Validaute inputs */
-	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
-	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
-		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
-			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
-		goto out;
+	/* Validate inputs */
+	for (i = req->rx_qid; i < req->rx_qid + req->num_rxqs; i++) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, i,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+				   vf->relative_vf_id, req->rx_qid,
+				   req->num_rxqs);
+			goto out;
+		}
 	}
 
 	for (i = 0; i < req->num_rxqs; i++) {
-		qid = req->rx_qid + i;
-
-		if (!vf->vf_queues[qid].p_rx_cid) {
-			DP_INFO(p_hwfn,
-				"VF[%d] rx_qid = %d isn`t active!\n",
-				vf->relative_vf_id, qid);
-			goto out;
-		}
+		struct ecore_vf_queue *p_queue;
+		u16 qid = req->rx_qid + i;
 
-		handlers[i] = vf->vf_queues[qid].p_rx_cid;
+		p_queue = &vf->vf_queues[qid];
+		handlers[i] = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+							    p_queue);
 	}
 
 	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
@@ -2799,8 +2922,11 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 				(1 << p_rss_tlv->rss_table_size_log));
 
 	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_cid;
+
 		q_idx = p_rss_tlv->rss_ind_table[i];
-		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
 				   vf->relative_vf_id, q_idx);
@@ -2808,15 +2934,9 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 
-		if (!vf->vf_queues[q_idx].p_rx_cid) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
-				   vf->relative_vf_id, q_idx);
-			b_reject = true;
-			goto out;
-		}
-
-		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[q_idx]);
+		p_rss->rss_ind_table[i] = p_cid;
 	}
 
 	p_data->rss_params = p_rss;
@@ -3275,22 +3395,26 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	struct ecore_queue_cid *p_cid;
 	u16 rx_coal, tx_coal;
-	u16  qid;
+	u16 qid;
+	int i;
 
 	req = &mbx->req_virt->update_coalesce;
 
 	rx_coal = req->rx_coal;
 	tx_coal = req->tx_coal;
 	qid = req->qid;
-	p_cid = vf->vf_queues[qid].p_rx_cid;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    rx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
 	}
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    tx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
@@ -3299,7 +3423,11 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+
 	if (rx_coal) {
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[qid]);
+
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -3308,13 +3436,28 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 	}
+
+	/* TODO - in future, it might be possible to pass this in a per-cid
+	 * granularity. For now, do this for all Tx queues.
+	 */
 	if (tx_coal) {
-		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
-		if (rc != ECORE_SUCCESS) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
-				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
-			goto out;
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid == OSAL_NULL)
+				continue;
+
+			if (!p_queue->cids[i].b_is_tx)
+				continue;
+
+			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
+						    p_queue->cids[i].p_cid);
+			if (rc != ECORE_SUCCESS) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+					   "VF[%d]: Unable to set tx queue coalesce\n",
+					   vf->abs_vf_id);
+				goto out;
+			}
 		}
 	}
 
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 66e9271..3c2f58b 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -13,6 +13,7 @@
 #include "ecore_vfpf_if.h"
 #include "ecore_iov_api.h"
 #include "ecore_hsi_common.h"
+#include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
 	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
@@ -62,12 +63,18 @@ struct ecore_iov_vf_mbx {
 					 */
 };
 
-struct ecore_vf_q_info {
+struct ecore_vf_queue_cid {
+	bool b_is_tx;
+	struct ecore_queue_cid *p_cid;
+};
+
+/* Describes a qzone associated with the VF */
+struct ecore_vf_queue {
+	/* Input from upper-layer, mapping relative queue to queue-zone */
 	u16 fw_rx_qid;
-	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
-	struct ecore_queue_cid *p_tx_cid;
-	u8 fw_cid;
+
+	struct ecore_vf_queue_cid cids[MAX_QUEUES_PER_QZONE];
 };
 
 enum vf_state {
@@ -127,7 +134,7 @@ struct ecore_vf_info {
 	u8			num_mac_filters;
 	u8			num_vlan_filters;
 
-	struct ecore_vf_q_info	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_vf_queue	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
 	/* TODO - Only windows is using it - should be removed */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index c6743ed..53fc0cf 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1583,6 +1583,12 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
 	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
 }
 
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs)
+{
+	*num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs;
+}
+
 void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
 {
 	OSAL_MEMCPY(port_mac,
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index a6e5f32..be3a326 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -61,6 +61,15 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
 			   u8 *num_rxqs);
 
 /**
+ * @brief Get number of Tx queues allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_txqs - allocated TX queues
+ */
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs);
+
+/**
  * @brief Get port mac address for VF
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 57/61] net/qede/base: fix race cond between MFW attentions and PF stop
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (55 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 56/61] net/qede/base: add multi-Txq support on same queue-zone for VFs Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 58/61] net/qede/base: semantic changes Rasesh Mody
                   ` (4 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Merge hw_stop and hw_reset into one function.
Prevent a race condition between MFW attentions and the pf stop command
during the unload flow, which could trigger an ASSERT.
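
In rough outline, the per-hwfn teardown ordering this patch establishes
inside ecore_hw_stop() is sketched below. This is illustrative only,
not the literal driver code: the helper name unload_one_hwfn() is made
up, and the error aggregation and recovery-mode handling are omitted.
All calls shown are the ones used in the diff:

	/* 1) send UNLOAD_REQ, 2) sync DPCs so no MFW attention is still
	 * in flight, 3) only then post the pf-stop ramrod, 4) ack with
	 * UNLOAD_DONE. Doing 1) and 2) before 3) is what closes the race.
	 */
	static enum _ecore_status_t unload_one_hwfn(struct ecore_hwfn *p_hwfn,
						    struct ecore_ptt *p_ptt)
	{
		enum _ecore_status_t rc;

		rc = ecore_mcp_unload_req(p_hwfn, p_ptt);
		if (rc != ECORE_SUCCESS)
			return rc;

		OSAL_DPC_SYNC(p_hwfn);

		rc = ecore_sp_pf_stop(p_hwfn);
		if (rc != ECORE_SUCCESS)
			return rc;

		return ecore_mcp_unload_done(p_hwfn, p_ptt);
	}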

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    1 +
 drivers/net/qede/base/ecore_dev.c     |  175 ++++++++++++++++-----------------
 drivers/net/qede/base/ecore_dev_api.h |    9 --
 drivers/net/qede/base/ecore_mcp.c     |   12 +++
 drivers/net/qede/base/ecore_mcp.h     |   11 +++
 drivers/net/qede/base/ecore_spq.c     |    3 +
 drivers/net/qede/qede_main.c          |   18 +---
 7 files changed, 116 insertions(+), 113 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index f361791..03a879a 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -164,6 +164,7 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *, dma_addr_t *,
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
 #define OSAL_DPC_INIT(dpc, hwfn) nothing
 #define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0f60010..66fd22b 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2056,7 +2056,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_DONE command\n");
+				  "Failed sending a LOAD_DONE command\n");
 			return mfw_rc;
 		}
 
@@ -2145,32 +2145,77 @@ void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
 	}
 }
 
+static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u32 addr, u32 expected_val)
+{
+	u32 val = ecore_rd(p_hwfn, p_ptt, addr);
+
+	if (val != expected_val) {
+		DP_NOTICE(p_hwfn, true,
+			  "Value at address 0x%08x is 0x%08x while the expected value is 0x%08x\n",
+			  addr, val, expected_val);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, t_rc;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	enum _ecore_status_t rc, rc2 = ECORE_SUCCESS;
 	int j;
 
 	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		p_hwfn = &p_dev->hwfns[j];
+		p_ptt = p_hwfn->p_main_ptt;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Stopping hw/fw\n");
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_int_cleanup(p_hwfn);
+			rc = ecore_vf_pf_reset(p_hwfn);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "ecore_vf_pf_reset failed. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
 			continue;
 		}
 
 		/* mark the hw as uninitialized... */
 		p_hwfn->hw_init_done = false;
 
+		/* Send unload command to MCP */
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_req(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_REQ command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+
+		OSAL_DPC_SYNC(p_hwfn);
+
+		/* After this point no MFW attentions are expected, which
+		 * e.g. prevents a race between pf stop and dcbx pf update.
+		 */
+
 		rc = ecore_sp_pf_stop(p_hwfn);
-		if (rc)
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to close PF against FW. Continue to stop HW to prevent illegal host access by the device\n");
+				  "Failed to close PF against FW [rc = %d]. Continue to stop HW to prevent illegal host access by the device.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 
 		/* perform debug action after PF stop was sent */
-		OSAL_AFTER_PF_STOP((void *)p_hwfn->p_dev, p_hwfn->my_id);
+		OSAL_AFTER_PF_STOP((void *)p_dev, p_hwfn->my_id);
 
 		/* close NIG to BRB gate */
 		ecore_wr(p_hwfn, p_ptt,
@@ -2197,20 +2242,48 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, true);
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
-	}
+
+		if (!p_dev->recov_in_prog) {
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_TX, 0);
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_OTHER, 0);
+			/* @@@TBD - assert on incorrect xCFC values (10.b) */
+		}
+
+		/* Disable PF in HW blocks */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
+		ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
+
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_done(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_DONE command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+	} /* hwfn loop */
 
 	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		p_ptt = ECORE_LEADING_HWFN(p_dev)->p_main_ptt;
+
 		/* Disable DMAE in PXP - in CMT, this should only be done for
 		 * first hw-function, and only after all transactions have
 		 * stopped for all active hw-functions.
 		 */
-		t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
-					     p_dev->hwfns[0].p_main_ptt, false);
-		if (t_rc != ECORE_SUCCESS)
-			rc = t_rc;
+		rc = ecore_change_pci_hwfn(p_hwfn, p_ptt, false);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "ecore_change_pci_hwfn failed. rc = %d.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 	}
 
-	return rc;
+	return rc2;
 }
 
 void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
@@ -2271,82 +2344,6 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
 }
 
-static enum _ecore_status_t ecore_reg_assert(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt, u32 reg,
-					     bool expected)
-{
-	u32 assert_val = ecore_rd(p_hwfn, p_ptt, reg);
-
-	if (assert_val != expected) {
-		DP_NOTICE(p_hwfn, true, "Value at address 0x%08x != 0x%08x\n",
-			  reg, expected);
-		return ECORE_UNKNOWN_ERROR;
-	}
-
-	return 0;
-}
-
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 unload_resp, unload_param;
-	int i;
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
-		if (IS_VF(p_dev)) {
-			rc = ecore_vf_pf_reset(p_hwfn);
-			if (rc)
-				return rc;
-			continue;
-		}
-
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Resetting hw/fw\n");
-
-		/* Check for incorrect states */
-		if (!p_dev->recov_in_prog) {
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_TX, 0);
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_OTHER, 0);
-			/* @@@TBD - assert on incorrect xCFC values (10.b) */
-		}
-
-		/* Disable PF in HW blocks */
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, DORQ_REG_PF_DB_ENABLE, 0);
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, QM_REG_PF_EN, 0);
-
-		if (p_dev->recov_in_prog) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-				   "Recovery is in progress -> skip sending unload_req/done\n");
-			break;
-		}
-
-		/* Send unload command to MCP */
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_REQ,
-				   DRV_MB_PARAM_UNLOAD_WOL_MCP,
-				   &unload_resp, &unload_param);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, true,
-				  "ecore_hw_reset: UNLOAD_REQ failed\n");
-			/* @@TBD - what to do? for now, assume ENG. */
-			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
-		}
-
-		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn,
-				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
-			/* @@@TBD - Should it really ASSERT here ? */
-			return rc;
-		}
-	}
-
-	return rc;
-}
-
 /* Free hwfn memory and resources acquired in hw_hwfn_prepare */
 static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index ce764d2..e64a768 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -151,15 +151,6 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
  */
 void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
 
-/**
- * @brief ecore_hw_reset -
- *
- * @param p_dev
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
-
 enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_SUCCESS,
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index a2ff6c2..af82f0f 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -892,6 +892,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	u32 wol_param, mcp_resp, mcp_param;
+
+	/* @DPDK */
+	wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+
+	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+			     &mcp_resp, &mcp_param);
+}
+
 enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt)
 {
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 350d8a2..37d1835 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -171,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_REQ message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt);
+
+/**
  * @brief Sends a UNLOAD_DONE message to the MFW
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 23ed772..60526fe 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -190,6 +190,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
+	/* @@@TBD we zero the context until we have ilt_reset implemented. */
+	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
+
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 326e56f..74856c5 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -636,19 +636,6 @@ static int qed_nic_stop(struct ecore_dev *edev)
 	return rc;
 }
 
-static int qed_nic_reset(struct ecore_dev *edev)
-{
-	int rc;
-
-	rc = ecore_hw_reset(edev);
-	if (rc)
-		return rc;
-
-	ecore_resc_free(edev);
-
-	return 0;
-}
-
 static int qed_slowpath_stop(struct ecore_dev *edev)
 {
 #ifdef CONFIG_QED_SRIOV
@@ -667,10 +654,11 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
 		if (IS_QED_ETH_IF(edev))
 			qed_sriov_disable(edev, true);
 #endif
-		qed_nic_stop(edev);
 	}
 
-	qed_nic_reset(edev);
+	qed_nic_stop(edev);
+
+	ecore_resc_free(edev);
 	qed_stop_iov_task(edev);
 
 	return 0;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 58/61] net/qede/base: semantic changes
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (56 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 57/61] net/qede/base: fix race cond between MFW attentions and PF stop Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 59/61] net/qede/base: add support for arfs mode Rasesh Mody
                   ` (3 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make APIs static and apply other semantic changes.
This is a step toward a clean 'make C=1' run with GCC 4.8.3.
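
For context: 'make C=1' runs a static checker pass that (among other
things) warns about externally visible functions lacking a header
prototype. The recurring fix in this patch is the usual one, sketched
here for illustration; it mirrors the ecore_cxt_qm_iids() hunk below:

	/* Before: external linkage, prototype exported from ecore_cxt.h,
	 * although the function is used only inside ecore_cxt.c.
	 */
	void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
			       struct ecore_qm_iids *iids);

	/* After: the header prototype is dropped and the definition gets
	 * file-local linkage, silencing the checker and shrinking the API.
	 */
	static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
				      struct ecore_qm_iids *iids);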

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c  |    5 +-
 drivers/net/qede/base/ecore_cxt.h  |   11 ----
 drivers/net/qede/base/ecore_dcbx.c |    2 +-
 drivers/net/qede/base/ecore_dev.c  |  111 ++++++++++++++++++------------------
 drivers/net/qede/base/ecore_l2.c   |   12 ++--
 drivers/net/qede/base/ecore_vf.c   |    2 +-
 6 files changed, 68 insertions(+), 75 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index b3d939a..d94db8b 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -327,7 +327,8 @@ static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
 	}
 }
 
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids)
+static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_qm_iids *iids)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_tid_seg *segs;
@@ -1948,7 +1949,7 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
+static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
 
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1128051..e678118 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -35,17 +35,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type);
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
 
-#ifndef LINUX_REMOVE
-/**
- * @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
- *
- * @param p_hwfn
- * @param iids [out], a structure holding all the counters
- */
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
-		       struct ecore_qm_iids *iids);
-#endif
-
 /**
  * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
  *
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index e31ce81..156eb0e 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -114,7 +114,7 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	}
 }
 
-void
+static void
 ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		      struct ecore_hwfn *p_hwfn,
 		      bool enable, u8 prio, u8 tc,
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 66fd22b..10257f3 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -765,8 +765,8 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	enum _ecore_status_t rc;
 	bool b_rc;
+	enum _ecore_status_t rc;
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
@@ -1513,54 +1513,6 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
-					       struct ecore_ptt *p_ptt,
-					       int hw_mode)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
-			    hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
-
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (p_hwfn->p_dev->num_hwfns > 1) {
-			/* Activate OPTE in CMT */
-			u32 val;
-
-			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
-			val |= 0x10;
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
-				 0x55555555);
-		}
-
-		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
-	}
-#endif
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -1629,7 +1581,7 @@ enum ECORE_ROCE_EDPM_MODE {
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
-	int rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
 	db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
@@ -1684,8 +1636,9 @@ enum ECORE_ROCE_EDPM_MODE {
 		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
 	}
 
-	cond = ((rc) && (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
-	    (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
+	cond = ((rc != ECORE_SUCCESS) &&
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
 	if (cond || p_hwfn->dcbx_no_edpm) {
 		/* Either EDPM is disabled from user configuration, or it is
 		 * disabled via DCBx, or it is not mandatory and we failed to
@@ -1709,7 +1662,7 @@ enum ECORE_ROCE_EDPM_MODE {
 		"disabled" : "enabled");
 
 	/* Check return codes from above calls */
-	if (rc) {
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to allocate enough DPIs\n");
 		return ECORE_NORESOURCES;
@@ -1727,6 +1680,56 @@ enum ECORE_ROCE_EDPM_MODE {
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       int hw_mode)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
+			    hw_mode);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		if (ECORE_IS_AH(p_hwfn->p_dev))
+			return ECORE_SUCCESS;
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
+		else /* E5 */
+			ECORE_E5_MISSING_CODE;
+	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (p_hwfn->p_dev->num_hwfns > 1) {
+			/* Activate OPTE in CMT */
+			u32 val;
+
+			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
+			val |= 0x10;
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
+				 0x55555555);
+		}
+
+		ecore_emul_link_init(p_hwfn, p_ptt);
+	} else {
+		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
+	}
+#endif
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
@@ -1928,8 +1931,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	struct ecore_hwfn *p_hwfn;
 	bool b_default_mtu = true;
+	struct ecore_hwfn *p_hwfn;
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index adb5e47..c4af895 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -946,17 +946,17 @@ enum _ecore_status_t
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_producer)
+			    void OSAL_IOMEM * *pp_prod)
 {
 	u32 init_prod_val = 0;
 
-	*pp_producer = (u8 OSAL_IOMEM *)
-		       p_hwfn->regview +
-		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
+	*pp_prod = (u8 OSAL_IOMEM *)
+		    p_hwfn->regview +
+		    GTT_BAR0_MAP_REG_MSDM_RAM +
+		    MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
 	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 53fc0cf..2aaf4c8 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1285,8 +1285,8 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
 	struct vfpf_first_tlv *req;
-	enum _ecore_status_t rc;
 	u32 size;
+	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 59/61] net/qede/base: add support for arfs mode
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (57 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 58/61] net/qede/base: semantic changes Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 60/61] net/qede: add ntuple and flow director filter support Rasesh Mody
                   ` (2 subsequent siblings)
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add base driver APIs to enable accelerated RFS (aRFS) mode and a ramrod
to configure RFS and ntuple filters.
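
A rough usage sketch follows. Only ecore_set_rfs_mode_disable(), the
num_arfs_filters parameter and the fields visible in the diff below are
taken from the patch; the wrapper function name, the filter count and
the exact pf_params plumbing are made up for illustration:

	/* A PF reserves L2 steering filters up front, so the context
	 * manager folds them into arfs_count when sizing the SRC/searcher
	 * memory, and restores the plain searcher configuration once
	 * ntuple filtering is switched off again.
	 */
	static enum _ecore_status_t qede_arfs_sketch(struct ecore_hwfn *p_hwfn)
	{
		enum _ecore_status_t rc;

		p_hwfn->pf_params.eth_pf_params.num_arfs_filters = 64;
		rc = ecore_cxt_set_pf_params(p_hwfn);
		if (rc != ECORE_SUCCESS)
			return rc;

		/* ... device init; individual ntuple filters are then
		 * added/removed via the new filter ramrod ...
		 */

		/* Teardown: back to the non-aRFS searcher mode. */
		ecore_set_rfs_mode_disable(p_hwfn, p_hwfn->p_main_ptt,
					   p_hwfn->rel_pf_id);
		return ECORE_SUCCESS;
	}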

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 drivers/net/qede/base/ecore_cxt.c           |   49 +++++++++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.c |   30 ++++++++++
 drivers/net/qede/base/ecore_init_fw_funcs.h |   11 ++++
 drivers/net/qede/base/ecore_l2.c            |   84 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_l2.h            |   27 +++++++++
 drivers/net/qede/base/ecore_l2_api.h        |   22 +++++++
 drivers/net/qede/base/ecore_proto_if.h      |    6 ++
 drivers/net/qede/base/ecore_spq.h           |    1 +
 8 files changed, 217 insertions(+), 13 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index d94db8b..2fd33e5 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -192,9 +192,6 @@ struct ecore_cxt_mngr {
 	 */
 	u32 vf_count;
 
-	/* total number of SRQ's for this hwfn */
-	u32				srq_count;
-
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
 	/* TBD - do we want this allocated to reserve space? */
@@ -213,10 +210,29 @@ struct ecore_cxt_mngr {
 	u32 t2_num_pages;
 	u64 first_free;
 	u64 last_free;
+
+	/* The infrastructure originally was very generic and context/task
+	 * oriented - per connection-type we would set how many of those
+	 * are needed, and later, when determining how much memory we need
+	 * for a given block, we'd iterate over all the relevant
+	 * connection-types.
+	 * But since then we've had some additional resources, some of which
+	 * require memory which is independent of the general context/task
+	 * scheme. We add those here explicitly per-feature.
+	 */
+
+	/* total number of SRQ's for this hwfn */
+	u32				srq_count;
+
+	/* Maximal number of L2 steering filters */
+	u32				arfs_count;
+
+	/* TODO - VF arfs filters ? */
 };
 
 /* check if resources/configuration is required according to protocol type */
-static OSAL_INLINE bool src_proto(enum protocol_type type)
+static OSAL_INLINE bool src_proto(struct ecore_hwfn *p_hwfn,
+				  enum protocol_type type)
 {
 	return type == PROTOCOLID_TOE;
 }
@@ -254,18 +270,22 @@ struct ecore_src_iids {
 	u32 per_vf_cids;
 };
 
-static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
+static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_hwfn *p_hwfn,
+					   struct ecore_cxt_mngr *p_mngr,
 					   struct ecore_src_iids *iids)
 {
 	u32 i;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		if (!src_proto(i))
+		if (!src_proto(p_hwfn, i))
 			continue;
 
 		iids->pf_cids += p_mngr->conn_cfg[i].cid_count;
 		iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf;
 	}
+
+	/* Add L2 filtering filters in addition */
+	iids->pf_cids += p_mngr->arfs_count;
 }
 
 /* counts the iids for the Timers block configuration */
@@ -686,7 +706,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	/* SRC */
 	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 
 	/* Both the PF and VFs searcher connections are stored in the per PF
 	 * database. Thus sum the PF searcher cids and all the VFs searcher
@@ -801,7 +821,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_src->active)
 		return ECORE_SUCCESS;
 
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	total_size = conn_num * sizeof(struct src_ent);
 
@@ -1622,7 +1642,7 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 	struct ecore_src_iids src_iids;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	if (!conn_num)
 		return;
@@ -1638,6 +1658,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 			 p_hwfn->p_cxt_mngr->first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
 			 p_hwfn->p_cxt_mngr->last_free);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+		   "Configured SEARCHER for 0x%08x connections\n",
+		   conn_num);
 }
 
 /* Timers PF */
@@ -1981,10 +2004,10 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			 * As of now, allocates 16 * 2 per-VF [to retain regular
 			 * functionality].
 			 */
-			ecore_cxt_set_proto_cid_count(p_hwfn,
-				PROTOCOLID_ETH,
-				p_params->num_cons, 32);
-
+			ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+						      p_params->num_cons, 32);
+			p_hwfn->p_cxt_mngr->arfs_count =
+						p_params->num_arfs_filters;
 			break;
 		}
 	default:
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index af0deaa..fc8aec8 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1497,6 +1497,36 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+	struct ecore_ptt *p_ptt,
+	u16 pf_id)
+{
+	union gft_cam_line_union cam_line;
+	struct gft_ram_line ram_line;
+	u32 i, *ram_line_ptr;
+
+	ram_line_ptr = (u32 *)&ram_line;
+
+	/* Stop using gft logic, disable gft search */
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, 0x0);
+
+	/* Clean ram & cam for next rfs/gft session */
+
+	/* Zero camline */
+	OSAL_MEMSET(&cam_line, 0, sizeof(cam_line));
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
+					cam_line.cam_line_mapped.camline);
+
+	/* Zero ramline */
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
+	/* Each iteration write to reg */
+	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+			 RAM_LINE_SIZE * pf_id + i * REG_SIZE, *(ram_line_ptr + i));
+}
+
 
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 2d1ab7c..4da3fc2 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -351,6 +351,17 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
+ * @brief ecore_set_rfs_mode_disable - Disable RFS mode in HW
+ *
+ * @param p_hwfn -   HW device data
+ * @param p_ptt -   ptt window used for writing the registers.
+ * @param pf_id - pf on which to disable RFS.
+ */
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id);
+
+/**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
 * @param p_ptt	- ptt window used for writing the registers.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index c4af895..3f75467 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2018,3 +2018,87 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 	else
 		_ecore_get_vport_stats(p_dev, p_dev->reset_stats);
 }
+
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params)
+{
+	if (p_cfg_params->arfs_enable) {
+		ecore_set_rfs_mode_enable(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+					  p_cfg_params->tcp,
+					  p_cfg_params->udp,
+					  p_cfg_params->ipv4,
+					  p_cfg_params->ipv6);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "tcp = %s, udp = %s, ipv4 = %s, ipv6 = %s\n",
+			   p_cfg_params->tcp ? "Enable" : "Disable",
+			   p_cfg_params->udp ? "Enable" : "Disable",
+			   p_cfg_params->ipv4 ? "Enable" : "Disable",
+			   p_cfg_params->ipv6 ? "Enable" : "Disable");
+	} else {
+		ecore_set_rfs_mode_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode: %s\n",
+		   p_cfg_params->arfs_enable ? "Enable" : "Disable");
+}
+
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add)
+{
+	struct rx_update_gft_filter_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+	u8 abs_vport_id = 0;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+	if (p_cb) {
+		init_data.comp_mode = ECORE_SPQ_MODE_CB;
+		init_data.p_comp_data = p_cb;
+	} else {
+		init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+	}
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_GFT_UPDATE_FILTER,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_update_gft;
+
+	DMA_REGPAIR_LE(p_ramrod->pkt_hdr_addr, p_addr);
+	p_ramrod->pkt_hdr_length = OSAL_CPU_TO_LE16(length);
+	p_ramrod->rx_qid_or_action_icid = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->filter_type = RFS_FILTER_TYPE;
+	p_ramrod->filter_action = b_is_add ? GFT_ADD_FILTER
+					   : GFT_DELETE_FILTER;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "V[%02x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n",
+		   abs_vport_id, abs_rx_q_id,
+		   b_is_add ? "Adding" : "Removing",
+		   (u64)p_addr, length);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 3f86eac..7fe4cbc 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -129,4 +129,31 @@ enum _ecore_status_t
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
+/**
+ * @brief - ecore_configure_rfs_ntuple_filter
+ *
+ * This ramrod should be used to add or remove an aRFS HW filter
+ *
+ * @params p_hwfn
+ * @params p_ptt
+ * @params p_cb		Used for ECORE_SPQ_MODE_CB, where the client
+ *			initializes it with a cookie and callback function
+ *			address; pass NULL when not using this mode.
+ * @params p_addr	physical address of the actual packet header to be
+ *			filtered. It must be DMA-mapped for read prior to
+ *			calling this [contains the 4-tuple: src ip, dest ip,
+ *			src port, dest port].
+ * @params length	length of the p_addr header, transport header included.
+ * @params qid		received packets will be directed to this queue.
+ * @params vport_id
+ * @params b_is_add	flag to add or remove filter.
+ *
+ */
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 5a7db76..d09f3c4 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -141,6 +141,14 @@ struct ecore_filter_accept_flags {
 #define ECORE_ACCEPT_BCAST		0x20
 };
 
+struct ecore_arfs_config_params {
+	bool tcp;
+	bool udp;
+	bool ipv4;
+	bool ipv6;
+	bool arfs_enable;	/* Enable or disable arfs mode */
+};
+
 /* Add / remove / move / remove-all unicast MAC-VLAN filters.
  * FW will assert in the following cases, so driver should take care...:
  * 1. Adding a filter to a full table.
@@ -414,4 +422,18 @@ void ecore_get_vport_stats(struct ecore_dev *p_dev,
 
 void ecore_reset_vport_stats(struct ecore_dev *p_dev);
 
+/**
+ * @brief ecore_arfs_mode_configure -
+ *
+ * Enable or disable RFS mode. To enable it, at least one of tcp/udp and
+ * at least one of ipv4/ipv6 must be set to true.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_cfg_params		arfs mode configuration parameters.
+ *
+ */
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params);
 #endif
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 0ac153f..226e3d2 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -21,6 +21,12 @@ struct ecore_eth_pf_params {
 	 * to update_pf_params routine invoked before slowpath start
 	 */
 	u16	num_cons;
+
+	/* To enable arfs, a positive number needs to be set prior to
+	 * HW init [as filters require allocated searcher ILT memory].
+	 * This sets the maximal number of configured steering-filters.
+	 */
+	u32	num_arfs_filters;
 };
 
 /* Most of the parameters below are described in the FW iSCSI / TCP HSI */
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index e2468b7..e530f83 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -26,6 +26,7 @@
 	struct tx_queue_stop_ramrod_data		tx_queue_stop;
 	struct vport_start_ramrod_data			vport_start;
 	struct vport_stop_ramrod_data			vport_stop;
+	struct rx_update_gft_filter_data		rx_update_gft;
 	struct vport_update_ramrod_data			vport_update;
 	struct core_rx_start_ramrod_data		core_rx_queue_start;
 	struct core_rx_stop_ramrod_data			core_rx_queue_stop;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 60/61] net/qede: add ntuple and flow director filter support
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (58 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 59/61] net/qede/base: add support for arfs mode Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-02-27  7:57 ` [PATCH 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add limited support for ntuple filter and flow director configuration.
The filtering is based on the 4-tuple: src-ip, dst-ip, src-port and
dst-port. Mask fields, tcp_flags, flex masks, priority fields,
Rx queue drop etc. are not supported.
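
For illustration, a minimal application-side sketch of adding such a
filter through the filter-ctrl API (port_id, the 10.0.0.1:80 match and
queue 2 are hypothetical):

	struct rte_eth_ntuple_filter ntuple;
	int ret;

	/* Steer TCPv4 packets destined to 10.0.0.1:80 into Rx queue 2 */
	memset(&ntuple, 0, sizeof(ntuple));
	ntuple.flags = RTE_5TUPLE_FLAGS;
	ntuple.proto = IPPROTO_TCP;
	ntuple.dst_ip = rte_cpu_to_be_32(IPv4(10, 0, 0, 1));
	ntuple.dst_port = rte_cpu_to_be_16(80);
	ntuple.queue = 2;

	ret = rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
				      RTE_ETH_FILTER_ADD, &ntuple);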

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini |    2 +
 doc/guides/nics/qede.rst          |    7 +-
 drivers/net/qede/Makefile         |    1 +
 drivers/net/qede/base/ecore.h     |    3 +
 drivers/net/qede/qede_ethdev.c    |   16 +-
 drivers/net/qede/qede_ethdev.h    |   39 +++
 drivers/net/qede/qede_fdir.c      |  486 +++++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_main.c      |   19 +-
 8 files changed, 563 insertions(+), 10 deletions(-)
 create mode 100644 drivers/net/qede/qede_fdir.c

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 8858e5d..b688914 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -34,3 +34,5 @@ Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
 Usage doc            = Y
+N-tuple filter       = Y
+Flow director        = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 1cf5501..5f65bde 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -60,6 +60,7 @@ Supported Features
 - Multiprocess aware
 - Scatter-Gather
 - VXLAN tunneling offload
+- N-tuple filter and flow director (limited support)
 
 Non-supported Features
 ----------------------
@@ -77,10 +78,10 @@ Supported QLogic Adapters
 Prerequisites
 -------------
 
-- Requires firmware version **8.14.x.** and management firmware
-  version **8.14.x or higher**. Firmware may be available
+- Requires firmware version **8.18.x** and management firmware
+  version **8.18.x or higher**. Firmware may be available
   inbox in certain newer Linux distros under the standard directory
-  ``E.g. /lib/firmware/qed/qed_init_values-8.14.6.0.bin``
+  ``E.g. /lib/firmware/qed/qed_init_values-8.18.9.0.bin``
 
 - If the required firmware files are not available then visit
   `QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 29b443d..aae6bd2 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -99,6 +99,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_fdir.c
 
 # dependent libs:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index fab8193..31470b6 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -602,6 +602,9 @@ struct ecore_hwfn {
 
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
+
+	/* @DPDK */
+	struct ecore_ptt		*p_arfs_ptt;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6fbd898..2b91a10 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -924,6 +924,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		return -EINVAL;
 	}
 
+	/* Flow director mode check */
+	rc = qede_check_fdir_support(eth_dev);
+	if (rc) {
+		qdev->ops->vport_stop(edev, 0);
+		qede_dealloc_fp_resc(eth_dev);
+		return -EINVAL;
+	}
+	SLIST_INIT(&qdev->fdir_info.fdir_list_head);
+
 	SLIST_INIT(&qdev->vlan_list_head);
 
 	/* Add primary mac for PF */
@@ -1124,6 +1133,8 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
+	qede_fdir_dealloc_resc(eth_dev);
+
 	/* dev_stop() shall cleanup fp resources in hw but without releasing
 	 * dma memories and sw structures so that dev_start() can be called
 	 * by the app without reconfiguration. However, in dev_close() we
@@ -1957,11 +1968,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
 		}
 		break;
 	case RTE_ETH_FILTER_FDIR:
+		return qede_fdir_filter_conf(eth_dev, filter_op, arg);
+	case RTE_ETH_FILTER_NTUPLE:
+		return qede_ntuple_filter_conf(eth_dev, filter_op, arg);
 	case RTE_ETH_FILTER_MACVLAN:
 	case RTE_ETH_FILTER_ETHERTYPE:
 	case RTE_ETH_FILTER_FLEXIBLE:
 	case RTE_ETH_FILTER_SYN:
-	case RTE_ETH_FILTER_NTUPLE:
 	case RTE_ETH_FILTER_HASH:
 	case RTE_ETH_FILTER_L2_TUNNEL:
 	case RTE_ETH_FILTER_MAX:
@@ -2052,6 +2065,7 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 
 	memset(&pf_params, 0, sizeof(struct ecore_pf_params));
 	pf_params.eth_pf_params.num_cons = QEDE_PF_NUM_CONNS;
+	pf_params.eth_pf_params.num_arfs_filters = QEDE_RFS_MAX_FLTR;
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index be54f31..8342b99 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -34,6 +34,8 @@
 #include "base/nvm_cfg.h"
 #include "base/ecore_iov_api.h"
 #include "base/ecore_sp_commands.h"
+#include "base/ecore_l2.h"
+#include "base/ecore_dev_api.h"
 
 #include "qede_logs.h"
 #include "qede_if.h"
@@ -131,6 +133,9 @@
 /* Number of PF connections - 32 RX + 32 TX */
 #define QEDE_PF_NUM_CONNS		(64)
 
+/* Maximum number of flowdir filters */
+#define QEDE_RFS_MAX_FLTR		(256)
+
 /* Port/function states */
 enum qede_dev_state {
 	QEDE_DEV_INIT, /* Init the chip and Slowpath */
@@ -156,6 +161,21 @@ struct qede_ucast_entry {
 	SLIST_ENTRY(qede_ucast_entry) list;
 };
 
+struct qede_fdir_entry {
+	uint32_t soft_id; /* unused for now */
+	uint16_t pkt_len; /* actual packet length to match */
+	uint16_t rx_queue; /* queue to be steered to */
+	const struct rte_memzone *mz; /* mz used to hold L2 frame */
+	SLIST_ENTRY(qede_fdir_entry) list;
+};
+
+struct qede_fdir_info {
+	struct ecore_arfs_config_params arfs;
+	uint16_t filter_count;
+	SLIST_HEAD(fdir_list_head, qede_fdir_entry) fdir_list_head;
+};
+
+
 /*
  *  Structure to store private data for each port.
  */
@@ -190,6 +210,7 @@ struct qede_dev {
 	bool handle_hw_err;
 	uint16_t num_tunn_filters;
 	uint16_t vxlan_filter_type;
+	struct qede_fdir_info fdir_info;
 	char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 };
 
@@ -208,6 +229,11 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 
 static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags);
 
+static uint16_t qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+					struct rte_eth_fdir_filter *fdir,
+					void *buff,
+					struct ecore_arfs_config_params *param);
+
 /* Non-static functions */
 void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
@@ -215,4 +241,17 @@ int qed_fill_eth_dev_info(struct ecore_dev *edev,
 				 struct qed_dev_eth_info *info);
 int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
 
+int qede_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type type,
+			 enum rte_filter_op op, void *arg);
+
+int qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+			  enum rte_filter_op filter_op, void *arg);
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op, void *arg);
+
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev);
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev);
+
 #endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_fdir.c b/drivers/net/qede/qede_fdir.c
new file mode 100644
index 0000000..6d9a99b
--- /dev/null
+++ b/drivers/net/qede/qede_fdir.c
@@ -0,0 +1,486 @@
+/*
+ * Copyright (c) 2017 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include <rte_udp.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_errno.h>
+
+#include "qede_ethdev.h"
+
+#define IP_VERSION				(0x40)
+#define IP_HDRLEN				(0x5)
+#define QEDE_FDIR_IP_DEFAULT_VERSION_IHL	(IP_VERSION | IP_HDRLEN)
+#define QEDE_FDIR_TCP_DEFAULT_DATAOFF		(0x50)
+#define QEDE_FDIR_IPV4_DEF_TTL			(64)
+
+/* Sum of length of header types of L2, L3, L4.
+ * L2 : ether_hdr + vlan_hdr + vxlan_hdr
+ * L3 : ipv6_hdr
+ * L4 : tcp_hdr
+ */
+#define QEDE_MAX_FDIR_PKT_LEN			(86)
+
+#ifndef IPV6_ADDR_LEN
+#define IPV6_ADDR_LEN				(16)
+#endif
+
+#define QEDE_VALID_FLOW(flow_type) \
+	((flow_type) == RTE_ETH_FLOW_FRAG_IPV4		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP	|| \
+	(flow_type) == RTE_ETH_FLOW_FRAG_IPV6		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP)
+
+/* Note: Flowdir support is only partial.
+ * For example, drop_queue, FDIR masks and flex_conf are not supported.
+ * Parameters like pballoc/status fields are irrelevant here.
+ */
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+
+	/* check FDIR modes */
+	switch (fdir->mode) {
+	case RTE_FDIR_MODE_NONE:
+		qdev->fdir_info.arfs.arfs_enable = false;
+		DP_INFO(edev, "flowdir is disabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT:
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			qdev->fdir_info.arfs.arfs_enable = false;
+			return -ENOTSUP;
+		}
+		qdev->fdir_info.arfs.arfs_enable = true;
+		DP_INFO(edev, "flowdir is enabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT_TUNNEL:
+	case RTE_FDIR_MODE_SIGNATURE:
+	case RTE_FDIR_MODE_PERFECT_MAC_VLAN:
+		DP_ERR(edev, "Unsupported flowdir mode %d\n", fdir->mode);
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_fdir_entry *tmp = NULL;
+
+	/* Iterate with SLIST_FIRST/SLIST_REMOVE_HEAD so that a freed
+	 * node is never dereferenced while walking the list.
+	 */
+	while (!SLIST_EMPTY(&qdev->fdir_info.fdir_list_head)) {
+		tmp = SLIST_FIRST(&qdev->fdir_info.fdir_list_head);
+		SLIST_REMOVE_HEAD(&qdev->fdir_info.fdir_list_head, list);
+		if (tmp->mz)
+			rte_memzone_free(tmp->mz);
+		rte_free(tmp);
+	}
+}
+
+static int
+qede_config_cmn_fdir_filter(struct rte_eth_dev *eth_dev,
+			    struct rte_eth_fdir_filter *fdir_filter,
+			    bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	char mz_name[RTE_MEMZONE_NAMESIZE] = {0};
+	struct qede_fdir_entry *tmp = NULL;
+	struct qede_fdir_entry *fdir;
+	const struct rte_memzone *mz;
+	struct ecore_hwfn *p_hwfn;
+	enum _ecore_status_t rc;
+	uint16_t pkt_len;
+	uint16_t len;
+	void *pkt;
+
+	if (add) {
+		if (qdev->fdir_info.filter_count == QEDE_RFS_MAX_FLTR - 1) {
+			DP_ERR(edev, "Reached max flowdir filter limit\n");
+			return -EINVAL;
+		}
+		fdir = rte_malloc(NULL, sizeof(struct qede_fdir_entry),
+				  RTE_CACHE_LINE_SIZE);
+		if (!fdir) {
+			DP_ERR(edev, "Did not allocate memory for fdir\n");
+			return -ENOMEM;
+		}
+	}
+	/* soft_id could have been used as memzone string, but soft_id is
+	 * not currently used so it has no significance.
+	 */
+	snprintf(mz_name, sizeof(mz_name) - 1, "%lx", rte_get_timer_cycles());
+	mz = rte_memzone_reserve_aligned(mz_name, QEDE_MAX_FDIR_PKT_LEN,
+					 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+	if (!mz) {
+		DP_ERR(edev, "Failed to allocate memzone for fdir, err = %s\n",
+		       rte_strerror(rte_errno));
+		rc = -rte_errno;
+		goto err1;
+	}
+
+	pkt = mz->addr;
+	memset(pkt, 0, QEDE_MAX_FDIR_PKT_LEN);
+	pkt_len = qede_fdir_construct_pkt(eth_dev, fdir_filter, pkt,
+					  &qdev->fdir_info.arfs);
+	if (pkt_len == 0) {
+		rc = -EINVAL;
+		goto err2;
+	}
+	DP_INFO(edev, "pkt_len = %u memzone = %s\n", pkt_len, mz_name);
+	if (add) {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0) {
+				DP_ERR(edev, "flowdir filter already exists\n");
+				rc = -EEXIST;
+				goto err2;
+			}
+		}
+	} else {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0)
+				break;
+		}
+		if (!tmp) {
+			DP_ERR(edev, "flowdir filter does not exist\n");
+			rc = -ENOENT;
+			goto err2;
+		}
+	}
+	p_hwfn = ECORE_LEADING_HWFN(edev);
+	if (add) {
+		if (!qdev->fdir_info.arfs.arfs_enable) {
+			/* Force update */
+			eth_dev->data->dev_conf.fdir_conf.mode =
+						RTE_FDIR_MODE_PERFECT;
+			qdev->fdir_info.arfs.arfs_enable = true;
+			DP_INFO(edev, "Force enable flowdir in perfect mode\n");
+		}
+		/* Enable ARFS searcher with updated flow_types */
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	/* configure filter with ECORE_SPQ_MODE_EBLOCK */
+	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_hwfn->p_arfs_ptt, NULL,
+					       (dma_addr_t)mz->phys_addr,
+					       pkt_len,
+					       fdir_filter->action.rx_queue,
+					       0, add);
+	if (rc == ECORE_SUCCESS) {
+		if (add) {
+			fdir->rx_queue = fdir_filter->action.rx_queue;
+			fdir->pkt_len = pkt_len;
+			fdir->mz = mz;
+			SLIST_INSERT_HEAD(&qdev->fdir_info.fdir_list_head,
+					  fdir, list);
+			qdev->fdir_info.filter_count++;
+			DP_INFO(edev, "flowdir filter added, count = %d\n",
+				qdev->fdir_info.filter_count);
+		} else {
+			rte_memzone_free(tmp->mz);
+			SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+				     qede_fdir_entry, list);
+			rte_free(tmp); /* the node deleted */
+			rte_memzone_free(mz); /* temp node allocated */
+			qdev->fdir_info.filter_count--;
+			DP_INFO(edev, "Fdir filter deleted, count = %d\n",
+				qdev->fdir_info.filter_count);
+		}
+	} else {
+		DP_ERR(edev, "flowdir filter failed, rc=%d filter_count=%d\n",
+		       rc, qdev->fdir_info.filter_count);
+	}
+
+	/* Disable ARFS searcher if there are no more filters */
+	if (qdev->fdir_info.filter_count == 0) {
+		memset(&qdev->fdir_info.arfs, 0,
+		       sizeof(struct ecore_arfs_config_params));
+		DP_INFO(edev, "Disabling flowdir\n");
+		qdev->fdir_info.arfs.arfs_enable = false;
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	return 0;
+
+err2:
+	rte_memzone_free(mz);
+err1:
+	if (add)
+		rte_free(fdir);
+	return rc;
+}
+
+static int
+qede_fdir_filter_add(struct rte_eth_dev *eth_dev,
+		     struct rte_eth_fdir_filter *fdir,
+		     bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	if (!QEDE_VALID_FLOW(fdir->input.flow_type)) {
+		DP_ERR(edev, "invalid flow_type input\n");
+		return -EINVAL;
+	}
+
+	if (fdir->action.rx_queue >= QEDE_RSS_COUNT(qdev)) {
+		DP_ERR(edev, "invalid queue number %u\n",
+		       fdir->action.rx_queue);
+		return -EINVAL;
+	}
+
+	if (fdir->input.flow_ext.is_vf) {
+		DP_ERR(edev, "flowdir is not supported over VF\n");
+		return -EINVAL;
+	}
+
+	return qede_config_cmn_fdir_filter(eth_dev, fdir, add);
+}
+
+/* Fills the L3/L4 headers and returns the actual flowdir packet length */
+static uint16_t
+qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+			struct rte_eth_fdir_filter *fdir,
+			void *buff,
+			struct ecore_arfs_config_params *params)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	uint16_t *ether_type;
+	uint8_t *raw_pkt;
+	struct rte_eth_fdir_input *input;
+	static uint8_t vlan_frame[] = {0x81, 0, 0, 0};
+	struct ipv4_hdr *ip;
+	struct ipv6_hdr *ip6;
+	struct udp_hdr *udp;
+	struct tcp_hdr *tcp;
+	struct sctp_hdr *sctp;
+	uint16_t len;
+	static const uint8_t next_proto[] = {
+		[RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP,
+		[RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP,
+	};
+	raw_pkt = (uint8_t *)buff;
+	input = &fdir->input;
+	DP_INFO(edev, "flow_type %d\n", input->flow_type);
+
+	len =  2 * sizeof(struct ether_addr);
+	raw_pkt += 2 * sizeof(struct ether_addr);
+	if (input->flow_ext.vlan_tci) {
+		DP_INFO(edev, "adding VLAN header\n");
+		rte_memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
+		rte_memcpy(raw_pkt + sizeof(uint16_t),
+			   &input->flow_ext.vlan_tci,
+			   sizeof(uint16_t));
+		raw_pkt += sizeof(vlan_frame);
+		len += sizeof(vlan_frame);
+	}
+	ether_type = (uint16_t *)raw_pkt;
+	raw_pkt += sizeof(uint16_t);
+	len += sizeof(uint16_t);
+
+	/* fill the common ip header */
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV4:
+		ip = (struct ipv4_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		ip->version_ihl = QEDE_FDIR_IP_DEFAULT_VERSION_IHL;
+		ip->total_length = sizeof(struct ipv4_hdr);
+		ip->next_proto_id = input->flow.ip4_flow.proto ?
+				    input->flow.ip4_flow.proto :
+				    next_proto[input->flow_type];
+		ip->time_to_live = input->flow.ip4_flow.ttl ?
+				   input->flow.ip4_flow.ttl :
+				   QEDE_FDIR_IPV4_DEF_TTL;
+		ip->type_of_service = input->flow.ip4_flow.tos;
+		ip->dst_addr = input->flow.ip4_flow.dst_ip;
+		ip->src_addr = input->flow.ip4_flow.src_ip;
+		len += sizeof(struct ipv4_hdr);
+		params->ipv4 = true;
+		break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV6:
+		ip6 = (struct ipv6_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		ip6->proto = input->flow.ipv6_flow.proto ?
+					input->flow.ipv6_flow.proto :
+					next_proto[input->flow_type];
+		rte_memcpy(&ip6->src_addr, &input->flow.ipv6_flow.src_ip,
+			   IPV6_ADDR_LEN);
+		rte_memcpy(&ip6->dst_addr, &input->flow.ipv6_flow.dst_ip,
+			   IPV6_ADDR_LEN);
+		len += sizeof(struct ipv6_hdr);
+		break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %u\n",
+		       input->flow_type);
+		return 0;
+	}
+
+	/* fill the L4 header */
+	raw_pkt = (uint8_t *)buff;
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->dst_port = input->flow.udp4_flow.dst_port;
+		udp->src_port = input->flow.udp4_flow.src_port;
+		udp->dgram_len = sizeof(struct udp_hdr);
+		len += sizeof(struct udp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->src_port = input->flow.tcp4_flow.src_port;
+		tcp->dst_port = input->flow.tcp4_flow.dst_port;
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		len += sizeof(struct tcp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		tcp->src_port = input->flow.tcp6_flow.src_port;
+		tcp->dst_port = input->flow.tcp6_flow.dst_port;
+		/* adjust ip total_length */
+		len += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->src_port = input->flow.udp6_flow.src_port;
+		udp->dst_port = input->flow.udp6_flow.dst_port;
+		/* adjust ip total_length */
+		len += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %d\n", input->flow_type);
+		return 0;
+	}
+	return len;
+}
+
+int
+qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+		      enum rte_filter_op filter_op,
+		      void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_fdir_filter *fdir;
+	int ret;
+
+	fdir = (struct rte_eth_fdir_filter *)arg;
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query flowdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 1);
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 0);
+	break;
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_INFO:
+		return -ENOTSUP;
+	break;
+	default:
+		DP_ERR(edev, "unknown operation %u\n", filter_op);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op,
+			    void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_ntuple_filter *ntuple;
+	struct rte_eth_fdir_filter fdir_entry;
+	struct rte_eth_tcpv4_flow *tcpv4_flow;
+	struct rte_eth_udpv4_flow *udpv4_flow;
+	struct ecore_hwfn *p_hwfn;
+	bool add;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query fdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		add = true;
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		add = false;
+	break;
+	case RTE_ETH_FILTER_INFO:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_SET:
+	case RTE_ETH_FILTER_STATS:
+	case RTE_ETH_FILTER_OP_MAX:
+		DP_ERR(edev, "Unsupported filter_op %d\n", filter_op);
+		return -ENOTSUP;
+	}
+	ntuple = (struct rte_eth_ntuple_filter *)arg;
+	/* Internally convert ntuple to fdir entry */
+	memset(&fdir_entry, 0, sizeof(fdir_entry));
+	if (ntuple->proto == IPPROTO_TCP) {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_TCP;
+		tcpv4_flow = &fdir_entry.input.flow.tcp4_flow;
+		tcpv4_flow->ip.src_ip = ntuple->src_ip;
+		tcpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		tcpv4_flow->ip.proto = IPPROTO_TCP;
+		tcpv4_flow->src_port = ntuple->src_port;
+		tcpv4_flow->dst_port = ntuple->dst_port;
+	} else {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
+		udpv4_flow = &fdir_entry.input.flow.udp4_flow;
+		udpv4_flow->ip.src_ip = ntuple->src_ip;
+		udpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		udpv4_flow->ip.proto = IPPROTO_UDP;
+		udpv4_flow->src_port = ntuple->src_port;
+		udpv4_flow->dst_port = ntuple->dst_port;
+	}
+	return qede_config_cmn_fdir_filter(eth_dev, &fdir_entry, add);
+}
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 74856c5..5548b0f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -12,8 +12,6 @@
 
 #include "qede_ethdev.h"
 
-static uint8_t npar_tx_switching = 1;
-
 /* Alarm timeout. */
 #define QEDE_ALARM_TIMEOUT_US 100000
 
@@ -224,12 +222,12 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 static int qed_slowpath_start(struct ecore_dev *edev,
 			      struct qed_slowpath_params *params)
 {
-	bool allow_npar_tx_switching;
 	const uint8_t *data = NULL;
 	struct ecore_hwfn *hwfn;
 	struct ecore_mcp_drv_version drv_version;
 	struct ecore_hw_init_params hw_init_params;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
+	struct ecore_ptt *p_ptt;
 	int rc;
 
 #ifdef CONFIG_ECORE_BINARY_FW
@@ -241,6 +239,17 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		}
 	}
 #endif
+	hwfn = ECORE_LEADING_HWFN(edev);
+	if (edev->num_hwfns == 1) { /* aRFS is unsupported on 100G (2 hwfns) */
+		p_ptt = ecore_ptt_acquire(hwfn);
+		if (p_ptt) {
+			ECORE_LEADING_HWFN(edev)->p_arfs_ptt = p_ptt;
+		} else {
+			DP_ERR(edev, "Failed to acquire PTT for flowdir\n");
+			rc = -ENOMEM;
+			goto err;
+		}
+	}
 
 	rc = qed_nic_setup(edev);
 	if (rc)
@@ -268,13 +277,11 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		data = (const uint8_t *)edev->firmware + sizeof(u32);
 #endif
 
-	allow_npar_tx_switching = npar_tx_switching ? true : false;
-
 	/* Start the slowpath */
 	memset(&hw_init_params, 0, sizeof(hw_init_params));
 	hw_init_params.b_hw_start = true;
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
-	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
+	hw_init_params.allow_npar_tx_switch = true;
 	hw_init_params.bin_fw_data = data;
 	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
 	hw_init_params.avoid_eng_reset = false;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH 61/61] net/qede: add LRO/TSO offloads support
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (59 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 60/61] net/qede: add ntuple and flow director filter support Rasesh Mody
@ 2017-02-27  7:57 ` Rasesh Mody
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
  61 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-02-27  7:57 UTC (permalink / raw)
  To: dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

This patch includes the slowpath configuration and fastpath changes
needed to support LRO and TSO. A bit of revamping is needed in order
to reuse the existing packet classification schemes in the Rx fastpath
and the SG element processing in Tx.
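
As a rough usage sketch (port_id, nb_rxq/nb_txq and the 1448-byte MSS
are hypothetical), an application enables LRO at configure time and
requests TSO per packet:

	struct rte_eth_conf port_conf;
	struct rte_mbuf *m; /* previously built TCP packet mbuf (chain) */

	/* Rx: enable LRO; the PMD forces scattered Rx for it */
	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.enable_lro = 1;
	rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

	/* Tx: mark the mbuf for TCP segmentation offload */
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->l4_len = sizeof(struct tcp_hdr);
	m->tso_segsz = 1448;
	m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;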

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini    |    2 +
 doc/guides/nics/features/qede_vf.ini |    2 +
 doc/guides/nics/qede.rst             |    2 +-
 drivers/net/qede/qede_eth_if.c       |    6 +-
 drivers/net/qede/qede_eth_if.h       |    3 +-
 drivers/net/qede/qede_ethdev.c       |   29 +-
 drivers/net/qede/qede_ethdev.h       |    3 +-
 drivers/net/qede/qede_rxtx.c         |  635 +++++++++++++++++++++++++++-------
 drivers/net/qede/qede_rxtx.h         |   30 ++
 9 files changed, 561 insertions(+), 151 deletions(-)

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index b688914..fba5dc3 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -36,3 +36,5 @@ x86-64               = Y
 Usage doc            = Y
 N-tuple filter       = Y
 Flow director        = Y
+LRO                  = Y
+TSO                  = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index acb1b99..21ec40f 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -31,4 +31,6 @@ Stats per queue      = Y
 Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
+LRO                  = Y
+TSO                  = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 5f65bde..9023b7f 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -61,13 +61,13 @@ Supported Features
 - Scatter-Gather
 - VXLAN tunneling offload
 - N-tuple filter and flow director (limited support)
+- LRO/TSO
 
 Non-supported Features
 ----------------------
 
 - SR-IOV PF
 - GENEVE and NVGRE Tunneling offloads
-- LRO/TSO
 - NPAR
 
 Supported QLogic Adapters
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index 936dd15..9d0b1fe 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -18,8 +18,8 @@
 		u8 tx_switching = 0;
 		struct ecore_sp_vport_start_params start = { 0 };
 
-		start.tpa_mode = p_params->gro_enable ? ECORE_TPA_MODE_GRO :
-		    ECORE_TPA_MODE_NONE;
+		start.tpa_mode = p_params->enable_lro ? ECORE_TPA_MODE_RSC :
+				ECORE_TPA_MODE_NONE;
 		start.remove_inner_vlan = p_params->remove_inner_vlan;
 		start.tx_switching = tx_switching;
 		start.only_untagged = false;	/* untagged only */
@@ -29,7 +29,6 @@
 		start.concrete_fid = p_hwfn->hw_info.concrete_fid;
 		start.handle_ptp_pkts = p_params->handle_ptp_pkts;
 		start.vport_id = p_params->vport_id;
-		start.max_buffers_per_cqe = 16;	/* TODO-is this right */
 		start.mtu = p_params->mtu;
 		/* @DPDK - Disable FW placement */
 		start.zero_placement_offset = 1;
@@ -120,6 +119,7 @@ bool qed_update_rss_parm_cmt(struct ecore_dev *edev, uint16_t *p_tbl)
 	sp_params.update_accept_any_vlan_flg =
 	    params->update_accept_any_vlan_flg;
 	sp_params.mtu = params->mtu;
+	sp_params.sge_tpa_params = params->sge_tpa_params;
 
 	for_each_hwfn(edev, i) {
 		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 12dd828..d845bac 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -59,12 +59,13 @@ struct qed_update_vport_params {
 	uint8_t accept_any_vlan;
 	uint8_t update_rss_flg;
 	uint16_t mtu;
+	struct ecore_sge_tpa_params *sge_tpa_params;
 };
 
 struct qed_start_vport_params {
 	bool remove_inner_vlan;
 	bool handle_ptp_pkts;
-	bool gro_enable;
+	bool enable_lro;
 	bool drop_ttl0;
 	uint8_t vport_id;
 	uint16_t mtu;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 2b91a10..d709097 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -769,7 +769,7 @@ static int qede_init_vport(struct qede_dev *qdev)
 	int rc;
 
 	start.remove_inner_vlan = 1;
-	start.gro_enable = 0;
+	start.enable_lro = qdev->enable_lro;
 	start.mtu = ETHER_MTU + QEDE_ETH_OVERHEAD;
 	start.vport_id = 0;
 	start.drop_ttl0 = false;
@@ -866,11 +866,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	if (rxmode->enable_scatter == 1)
 		eth_dev->data->scattered_rx = 1;
 
-	if (rxmode->enable_lro == 1) {
-		DP_ERR(edev, "LRO is not supported\n");
-		return -EINVAL;
-	}
-
 	if (!rxmode->hw_strip_crc)
 		DP_INFO(edev, "L2 CRC stripping is always enabled in hw\n");
 
@@ -878,6 +873,13 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
 			      "in hw\n");
 
+	if (rxmode->enable_lro) {
+		qdev->enable_lro = true;
+		/* Enable scatter mode for LRO */
+		if (!rxmode->enable_scatter)
+			eth_dev->data->scattered_rx = 1;
+	}
+
 	/* Check for the port restart case */
 	if (qdev->state != QEDE_DEV_INIT) {
 		rc = qdev->ops->vport_stop(edev, 0);
@@ -957,13 +959,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 static const struct rte_eth_desc_lim qede_rx_desc_lim = {
 	.nb_max = NUM_RX_BDS_MAX,
 	.nb_min = 128,
-	.nb_align = 128	/* lowest common multiple */
+	.nb_align = 128 /* lowest common multiple */
 };
 
 static const struct rte_eth_desc_lim qede_tx_desc_lim = {
 	.nb_max = NUM_TX_BDS_MAX,
 	.nb_min = 256,
-	.nb_align = 256
+	.nb_align = 256,
+	.nb_seg_max = ETH_TX_MAX_BDS_PER_LSO_PACKET,
+	.nb_mtu_seg_max = ETH_TX_MAX_BDS_PER_NON_LSO_PACKET
 };
 
 static void
@@ -1005,12 +1009,16 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 				     DEV_RX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_RX_OFFLOAD_UDP_CKSUM	|
 				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_RX_OFFLOAD_TCP_LRO);
+
 	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
 				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_TX_OFFLOAD_UDP_CKSUM	|
 				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_TX_OFFLOAD_TCP_TSO |
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
 
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
@@ -2102,6 +2110,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->rx_pkt_burst = qede_recv_pkts;
 	eth_dev->tx_pkt_burst = qede_xmit_pkts;
+	eth_dev->tx_pkt_prepare = qede_xmit_prep_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		DP_NOTICE(edev, false,
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 8342b99..799a3ba 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -193,8 +193,7 @@ struct qede_dev {
 	uint16_t rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	uint64_t rss_hf;
 	uint8_t rss_key_len;
-	uint32_t flags;
-	bool gro_disable;
+	bool enable_lro;
 	uint16_t num_queues;
 	uint8_t fp_num_tx;
 	uint8_t fp_num_rx;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 85134fb..5943ef2 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -6,10 +6,9 @@
  * See LICENSE.qede_pmd for copyright and licensing details.
  */
 
+#include <rte_net.h>
 #include "qede_rxtx.h"
 
-static bool gro_disable = 1;	/* mod_param */
-
 static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 {
 	struct rte_mbuf *new_mb = NULL;
@@ -352,7 +351,6 @@ static void qede_init_fp(struct qede_dev *qdev)
 		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
 	}
 
-	qdev->gro_disable = gro_disable;
 }
 
 void qede_free_fp_arrays(struct qede_dev *qdev)
@@ -509,6 +507,30 @@ void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
 	PMD_RX_LOG(DEBUG, rxq, "bd_prod %u  cqe_prod %u", bd_prod, cqe_prod);
 }
 
+static void
+qede_update_sge_tpa_params(struct ecore_sge_tpa_params *sge_tpa_params,
+			   uint16_t mtu, bool enable)
+{
+	/* Enable LRO in split mode */
+	sge_tpa_params->tpa_ipv4_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_en_flg = enable;
+	sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
+	/* set if tpa enable changes */
+	sge_tpa_params->update_tpa_en_flg = 1;
+	/* set if tpa parameters should be handled */
+	sge_tpa_params->update_tpa_param_flg = enable;
+
+	sge_tpa_params->max_buffers_per_cqe = 20;
+	sge_tpa_params->tpa_pkt_split_flg = 1;
+	sge_tpa_params->tpa_hdr_data_split_flg = 0;
+	sge_tpa_params->tpa_gro_consistent_flg = 0;
+	sge_tpa_params->tpa_max_aggs_num = ETH_TPA_MAX_AGGS_NUM;
+	sge_tpa_params->tpa_max_size = 0x7FFF;
+	sge_tpa_params->tpa_min_size_to_start = mtu / 2;
+	sge_tpa_params->tpa_min_size_to_cont = mtu / 2;
+}
+
 static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 {
 	struct qede_dev *qdev = eth_dev->data->dev_private;
@@ -516,6 +538,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	struct ecore_queue_start_common_params q_params;
 	struct qed_dev_info *qed_info = &qdev->dev_info.common;
 	struct qed_update_vport_params vport_update_params;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_tx_queue *txq;
 	struct qede_fastpath *fp;
 	dma_addr_t p_phys_table;
@@ -625,6 +648,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		vport_update_params.tx_switching_flg = 1;
 	}
 
+	/* TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Enabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, true);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
+
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
@@ -761,6 +792,94 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
 		return RTE_PTYPE_UNKNOWN;
 }
 
+static inline void
+qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
+			     struct qede_rx_queue *rxq,
+			     struct eth_fast_path_rx_tpa_cont_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA cont[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Allocate new RX mbufs on the RX BD ring for the consumed ones */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for LRO cont\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+}
+
+static inline void
+qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
+			    struct qede_rx_queue *rxq,
+			    struct eth_fast_path_rx_tpa_end_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	struct rte_mbuf *rx_mb;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA End[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Allocate new RX mbufs on the RX BD ring for the consumed ones */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for LRO end\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+
+	/* Update total length and frags based on end TPA */
+	rx_mb = rxq->tpa_info[cqe->tpa_agg_index].mbuf;
+	/* TBD: Add sanity checks here */
+	rx_mb->nb_segs = cqe->num_of_bds;
+	rx_mb->pkt_len = cqe->total_packet_len;
+	tpa_info->state = QEDE_AGG_STATE_NONE;
+}
+
 static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 {
 	uint32_t val;
@@ -882,6 +1001,14 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 	enum rss_hash_type htype;
 	uint8_t tunn_parse_flag;
 	uint8_t j;
+	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
+	uint64_t ol_flags;
+	uint32_t packet_type;
+	uint16_t vlan_tci;
+	bool tpa_start_flg;
+	uint8_t bitfield_val;
+	uint8_t offset;
+	struct qede_agg_info *tpa_info;
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
 	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -892,16 +1019,55 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 		return 0;
 
 	while (sw_comp_cons != hw_comp_cons) {
+		ol_flags = 0;
+		packet_type = RTE_PTYPE_UNKNOWN;
+		vlan_tci = 0;
+		tpa_start_flg = false;
+
 		/* Get the CQE from the completion ring */
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-
-		if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
-			PMD_RX_LOG(DEBUG, rxq, "Got a slowath CQE");
-
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+
+		switch (cqe_type) {
+		case ETH_RX_CQE_TYPE_REGULAR:
+			fp_cqe = &cqe->fast_path_regular;
+		break;
+		case ETH_RX_CQE_TYPE_TPA_START:
+			cqe_start_tpa = &cqe->fast_path_tpa_start;
+			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
+			tpa_start_flg = true;
+			PMD_RX_LOG(INFO, rxq,
+				   "TPA start[%u] - len %04x [header %02x]"
+				   " [bd_list[0] %04x], [seg_len %04x]\n",
+				    cqe_start_tpa->tpa_agg_index,
+				    rte_le_to_cpu_16(cqe_start_tpa->
+						     len_on_first_bd),
+				    cqe_start_tpa->header_len,
+				    rte_le_to_cpu_16(cqe_start_tpa->
+							ext_bd_len_list[0]),
+				    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+
+		break;
+		case ETH_RX_CQE_TYPE_TPA_CONT:
+			qede_rx_process_tpa_cont_cqe(qdev, rxq,
+						     &cqe->fast_path_tpa_cont);
+			continue;
+		case ETH_RX_CQE_TYPE_TPA_END:
+			qede_rx_process_tpa_end_cqe(qdev, rxq,
+						    &cqe->fast_path_tpa_end);
+			rx_mb = rxq->
+			tpa_info[cqe->fast_path_tpa_end.tpa_agg_index].mbuf;
+			PMD_RX_LOG(INFO, rxq, "TPA end reason %d\n",
+				   cqe->fast_path_tpa_end.end_reason);
+			goto tpa_end;
+		case ETH_RX_CQE_TYPE_SLOW_PATH:
+			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
 			qdev->ops->eth_cqe_completion(edev, fp->id,
 				(struct eth_slow_path_rx_cqe *)cqe);
+			/* fall-thru */
+		default:
 			goto next_cqe;
 		}
 
@@ -910,69 +1076,93 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 		rx_mb = rxq->sw_rx_ring[sw_rx_index].mbuf;
 		assert(rx_mb != NULL);
 
-		/* non GRO */
-		fp_cqe = &cqe->fast_path_regular;
-
-		len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
-		pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
-		pad = fp_cqe->placement_offset;
-		assert((len + pad) <= rx_mb->buf_len);
-
-		PMD_RX_LOG(DEBUG, rxq,
-			   "CQE type = 0x%x, flags = 0x%x, vlan = 0x%x"
-			   " len = %u, parsing_flags = %d",
-			   cqe_type, fp_cqe->bitfields,
-			   rte_le_to_cpu_16(fp_cqe->vlan_tag),
-			   len, rte_le_to_cpu_16(fp_cqe->pars_flags.flags));
-
-		/* If this is an error packet then drop it */
-		parse_flag =
-		    rte_le_to_cpu_16(cqe->fast_path_regular.pars_flags.flags);
-
-		rx_mb->ol_flags = 0;
-
+		/* Handle regular CQE or TPA start CQE */
+		if (!tpa_start_flg) {
+			parse_flag = rte_le_to_cpu_16(fp_cqe->pars_flags.flags);
+			bitfield_val = fp_cqe->bitfields;
+			offset = fp_cqe->placement_offset;
+			len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
+			pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
+		} else {
+			parse_flag = rte_le_to_cpu_16(cqe_start_tpa->
+							pars_flags.flags);
+			bitfield_val = cqe_start_tpa->bitfields;
+			offset = cqe_start_tpa->placement_offset;
+			/* seg_len = len_on_first_bd */
+			len = rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd);
+			tpa_info->start_cqe_bd_len = len +
+						cqe_start_tpa->header_len;
+			tpa_info->mbuf = rx_mb;
+		}
 		if (qede_tunn_exist(parse_flag)) {
-			PMD_RX_LOG(DEBUG, rxq, "Rx tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			} else {
-				tunn_parse_flag =
-						fp_cqe->tunnel_pars_flags.flags;
-				rx_mb->packet_type =
-					qede_rx_cqe_to_tunn_pkt_type(
-							tunn_parse_flag);
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				if (tpa_start_flg)
+					tunn_parse_flag = cqe_start_tpa->
+							tunnel_pars_flags.flags;
+				else
+					tunn_parse_flag = fp_cqe->
+							tunnel_pars_flags.flags;
+				packet_type =
+				qede_rx_cqe_to_tunn_pkt_type(tunn_parse_flag);
 			}
 		} else {
-			PMD_RX_LOG(DEBUG, rxq, "Rx non-tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx non-tunneled packet\n");
 			if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-			} else if (unlikely(qede_check_notunn_csum_l3(rx_mb,
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			} else {
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			}
+			if (unlikely(qede_check_notunn_csum_l3(rx_mb,
 							parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					   "IP csum failed, flags = 0x%x",
+					   "IP csum failed, flags = 0x%x\n",
 					   parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				ol_flags |= PKT_RX_IP_CKSUM_BAD;
 			} else {
-				rx_mb->packet_type =
+				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				packet_type =
 					qede_rx_cqe_to_pkt_type(parse_flag);
 			}
 		}
 
-		PMD_RX_LOG(INFO, rxq, "packet_type 0x%x", rx_mb->packet_type);
+		if (CQE_HAS_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_VLAN_PKT;
+		}
+
+		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_QINQ_PKT;
+			rx_mb->vlan_tci_outer = 0;
+		}
+
+		/* RSS Hash */
+		htype = (uint8_t)GET_FIELD(bitfield_val,
+					ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
+		if (qdev->rss_enable && htype) {
+			ol_flags |= PKT_RX_RSS_HASH;
+			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
+			PMD_RX_LOG(INFO, rxq, "Hash result 0x%x\n",
+				   rx_mb->hash.rss);
+		}
 
 		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
 			PMD_RX_LOG(ERR, rxq,
 				   "New buffer allocation failed,"
-				   "dropping incoming packet");
+				   "dropping incoming packet\n");
 			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
 			rte_eth_devices[rxq->port_id].
 			    data->rx_mbuf_alloc_failed++;
@@ -980,7 +1170,8 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 			break;
 		}
 		qede_rx_bd_ring_consume(rxq);
-		if (fp_cqe->bd_num > 1) {
+
+		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
 			PMD_RX_LOG(DEBUG, rxq, "Jumbo-over-BD packet: %02x BDs"
 				   " len on first: %04x Total Len: %04x",
 				   fp_cqe->bd_num, len, pkt_len);
@@ -1009,39 +1200,23 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 
 		/* Update rest of the MBUF fields */
 		rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
-		rx_mb->nb_segs = fp_cqe->bd_num;
-		rx_mb->data_len = len;
-		rx_mb->pkt_len = pkt_len;
 		rx_mb->port = rxq->port_id;
-
-		htype = (uint8_t)GET_FIELD(fp_cqe->bitfields,
-				ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
-		if (qdev->rss_enable && htype) {
-			rx_mb->ol_flags |= PKT_RX_RSS_HASH;
-			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
-			PMD_RX_LOG(DEBUG, rxq, "Hash result 0x%x",
-				   rx_mb->hash.rss);
+		rx_mb->ol_flags = ol_flags;
+		rx_mb->data_len = len;
+		rx_mb->vlan_tci = vlan_tci;
+		rx_mb->packet_type = packet_type;
+		PMD_RX_LOG(INFO, rxq, "pkt_type %04x len %04x flags %04lx\n",
+			   packet_type, len, ol_flags);
+		if (!tpa_start_flg) {
+			rx_mb->nb_segs = fp_cqe->bd_num;
+			rx_mb->pkt_len = pkt_len;
 		}
-
 		rte_prefetch1(rte_pktmbuf_mtod(rx_mb, void *));
-
-		if (CQE_HAS_VLAN(parse_flag)) {
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_VLAN_PKT;
-		}
-
-		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
-			/* FW does not provide indication of Outer VLAN tag,
-			 * which is always stripped, so vlan_tci_outer is set
-			 * to 0. Here vlan_tag represents inner VLAN tag.
-			 */
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_QINQ_PKT;
-			rx_mb->vlan_tci_outer = 0;
+tpa_end:
+		if (!tpa_start_flg) {
+			rx_pkts[rx_pkt] = rx_mb;
+			rx_pkt++;
 		}
-
-		rx_pkts[rx_pkt] = rx_mb;
-		rx_pkt++;
 next_cqe:
 		ecore_chain_recycle_consumed(&rxq->rx_comp_ring);
 		sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -1120,43 +1295,44 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 /* Populate scatter gather buffer descriptor fields */
 static inline uint8_t
 qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
-		  struct eth_tx_1st_bd *bd1)
+		  struct eth_tx_2nd_bd **bd2, struct eth_tx_3rd_bd **bd3)
 {
 	struct qede_tx_queue *txq = p_txq;
-	struct eth_tx_2nd_bd *bd2 = NULL;
-	struct eth_tx_3rd_bd *bd3 = NULL;
 	struct eth_tx_bd *tx_bd = NULL;
 	dma_addr_t mapping;
-	uint8_t nb_segs = 1; /* min one segment per packet */
+	uint8_t nb_segs = 0;
 
 	/* Check for scattered buffers */
 	while (m_seg) {
-		if (nb_segs == 1) {
-			bd2 = (struct eth_tx_2nd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd2, 0, sizeof(*bd2));
+		if (nb_segs == 0) {
+			if (!*bd2) {
+				*bd2 = (struct eth_tx_2nd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd2, 0, sizeof(struct eth_tx_2nd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd2, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x",
-				   m_seg->data_len);
-		} else if (nb_segs == 2) {
-			bd3 = (struct eth_tx_3rd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd3, 0, sizeof(*bd3));
+			QEDE_BD_SET_ADDR_LEN(*bd2, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x", m_seg->data_len);
+		} else if (nb_segs == 1) {
+			if (!*bd3) {
+				*bd3 = (struct eth_tx_3rd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd3, 0, sizeof(struct eth_tx_3rd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd3, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x",
-				   m_seg->data_len);
+			QEDE_BD_SET_ADDR_LEN(*bd3, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x", m_seg->data_len);
 		} else {
 			tx_bd = (struct eth_tx_bd *)
 				ecore_chain_produce(&txq->tx_pbl);
 			memset(tx_bd, 0, sizeof(*tx_bd));
+			nb_segs++;
 			mapping = rte_mbuf_data_dma_addr(m_seg);
 			QEDE_BD_SET_ADDR_LEN(tx_bd, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD len %04x",
-				   m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD len %04x", m_seg->data_len);
 		}
-		nb_segs++;
 		m_seg = m_seg->next;
 	}
 
@@ -1164,6 +1340,96 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 	return nb_segs;
 }
 
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+static inline void
+print_tx_bd_info(struct qede_tx_queue *txq,
+		 struct eth_tx_1st_bd *bd1,
+		 struct eth_tx_2nd_bd *bd2,
+		 struct eth_tx_3rd_bd *bd3,
+		 uint64_t tx_ol_flags)
+{
+	char ol_buf[256] = { 0 }; /* for verbose prints */
+
+	if (bd1)
+		PMD_TX_LOG(INFO, txq,
+			   "BD1: nbytes=%u nbds=%u bd_flags=04%x bf=%04x",
+			   rte_cpu_to_le_16(bd1->nbytes), bd1->data.nbds,
+			   bd1->data.bd_flags.bitfields,
+			   rte_cpu_to_le_16(bd1->data.bitfields));
+	if (bd2)
+		PMD_TX_LOG(INFO, txq,
+			   "BD2: nbytes=%u bf=%04x\n",
+			   rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1);
+	if (bd3)
+		PMD_TX_LOG(INFO, txq,
+			   "BD3: nbytes=%u bf=%04x mss=%u\n",
+			   rte_cpu_to_le_16(bd3->nbytes),
+			   rte_cpu_to_le_16(bd3->data.bitfields),
+			   rte_cpu_to_le_16(bd3->data.lso_mss));
+
+	rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
+	PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+}
+#endif
+
+/* TX prepare to check packets meet TX conditions */
+uint16_t
+qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	struct qede_tx_queue *txq = p_txq;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+	uint16_t i;
+	int ret;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+		if (ol_flags & PKT_TX_TCP_SEG) {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+			/* TBD: confirm it's ~9700B for both? */
+			if (m->tso_segsz > ETH_TX_MAX_NON_LSO_PKT_LEN) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		} else {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_NON_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		}
+		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			break;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+#endif
+		/* TBD: pseudo csum calculation required iff
+		 * ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE not set?
+		 */
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	if (unlikely(i != nb_pkts))
+		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+			   nb_pkts - i);
+	return i;
+}
+
 uint16_t
 qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -1171,15 +1437,22 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 	struct qede_dev *qdev = txq->qdev;
 	struct ecore_dev *edev = &qdev->edev;
 	struct qede_fastpath *fp;
-	struct eth_tx_1st_bd *bd1;
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *m_seg = NULL;
 	uint16_t nb_tx_pkts;
 	uint16_t bd_prod;
 	uint16_t idx;
-	uint16_t tx_count;
 	uint16_t nb_frags;
 	uint16_t nb_pkt_sent = 0;
+	uint8_t nbds;
+	bool ipv6_ext_flg;
+	bool lso_flg;
+	bool tunn_flg;
+	struct eth_tx_1st_bd *bd1;
+	struct eth_tx_2nd_bd *bd2;
+	struct eth_tx_3rd_bd *bd3;
+	uint64_t tx_ol_flags;
+	uint16_t hdr_size;
 
 	fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
 
@@ -1189,34 +1462,86 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 		(void)qede_process_tx_compl(edev, txq);
 	}
 
-	nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail /
-			ETH_TX_MAX_BDS_PER_NON_LSO_PACKET));
-	if (unlikely(nb_tx_pkts == 0)) {
-		PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u",
-			   nb_pkts, txq->nb_tx_avail);
-		return 0;
-	}
-
-	tx_count = nb_tx_pkts;
+	nb_tx_pkts  = nb_pkts;
+	bd_prod = rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
 	while (nb_tx_pkts--) {
+		/* Init flags/values */
+		ipv6_ext_flg = false;
+		tunn_flg = false;
+		lso_flg = false;
+		nbds = 0;
+		bd1 = NULL;
+		bd2 = NULL;
+		bd3 = NULL;
+		hdr_size = 0;
+
 		/* Fill the entry in the SW ring and the BDs in the FW ring */
 		idx = TX_PROD(txq);
 		mbuf = *tx_pkts++;
 		txq->sw_tx_ring[idx].mbuf = mbuf;
+		tx_ol_flags = mbuf->ol_flags;
+
+#define RTE_ETH_IS_IPV6_HDR_EXT(ptype) ((ptype) & RTE_PTYPE_L3_IPV6_EXT)
+		if (RTE_ETH_IS_IPV6_HDR_EXT(mbuf->packet_type))
+			ipv6_ext_flg = true;
+
+		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type))
+			tunn_flg = true;
+
+		if (tx_ol_flags & PKT_TX_TCP_SEG)
+			lso_flg = true;
+
+		/* Check minimum TX BDS availability against available BDs */
+		if (unlikely(txq->nb_tx_avail < mbuf->nb_segs))
+			break;
+
+		if (lso_flg) {
+			if (unlikely(txq->nb_tx_avail <
+						ETH_TX_MIN_BDS_PER_LSO_PKT))
+				break;
+		} else {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_NON_LSO_PKT))
+				break;
+		}
+
+		if (tunn_flg && ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+				ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		if (ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		/* BD1 */
 		bd1 = (struct eth_tx_1st_bd *)ecore_chain_produce(&txq->tx_pbl);
-		bd1->data.bd_flags.bitfields =
+		nbds++;
+		bd1->data.bd_flags.bitfields = 0;
+		bd1->data.bitfields = 0;
+
+		bd1->data.bd_flags.bitfields |=
 			1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
 		/* FW 8.10.x specific change */
-		bd1->data.bitfields =
+		if (!lso_flg) {
+			bd1->data.bitfields |=
 			(mbuf->pkt_len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK)
 				<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
-		/* Map MBUF linear data for DMA and set in the first BD */
-		QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
-				     mbuf->data_len);
-		PMD_TX_LOG(INFO, txq, "BD1 len %04x", mbuf->data_len);
+			/* Map MBUF linear data for DMA and set in the BD1 */
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     mbuf->data_len);
+		} else {
+			/* For LSO, packet header and payload must reside on
+			 * buffers pointed by different BDs. Using BD1 for HDR
+			 * and BD2 onwards for data.
+			 */
+			hdr_size = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     hdr_size);
+		}
 
-		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type)) {
-			PMD_TX_LOG(INFO, txq, "Tx tunnel packet");
+		if (tunn_flg) {
 			/* First indicate its a tunnel pkt */
 			bd1->data.bd_flags.bitfields |=
 				ETH_TX_DATA_1ST_BD_TUNN_FLAG_MASK <<
@@ -1231,8 +1556,7 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 					1 << ETH_TX_DATA_1ST_BD_TUNN_FLAG_SHIFT;
 
 			/* Outer IP checksum offload */
-			if (mbuf->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
-				PMD_TX_LOG(INFO, txq, "OuterIP csum offload");
+			if (tx_ol_flags & PKT_TX_OUTER_IP_CKSUM) {
 				bd1->data.bd_flags.bitfields |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -1245,43 +1569,79 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
-			PMD_TX_LOG(INFO, txq, "Insert VLAN 0x%x",
-				   mbuf->vlan_tci);
+		if (tx_ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
 			bd1->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
+		if (lso_flg)
+			bd1->data.bd_flags.bitfields |=
+				1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
+
 		/* Offload the IP checksum in the hardware */
-		if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
-			PMD_TX_LOG(INFO, txq, "IP csum offload");
+		if ((lso_flg) || (tx_ol_flags & PKT_TX_IP_CKSUM))
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			PMD_TX_LOG(INFO, txq, "L4 csum offload");
+		if ((lso_flg) || (tx_ol_flags & (PKT_TX_TCP_CKSUM |
+						PKT_TX_UDP_CKSUM)))
+			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
-			/* IPv6 + extn. -> later */
+
+		/* BD2 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd2 = (struct eth_tx_2nd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd2, 0, sizeof(struct eth_tx_2nd_bd));
+			nbds++;
+			QEDE_BD_SET_ADDR_LEN(bd2,
+					    (hdr_size +
+					    rte_mbuf_data_dma_addr(mbuf)),
+					    mbuf->data_len - hdr_size);
+			/* TBD: check pseudo csum iff tx_prepare not called? */
+			if (ipv6_ext_flg) {
+				bd2->data.bitfields1 |=
+				ETH_L4_PSEUDO_CSUM_ZERO_LENGTH <<
+				ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT;
+			}
+		}
+
+		/* BD3 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd3 = (struct eth_tx_3rd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd3, 0, sizeof(struct eth_tx_3rd_bd));
+			nbds++;
+			if (lso_flg) {
+				bd3->data.lso_mss =
+					rte_cpu_to_le_16(mbuf->tso_segsz);
+				/* Using one header BD */
+				bd3->data.bitfields |=
+					rte_cpu_to_le_16(1 <<
+					ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT);
+			}
 		}
 
 		/* Handle fragmented MBUF */
 		m_seg = mbuf->next;
 		/* Encode scatter gather buffer descriptors if required */
-		nb_frags = qede_encode_sg_bd(txq, m_seg, bd1);
-		bd1->data.nbds = nb_frags;
-		txq->nb_tx_avail -= nb_frags;
+		nb_frags = qede_encode_sg_bd(txq, m_seg, &bd2, &bd3);
+		bd1->data.nbds = nbds + nb_frags;
+		txq->nb_tx_avail -= bd1->data.nbds;
 		txq->sw_tx_prod++;
 		rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
 		bd_prod =
 		    rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+		print_tx_bd_info(txq, bd1, bd2, bd3, tx_ol_flags);
+		PMD_TX_LOG(INFO, txq, "lso=%d tunn=%d ipv6_ext=%d\n",
+			   lso_flg, tunn_flg, ipv6_ext_flg);
+#endif
 		nb_pkt_sent++;
 		txq->xmit_pkts++;
-		PMD_TX_LOG(INFO, txq, "nbds = %d pkt_len = %04x",
-			   bd1->data.nbds, mbuf->pkt_len);
 	}
 
 	/* Write value of prod idx into bd_prod */
@@ -1294,8 +1654,8 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 	/* Check again for Tx completions */
 	(void)qede_process_tx_compl(edev, txq);
 
-	PMD_TX_LOG(DEBUG, txq, "to_send=%u can_send=%u sent=%u core=%d",
-		   nb_pkts, tx_count, nb_pkt_sent, rte_lcore_id());
+	PMD_TX_LOG(DEBUG, txq, "to_send=%u sent=%u bd_prod=%u core=%d",
+		   nb_pkts, nb_pkt_sent, TX_PROD(txq), rte_lcore_id());
 
 	return nb_pkt_sent;
 }
@@ -1412,6 +1772,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_fastpath *fp;
 	int rc, tc, i;
 
@@ -1421,9 +1782,15 @@ static int qede_stop_queues(struct qede_dev *qdev)
 	vport_update_params.update_vport_active_flg = 1;
 	vport_update_params.vport_active_flg = 0;
 	vport_update_params.update_rss_flg = 0;
+	/* Disable TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Disabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, false);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
 
 	DP_INFO(edev, "Deactivate vport\n");
-
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Failed to update vport\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 17a2f0c..c27632e 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -126,6 +126,19 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
+#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
+				   PKT_TX_TCP_CKSUM             | \
+				   PKT_TX_UDP_CKSUM             | \
+				   PKT_TX_OUTER_IP_CKSUM        | \
+				   PKT_TX_TCP_SEG)
+
+#define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
+			      PKT_TX_QINQ_PKT           | \
+			      PKT_TX_VLAN_PKT)
+
+#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
+	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+
 /*
  * RX BD descriptor ring
  */
@@ -135,6 +148,19 @@ struct qede_rx_entry {
 	/* allows expansion .. */
 };
 
+/* TPA related structures */
+enum qede_agg_state {
+	QEDE_AGG_STATE_NONE  = 0,
+	QEDE_AGG_STATE_START = 1,
+	QEDE_AGG_STATE_ERROR = 2
+};
+
+struct qede_agg_info {
+	struct rte_mbuf *mbuf;
+	uint16_t start_cqe_bd_len;
+	uint8_t state; /* for sanity check */
+};
+
 /*
  * Structure associated with each RX queue.
  */
@@ -155,6 +181,7 @@ struct qede_rx_queue {
 	uint64_t rx_segs;
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
+	struct qede_agg_info tpa_info[ETH_TPA_MAX_AGGS_NUM];
 	struct qede_dev *qdev;
 	void *handle;
 };
@@ -232,6 +259,9 @@ int qede_tx_queue_setup(struct rte_eth_dev *dev,
 uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 
+uint16_t qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts);
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
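
The qede_xmit_prep_pkts() handler added above is evidently intended as the
PMD's tx_pkt_prepare callback, so applications would exercise it through the
generic ethdev API. A minimal sketch of that calling pattern follows;
port_id, queue_id and the handle_bad_pkt() helper are placeholder names for
illustration, not part of the patch:

	uint16_t nb_prep, nb_sent;

	/* Validate offload requests (BD-count limits, TSO segment size,
	 * pseudo-header checksums) before handing the burst to the PMD.
	 */
	nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);
	if (nb_prep < nb_pkts)
		/* pkts[nb_prep] failed a check; rte_errno says why */
		handle_bad_pkt(pkts[nb_prep], rte_errno);

	nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);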

* Re: [PATCH 00/61] net/qede/base: qede PMD enhancements
  2017-02-27  7:56 [PATCH 00/61] net/qede/base: qede PMD enhancements Rasesh Mody
                   ` (60 preceding siblings ...)
  2017-02-27  7:57 ` [PATCH 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
@ 2017-03-03 10:25 ` Ferruh Yigit
  2017-03-18  7:05   ` [PATCH v2 " Rasesh Mody
                     ` (62 more replies)
  61 siblings, 63 replies; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-03 10:25 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 2/27/2017 7:56 AM, Rasesh Mody wrote:
> Hi,
> 
> This patch set adds support for new firmware 8.18.9.0, new features and
> bug fixes.

This looks like it depends on another qede driver patchset [1]; can you
please confirm? If so, it would help to mention that dependency here.

Also, I am getting the following build errors [2].

And there are some checkpatch and check-git-log.sh [3] errors.

Thanks,
ferruh

[1]
http://dpdk.org/dev/patchwork/patch/20816/ [patchset with 21 patches]



[2]
.../drivers/net/qede/base/ecore_dev.c:1703:4: error: use of undeclared
identifier 'ECORE_E5_MISSING_CODE'
                        ECORE_E5_MISSING_CODE;
                        ^
1 error generated.
make[7]: *** [base/ecore_dev.o] Error 1
make[7]: *** Waiting for unfinished jobs....
.../drivers/net/qede/qede_rxtx.c:1202:21: error: variable 'pad' is
uninitialized when used here [-Werror,-Wuninitialized]
                rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
                                  ^~~
.../drivers/net/qede/qede_rxtx.c:997:14: note: initialize the variable
'pad' to silence this warning
        uint16_t pad;
                    ^
                     = 0
1 error generated.

.../drivers/net/qede/qede_fdir.c: In function ‘qede_config_cmn_fdir_filter’:
.../drivers/net/qede/qede_fdir.c:126:44: error: format ‘%lx’ expects
argument of type ‘long unsigned int’, but argument 4 has type ‘uint64_t
{aka long long unsigned int}’ [-Werror=format=]
  snprintf(mz_name, sizeof(mz_name) - 1, "%lx", rte_get_timer_cycles());



[3]
Wrong headline format:
        send FW version driver state to MFW
        net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2
        net/qede/base: add a printout of the FW, MFW and MBI versions
        net/qede/base: set the drv_type before sending load request
Wrong headline prefix:
        send FW version driver state to MFW
        drivers/net/qede: upgrade the FW to 8.18.9.0
Wrong headline uppercase:
        net/qede/base: L2 handler changes
        net/qede/base: Add support to set max values of soft resoruces
Wrong headline lowercase:
        net/qede/base: use default mtu from shared memory
        net/qede/base: update MFW when default mtu is changed
        net/qede/base: add non-l2 dcbx tlv application support
        net/qede/base: allow PMD to control vport-id and rss-eng-id
Headline too long:
        net/qede/base: remove attribute field from update current config
        net/qede/base: add support to read personality via MFW commands
        net/qede/base: allow only trusted VFs to be promisc/multi-promisc
        net/qede/base: add a printout of the FW, MFW and MBI versions
        net/qede/base: update bulletin board with link state during init
        net/qede/base: Add support to set max values of soft resoruces
        net/qede/base: add multi-Txq support on same queue-zone for VFs
        net/qede/base: fix race cond between MFW attentions and PF stop
Missing 'Fixes' tag:
        net/qede/base: fix to set pointers to NULL after freeing
        net/qede/base: fix race cond between MFW attentions and PF stop



> 
> Please apply to dpdk-net-next for 17.05 release.
> 
> Thanks!
> Rasesh
> 
> Harish Patil (3):
>   net/qede/base: add support for arfs mode
>   net/qede: add ntuple and flow director filter support
>   net/qede: add LRO/TSO offloads support
> 
> Rasesh Mody (58):
>   net/qede/base: return an initialized return value
>   send FW version driver state to MFW
>   net/qede/base: mask Rx buffer attention bits
>   net/qede/base: print various indication on Tx-timeouts
>   net/qede/base: utilize FW 8.18.9.0
>   drivers/net/qede: upgrade the FW to 8.18.9.0
>   net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2
>   net/qede/base: move mask constants defining NIC type
>   net/qede/base: remove attribute field from update current config
>   net/qede/base: add nvram options
>   net/qede/base: add comment
>   net/qede/base: use default mtu from shared memory
>   net/qede/base: change queue/sb-id from 8 bit to 16 bit
>   net/qede/base: update MFW when default mtu is changed
>   net/qede/base: prevent device init failure
>   net/qede/base: add support to read personality via MFW commands
>   net/qede/base: allow probe to succeed with minor HW-issues
>   net/qede/base: remove unneeded step in HW init
>   net/qede/base: allow only trusted VFs to be promisc/multi-promisc
>   net/qede/base: qm initialization revamp
>   net/qede/base: add a printout of the FW, MFW and MBI versions
>   net/qede/base: check active VF queues before stopping
>   net/qede/base: set the drv_type before sending load request
>   net/qede/base: prevent driver laod with invalid resources
>   net/qede/base: add interfaces for MFW TLV request processing
>   net/qede/base: fix to set pointers to NULL after freeing
>   net/qede/base: L2 handler changes
>   net/qede/base: add support for handling TLV request from MFW
>   net/qede/base: optimize cache-line access
>   net/qede/base: infrastructure changes for VF tunnelling
>   net/qede/base: revise tunnel APIs/structs
>   net/qede/base: add tunnelling support for VFs
>   net/qede/base: formatting changes
>   net/qede/base: prevent transmitter stuck condition
>   net/qede/base: add mask/shift defines for resource command
>   net/qede/base: add API for using MFW resource lock
>   net/qede/base: remove clock slowdown option
>   net/qede/base: add new image types
>   net/qede/base: use L2-handles for RSS configuration
>   net/qede/base: change valloc to vzalloc
>   net/qede/base: add support for previous driver unload
>   net/qede/base: add non-l2 dcbx tlv application support
>   net/qede/base: update bulletin board with link state during init
>   net/qede/base: add coalescing support for VFs
>   net/qede/base: add macro got resource value message
>   net/qede/base: add mailbox for resource allocation
>   net/qede/base: add macro for unsupported command
>   net/qede/base: Add support to set max values of soft resoruces
>   net/qede/base: add return code check
>   net/qede/base: zero out MFW mailbox data
>   net/qede/base: move code bits
>   net/qede/base: add PF parameter
>   net/qede/base: allow PMD to control vport-id and rss-eng-id
>   net/qede/base: add udp ports in bulletin board message
>   net/qede/base: prevent DMAE transactions during recovery
>   net/qede/base: add multi-Txq support on same queue-zone for VFs
>   net/qede/base: fix race cond between MFW attentions and PF stop
>   net/qede/base: semantic changes

<...>

^ permalink raw reply	[flat|nested] 329+ messages in thread

* Re: [PATCH 02/61] send FW version driver state to MFW
  2017-02-27  7:56 ` [PATCH 02/61] send FW version driver state to MFW Rasesh Mody
@ 2017-03-03 10:26   ` Ferruh Yigit
  0 siblings, 0 replies; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-03 10:26 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 2/27/2017 7:56 AM, Rasesh Mody wrote:
> Add support to send FW version and driver state to Management FW.
> 
> Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>

<...>

> -	return ECORE_SUCCESS;
> +	if (IS_PF(p_dev)) {
> +		p_hwfn = ECORE_LEADING_HWFN(p_dev);
> +		drv_mb_param = (FW_MAJOR_VERSION << 24) |
> +			       (FW_MINOR_VERSION << 16) |
> +			       (FW_REVISION_VERSION << 8) |
> +			       (FW_ENGINEERING_VERSION);
> +		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
> +				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
> +				   drv_mb_param, &load_code, &param);
> +		if (rc != ECORE_SUCCESS) {
> +			DP_ERR(p_hwfn, "Failed to send firmware version\n");
> +			return rc;
> +		}
> +
> +		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
> +						      p_hwfn->p_main_ptt,
> +						ECORE_OV_DRIVER_STATE_DISABLED);

Is this something that affects the end user, i.e. the application that uses this PMD?

> +	}
> +
> +	return rc;
>  }
>  

<...>

^ permalink raw reply	[flat|nested] 329+ messages in thread

* [PATCH v2 00/61] net/qede/base: qede PMD enhancements
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-20 16:59     ` Ferruh Yigit
  2017-03-18  7:05   ` [PATCH v2 01/61] net/qede/base: return an initialized return value Rasesh Mody
                     ` (61 subsequent siblings)
  62 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Hi,

This patch set adds support for new firmware 8.18.9.0, new features and
bug fixes.

Please apply to dpdk-net-next for 17.05 release. Note that this patch set
depends on http://dpdk.org/dev/patchwork/patch/21896.

v1..v2
 - address all the review comments received so far

Thanks!
Rasesh

Harish Patil (3):
  net/qede/base: add support for arfs mode
  net/qede: add ntuple and flow director filter support
  net/qede: add LRO/TSO offloads support

Rasesh Mody (58):
  net/qede/base: return an initialized return value
  net/qede/base: send FW version driver state to MFW
  net/qede/base: mask Rx buffer attention bits
  net/qede/base: print various indication on Tx-timeouts
  net/qede/base: utilize FW 8.18.9.0
  net/qede: upgrade the FW to 8.18.9.0
  net/qede/base: decrease maximum HW func per device
  net/qede/base: move mask constants defining NIC type
  net/qede/base: remove attribute from update current config
  net/qede/base: add nvram options
  net/qede/base: add comment
  net/qede/base: use default MTU from shared memory
  net/qede/base: change queue/sb-id from 8 bit to 16 bit
  net/qede/base: update MFW when default MTU is changed
  net/qede/base: prevent device init failure
  net/qede/base: read card personality via MFW commands
  net/qede/base: allow probe to succeed with minor HW-issues
  net/qede/base: remove unneeded step in HW init
  net/qede/base: allow only trusted VFs to be promisc
  net/qede/base: qm initialization revamp
  net/qede/base: print firmware MFW and MBI versions
  net/qede/base: check active VF queues before stopping
  net/qede/base: set driver type before sending load request
  net/qede/base: prevent driver load with invalid resources
  net/qede/base: add interfaces for MFW TLV request processing
  net/qede/base: code refactoring of SP queues
  net/qede/base: make L2 queues handle based
  net/qede/base: add support for handling TLV request from MFW
  net/qede/base: optimize cache-line access
  net/qede/base: infrastructure changes for VF tunnelling
  net/qede/base: revise tunnel APIs/structs
  net/qede/base: add tunnelling support for VFs
  net/qede/base: formatting changes
  net/qede/base: prevent transmitter stuck condition
  net/qede/base: add mask/shift defines for resource command
  net/qede/base: add API for using MFW resource lock
  net/qede/base: remove clock slowdown option
  net/qede/base: add new image types
  net/qede/base: use L2-handles for RSS configuration
  net/qede/base: change valloc to vzalloc
  net/qede/base: add support for previous driver unload
  net/qede/base: add non-L2 dcbx tlv application support
  net/qede/base: update bulletin board during VF init
  net/qede/base: add coalescing support for VFs
  net/qede/base: add macro for resource value message
  net/qede/base: add mailbox for resource allocation
  net/qede/base: add macro for unsupported command
  net/qede/base: set max values for soft resources
  net/qede/base: add return code check
  net/qede/base: zero out MFW mailbox data
  net/qede/base: move code bits
  net/qede/base: add PF parameter
  net/qede/base: allow PMD to control vport and RSS engine ids
  net/qede/base: add udp ports in bulletin board message
  net/qede/base: prevent DMAE transactions during recovery
  net/qede/base: multi-Txq support on same queue-zone for VFs
  net/qede/base: prevent race condition during unload
  net/qede/base: semantic changes

 doc/guides/nics/features/qede.ini             |    4 +
 doc/guides/nics/features/qede_vf.ini          |    2 +
 doc/guides/nics/qede.rst                      |    9 +-
 drivers/net/qede/Makefile                     |    1 +
 drivers/net/qede/base/bcm_osal.h              |   13 +-
 drivers/net/qede/base/common_hsi.h            |  191 ++-
 drivers/net/qede/base/ecore.h                 |  169 +-
 drivers/net/qede/base/ecore_chain.h           |  143 +-
 drivers/net/qede/base/ecore_cxt.c             |  297 +++-
 drivers/net/qede/base/ecore_cxt.h             |   64 +-
 drivers/net/qede/base/ecore_cxt_api.h         |   13 -
 drivers/net/qede/base/ecore_dcbx.c            |   42 +-
 drivers/net/qede/base/ecore_dcbx.h            |    4 +-
 drivers/net/qede/base/ecore_dcbx_api.h        |    4 +-
 drivers/net/qede/base/ecore_dev.c             | 2137 +++++++++++++++----------
 drivers/net/qede/base/ecore_dev_api.h         |  122 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |  816 +++++-----
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++-
 drivers/net/qede/base/ecore_hsi_eth.h         | 2069 ++++++++++++------------
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_hw.c              |   49 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1409 ++++++++++------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  172 +-
 drivers/net/qede/base/ecore_int.c             |   51 +-
 drivers/net/qede/base/ecore_int.h             |   10 -
 drivers/net/qede/base/ecore_int_api.h         |   21 +
 drivers/net/qede/base/ecore_iov_api.h         |   45 +-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_l2.c              |  853 +++++++---
 drivers/net/qede/base/ecore_l2.h              |  149 +-
 drivers/net/qede/base/ecore_l2_api.h          |  134 +-
 drivers/net/qede/base/ecore_mcp.c             | 1018 ++++++++++--
 drivers/net/qede/base/ecore_mcp.h             |  181 ++-
 drivers/net/qede/base/ecore_mcp_api.h         |  316 +++-
 drivers/net/qede/base/ecore_mng_tlv.c         | 1535 ++++++++++++++++++
 drivers/net/qede/base/ecore_proto_if.h        |   16 +
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++---
 drivers/net/qede/base/ecore_sp_api.h          |   19 +
 drivers/net/qede/base/ecore_sp_commands.c     |  372 +++--
 drivers/net/qede/base/ecore_sp_commands.h     |   23 +-
 drivers/net/qede/base/ecore_spq.c             |   86 +-
 drivers/net/qede/base/ecore_spq.h             |   36 +-
 drivers/net/qede/base/ecore_sriov.c           |  953 ++++++++---
 drivers/net/qede/base/ecore_sriov.h           |   23 +-
 drivers/net/qede/base/ecore_vf.c              |  348 +++-
 drivers/net/qede/base/ecore_vf.h              |   85 +-
 drivers/net/qede/base/ecore_vf_api.h          |   11 +
 drivers/net/qede/base/ecore_vfpf_if.h         |   55 +-
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/mcp_public.h            |  271 ++--
 drivers/net/qede/base/nvm_cfg.h               |  475 +++++-
 drivers/net/qede/base/reg_addr.h              |   59 +
 drivers/net/qede/qede_eth_if.c                |   56 +-
 drivers/net/qede/qede_eth_if.h                |   25 +-
 drivers/net/qede/qede_ethdev.c                |  100 +-
 drivers/net/qede/qede_ethdev.h                |   42 +-
 drivers/net/qede/qede_fdir.c                  |  486 ++++++
 drivers/net/qede/qede_if.h                    |   58 +-
 drivers/net/qede/qede_main.c                  |  122 +-
 drivers/net/qede/qede_rxtx.c                  |  677 ++++++--
 drivers/net/qede/qede_rxtx.h                  |   32 +
 63 files changed, 12313 insertions(+), 5122 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
 create mode 100644 drivers/net/qede/qede_fdir.c

-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 329+ messages in thread

* [PATCH v2 01/61] net/qede/base: return an initialized return value
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
  2017-03-18  7:05   ` [PATCH v2 " Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 02/61] net/qede/base: send FW version driver state to MFW Rasesh Mody
                     ` (60 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure ecore_iov_mark_vf_flr() always returns an initialized return
value.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6912cf8..d1c809c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3164,7 +3164,7 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
-	bool found;
+	bool found = false;
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
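
The bug class behind this one-liner, sketched in isolation (num_vfs and
vf_needs_flr() are placeholder names, not ecore code): the flag is only
assigned inside a conditional, so when no VF matched, the function used to
return whatever happened to be on the stack.

	bool found = false;	/* the initializer this patch adds */
	u16 i;

	for (i = 0; i < num_vfs; i++)
		if (vf_needs_flr(i))	/* may never fire */
			found = true;

	return found;	/* previously indeterminate when no VF was FLR-ed */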

* [PATCH v2 02/61] net/qede/base: send FW version driver state to MFW
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
  2017-03-18  7:05   ` [PATCH v2 " Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 01/61] net/qede/base: return an initialized return value Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 03/61] net/qede/base: mask Rx buffer attention bits Rasesh Mody
                     ` (59 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to send FW version and driver state to Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   31 ++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.c     |    7 +++++--
 drivers/net/qede/base/ecore_mcp_api.h |    3 ++-
 drivers/net/qede/qede_if.h            |    3 +++
 drivers/net/qede/qede_main.c          |   20 ++++++++++++++++++++
 5 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index da9cdc9..2d1e031 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1609,8 +1609,9 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc, mfw_rc;
-	u32 load_code, param;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	u32 load_code, param, drv_mb_param;
+	struct ecore_hwfn *p_hwfn;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1743,7 +1744,26 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		p_hwfn->hw_init_done = true;
 	}
 
-	return ECORE_SUCCESS;
+	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		drv_mb_param = (FW_MAJOR_VERSION << 24) |
+			       (FW_MINOR_VERSION << 16) |
+			       (FW_REVISION_VERSION << 8) |
+			       (FW_ENGINEERING_VERSION);
+		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+				   drv_mb_param, &load_code, &param);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_hwfn, "Failed to send firmware version\n");
+			return rc;
+		}
+
+		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
+						      p_hwfn->p_main_ptt,
+						ECORE_OV_DRIVER_STATE_DISABLED);
+	}
+
+	return rc;
 }
 
 #define ECORE_HW_STOP_RETRY_LIMIT	(10)
@@ -3130,8 +3150,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 void ecore_hw_remove(struct ecore_dev *p_dev)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	int i;
 
+	if (IS_PF(p_dev))
+		ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt,
+					ECORE_OV_DRIVER_STATE_NOT_LOADED);
+
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cb3e0bd..e236f39 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1723,6 +1723,9 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 	case ECORE_OV_CLIENT_USER:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
 		break;
+	case ECORE_OV_CLIENT_VENDOR_SPEC:
+		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
 		return ECORE_INVAL;
@@ -1761,9 +1764,9 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE,
-			   drv_state, &resp, &param);
+			   drv_mb_param, &resp, &param);
 	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+		DP_ERR(p_hwfn, "Failed to send driver state\n");
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 4e954bd..614cf67 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -181,7 +181,8 @@ enum ecore_ov_config_method {
 
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
-	ECORE_OV_CLIENT_USER
+	ECORE_OV_CLIENT_USER,
+	ECORE_OV_CLIENT_VENDOR_SPEC
 };
 
 enum ecore_ov_driver_state {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4289d0b..4b23bb9 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -150,8 +150,11 @@ struct qed_common_ops {
 			    uint16_t sb_id, enum qed_sb_type type);
 
 	bool (*can_link_change)(struct ecore_dev *edev);
+
 	void (*update_msglvl)(struct ecore_dev *edev,
 			      uint32_t dp_module, uint8_t dp_level);
+
+	int (*send_drv_state)(struct ecore_dev *edev, bool active);
 };
 
 #endif /* _QEDE_IF_H */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a4d68a..f0033a1 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -668,6 +668,25 @@ static void qed_remove(struct ecore_dev *edev)
 	ecore_hw_remove(edev);
 }
 
+static int qed_send_drv_state(struct ecore_dev *edev, bool active)
+{
+	struct ecore_hwfn *hwfn = ECORE_LEADING_HWFN(edev);
+	struct ecore_ptt *ptt;
+	int status = 0;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt)
+		return -EAGAIN;
+
+	status = ecore_mcp_ov_update_driver_state(hwfn, ptt, active ?
+						  ECORE_OV_DRIVER_STATE_ACTIVE :
+						ECORE_OV_DRIVER_STATE_DISABLED);
+
+	ecore_ptt_release(hwfn, ptt);
+
+	return status;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
@@ -681,4 +700,5 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(drain, &qed_drain),
 	INIT_STRUCT_FIELD(slowpath_stop, &qed_slowpath_stop),
 	INIT_STRUCT_FIELD(remove, &qed_remove),
+	INIT_STRUCT_FIELD(send_drv_state, &qed_send_drv_state),
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
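
A short sketch of how the ethdev layer is expected to drive the new
send_drv_state hook; the qdev/edev locals and the dev_start/dev_stop
placement are assumptions based on the surrounding qede code, not part of
this patch:

	/* On dev_start: report the driver as active to the management FW */
	qdev->ops->common->send_drv_state(edev, true);

	/* On dev_stop: report it as disabled again */
	qdev->ops->common->send_drv_state(edev, false);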

* [PATCH v2 03/61] net/qede/base: mask Rx buffer attention bits
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (2 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 02/61] net/qede/base: send FW version driver state to MFW Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 04/61] net/qede/base: print various indication on Tx-timeouts Rasesh Mody
                     ` (58 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    6 ++++++
 drivers/net/qede/base/reg_addr.h  |    3 +++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2d1e031..eef24cd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1051,6 +1051,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
+	/* @@@TMP:
+	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
+	 */
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
+
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3c369aa..21cbdbd 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1141,3 +1141,6 @@
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
 #define BAR0_MAP_REG_XSDM_RAM 0x1e00000UL
+
+/* 8.18.7.0 FW */
+#define BRB_REG_INT_MASK_10 0x3401b8UL
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 04/61] net/qede/base: print various indication on Tx-timeouts
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (3 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 03/61] net/qede/base: mask Rx buffer attention bits Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 05/61] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
                     ` (57 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Print various indications on Tx-timeouts.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_int.c     |   27 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_int_api.h |   21 +++++++++++++++++++++
 drivers/net/qede/base/reg_addr.h      |    3 +++
 drivers/net/qede/qede_main.c          |   23 +++++++++++++++++++++++
 4 files changed, 74 insertions(+)

diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index b6b8e2d..e5a4359 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2255,3 +2255,30 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
 
 	return rc;
 }
+
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info)
+{
+	u16 sbid = p_sb->igu_sb_id;
+	int i;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_PRODUCER_MEMORY + sbid * 4);
+	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_CONSUMER_MEM + sbid * 4);
+
+	for (i = 0; i < PIS_PER_SB; i++)
+		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					      CAU_REG_PI_MEMORY +
+					      sbid * 4 * PIS_PER_SB +  i * 4);
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index a0d6a43..fdfcba8 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -41,6 +41,12 @@ struct ecore_sb_info {
 	struct ecore_dev *p_dev;
 };
 
+struct ecore_sb_info_dbg {
+	u32 igu_prod;
+	u32 igu_cons;
+	u16 pi[PIS_PER_SB];
+};
+
 struct ecore_sb_cnt_info {
 	int sb_cnt;
 	int sb_iov_cnt;
@@ -303,4 +309,19 @@ void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
  */
 void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable);
 
+/**
+ * @brief Read debug information regarding a given SB.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_sb - pointer to Status block for which we want to get info.
+ * @param p_info - pointer to struct to fill with information regarding SB.
+ *
+ * @return ECORE_SUCCESS if pointer is filled; failure otherwise.
+ */
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info);
+
 #endif
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 21cbdbd..3cc7fd4 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1144,3 +1144,6 @@
 
 /* 8.18.7.0 FW */
 #define BRB_REG_INT_MASK_10 0x3401b8UL
+
+#define IGU_REG_PRODUCER_MEMORY 0x182000UL
+#define IGU_REG_CONSUMER_MEM 0x183000UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index f0033a1..a604a5b 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -687,6 +687,29 @@ static int qed_send_drv_state(struct ecore_dev *edev, bool active)
 	return status;
 }
 
+static int qed_get_sb_info(struct ecore_dev *edev, struct ecore_sb_info *sb,
+			   u16 qid, struct ecore_sb_info_dbg *sb_dbg)
+{
+	struct ecore_hwfn *hwfn = &edev->hwfns[qid % edev->num_hwfns];
+	struct ecore_ptt *ptt;
+	int rc;
+
+	if (IS_VF(edev))
+		return -EINVAL;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "Can't acquire PTT\n");
+		return -EAGAIN;
+	}
+
+	memset(sb_dbg, 0, sizeof(*sb_dbg));
+	rc = ecore_int_get_sb_dbg(hwfn, ptt, sb, sb_dbg);
+
+	ecore_ptt_release(hwfn, ptt);
+	return rc;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
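
A sketch of the Tx-timeout dump this helper enables, assuming
qed_get_sb_info() is exported through the common ops like its neighbours;
fp->sb_info and txq->queue_id are placeholders borrowed from the
surrounding qede code, and the DP_NOTICE destination is an assumption:

	struct ecore_sb_info_dbg sb_dbg;
	int rc, i;

	rc = qed_get_sb_info(edev, fp->sb_info, txq->queue_id, &sb_dbg);
	if (rc == 0) {
		DP_NOTICE(edev, false, "IGU: prod %08x cons %08x\n",
			  sb_dbg.igu_prod, sb_dbg.igu_cons);
		for (i = 0; i < PIS_PER_SB; i++)
			DP_NOTICE(edev, false, "SB PI[%d] %04x\n",
				  i, sb_dbg.pi[i]);
	}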

* [PATCH v2 05/61] net/qede/base: utilize FW 8.18.9.0
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (4 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 04/61] net/qede/base: print various indication on Tx-timeouts Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 06/61] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
                     ` (56 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

This change is in preparation for working with the new FW 8.18.9.0.
Rename the defines to use an E4_ prefix and the structs to use an e4_
prefix; this renaming makes room for supporting future chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/common_hsi.h       |   15 +-
 drivers/net/qede/base/ecore_hsi_common.h |  770 +++++------
 drivers/net/qede/base/ecore_hsi_eth.h    | 2052 +++++++++++++++---------------
 drivers/net/qede/base/ecore_iov_api.h    |    4 +-
 drivers/net/qede/base/ecore_spq.c        |   20 +-
 drivers/net/qede/base/ecore_sriov.c      |    2 +-
 drivers/net/qede/base/ecore_sriov.h      |    4 +-
 7 files changed, 1447 insertions(+), 1420 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 2f84148..59e751f 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -107,20 +107,20 @@
 #define MAX_NUM_PFS	(MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
 
-#define MAX_NUM_VFS_K2	(192)
 #define MAX_NUM_VFS_BB	(120)
-#define MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2	(192)
+#define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 /* in both BB and K2, the VF number starts from 16. so for arrays containing all
  * possible PFs and VFs - we need a constant for this size
  */
 #define MAX_FUNCTION_NUMBER_BB	(MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2	(MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 #define MAX_NUM_VPORTS_K2	(208)
 #define MAX_NUM_VPORTS_BB	(160)
@@ -149,9 +149,10 @@
 #define MAX_PHYS_VOQS		(NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES	(8)
-#define NUM_OF_LCIDS		(320)
-#define NUM_OF_LTIDS		(320)
+#define E4_NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_TASK_TYPES		(8)
+#define NUM_OF_LCIDS			(320)
+#define NUM_OF_LTIDS			(320)
 
 /* Clock values */
 #define MASTER_CLK_FREQ_E4		(375e6)
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index d978bb0..f934e68 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -75,306 +75,306 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct xstorm_core_conn_ag_ctx {
+struct e4_xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
 /* exist_in_qm1 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
 /* exist_in_qm2 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
 /* exist_in_qm3 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
 /* bit4 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
 /* bit6 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
 /* bit7 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
 /* bit9 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
 /* bit10 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
 /* bit11 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
 /* bit12 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
 /* bit13 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
 /* bit14 */
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
 /* bit15 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
 /* timer1cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
 /* timer2cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
 /* cf10 */
-#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
 /* cf11 */
-#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
 /* cf13 */
-#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
 /* cf14 */
-#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
 /* cf15 */
-#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
 /* cf_array_cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
 /* cf18 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
 /* cf19 */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
 /* cf21 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
 /* cf22 */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
 /* cf0en */
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
 /* cf1en */
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
 /* cf3en */
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
 /* cf4en */
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
 /* cf5en */
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
 /* cf6en */
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
 /* cf7en */
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
 /* cf8en */
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
 /* cf9en */
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
 /* cf11en */
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
 /* cf12en */
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
 /* cf13en */
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
 /* cf14en */
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
 /* cf15en */
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
 /* cf16en */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
 /* cf19en */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
 /* cf20en */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
 /* cf21en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
 /* cf22en */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
 /* cf23en */
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
 /* rule0en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
 /* rule1en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
 /* rule3en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
 /* rule4en */
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
 /* rule5en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
 /* rule6en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
 /* rule7en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
 /* rule8en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
 /* rule9en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
 /* rule11en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
 /* rule12en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
 /* rule13en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
 /* rule14en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
 /* rule15en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
 /* rule16en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
 /* rule17en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
 /* rule19en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
 /* rule20en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
 /* rule21en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
 /* rule22en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
 /* rule23en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
 /* rule24en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
 /* rule25en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
 /* bit17 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
 /* bit18 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
 /* bit19 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
 /* bit20 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
 /* bit21 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
 /* cf23 */
-#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -410,7 +410,7 @@ struct xstorm_core_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -428,89 +428,89 @@ struct xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct tstorm_core_conn_ag_ctx {
+struct e4_tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -532,63 +532,63 @@ struct tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_core_conn_ag_ctx {
+struct e4_ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -628,11 +628,11 @@ struct core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -1934,6 +1934,92 @@ enum dmae_cmd_src_enum {
 };
 
 
+struct e4_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
 /*
  * IGU cleanup command
  */
@@ -2017,44 +2103,6 @@ struct igu_msix_vector {
 };
 
 
-struct mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
 /*
  * per encapsulation type enabling flags
  */
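
[Editor's note -- not part of the patch.] The hunks above and below are a mechanical rename: every CORE_CONN_AG_CTX mask/shift pair and aggregative-context struct gains an E4_/e4_ prefix (note the new e5_reserved field replacing byte16), apparently to keep the E4-chip HSI layout distinct from E5 variants coming with the 8.18.9.0 firmware. The _MASK/_SHIFT pairs are consumed through token-pasting field accessors; the self-contained sketch below uses assumed equivalents of the GET_FIELD/SET_FIELD helpers found in the ecore base code -- the local definitions here are illustrative, not the patch's -- to show how a 2-bit completion-flag field and a 1-bit flag pack into one flags byte.

/*
 * Minimal sketch, not part of the patch: exercising two of the renamed
 * E4_* defines with locally defined, ecore-style field accessors.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint8_t u8;

/* Assumed equivalents of the token-pasting accessors used by ecore. */
#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))
#define SET_FIELD(value, name, flag) \
	do { \
		(value) &= ~((name##_MASK) << (name##_SHIFT)); \
		(value) |= (u8)(((flag) & (name##_MASK)) << (name##_SHIFT)); \
	} while (0)

/* Two of the renamed defines from the hunk above. */
#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK   0x3 /* timer0cf */
#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT  6
#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK  0x1 /* exist_in_qm0 */
#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT 0

int main(void)
{
	u8 flags0 = 0;

	/* Pack a 2-bit CF value and a 1-bit flag into the same byte. */
	SET_FIELD(flags0, E4_TSTORM_CORE_CONN_AG_CTX_CF0, 2);
	SET_FIELD(flags0, E4_TSTORM_CORE_CONN_AG_CTX_BIT0, 1);

	/* Prints: flags0=0x81 cf0=2 bit0=1 */
	printf("flags0=0x%02x cf0=%u bit0=%u\n", flags0,
	       (unsigned)GET_FIELD(flags0, E4_TSTORM_CORE_CONN_AG_CTX_CF0),
	       (unsigned)GET_FIELD(flags0, E4_TSTORM_CORE_CONN_AG_CTX_BIT0));
	return 0;
}

Because the accessors paste _MASK/_SHIFT onto the field name, the rename only touches the define sites and the name passed at each call site; the packed byte layout itself is unchanged, which is why every hunk is a pure prefix substitution.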
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index e8373d7..9d2a118 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,315 +34,315 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct xstorm_eth_conn_ag_ctx {
+struct e4_xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
 /* bit4 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
 /* bit6 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
 /* bit7 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
 /* bit9 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
 /* bit10 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
 /* bit11 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
 /* bit12 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
 /* bit13 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
 /* bit14 */
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
 /* bit15 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
 /* cf4 */
-#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
 /* cf5 */
-#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
 /* cf6 */
-#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
 /* cf7 */
-#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
 /* cf8 */
-#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
 /* cf9 */
-#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
 /* cf10 */
-#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
 /* cf11 */
-#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
 /* cf13 */
-#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
 /* cf14 */
-#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
 /* cf15 */
-#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
 /* cf18 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
 /* cf19 */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
 /* cf21 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
 /* cf22 */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
 /* cf0en */
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
 /* cf1en */
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
 /* cf3en */
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
 /* cf4en */
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
 /* cf5en */
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
 /* cf6en */
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
 /* cf7en */
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
 /* cf8en */
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
 /* cf9en */
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
 /* cf11en */
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
 /* cf12en */
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
 /* cf13en */
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
 /* cf14en */
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
 /* cf15en */
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
 /* cf16en */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
 /* cf19en */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
 /* cf20en */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
 /* cf21en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
 /* cf22en */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
 /* cf23en */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
 /* rule0en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
 /* rule1en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
 /* rule3en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
 /* rule4en */
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
 /* rule5en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
 /* rule6en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
 /* rule7en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
 /* rule8en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
 /* rule9en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
 /* bit17 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
 /* bit18 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
 /* bit19 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
 /* bit20 */
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
 /* bit21 */
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
 /* cf23 */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
+	__le16 e5_reserved1 /* physical_q1 */;
 	__le16 edpm_num_bds /* physical_q2 */;
 	__le16 tx_bd_cons /* word3 */;
 	__le16 tx_bd_prod /* word4 */;
@@ -375,7 +375,7 @@ struct xstorm_eth_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -400,47 +400,47 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct ystorm_eth_conn_ag_ctx {
+struct e4_ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -454,89 +454,89 @@ struct ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct tstorm_eth_conn_ag_ctx {
+struct e4_tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -558,88 +558,88 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_eth_conn_ag_ctx {
+struct e4_ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
 /* timer0cf */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
 /* timer1cf */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
 /* timer2cf */
-#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
 /* cf4 */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
 /* cf5 */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
 /* cf6 */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
 /* cf0en */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
 /* cf1en */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
 /* cf2en */
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
 /* cf3en */
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
 /* cf4en */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
 /* cf5en */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
 /* cf6en */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
 /* rule0en */
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
 /* rule1en */
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
 /* rule2en */
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
 /* rule3en */
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
 /* rule4en */
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
 /* rule5en */
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
 /* rule6en */
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
 /* rule7en */
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
 /* rule8en */
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -678,15 +678,15 @@ struct eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1480,6 +1480,668 @@ struct vport_update_ramrod_data {
 
 
 
+struct E4XstormEthConnAgCtxDqExtLdPart {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
+/* bit6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
+/* bit7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
+	u8 flags1;
+/* bit8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
+/* bit9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
+/* bit12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
+/* bit13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
+/* bit14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
+/* timer1cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
+/* timer2cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
+	u8 flags3;
+/* cf4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
+/* cf5 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
+/* cf6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
+/* cf7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
+	u8 flags4;
+/* cf8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
+/* cf9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
+/* cf10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
+/* cf11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
+	u8 flags5;
+/* cf12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
+/* cf13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
+/* cf14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
+/* cf15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
+	u8 flags6;
+/* cf16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
+/* cf18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
+/* cf19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+/* cf20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
+/* cf21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
+/* cf22 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
+/* cf23 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+};
+
+
+struct e4_mstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_xstorm_eth_hw_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+	u8 flags1;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+/* bit12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
+/* bit13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
+/* bit14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+/* timer1cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+/* timer2cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+	u8 flags3;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+	u8 flags4;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+	u8 flags5;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+	u8 flags6;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+};
+
+
+
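A note on the convention used by the E4 context structure above: every
sub-byte field is described by a paired _MASK/_SHIFT define and is read and
written through the driver's generic GET_FIELD/SET_FIELD helpers. A minimal
sketch of the pattern follows; the macro bodies are paraphrased from the
ecore.h helpers and the struct tag e4_xstorm_eth_hw_conn_ag_ctx is inferred
from the define prefix, so treat both as assumptions rather than verbatim
driver code.

    /* Paraphrased accessors -- the real macros live in ecore.h and may
     * differ in detail (e.g. 64-bit casts in SET_FIELD).
     */
    #define GET_FIELD(value, name) \
            (((value) >> (name ## _SHIFT)) & (name ## _MASK))

    #define SET_FIELD(value, name, flag)                                \
            do {                                                        \
                    (value) &= ~((name ## _MASK) << (name ## _SHIFT));  \
                    (value) |= (((flag) & (name ## _MASK)) <<           \
                                (name ## _SHIFT));                      \
            } while (0)

    /* Example: TX_RULE_ACTIVE is context bit14, i.e. bit 6 of flags1 */
    static void example_mark_tx_rule_active(void)
    {
            struct e4_xstorm_eth_hw_conn_ag_ctx ctx = { 0 };

            SET_FIELD(ctx.flags1,
                      E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE, 1);
    }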
 /*
  * GFT CAM line struct
  */
@@ -1730,690 +2392,4 @@ enum gft_vlan_select {
 };
 
 
-struct mstorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-/* exist_in_qm1 */
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
-	u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-struct xstormEthConnAgCtxDqExtLdPart {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-	u8 byte3 /* byte3 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	u8 byte6 /* byte6 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-};
-
-
-
-struct xstorm_eth_hw_conn_ag_ctx {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-};
-
-
 #endif /* __ECORE_HSI_ETH__ */
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 24a43d3..9775360 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -701,7 +701,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ * @return E4_MAX_NUM_VFS if there are no further active VFs; otherwise, the index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -709,7 +709,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS;						\
+	     _i < E4_MAX_NUM_VFS;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
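For context, the iterator above relies on E4_MAX_NUM_VFS doubling as the
end-of-iteration sentinel returned by ecore_iov_get_next_active_vf(). A
minimal usage sketch (the per-VF body is a hypothetical placeholder):

    static void example_walk_active_vfs(struct ecore_hwfn *p_hwfn)
    {
            u16 i;

            /* Visits only the indices of active VFs; the loop terminates
             * once the helper returns the E4_MAX_NUM_VFS sentinel.
             */
            ecore_for_each_vf(p_hwfn, i) {
                    /* ... per-VF work keyed by index i ... */
            }
    }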
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 1f35d6c..9035d3b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -191,15 +191,17 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
-	SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-	SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-	/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-	 *           XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-	 */
-	SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-		  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
+		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
+		 */
+		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
+			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	}
 
 	/* CDU validation - FIXME currently disabled */
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d1c809c..b051678 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3487,7 +3487,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS;
+	return E4_MAX_NUM_VFS;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 884a90c..e9ccc79 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -15,7 +15,7 @@
 #include "ecore_hsi_common.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -152,7 +152,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS];
+	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 	u16			base_vport_id;
-- 
1.7.10.3


* [PATCH v2 06/61] net/qede: upgrade the FW to 8.18.9.0
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (5 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 05/61] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 07/61] net/qede/base: decrease maximum HW func per device Rasesh Mody
                     ` (55 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch adds the changes needed to upgrade the FW to 8.18.9.0.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h              |    1 +
 drivers/net/qede/base/common_hsi.h            |  176 +++-
 drivers/net/qede/base/ecore_dcbx.c            |    4 +-
 drivers/net/qede/base/ecore_dev.c             |  204 ++--
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   46 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++--
 drivers/net/qede/base/ecore_hsi_eth.h         |   17 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1378 ++++++++++++++++---------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  161 ++-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++++-----
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/reg_addr.h              |   53 +
 drivers/net/qede/qede_main.c                  |    2 +-
 17 files changed, 1882 insertions(+), 1122 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 88246b7..0d239c9 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -398,6 +398,7 @@ u32 qede_osal_log2(u32);
 #define OSAL_STRCPY(dst, string) strcpy(dst, string)
 #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
 #define OSAL_STRCMP(str1, str2) strcmp(str1, str2)
+#define OSAL_STRTOUL(str, base, res) 0
 
 #define OSAL_INLINE inline
 #define OSAL_REG_ADDR(_p_hwfn, _offset) \
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 59e751f..cbcde22 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -78,8 +78,16 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-#define MAX_NUM_LL2_RX_QUEUES					32
-#define MAX_NUM_LL2_TX_STATS_COUNTERS			32
+/*
+ * LL2 queues are usually opened in TX-RX pairs.
+ * There is a hard restriction on the number of RX queues (limited by Tstorm
+ * RAM) and on the number of TX statistics counters (limited by Pstorm RAM),
+ * while the number of TX queues is almost unlimited.
+ * The two constants differ so that asymmetric LL2 connections are possible.
+ */
+
+#define MAX_NUM_LL2_RX_QUEUES					48
+#define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
 /****************************************************************************/
@@ -89,8 +97,8 @@
 
 
 #define FW_MAJOR_VERSION		8
-#define FW_MINOR_VERSION		14
-#define FW_REVISION_VERSION		6
+#define FW_MINOR_VERSION		18
+#define FW_REVISION_VERSION		9
 #define FW_ENGINEERING_VERSION	0
 
 /***********************/
@@ -110,6 +118,7 @@
 #define MAX_NUM_VFS_BB	(120)
 #define MAX_NUM_VFS_K2	(192)
 #define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define COMMON_MAX_NUM_VFS (240)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
@@ -177,6 +186,13 @@
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
 
+#define	CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT				(0)
+#define	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT	(1)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE				(2)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_REGION				(3)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
+
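The CDU_CONTEXT_VALIDATION_CFG_* values just added appear to be bit positions
within a one-byte validation configuration. A small illustrative sketch under
that assumption (the helper function itself is hypothetical):

    static u8 example_cdu_validation_cfg(void)
    {
            u8 cfg = 0;

            /* Enable validation and fold the CID into the computed value */
            cfg |= 1 << CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT;
            cfg |= 1 << CDU_CONTEXT_VALIDATION_CFG_USE_CID;

            return cfg;
    }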
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -472,7 +488,6 @@
 #define PXP_BAR_DQ                                          1
 
 /* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS		12
 #define PXP_PER_PF_ENTRY_SIZE		8
 #define PXP_NUM_GLOBAL_WINDOWS		243
 #define PXP_GLOBAL_ENTRY_SIZE		4
@@ -497,6 +512,8 @@
 #define PXP_PF_ME_OPAQUE_ADDR		0x1f8
 #define PXP_PF_ME_CONCRETE_ADDR		0x1fc
 
+#define PXP_NUM_PF_WINDOWS		12
+
 #define PXP_EXTERNAL_BAR_PF_WINDOW_START	0x1000
 #define PXP_EXTERNAL_BAR_PF_WINDOW_NUM		PXP_NUM_PF_WINDOWS
 #define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE	0x1000
@@ -519,8 +536,6 @@
 	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
 
 /* PF BAR */
-/*#define PXP_BAR0_START_GRC 0x1000 */
-/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */
 #define PXP_BAR0_START_GRC                      0x0000
 #define PXP_BAR0_GRC_LENGTH                     0x1C00000
 #define PXP_BAR0_END_GRC                        \
@@ -589,7 +604,7 @@
 #define SDM_OP_GEN_TRIG_AGG_INT			2
 #define SDM_OP_GEN_TRIG_LOADER			4
 #define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
-#define SDM_OP_GEN_TRIG_RELEASE_THREAD	7
+#define SDM_OP_GEN_TRIG_INC_ORDER_CNT	9
 
 /***********************************************************/
 /* Completion types                                        */
@@ -612,6 +627,7 @@
 #define SDM_COMP_TYPE_RELEASE_THREAD	7
 /* Write to local RAM as a completion */
 #define SDM_COMP_TYPE_RAM		8
+#define SDM_COMP_TYPE_INC_ORDER_CNT	9 /* Applicable only for E4 */
 
 
 /******************/
@@ -881,7 +897,7 @@ enum db_dest {
  */
 enum db_dpm_type {
 	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
-	DPM_ROCE /* RoCE DPM- to NIG */,
+	DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */,
 /* L2 DPM inline- to PBF, with packet data on doorbell */
 	DPM_L2_INLINE,
 	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
@@ -968,42 +984,42 @@ struct db_pwm_addr {
 };
 
 /*
- * Parameters to RoCE firmware, passed in EDPM doorbell
+ * Parameters to RDMA firmware, passed in EDPM doorbell
  */
-struct db_roce_dpm_params {
+struct db_rdma_dpm_params {
 	__le32 params;
 /* Size in QWORD-s of the DPM burst */
-#define DB_ROCE_DPM_PARAMS_SIZE_MASK            0x3F
-#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT           0
-/* Type of DPM transacation (DPM_ROCE) (use enum db_dpm_type) */
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK        0x3
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT       6
-/* opcode for ROCE operation */
-#define DB_ROCE_DPM_PARAMS_OPCODE_MASK          0xFF
-#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT         8
+#define DB_RDMA_DPM_PARAMS_SIZE_MASK            0x3F
+#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT           0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK        0x3
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT       6
+/* opcode for RDMA operation */
+#define DB_RDMA_DPM_PARAMS_OPCODE_MASK          0xFF
+#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT         8
 /* the size of the WQE payload in bytes */
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT       16
-#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK       0x1
-#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT      27
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT       16
+#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK       0x1
+#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT      27
 /* RoCE completion flag */
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
-#define DB_ROCE_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
-#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT          29
-#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK       0x3
-#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT      30
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_RDMA_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
+#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT          29
+#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK       0x3
+#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT      30
 };
 
 /*
- * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a
+ * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
  */
-struct db_roce_dpm_data {
+struct db_rdma_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 prod_val /* aggregated value to update */;
-/* parameters passed to RoCE firmware */
-	struct db_roce_dpm_params params;
+/* parameters passed to RDMA firmware */
+	struct db_rdma_dpm_params params;
 };
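To make the layout above concrete, here is a hedged sketch of composing the
first doorbell of an RDMA DPM burst. SET_FIELD is the generic mask/shift
helper from ecore.h, the field values are purely illustrative, and the
CPU-to-little-endian conversions a real driver would perform are glossed
over.

    static void example_fill_dpm_data(struct db_rdma_dpm_data *dpm,
                                      u16 icid, u16 prod)
    {
            u32 params = 0;

            SET_FIELD(params, DB_RDMA_DPM_PARAMS_SIZE, 4); /* 4 QWORD burst */
            SET_FIELD(params, DB_RDMA_DPM_PARAMS_DPM_TYPE, DPM_RDMA);
            SET_FIELD(params, DB_RDMA_DPM_PARAMS_WQE_SIZE, 32); /* bytes */

            dpm->icid = icid;            /* endianness conversion omitted */
            dpm->prod_val = prod;
            dpm->params.params = params;
    }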
 
 /* Igu interrupt command */
@@ -1136,6 +1152,68 @@ struct parsing_and_err_flags {
 
 
 /*
+ * Parsing error flags bitmap.
+ */
+struct parsing_err_flags {
+	__le16 flags;
+/* MAC error indication */
+#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1
+#define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT                         0
+/* truncation error indication */
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK                        0x1
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_SHIFT                       1
+/* packet too small indication */
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK                      0x1
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT                     2
+/* Header Missing Tag */
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT               3
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK             0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT            4
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK    0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT   5
+/* Set this error if: 1. the total-len is smaller than the hdr-len; 2. the
+ * total-ip-len indicates a number bigger than the real packet length;
+ * 3. tunneling: the total-ip-length of the outer header points to an offset
+ * smaller than the one pointed to by the total-ip-len of the inner header.
+ */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK           0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT          6
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK                  0x1
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT                 7
+/* from frame cracker output. for either TCP or UDP */
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK          0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT         8
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK               0x1
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT              9
+/* checksum was calculated and its value isn't 0xffff, or the L4 checksum
+ * wasn't calculated for any reason (e.g. the udp/ipv4 checksum is 0).
+ */
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK               0x1
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT              10
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK        0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT       11
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK  0x1
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12
+/* set if the geneve option size was over 32 bytes */
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK            0x1
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT           13
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK           0x1
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT          14
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK              0x1
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT             15
+};
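+
+For illustration only (not part of the patch): checking one bit of the new
+parsing_err_flags bitmap. A minimal sketch, assuming flags has already been
+converted from little endian; EX_GET_FIELD is a local stand-in for the usual
+ecore GET_FIELD accessor:
+
+#define EX_GET_FIELD(val, name) \
+	(((val) >> name##_SHIFT) & name##_MASK)
+
+static int ex_rx_has_ipv4_csum_error(u16 flags)
+{
+	return EX_GET_FIELD(flags, PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR);
+}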
+
+
+/*
  * Pb context
  */
 struct pb_context {
@@ -1492,49 +1570,57 @@ struct tdif_task_context {
 struct timers_context {
 	__le32 logical_client_0;
 /* Expiration time of logical client 0 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED0_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED0_SHIFT            27
 /* Valid bit of logical client 0 */
 #define TIMERS_CONTEXT_VALIDLC0_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
 /* Active bit of logical client 0 */
 #define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED0_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED0_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
 	__le32 logical_client_1;
 /* Expiration time of logical client 1 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED2_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED2_SHIFT            27
 /* Valid bit of logical client 1 */
 #define TIMERS_CONTEXT_VALIDLC1_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
 /* Active bit of logical client 1 */
 #define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED3_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED3_SHIFT            30
 	__le32 logical_client_2;
 /* Expiration time of logical client 2 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED4_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED4_SHIFT            27
 /* Valid bit of logical client 2 */
 #define TIMERS_CONTEXT_VALIDLC2_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
 /* Active bit of logical client 2 */
 #define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED2_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED2_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED5_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED5_SHIFT            30
 	__le32 host_expiration_fields;
 /* Expiration time on host (closest one) */
-#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0xFFFFFFF
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0x7FFFFFF
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_RESERVED6_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED6_SHIFT            27
 /* Valid bit of host expiration */
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
-#define TIMERS_CONTEXT_RESERVED3_MASK             0x7
-#define TIMERS_CONTEXT_RESERVED3_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED7_MASK             0x7
+#define TIMERS_CONTEXT_RESERVED7_SHIFT            29
 };
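
A note on the timers_context change above: each expiration-time field shrinks
from 28 bits (mask 0xFFFFFFF = (1 << 28) - 1) to 27 bits (mask 0x7FFFFFF =
(1 << 27) - 1), with bit 27 becoming an explicit reserved bit. A minimal
extraction sketch (illustrative only, cpu-endian input assumed):

static u32 ex_timers_lc0_expiration(u32 logical_client_0)
{
	return (logical_client_0 >> TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT) &
	       TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK;
}
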
 
 
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 7380fd8..102774d 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -126,7 +126,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 	else if (enable)
 		p_data->arr[type].update = UPDATE_DCB;
 	else
-		p_data->arr[type].update = DONT_UPDATE_DCB_DHCP;
+		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
 	if (p_hwfn->hw_info.personality == personality) {
@@ -938,7 +938,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 	p_dest->pf_id = p_src->pf_id;
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
-	p_dest->update_eth_dcb_data_flag = update_flag;
+	p_dest->update_eth_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index eef24cd..f82f5e6 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -814,7 +814,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 	int hw_mode = 0;
 
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
-		hw_mode |= 1 << MODE_BB_B0;
+		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
 	} else {
@@ -886,29 +886,36 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u32 pl_hv = 1;
 	int i;
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		pl_hv |= 0x600;
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev))
+			pl_hv |= 0x600;
+	}
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
+	if (CHIP_REV_IS_EMUL(p_dev) &&
+	    (ECORE_IS_AH(p_dev)))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
+			 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB_B0, 4);
+	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-			 (p_hwfn->p_dev->num_ports_in_engines >> 1));
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev)) {
+			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+				 (p_dev->num_ports_in_engines >> 1));
 
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-			 p_hwfn->p_dev->num_ports_in_engines == 4 ? 0 : 3);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+				 p_dev->num_ports_in_engines == 4 ? 0 : 3);
+		}
 	}
 
 	/* Poll on RBC */
@@ -1051,12 +1058,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
-	/* @@@TMP:
-	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
-	 */
-	if (ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
-
 	return rc;
 }
 
@@ -1072,20 +1073,19 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n",
-		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) |
+		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) |
 		   (8 << PMEG_IF_BYTE_COUNT),
 		   (reg_type << 25) | (addr << 8) | port,
 		   (u32)((data >> 32) & 0xffffffff),
 		   (u32)(data & 0xffffffff));
 
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0,
-		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) &
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB,
+		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) &
 		  0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT));
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB,
 		 (reg_type << 25) | (addr << 8) | port);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
-		 data & 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff);
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB,
 		 (data >> 32) & 0xffffffff);
 }
 
@@ -1101,48 +1101,13 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 #define XLMAC_PAUSE_CTRL (0x60d)
 #define XLMAC_PFC_CTRL (0x60e)
 
-static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
-	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0 + (port << 2) * NWM_REG_MAC0_SIZE;
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_SHIFT) |
-		 (port << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_SHIFT)
-		 | (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS,
-		 (0xA << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_SHIFT) |
-		 (8 << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG, 0xa853);
-}
-
-static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt)
-{
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
 	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
 
-	if (ECORE_IS_AH(p_hwfn->p_dev)) {
-		ecore_emul_link_init_ah(p_hwfn, p_ptt);
-		return;
-	}
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -1171,8 +1136,53 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_link_init(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt, u8 port)
+static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	u8 port = p_hwfn->port_id;
+	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
+
+	DP_INFO(p_hwfn->p_dev, "Configuring Emulation Link %02x\n", port);
+
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+		 (port <<
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+		 (0xA <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		 (8 <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
+		 0xa853);
+}
+
+static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt)
+{
+	if (ECORE_IS_AH(p_hwfn->p_dev))
+		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
+	else /* BB */
+		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+}
+
+static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,  u8 port)
 {
 	int port_offset = port ? 0x800 : 0;
 	u32 xmac_rxctrl = 0;
@@ -1185,10 +1195,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
 		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Set */
 
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1);
 
 	/* Set the number of ports on the Warp Core to 10G */
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE, 3);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE_BB, 3);
 
 	/* Soft reset of XMAC */
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
@@ -1199,20 +1209,21 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 
 	/* FIXME: move to common end */
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE + port_offset, 0x20);
+		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE_BB + port_offset, 0x20);
 
 	/* Set Max packet size: initialize XMAC block register for port 0 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE + port_offset, 0x2710);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE_BB + port_offset, 0x2710);
 
 	/* CRC append for Tx packets: init XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO + port_offset, 0xC800);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO_BB + port_offset, 0xC800);
 
 	/* Enable TX and RX: initialize XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL + port_offset,
-		 XMAC_REG_CTRL_TX_EN | XMAC_REG_CTRL_RX_EN);
-	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset);
-	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE;
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset, xmac_rxctrl);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL_BB + port_offset,
+		 XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB);
+	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt,
+			       XMAC_REG_RX_CTRL_BB + port_offset);
+	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB;
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL_BB + port_offset, xmac_rxctrl);
 }
 #endif
 
@@ -1233,7 +1244,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
 		if (ECORE_IS_AH(p_hwfn->p_dev))
 			return ECORE_SUCCESS;
-		ecore_link_init(p_hwfn, p_ptt, p_hwfn->port_id);
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		if (p_hwfn->p_dev->num_hwfns > 1) {
 			/* Activate OPTE in CMT */
@@ -1667,7 +1679,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 * out that these registers get initialized during the call to
 		 * ecore_mcp_load_req request. So we need to reread them here
 		 * to get the proper shadow register value.
-		 * Note: This is a workaround for the missinginig MFW
+		 * Note: This is a workaround for the missing MFW
 		 * initialization. It may be removed once the implementation
 		 * is done.
 		 */
@@ -2033,22 +2045,22 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_88_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_8C_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_90_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_90_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_94_F0_BB, 0);
 	}
 
 	/* Clean Previous errors if such exist */
@@ -2643,7 +2655,12 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 	 * In case of CMT in BB, only the "even" functions are enabled, and thus
 	 * the number of functions for both hwfns is learnt from the same bits.
 	 */
-	reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
+	if (ECORE_IS_BB(p_dev) || ECORE_IS_AH(p_dev)) {
+		reg_function_hide = ecore_rd(p_hwfn, p_ptt,
+					     MISCS_REG_FUNCTION_HIDE_BB_K2);
+	} else { /* E5 */
+		reg_function_hide = 0;
+	}
 
 	if (reg_function_hide & 0x1) {
 		if (ECORE_IS_BB(p_dev)) {
@@ -2709,8 +2726,7 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 		port_mode = 1;
 	else
 #endif
-		port_mode = ecore_rd(p_hwfn, p_ptt,
-				     CNIG_REG_NW_PORT_MODE_BB_B0);
+	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
 
 	if (port_mode < 3) {
 		p_hwfn->p_dev->num_ports_in_engines = 1;
@@ -2725,8 +2741,8 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
+static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt)
 {
 	u32 port;
 	int i;
@@ -2755,7 +2771,8 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
+					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					(i * 4));
 			if (port & 1)
 				p_hwfn->p_dev->num_ports_in_engines++;
 		}
@@ -2767,7 +2784,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
 	else
-		ecore_hw_info_port_num_ah(p_hwfn, p_ptt);
+		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
 }
 
 static enum _ecore_status_t
@@ -3076,12 +3093,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7);
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
 	}
 #endif
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 070588d..2acd864 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,43 +10,43 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
 
 #endif
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index f934e68..3042ed5 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -836,7 +836,12 @@ struct core_rx_fast_path_cqe {
 	__le16 packet_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-	__le32 reserved[4];
+/* bitmap: each bit represents a specific error. Error indications are
+ * provided by the cracker; see the spec for a detailed description.
+ */
+	struct parsing_err_flags err_flags;
+	__le16 reserved0;
+	__le32 reserved1[3];
 };
 
 /*
@@ -1042,13 +1047,13 @@ struct core_tx_stop_ramrod_data {
 /*
  * Enum flag for what type of dcb data to update
  */
-enum dcb_dhcp_update_flag {
+enum dcb_dscp_update_mode {
 /* use when no change should be done to dcb data */
-	DONT_UPDATE_DCB_DHCP,
+	DONT_UPDATE_DCB_DSCP,
 	UPDATE_DCB /* use to update only l2 (vlan) priority */,
-	UPDATE_DSCP /* use to update only l3 dhcp */,
-	UPDATE_DCB_DSCP /* update vlan pri and dhcp */,
-	MAX_DCB_DHCP_UPDATE_FLAG
+	UPDATE_DSCP /* use to update only l3 dscp */,
+	UPDATE_DCB_DSCP /* update vlan pri and dscp */,
+	MAX_DCB_DSCP_UPDATE_FLAG
 };
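 
 For illustration only (not part of the patch): how a caller might pick a
 value from the renamed enum, along the lines of the ecore_dcbx_set_params()
 fix earlier in this series; dcb_en/dscp_en are hypothetical inputs:
 
 static enum dcb_dscp_update_mode
 ex_pick_update_mode(int dcb_en, int dscp_en)
 {
 	if (dcb_en && dscp_en)
 		return UPDATE_DCB_DSCP;	/* update vlan pri and dscp */
 	if (dcb_en)
 		return UPDATE_DCB;	/* l2 (vlan) priority only */
 	if (dscp_en)
 		return UPDATE_DSCP;	/* l3 dscp only */
 
 	return DONT_UPDATE_DCB_DSCP;	/* leave dcb data unchanged */
 }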
 
 
@@ -1232,6 +1237,10 @@ enum iwarp_ll2_tx_queues {
 	IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
 /* LL2 queue for unaligned packets sent aligned by the driver */
 	IWARP_LL2_ALIGNED_TX_QUEUE,
+/* LL2 queue for unaligned packets that were aligned and right-trimmed by
+ * the driver
+ */
+	IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE,
 	IWARP_LL2_ERROR /* Error indication */,
 	MAX_IWARP_LL2_TX_QUEUES
 };
@@ -1446,13 +1455,13 @@ struct pf_update_tunnel_config {
  */
 struct pf_update_ramrod_data {
 	u8 pf_id;
-	u8 update_eth_dcb_data_flag /* Update Eth DCB  data indication */;
-	u8 update_fcoe_dcb_data_flag /* Update FCOE DCB  data indication */;
-	u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB  data indication */;
-	u8 update_roce_dcb_data_flag /* Update ROCE DCB  data indication */;
+	u8 update_eth_dcb_data_mode /* Update Eth DCB  data indication */;
+	u8 update_fcoe_dcb_data_mode /* Update FCOE DCB  data indication */;
+	u8 update_iscsi_dcb_data_mode /* Update iSCSI DCB  data indication */;
+	u8 update_roce_dcb_data_mode /* Update ROCE DCB  data indication */;
 /* Update RROCE (RoceV2) DCB  data indication */
-	u8 update_rroce_dcb_data_flag;
-	u8 update_iwarp_dcb_data_flag /* Update IWARP DCB  data indication */;
+	u8 update_rroce_dcb_data_mode;
+	u8 update_iwarp_dcb_data_mode /* Update IWARP DCB  data indication */;
 	u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
 	struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
 	struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
@@ -1611,6 +1620,8 @@ struct tstorm_per_port_stat {
 	struct regpair fcoe_irregular_pkt;
 /* packet is an ROCE irregular packet */
 	struct regpair roce_irregular_pkt;
+/* packet is an IWARP irregular packet */
+	struct regpair iwarp_irregular_pkt;
 /* packet is an ETH irregular packet */
 	struct regpair eth_irregular_pkt;
 /* packet is an TOE irregular packet */
@@ -1861,8 +1872,11 @@ struct dmae_cmd {
 #define DMAE_CMD_SRC_VF_ID_SHIFT       0
 #define DMAE_CMD_DST_VF_ID_MASK        0xFF /* Destination VF id */
 #define DMAE_CMD_DST_VF_ID_SHIFT       8
-	__le32 comp_addr_lo /* PCIe completion address low or grc address */;
-/* PCIe completion address high or reserved (if completion address is in GRC) */
+/* PCIe completion address low in bytes or GRC completion address in DW */
+	__le32 comp_addr_lo;
+/* PCIe completion address high in bytes or reserved (if completion address is
+ * GRC)
+ */
 	__le32 comp_addr_hi;
 	__le32 comp_val /* Value to write to completion address */;
 	__le32 crc32 /* crc16 result */;
@@ -2250,10 +2264,6 @@ struct sdm_op_gen {
 #define SDM_OP_GEN_RESERVED_SHIFT   20
 };
 
-
-
-
-
 struct ystorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
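
For illustration only (not part of the patch): per the clarified dmae_cmd
comments above, the completion address is byte-based for a PCIe destination
but dword-based for GRC. A hypothetical helper (OSAL_CPU_TO_LE32 assumed from
the osal layer):

static void ex_dmae_set_comp_addr(struct dmae_cmd *p_cmd, u64 pcie_addr,
				  u32 grc_addr_dw, int is_grc)
{
	if (is_grc) {
		p_cmd->comp_addr_lo = OSAL_CPU_TO_LE32(grc_addr_dw);
		p_cmd->comp_addr_hi = 0; /* reserved when address is GRC */
	} else {
		p_cmd->comp_addr_lo = OSAL_CPU_TO_LE32((u32)pcie_addr);
		p_cmd->comp_addr_hi = OSAL_CPU_TO_LE32((u32)(pcie_addr >> 32));
	}
}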
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index effb6ed..917e8f4 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -93,10 +93,12 @@ enum block_addr {
 	GRCBASE_PHY_PCIE = 0x620000,
 	GRCBASE_LED = 0x6b8000,
 	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_RGFS = 0x19d0000,
-	GRCBASE_TGFS = 0x19e0000,
-	GRCBASE_PTLD = 0x19f0000,
-	GRCBASE_YPLD = 0x1a10000,
+	GRCBASE_RGFS = 0x1fa0000,
+	GRCBASE_RGSRC = 0x1fa8000,
+	GRCBASE_TGFS = 0x1fb0000,
+	GRCBASE_TGSRC = 0x1fb8000,
+	GRCBASE_PTLD = 0x1fc0000,
+	GRCBASE_YPLD = 0x1fe0000,
 	GRCBASE_MISC_AEU = 0x8000,
 	GRCBASE_BAR0_MAP = 0x1c00000,
 	MAX_BLOCK_ADDR
@@ -184,7 +186,9 @@ enum block_id {
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
 	BLOCK_RGFS,
+	BLOCK_RGSRC,
 	BLOCK_TGFS,
+	BLOCK_TGSRC,
 	BLOCK_PTLD,
 	BLOCK_YPLD,
 	BLOCK_MISC_AEU,
@@ -208,6 +212,10 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
+	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
+	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
+	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -219,8 +227,8 @@ enum bin_dbg_buffer_type {
 struct dbg_attn_bit_mapping {
 	__le16 data;
 /* The index of an attention in the blocks attentions list
- * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits
- * (if is_unused_idx_cnt=1)
+ * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits
+ * (if is_unused_bit_cnt=1)
  */
 #define DBG_ATTN_BIT_MAPPING_VAL_MASK                0x7FFF
 #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT               0
@@ -269,10 +277,10 @@ struct dbg_attn_reg_result {
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT  0
 /* Number of attention indexes in this register */
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24
+/* The offset of this register's attentions within the block's attention
+ * list (a value in the range 0..number of block attentions-1)
+ */
 	__le16 attn_idx_offset;
 	__le16 reserved;
@@ -289,7 +297,7 @@ struct dbg_attn_block_result {
 /* Value from dbg_attn_type enum */
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
-/* Number of registers in the blok in which at least one attention bit is set */
+/* Number of registers in block in which at least one attention bit is set */
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK   0x3F
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT  2
 /* Offset of this registers block attention names in the attention name offsets
@@ -324,17 +332,17 @@ struct dbg_mode_hdr {
  */
 struct dbg_attn_reg {
 	struct dbg_mode_hdr mode /* Mode header */;
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+/* The offset of this register's attentions within the block's attention
+ * list (a value in the range 0..number of block attentions-1)
+ */
 	__le16 attn_idx_offset;
 	__le32 data;
 /* STS attention register GRC address (in dwords) */
 #define DBG_ATTN_REG_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_STS_ADDRESS_SHIFT  0
-/* Number of attention indexes in this register */
-#define DBG_ATTN_REG_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24
+/* Number of attentions in this register */
+#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24
 /* STS_CLR attention register GRC address (in dwords) */
 	__le32 sts_clr_address;
 /* MASK attention register GRC address (in dwords) */
@@ -354,6 +362,53 @@ enum dbg_attn_type {
 
 
 /*
+ * Debug Bus block data
+ */
+struct dbg_bus_block {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the Debug Bus lines array. */
+	__le16 lines_offset;
+};
+
+
+/*
+ * Debug Bus block user data
+ */
+struct dbg_bus_block_user_data {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the debug bus line name offsets array. */
+	__le16 names_offset;
+};
+
+
+/*
+ * Block Debug line data
+ */
+struct dbg_bus_line {
+	u8 data;
+/* Number of groups in the line (0-3) */
+#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF
+#define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0
+/* Indicates if this is a 128b line (0) or a 256b line (1). */
+#define DBG_BUS_LINE_IS_256B_MASK        0x1
+#define DBG_BUS_LINE_IS_256B_SHIFT       4
+#define DBG_BUS_LINE_RESERVED_MASK       0x7
+#define DBG_BUS_LINE_RESERVED_SHIFT      5
+/* Four 2-bit values, indicating the size of each group minus 1 (i.e.
+ * value=0 means size=1, value=1 means size=2, etc), starting from lsb.
+ * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1).
+ */
+	u8 group_sizes;
+};
+
+
+/*
  * condition header for registers dump
  */
 struct dbg_dump_cond_hdr {
@@ -377,8 +432,11 @@ struct dbg_dump_mem {
 /* register size (in dwords) */
 #define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF
 #define DBG_DUMP_MEM_LENGTH_SHIFT       0
-#define DBG_DUMP_MEM_RESERVED_MASK      0xFF
-#define DBG_DUMP_MEM_RESERVED_SHIFT     24
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1
+#define DBG_DUMP_MEM_WIDE_BUS_SHIFT     24
+#define DBG_DUMP_MEM_RESERVED_MASK      0x7F
+#define DBG_DUMP_MEM_RESERVED_SHIFT     25
 };
 
 
@@ -388,10 +446,13 @@ struct dbg_dump_mem {
 struct dbg_dump_reg {
 	__le32 data;
 /* register address (in dwords) */
-#define DBG_DUMP_REG_ADDRESS_MASK  0xFFFFFF
-#define DBG_DUMP_REG_ADDRESS_SHIFT 0
-#define DBG_DUMP_REG_LENGTH_MASK   0xFF /* register size (in dwords) */
-#define DBG_DUMP_REG_LENGTH_SHIFT  24
+#define DBG_DUMP_REG_ADDRESS_MASK   0x7FFFFF /* register address (in dwords) */
+#define DBG_DUMP_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1
+#define DBG_DUMP_REG_WIDE_BUS_SHIFT 23
+#define DBG_DUMP_REG_LENGTH_MASK    0xFF /* register size (in dwords) */
+#define DBG_DUMP_REG_LENGTH_SHIFT   24
 };
 
 
@@ -424,8 +485,11 @@ struct dbg_idle_chk_cond_hdr {
 struct dbg_idle_chk_cond_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
@@ -441,8 +505,11 @@ struct dbg_idle_chk_cond_reg {
 struct dbg_idle_chk_info_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
@@ -544,17 +611,21 @@ enum dbg_idle_chk_severity_types {
  * Debug Bus block data
  */
 struct dbg_bus_block_data {
-/* Indicates if the block is enabled for recording (0/1) */
-	u8 enabled;
-	u8 hw_id /* HW ID associated with the block */;
+	__le16 data;
+/* 4-bit value: bit i set -> dword/qword i is enabled. */
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
+/* Number of dwords/qwords by which to right-shift the debug data (0-3) */
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
+/* 4-bit value: bit i set -> dword/qword i is forced valid. */
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
+/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
 	u8 line_num /* Debug line number to select */;
-	u8 right_shift /* Number of units to  right the debug data (0-3) */;
-	u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
-/* 4-bit value: bit i set -> unit i is forced valid. */
-	u8 force_valid;
-/* 4-bit value: bit i set -> unit i frame bit is forced. */
-	u8 force_frame;
-	u8 reserved;
+	u8 hw_id /* HW ID associated with the block */;
 };
 
 
@@ -604,6 +675,21 @@ enum dbg_bus_constraint_ops {
 
 
 /*
+ * Debug Bus trigger state data
+ */
+struct dbg_bus_trigger_state_data {
+	u8 data;
+/* 4-bit value: bit i set -> dword i of the trigger state block
+ * (after right shift) is enabled.
+ */
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
+/* 4-bit value: bit i set -> dword i is compared by a constraint */
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+};
+
+/*
  * Debug Bus memory address
  */
 struct dbg_bus_mem_addr {
@@ -650,14 +736,8 @@ union dbg_bus_storm_eid_params {
  * Debug Bus Storm data
  */
 struct dbg_bus_storm_data {
-/* Indicates if the Storm is enabled for fast debug recording (0/1) */
-	u8 fast_enabled;
-/* Fast debug Storm mode, valid only if fast_enabled is set */
-	u8 fast_mode;
-/* Indicates if the Storm is enabled for slow debug recording (0/1) */
-	u8 slow_enabled;
-/* Slow debug Storm mode, valid only if slow_enabled is set */
-	u8 slow_mode;
+	u8 enabled /* indicates if the Storm is enabled for recording */;
+	u8 mode /* Storm debug mode, valid only if the Storm is enabled */;
 	u8 hw_id /* HW ID associated with the Storm */;
 	u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
 /* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
@@ -667,7 +747,6 @@ struct dbg_bus_storm_data {
 	u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
 /* EID filter params to filter on. Valid only if eid_filter_en is set. */
 	union dbg_bus_storm_eid_params eid_filter_params;
-	__le16 reserved;
 /* CID to filter on. Valid only if cid_filter_en is set. */
 	__le32 cid;
 };
@@ -679,20 +758,18 @@ struct dbg_bus_data {
 	__le32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
 	u8 hw_dwords /* HW dwords per cycle */;
-	u8 next_hw_id /* Next HW ID to be associated with an input */;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	__le16 hw_id_mask;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
-	u8 next_trigger_state /* ID of next trigger state to be added */;
-/* ID of next filter/trigger constraint to be added */
-	u8 next_constraint_id;
 	u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
 	u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
 /* Indicates if timestamp recording is enabled (0/1) */
 	u8 timestamp_input_en;
 	u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
-/* Indicates if the recording trigger is enabled (0/1) */
-	u8 trigger_en;
 /* If true, the next added constraint belong to the filter. Otherwise,
  * it belongs to the last added trigger state. Valid only if either filter or
  * triggers are enabled.
@@ -706,6 +783,14 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
+	__le16 reserved;
+/* Indicates if the recording trigger is enabled (0/1) */
+	u8 trigger_en;
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+	u8 next_trigger_state /* ID of next trigger state to be added */;
+/* ID of next filter/trigger constraint to be added */
+	u8 next_constraint_id;
 /* If true, all inputs are associated with HW ID 0. Otherwise, each input is
  * assigned a different HW ID (0/1)
  */
@@ -716,7 +801,6 @@ struct dbg_bus_data {
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
-	__le16 reserved;
 /* Debug Bus data for each block */
 	struct dbg_bus_block_data blocks[88];
 /* Debug Bus data for each block */
@@ -748,17 +832,6 @@ enum dbg_bus_frame_modes {
 
 
 /*
- * Debug bus input types
- */
-enum dbg_bus_input_types {
-	DBG_BUS_INPUT_TYPE_STORM,
-	DBG_BUS_INPUT_TYPE_BLOCK,
-	MAX_DBG_BUS_INPUT_TYPES
-};
-
-
-
-/*
  * Debug bus other engine mode
  */
 enum dbg_bus_other_engine_modes {
@@ -852,6 +925,7 @@ enum dbg_bus_targets {
 };
 
 
+
 /*
  * GRC Dump data
  */
@@ -987,7 +1061,10 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_MULTI_BLOCKS_WITH_FILTER,
+	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_NON_MATCHING_LINES,
+	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_DBG_BUS_IN_USE,
 	MAX_DBG_STATUS
 };
 
@@ -1028,7 +1105,7 @@ struct dbg_tools_data {
 /* Indicates if a block is in reset state (0/1) */
 	u8 block_in_reset[88];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID (from enum platform_ids) */;
+	u8 platform_id /* Platform ID */;
 	u8 initialized /* Indicates if the data was initialized */;
 	u8 reserved;
 };
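
For illustration only (not part of the patch): packing the reworked
dbg_bus_block_data 'data' word, whose four nibbles now carry the
enable/right-shift/force-valid/force-frame values. EX_SET_FIELD16 is a local
stand-in for the ecore SET_FIELD idiom:

#define EX_SET_FIELD16(var, name, val) \
	((var) |= (((u16)(val) & name##_MASK) << name##_SHIFT))

static u16 ex_dbg_bus_block_data(u8 enable_mask, u8 right_shift,
				 u8 force_valid, u8 force_frame)
{
	u16 data = 0;

	EX_SET_FIELD16(data, DBG_BUS_BLOCK_DATA_ENABLE_MASK, enable_mask);
	EX_SET_FIELD16(data, DBG_BUS_BLOCK_DATA_RIGHT_SHIFT, right_shift);
	EX_SET_FIELD16(data, DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK, force_valid);
	EX_SET_FIELD16(data, DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK, force_frame);

	return data;
}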
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 9d2a118..397c408 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -739,6 +739,7 @@ enum eth_error_code {
 	ETH_FILTERS_VNI_ADD_FAIL_FULL,
 /* vni add filters command failed due to duplicate VNI filter */
 	ETH_FILTERS_VNI_ADD_FAIL_DUP,
+	ETH_FILTERS_GFT_UPDATE_FAIL /* Failed to update GFT filter. */,
 	MAX_ETH_ERROR_CODE
 };
 
@@ -982,8 +983,10 @@ struct eth_vport_rss_config {
 	u8 rss_id;
 	u8 rss_mode /* The RSS mode for this function */;
 	u8 update_rss_key /* if set update the rss key */;
-	u8 update_rss_ind_table /* if set update the indirection table */;
-	u8 update_rss_capabilities /* if set update the capabilities */;
+/* if set update the indirection table values */
+	u8 update_rss_ind_table;
+/* if set update the capabilities and indirection table size. */
+	u8 update_rss_capabilities;
 	u8 tbl_size /* rss mask (Tbl size) */;
 	__le32 reserved2[2];
 /* RSS indirection table */
@@ -1267,7 +1270,10 @@ struct rx_update_gft_filter_data {
 /* Use enum to set type of flow using gft HW logic blocks */
 	u8 filter_type;
 	u8 filter_action /* Use to set type of action on filter */;
-	u8 reserved;
+/* 0 - don't assert in case of error, just return an error code. 1 - assert
+ * in case of error.
+ */
+	u8 assert_on_error;
 };
 
 
@@ -2290,8 +2296,7 @@ enum gft_profile_upper_protocol_type {
  * GFT RAM line struct
  */
 struct gft_ram_line {
-	__le32 low32bits;
-/*  (use enum gft_vlan_select) */
+	__le32 lo;
 #define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3
 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT             0
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK          0x1
@@ -2354,7 +2359,7 @@ struct gft_ram_line {
 #define GFT_RAM_LINE_DST_PORT_SHIFT                30
 #define GFT_RAM_LINE_SRC_PORT_MASK                 0x1
 #define GFT_RAM_LINE_SRC_PORT_SHIFT                31
-	__le32 high32bits;
+	__le32 hi;
 #define GFT_RAM_LINE_DSCP_MASK                     0x1
 #define GFT_RAM_LINE_DSCP_SHIFT                    0
 #define GFT_RAM_LINE_OVER_IP_PROTOCOL_MASK         0x1
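
For illustration only (not part of the patch): programming the renamed
gft_ram_line lo/hi words to match on the L4 ports and the IP protocol. The
particular bit choices here are hypothetical, and OSAL_CPU_TO_LE32 is assumed
from the osal layer:

static void ex_gft_ram_line_set(struct gft_ram_line *p_line)
{
	u32 lo = 0, hi = 0;

	lo |= 1U << GFT_RAM_LINE_DST_PORT_SHIFT;	/* match L4 dst port */
	lo |= 1U << GFT_RAM_LINE_SRC_PORT_SHIFT;	/* match L4 src port */
	hi |= 1U << GFT_RAM_LINE_OVER_IP_PROTOCOL_SHIFT; /* match IP proto */

	p_line->lo = OSAL_CPU_TO_LE32(lo);
	p_line->hi = OSAL_CPU_TO_LE32(hi);
}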
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index d07549c..1f57e9b 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -22,43 +22,13 @@
 /* Max size in dwords of a zipped array */
 #define MAX_ZIPPED_SIZE			8192
 
-enum init_modes {
-	MODE_BB_A0_DEPRECATED,
-	MODE_BB_B0,
-	MODE_K2,
-	MODE_ASIC,
-	MODE_EMUL_REDUCED,
-	MODE_EMUL_FULL,
-	MODE_FPGA,
-	MODE_CHIPSIM,
-	MODE_SF,
-	MODE_MF_SD,
-	MODE_MF_SI,
-	MODE_PORTS_PER_ENG_1,
-	MODE_PORTS_PER_ENG_2,
-	MODE_PORTS_PER_ENG_4,
-	MODE_100G,
-	MODE_E5,
-	MAX_INIT_MODES
-};
-
-enum init_phases {
-	PHASE_ENGINE,
-	PHASE_PORT,
-	PHASE_PF,
-	PHASE_VF,
-	PHASE_QM_PF,
-	MAX_INIT_PHASES
+enum chip_ids {
+	CHIP_BB,
+	CHIP_K2,
+	CHIP_E5,
+	MAX_CHIP_IDS
 };
 
-enum init_split_types {
-	SPLIT_TYPE_NONE,
-	SPLIT_TYPE_PORT,
-	SPLIT_TYPE_PF,
-	SPLIT_TYPE_PORT_PF,
-	SPLIT_TYPE_VF,
-	MAX_INIT_SPLIT_TYPES
-};
 
 struct fw_asserts_ram_section {
 /* The offset of the section in the RAM in RAM lines (64-bit units) */
@@ -196,8 +166,46 @@ union init_array_hdr {
 };
 
 
+enum init_modes {
+	MODE_BB_A0_DEPRECATED,
+	MODE_BB,
+	MODE_K2,
+	MODE_ASIC,
+	MODE_EMUL_REDUCED,
+	MODE_EMUL_FULL,
+	MODE_FPGA,
+	MODE_CHIPSIM,
+	MODE_SF,
+	MODE_MF_SD,
+	MODE_MF_SI,
+	MODE_PORTS_PER_ENG_1,
+	MODE_PORTS_PER_ENG_2,
+	MODE_PORTS_PER_ENG_4,
+	MODE_100G,
+	MODE_E5,
+	MAX_INIT_MODES
+};
 
 
+enum init_phases {
+	PHASE_ENGINE,
+	PHASE_PORT,
+	PHASE_PF,
+	PHASE_VF,
+	PHASE_QM_PF,
+	MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+	SPLIT_TYPE_NONE,
+	SPLIT_TYPE_PORT,
+	SPLIT_TYPE_PF,
+	SPLIT_TYPE_PORT_PF,
+	SPLIT_TYPE_VF,
+	MAX_INIT_SPLIT_TYPES
+};
+
 
 /*
  * init array types
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 77f9152..af0deaa 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -17,112 +17,156 @@
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-enum CmInterfaceEnum {
-	MCM_SEC,
-	MCM_PRI,
-	UCM_SEC,
-	UCM_PRI,
-	TCM_SEC,
-	TCM_PRI,
-	YCM_SEC,
-	YCM_PRI,
-	XCM_SEC,
-	XCM_PRI,
-	NUM_OF_CM_INTERFACES
+
+#define CDU_VALIDATION_DEFAULT_CFG 61
+
+static u16 con_region_offsets[3][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+};
+static u16 task_region_offsets[1][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
-/* general constants */
-#define QM_PQ_MEM_4KB(pq_size) \
-(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) \
-(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
-#define QM_INVALID_PQ_ID			0xffff
-/* feature enable */
-#define QM_BYPASS_EN				1
-#define QM_BYTE_CRD_EN				1
-/* other PQ constants */
-#define QM_OTHER_PQS_PER_PF			4
-/* WFQ constants */
-#define QM_WFQ_UPPER_BOUND			62500000
+
+/* General constants */
+#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
+				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
+				  0)
+#define QM_INVALID_PQ_ID		0xffff
+
+/* Feature enable */
+#define QM_BYPASS_EN			1
+#define QM_BYTE_CRD_EN			1
+
+/* Other PQ constants */
+#define QM_OTHER_PQS_PER_PF		4
+
+/* WFQ constants: */
+
+/* Upper bound in MB, 10 * burst size of 1ms in 50Gbps */
+#define QM_WFQ_UPPER_BOUND		62500000
+
+/* Bit of VOQ in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
+
+/* Bit of PF in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_PF_SHIFT		5
+
+/* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
-#define QM_WFQ_MAX_INC_VAL			43750000
-/* RL constants */
-#define QM_RL_UPPER_BOUND			62500000
-#define QM_RL_PERIOD				5
+
+/* 0.7 * upper bound (62500000) */
+#define QM_WFQ_MAX_INC_VAL		43750000
+
+/* RL constants: */
+
+/* Upper bound is set to 10 * burst size of 1ms in 50Gbps */
+#define QM_RL_UPPER_BOUND		62500000
+
+/* Period in us */
+#define QM_RL_PERIOD			5
+
+/* Period in 25MHz cycles */
 #define QM_RL_PERIOD_CLK_25M		(25 * QM_RL_PERIOD)
-#define QM_RL_MAX_INC_VAL			43750000
-/* RL increment value - the factor of 1.01 was added after seeing only
- * 99% factor reached in a 25Gbps port with DPDK RFC 2544 test.
- * In this scenario the PF RL was reducing the line rate to 99% although
- * the credit increment value was the correct one and FW calculated
- * correct packet sizes. The reason for the inaccuracy of the RL is
- * unknown at this point.
+
+/* 0.7 * upper bound (62500000) */
+#define QM_RL_MAX_INC_VAL		43750000
+
+/* RL increment value - rate is specified in mbps. the factor of 1.01 was
+ * added after seeing only 99% factor reached in a 25Gbps port with DPDK RFC
+ * 2544 test. In this scenario the PF RL was reducing the line rate to 99%
+ * although the credit increment value was the correct one and FW calculated
+ * correct packet sizes. The reason for the inaccuracy of the RL is unknown at
+ * this point.
  */
-/* rate in mbps */
 #define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? rate : 1000000) * \
-					QM_RL_PERIOD * 101) / (8 * 100)), 1)
+				       QM_RL_PERIOD * 101) / (8 * 100)), 1)
+
 /* AFullOprtnstcCrdMask constants */
 #define QM_OPPOR_LINE_VOQ_DEF		1
 #define QM_OPPOR_FW_STOP_DEF		0
 #define QM_OPPOR_PQ_EMPTY_DEF		1
-/* Command Queue constants */
-#define PBF_CMDQ_PURE_LB_LINES			150
+
+/* Command Queue constants: */
+
+/* Pure LB CmdQ lines (+spare) */
+#define PBF_CMDQ_PURE_LB_LINES		150
+
 #define PBF_CMDQ_LINES_RT_OFFSET(voq) \
-(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
-voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
-- PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \
+	  PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+
 #define PBF_BTB_GUARANTEED_RT_OFFSET(voq) \
-(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
-(PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \
+	  PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+
 #define QM_VOQ_LINE_CRD(pbf_cmd_lines) \
 ((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
+
 /* BTB: blocks constants (block size = 256B) */
-#define BTB_JUMBO_PKT_BLOCKS 38	/* 256B blocks in 9700B packet */
-/* headroom per-port */
-#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS
+
+/* 256B blocks in 9700B packet */
+#define BTB_JUMBO_PKT_BLOCKS		38
+
+/* Headroom per-port */
+#define BTB_HEADROOM_BLOCKS		BTB_JUMBO_PKT_BLOCKS
 #define BTB_PURE_LB_FACTOR		10
-#define BTB_PURE_LB_RATIO		7 /* factored (hence really 0.7) */
+
+/* Factored (hence really 0.7) */
+#define BTB_PURE_LB_RATIO		7
+
 /* QM stop command constants */
-#define QM_STOP_PQ_MASK_WIDTH			32
-#define QM_STOP_CMD_ADDR				0x2
-#define QM_STOP_CMD_STRUCT_SIZE			2
+#define QM_STOP_PQ_MASK_WIDTH		32
+#define QM_STOP_CMD_ADDR		2
+#define QM_STOP_CMD_STRUCT_SIZE		2
 #define QM_STOP_CMD_PAUSE_MASK_OFFSET	0
 #define QM_STOP_CMD_PAUSE_MASK_SHIFT	0
-#define QM_STOP_CMD_PAUSE_MASK_MASK		0xffffffff /* @DPDK */
-#define QM_STOP_CMD_GROUP_ID_OFFSET		1
-#define QM_STOP_CMD_GROUP_ID_SHIFT		16
-#define QM_STOP_CMD_GROUP_ID_MASK		15
-#define QM_STOP_CMD_PQ_TYPE_OFFSET		1
-#define QM_STOP_CMD_PQ_TYPE_SHIFT		24
-#define QM_STOP_CMD_PQ_TYPE_MASK		1
-#define QM_STOP_CMD_MAX_POLL_COUNT		100
-#define QM_STOP_CMD_POLL_PERIOD_US		500
+#define QM_STOP_CMD_PAUSE_MASK_MASK	0xffffffff /* @DPDK */
+#define QM_STOP_CMD_GROUP_ID_OFFSET	1
+#define QM_STOP_CMD_GROUP_ID_SHIFT	16
+#define QM_STOP_CMD_GROUP_ID_MASK	15
+#define QM_STOP_CMD_PQ_TYPE_OFFSET	1
+#define QM_STOP_CMD_PQ_TYPE_SHIFT	24
+#define QM_STOP_CMD_PQ_TYPE_MASK	1
+#define QM_STOP_CMD_MAX_POLL_COUNT	100
+#define QM_STOP_CMD_POLL_PERIOD_US	500
+
 /* QM command macros */
-#define QM_CMD_STRUCT_SIZE(cmd)	cmd##_STRUCT_SIZE
+#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
-SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+
 /* QM: VOQ macros */
 #define PHYS_VOQ(port, tc, max_phys_tcs_per_port) \
-((port) * (max_phys_tcs_per_port) + (tc))
-#define LB_VOQ(port)				(MAX_PHYS_VOQS + (port))
+	((port) * (max_phys_tcs_per_port) + (tc))
+#define LB_VOQ(port)				 (MAX_PHYS_VOQS + (port))
 #define VOQ(port, tc, max_phys_tcs_per_port) \
-((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : LB_VOQ(port))
+	((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : \
+				 LB_VOQ(port))
+
+
 /******************** INTERNAL IMPLEMENTATION *********************/
+
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		/* enable RLs for all VOQs */
+		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
 			     (1 << MAX_NUM_VOQS) - 1);
-		/* write RL period */
+
+		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET,
 				     QM_RL_UPPER_BOUND);
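
A worked example of the QM_RL_INC_VAL() formula above (illustrative only).
For a 25 Gbps rate limit, rate = 25000 (Mbps):

	inc = max((25000 * QM_RL_PERIOD * 101) / (8 * 100), 1)
	    = max((25000 * 5 * 101) / 800, 1)
	    = max(12625000 / 800, 1)
	    = 15781		(integer division)

i.e. the credit added every 5 us RL period, including the empirical 1.01
factor described in the comment.
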
@@ -133,7 +177,8 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (pf_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -145,12 +190,13 @@ static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
 		     vport_rl_en ? 1 : 0);
 	if (vport_rl_en) {
-		/* write RL period (use timer 0 only) */
+		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
@@ -163,7 +209,8 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET,
 		     vport_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (vport_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -176,7 +223,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
 					 u8 voq, u16 cmdq_lines)
 {
 	u32 qm_line_crd;
+
 	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
+
 	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
 	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
@@ -192,38 +241,43 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u8 tc, voq, port_id, num_tcs_in_port;
-	/* clear PBF lines for all VOQs */
+
+	/* Clear PBF lines for all VOQs */
 	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
 		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			u16 phys_lines, phys_lines_per_tc;
-			/* find #lines to divide between active physical TCs */
-			phys_lines =
-			    port_params[port_id].num_pbf_cmd_lines -
-			    PBF_CMDQ_PURE_LB_LINES;
-			/* find #lines per active physical TC */
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-						tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			}
-			phys_lines_per_tc = phys_lines / num_tcs_in_port;
-			/* init registers per active TC */
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-							max_phys_tcs_per_port);
-					ecore_cmdq_lines_voq_rt_init(p_hwfn,
-							voq, phys_lines_per_tc);
-				}
+		u16 phys_lines, phys_lines_per_tc;
+
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Find #lines to divide between the active physical TCs */
+		phys_lines = port_params[port_id].num_pbf_cmd_lines -
+			     PBF_CMDQ_PURE_LB_LINES;
+
+		/* Find #lines per active physical TC */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+		phys_lines_per_tc = phys_lines / num_tcs_in_port;
+
+		/* Init registers per active TC */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
+							     phys_lines_per_tc);
 			}
-			/* init registers for pure LB TC */
-			ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
-						     PBF_CMDQ_PURE_LB_LINES);
 		}
+
+		/* Init registers for pure LB TC */
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
+					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
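
For concreteness, the command-line split above works out as follows. A
standalone sketch with hypothetical per-port numbers (the real values come
from port_params[port_id]; not part of the patch):

#include <assert.h>

int main(void)
{
	/* Hypothetical per-port values */
	unsigned int num_pbf_cmd_lines = 3300;
	unsigned int pure_lb_lines = 8;
	unsigned int num_tcs_in_port = 4;
	unsigned int phys_lines, phys_lines_per_tc;

	/* Pure-LB lines are reserved first, the rest is split evenly */
	phys_lines = num_pbf_cmd_lines - pure_lb_lines;
	phys_lines_per_tc = phys_lines / num_tcs_in_port;

	assert(phys_lines_per_tc == 823);	/* 3292 / 4 */
	return 0;
}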
 
@@ -253,50 +307,51 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, voq, port_id, num_tcs_in_port;
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
+	u8 tc, voq, port_id, num_tcs_in_port;
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			/* subtract headroom blocks */
-			usable_blocks =
-			    port_params[port_id].num_btb_blocks -
-			    BTB_HEADROOM_BLOCKS;
-/* find blocks per physical TC. use factor to avoid floating arithmethic */
-
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
-				if (((port_params[port_id].active_phys_tcs >>
-								tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			pure_lb_blocks =
-			    (usable_blocks * BTB_PURE_LB_FACTOR) /
-			    (num_tcs_in_port *
-			     BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
-			pure_lb_blocks =
-			    OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
-				       pure_lb_blocks / BTB_PURE_LB_FACTOR);
-			phys_blocks =
-			    (usable_blocks -
-			     pure_lb_blocks) /
-			     num_tcs_in_port;
-			/* init physical TCs */
-			for (tc = 0;
-			     tc < NUM_OF_PHYS_TCS;
-			     tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-						       max_phys_tcs_per_port);
-					STORE_RT_REG(p_hwfn,
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Subtract headroom blocks */
+		usable_blocks = port_params[port_id].num_btb_blocks -
+				BTB_HEADROOM_BLOCKS;
+
+		/* Find blocks per physical TC. Use factor to avoid floating
+		 * arithmetic.
+		 */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+
+		pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) /
+				  (num_tcs_in_port * BTB_PURE_LB_FACTOR +
+				   BTB_PURE_LB_RATIO);
+		pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
+					    pure_lb_blocks /
+					    BTB_PURE_LB_FACTOR);
+		phys_blocks = (usable_blocks - pure_lb_blocks) /
+			      num_tcs_in_port;
+
+		/* Init physical TCs */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				STORE_RT_REG(p_hwfn,
 					     PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					     phys_blocks);
-				}
 			}
-			/* init pure LB TC */
-			STORE_RT_REG(p_hwfn,
-				     PBF_BTB_GUARANTEED_RT_OFFSET(
-					LB_VOQ(port_id)), pure_lb_blocks);
 		}
+
+		/* Init pure LB TC */
+		STORE_RT_REG(p_hwfn,
+			     PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ(port_id)),
+			     pure_lb_blocks);
 	}
 }
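
The FACTOR arithmetic above is a fixed-point trick: to divide by
(num_tcs + RATIO/FACTOR) without floating point, numerator and denominator
are both scaled by FACTOR. A sketch with hypothetical constants (the real
BTB_PURE_LB_FACTOR/BTB_PURE_LB_RATIO values are defined elsewhere in this
file):

#include <assert.h>

int main(void)
{
	unsigned int usable_blocks = 1000, num_tcs_in_port = 4;
	unsigned int factor = 8, ratio = 7;	/* hypothetical values */
	unsigned int pure_lb_blocks;

	/* Scale both sides by factor to stay in integer arithmetic */
	pure_lb_blocks = (usable_blocks * factor) /
			 (num_tcs_in_port * factor + ratio);

	assert(pure_lb_blocks == 205);	/* 8000 / 39, rounded down */
	return 0;
}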
 
@@ -317,57 +372,69 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
 {
-	u16 i, pq_id, pq_group;
-	u16 num_pqs = num_pf_pqs + num_vf_pqs;
-	u16 first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
-	u16 last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
-	/* a bit per Tx PQ indicating if the PQ is associated with a VF */
+	/* A bit per Tx PQ indicating if the PQ is associated with a VF */
 	u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 };
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
-	u32 vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* set mapping from PQ group to PF */
+	u16 num_pqs, first_pq_group, last_pq_group, i, pq_id, pq_group;
+	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+
+	num_pqs = num_pf_pqs + num_vf_pqs;
+
+	first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
+	last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
+
+	pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
+	vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Set mapping from PQ group to PF */
 	for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++)
 		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group,
 			     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_pf_cids));
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_vf_cids));
-	/* go over all Tx PQs */
+
+	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		struct qm_rf_pq_map tx_pq_map;
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
-		bool is_vf_pq = (i >= num_pf_pqs);
-		/* added to avoid compilation warning */
 		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		bool rl_valid = pq_params[i].rl_valid &&
-				pq_params[i].vport_id < max_qm_global_rls;
-		/* update first Tx PQ of VPORT/TC */
-		u8 vport_id_in_pf = pq_params[i].vport_id - start_vport;
-		u16 first_tx_pq_id =
-		    vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].
-								tc_id];
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq, rl_valid;
+		u8 voq, vport_id_in_pf;
+		u16 first_tx_pq_id;
+
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		is_vf_pq = (i >= num_pf_pqs);
+		rl_valid = pq_params[i].rl_valid && pq_params[i].vport_id <
+			   max_qm_global_rls;
+
+		/* Update first Tx PQ of VPORT/TC */
+		vport_id_in_pf = pq_params[i].vport_id - start_vport;
+		first_tx_pq_id =
+		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			/* create new VP PQ */
+			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
 			    first_tx_pq_id[pq_params[i].tc_id] = pq_id;
 			first_tx_pq_id = pq_id;
-			/* map VP PQ to VOQ and PF */
+
+			/* Map VP PQ to VOQ and PF */
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id,
 				     (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id <<
 							QM_WFQ_VP_PQ_PF_SHIFT));
 		}
-		/* check RL ID */
+
+		/* Check RL ID */
 		if (pq_params[i].rl_valid && pq_params[i].vport_id >=
 							max_qm_global_rls)
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config");
-		/* fill PQ map entry */
+				  "Invalid VPORT ID for rate limiter config\n");
+
+		/* Fill PQ map entry */
 		OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map));
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID,
@@ -378,17 +445,17 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP,
 			  pq_params[i].wrr_group);
-		/* write PQ map entry to CAM */
+
+		/* Write PQ map entry to CAM */
 		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
 			     *((u32 *)&tx_pq_map));
-		/* set base address */
+
+		/* Set base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
 			     mem_addr_4kb);
-		/* check if VF PQ */
+
+		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
-			/* if PQ is associated with a VF, add indication to PQ
-			 * VF mask
-			 */
 			tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |=
 				(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
 			mem_addr_4kb += vport_pq_mem_4kb;
@@ -396,12 +463,12 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 			mem_addr_4kb += pq_mem_4kb;
 		}
 	}
-	/* store Tx PQ VF mask to size select register */
-	for (i = 0; i < num_tx_pq_vf_masks; i++) {
+
+	/* Store Tx PQ VF mask to size select register */
+	for (i = 0; i < num_tx_pq_vf_masks; i++)
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
-	}
 }
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -411,20 +478,26 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				       u32 num_pf_cids,
 				       u32 num_tids, u32 base_mem_addr_4kb)
 {
-	u16 i, pq_id;
-/* a single other PQ grp is used in each PF, where PQ group i is used in PF i */
-
-	u16 pq_group = pf_id;
-	u32 pq_size = num_pf_cids + num_tids;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* map PQ group to PF */
+	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
+	u16 i, pq_id, pq_group;
+
+	/* A single other PQ group is used in each PF, where PQ group i is used
+	 * in PF i.
+	 */
+	pq_group = pf_id;
+	pq_size = num_pf_cids + num_tids;
+	pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Map PQ group to PF */
 	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group,
 		     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET,
 		     QM_PQ_SIZE_256B(pq_size));
-	/* set base address */
+
+	/* Set base address */
 	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE;
 	     i < QM_OTHER_PQS_PER_PF; i++, pq_id++) {
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id,
@@ -432,7 +505,10 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		mem_addr_4kb += pq_mem_4kb;
 	}
 }
-/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF WFQ runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 port_id,
 				u8 pf_id,
@@ -441,76 +517,89 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u16 num_tx_pqs,
 				struct init_qm_pq_params *pq_params)
 {
+	u32 inc_val, crd_reg_offset;
+	u8 voq;
 	u16 i;
-	u32 inc_val;
-	u32 crd_reg_offset =
-	    (pf_id <
-	     MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
-	     QM_REG_WFQPFCRD_MSB_RT_OFFSET) + (pf_id % MAX_NUM_PFS_BB);
+
+	crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
+			  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			 (pf_id % MAX_NUM_PFS_BB);
+
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (i = 0; i < num_tx_pqs; i++) {
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
 	return 0;
 }
-/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF RL runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
 		     (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
+
 	return 0;
 }
-/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on
- * error.
+
+/* Prepare VPORT WFQ runtime init values for the specified VPORTs.
+ * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u8 tc, i;
+	u16 vport_pq_id;
 	u32 inc_val;
-	/* go over all PF VPORTs */
+	u8 tc, i;
+
+	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (vport_params[i].vport_wfq) {
-			inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
-			if (inc_val > QM_WFQ_MAX_INC_VAL) {
-				DP_NOTICE(p_hwfn, true,
-					  "Invalid VPORT WFQ weight config");
-				return -1;
-			}
-			/* each VPORT can have several VPORT PQ IDs for
-			 * different TCs
-			 */
-			for (tc = 0; tc < NUM_OF_TCS; tc++) {
-				u16 vport_pq_id =
-				    vport_params[i].first_tx_pq_id[tc];
-				if (vport_pq_id != QM_INVALID_PQ_ID) {
-					STORE_RT_REG(p_hwfn,
-						  QM_REG_WFQVPCRD_RT_OFFSET +
-						  vport_pq_id,
-						  (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-					STORE_RT_REG(p_hwfn,
-						QM_REG_WFQVPWEIGHT_RT_OFFSET
-						     + vport_pq_id, inc_val);
-				}
+		if (!vport_params[i].vport_wfq)
+			continue;
+
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		if (inc_val > QM_WFQ_MAX_INC_VAL) {
+			DP_NOTICE(p_hwfn, true,
+				  "Invalid VPORT WFQ weight configuration\n");
+			return -1;
+		}
+
+		/* Each VPORT can have several VPORT PQ IDs for various TCs */
+		for (tc = 0; tc < NUM_OF_TCS; tc++) {
+			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
+			if (vport_pq_id != QM_INVALID_PQ_ID) {
+				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+					     vport_pq_id,
+					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+				STORE_RT_REG(p_hwfn,
+					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
+					     vport_pq_id, inc_val);
 			}
 		}
 	}
@@ -526,19 +615,23 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 				  struct init_qm_vport_params *vport_params)
 {
 	u8 i, vport_id;
+	u32 inc_val;
+
 	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
-	/* go over all PF VPORTs */
+
+	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
 		u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
 		if (inc_val > QM_RL_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration");
+				  "Invalid VPORT rate-limit configuration\n");
 			return -1;
 		}
+
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
 			     (u32)QM_RL_CRD_REG_SIGN_BIT);
 		STORE_RT_REG(p_hwfn,
@@ -547,6 +640,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
 			     inc_val);
 	}
+
 	return 0;
 }
 
@@ -554,17 +648,20 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
 	u32 reg_val, i;
-	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && reg_val == 0;
+
+	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val;
 	     i++) {
 		OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US);
 		reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY);
 	}
-	/* check if timeout while waiting for SDM command ready */
+
+	/* Check if timeout while waiting for SDM command ready */
 	if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
 			   "Timeout waiting for QM SDM cmd ready signal\n");
 		return false;
 	}
+
 	return true;
 }
 
@@ -574,15 +671,19 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 {
 	if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt))
 		return false;
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDADDR, cmd_addr);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATALSB, cmd_data_lsb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATAMSB, cmd_data_msb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 1);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 0);
+
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
+
 /******************** INTERFACE IMPLEMENTATION *********************/
+
 u32 ecore_qm_pf_mem_size(u8 pf_id,
 			 u32 num_pf_cids,
 			 u32 num_vf_cids,
@@ -603,32 +704,42 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    struct init_qm_port_params
 			    port_params[MAX_NUM_PORTS])
 {
-	/* init AFullOprtnstcCrdMask */
-	u32 mask =
-	    (QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-	    (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-	    (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-	    (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-	    (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-	    (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-	    (QM_OPPOR_FW_STOP_DEF << QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-	    (QM_OPPOR_PQ_EMPTY_DEF <<
-	     QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	u32 mask;
+
+	/* Init AFullOprtnstcCrdMask */
+	mask = (QM_OPPOR_LINE_VOQ_DEF <<
+		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
+		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
+		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
+		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
+		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
+		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
+		(QM_OPPOR_FW_STOP_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
+		(QM_OPPOR_PQ_EMPTY_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
-	/* enable/disable PF RL */
+
+	/* Enable/disable PF RL */
 	ecore_enable_pf_rl(p_hwfn, pf_rl_en);
-	/* enable/disable PF WFQ */
+
+	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
-	/* enable/disable VPORT RL */
+
+	/* Enable/disable VPORT RL */
 	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
-	/* enable/disable VPORT WFQ */
+
+	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
-	/* init PBF CMDQ line credit */
+
+	/* Init PBF CMDQ line credit */
 	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
-	/* init BTB blocks in PBF */
+
+	/* Init BTB blocks in PBF */
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
+
 	return 0;
 }
 
@@ -651,66 +762,86 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
+	u32 other_mem_size_4kb;
 	u8 tc, i;
-	u32 other_mem_size_4kb =
-	    QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF;
-	/* clear first Tx PQ ID array for each VPORT */
+
+	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
+			     QM_OTHER_PQS_PER_PF;
+
+	/* Clear first Tx PQ ID array for each VPORT */
 	for (i = 0; i < num_vports; i++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
-	/* map Other PQs (if any) */
+
+	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
 	ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids,
 				   num_tids, 0);
 #endif
-	/* map Tx PQs */
+
+	/* Map Tx PQs */
 	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id,
 				max_phys_tcs_per_port, is_first_pf, num_pf_cids,
 				num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
 				start_vport, other_mem_size_4kb, pq_params,
 				vport_params);
-	/* init PF WFQ */
+
+	/* Init PF WFQ */
 	if (pf_wfq)
 		if (ecore_pf_wfq_rt_init
 		    (p_hwfn, port_id, pf_id, pf_wfq, max_phys_tcs_per_port,
-		     num_pf_pqs + num_vf_pqs, pq_params) != 0)
+		     num_pf_pqs + num_vf_pqs, pq_params))
 			return -1;
-	/* init PF RL */
-	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl) != 0)
+
+	/* Init PF RL */
+	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
-	/* set VPORT WFQ */
-	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params) != 0)
+
+	/* Set VPORT WFQ */
+	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
-	/* set VPORT RL */
+
+	/* Set VPORT RL */
 	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, vport_params) != 0)
+	    (p_hwfn, start_vport, num_vports, vport_params))
 		return -1;
+
 	return 0;
 }
 
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq)
 {
-	u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	u32 inc_val;
+
+	inc_val = QM_WFQ_INC_VAL(pf_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+
 	return 0;
 }
 
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -718,20 +849,25 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
 {
+	u16 vport_pq_id;
+	u32 inc_val;
 	u8 tc;
-	u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
+
+	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT WFQ weight configuration");
+			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		u16 vport_pq_id = first_tx_pq_id[tc];
+		vport_pq_id = first_tx_pq_id[tc];
 		if (vport_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
 				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
 		}
 	}
+
 	return 0;
 }
 
@@ -739,20 +875,24 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl)
 {
 	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+
 	if (vport_id >= max_qm_global_rls) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
+
 	inc_val = QM_RL_INC_VAL(vport_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration");
+			  "Invalid VPORT rate-limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -762,15 +902,20 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
 {
 	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
-	u32 pq_mask = 0, last_pq = start_pq + num_pqs - 1, pq_id;
-	/* set command's PQ type */
+	u32 pq_mask = 0, last_pq, pq_id;
+
+	last_pq = start_pq + num_pqs - 1;
+
+	/* Set command's PQ type */
 	QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PQ_TYPE, is_tx_pq ? 0 : 1);
-	/* go over requested PQs */
+
+	/* Go over requested PQs */
 	for (pq_id = start_pq; pq_id <= last_pq; pq_id++) {
-		/* set PQ bit in mask (stop command only) */
+		/* Set PQ bit in mask (stop command only) */
 		if (!is_release_cmd)
 			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));
-		/* if last PQ or end of PQ mask, write command */
+
+		/* If last PQ or end of PQ mask, write command */
 		if ((pq_id == last_pq) ||
 		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
 		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
@@ -785,68 +930,92 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			pq_mask = 0;
 		}
 	}
+
 	return true;
 }
 
+
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
 #define NIG_LB_ETS_CLIENT_OFFSET	1
 #define NIG_ETS_MIN_WFQ_BYTES		1600
+
 /* NIG: ETS constants */
 #define NIG_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
 /* NIG: RL constants */
-#define NIG_RL_BASE_TYPE			1	/* byte base type */
-#define NIG_RL_PERIOD				1	/* in us */
+
+/* Byte base type value */
+#define NIG_RL_BASE_TYPE		1
+
+/* Period in us */
+#define NIG_RL_PERIOD			1
+
+/* Period in 25MHz cycles */
 #define NIG_RL_PERIOD_CLK_25M		(25 * NIG_RL_PERIOD)
+
+/* Rate in mbps */
 #define NIG_RL_INC_VAL(rate)		(((rate) * NIG_RL_PERIOD) / 8)
+
 #define NIG_RL_MAX_VAL(inc_val, mtu) \
-(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
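
As a sanity check of the units above: the rate is in Mbps and the period is
1 us, so a 10 Gbps limiter earns 10000 / 8 = 1250 bytes of credit per
period. A minimal sketch mirroring the macros:

#include <assert.h>

#define NIG_RL_PERIOD		1	/* us */
#define NIG_RL_INC_VAL(rate)	(((rate) * NIG_RL_PERIOD) / 8)

int main(void)
{
	/* 10000 Mbps -> 1250 bytes of credit per 1 us period */
	assert(NIG_RL_INC_VAL(10000) == 1250);
	return 0;
}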
+
 /* NIG: packet prioritry configuration constants */
-#define NIG_PRIORITY_MAP_TC_BITS 4
+#define NIG_PRIORITY_MAP_TC_BITS	4
+
+
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct init_ets_req *req, bool is_lb)
 {
-	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u8 num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	u8 tc_client_offset =
-	    is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_weight_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-	u32 tc_bound_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
+	u32 tc_bound_base_addr, tc_bound_addr_diff;
+	u8 sp_tc_map = 0, wfq_tc_map = 0;
+	u8 tc, num_tc, tc_client_offset;
+
+	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
+				   NIG_TX_ETS_CLIENT_OFFSET;
+	min_weight = 0xffffffff;
+	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < num_tc; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
-	/* write SP map */
+
+	/* Write SP map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
 		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
 		 (sp_tc_map << tc_client_offset));
-	/* write WFQ map */
+
+	/* Write WFQ map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
 		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
@@ -854,22 +1023,23 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	/* write WFQ weights */
 	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_weight_base_addr +
-				 tc_weight_addr_diff * tc_client_offset,
-				 byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_bound_base_addr +
-				 tc_bound_addr_diff * tc_client_offset,
-				 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
+			 tc_weight_addr_diff * tc_client_offset, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
+			 tc_bound_addr_diff * tc_client_offset,
+			 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
 	}
 }
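
Worked numbers for the weight-to-bytes translation above: with TC weights
{1, 2, 4}, min_weight is 1, so the byte weights come out as {1600, 3200,
6400}, and each upper bound is twice the larger of the byte weight and the
MTU. A sketch mirroring the macros:

#include <assert.h>

#define NIG_ETS_MIN_WFQ_BYTES		1600
#define NIG_ETS_UP_BOUND(weight, mtu) \
	(2 * ((weight) > (mtu) ? (weight) : (mtu)))

int main(void)
{
	unsigned int min_weight = 1;	/* smallest of {1, 2, 4} */

	assert(NIG_ETS_MIN_WFQ_BYTES * 4 / min_weight == 6400);
	assert(NIG_ETS_UP_BOUND(6400, 9000) == 18000);	/* MTU wins */
	assert(NIG_ETS_UP_BOUND(6400, 1500) == 12800);	/* weight wins */
	return 0;
}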
 
@@ -877,16 +1047,18 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  struct init_nig_lb_rl_req *req)
 {
-	u8 tc;
 	u32 ctrl, inc_val, reg_offset;
-	/* disable global MAC+LB RL */
+	u8 tc;
+
+	/* Disable global MAC+LB RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global MAC+LB RL */
+
+	/* Configure and enable global MAC+LB RL */
 	if (req->lb_mac_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
@@ -894,20 +1066,23 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 <<
 		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 	}
-	/* disable global LB-only RL */
+
+	/* Disable global LB-only RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global LB-only RL */
+
+	/* Configure and enable global LB-only RL */
 	if (req->lb_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_rate);
@@ -915,41 +1090,41 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 	}
-	/* per-TC RLs */
+
+	/* Per-TC RLs */
 	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
 	     tc++, reg_offset += 4) {
-		/* disable TC RL */
+		/* Disable TC RL */
 		ctrl =
 		    NIG_RL_BASE_TYPE <<
 		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-		/* configure and enable TC RL */
-		if (req->tc_rate[tc]) {
-			/* configure */
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
-				 reg_offset, NIG_RL_PERIOD_CLK_25M);
-			inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-				 reg_offset, inc_val);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
-				 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-			/* enable */
-			ctrl |=
-			    1 <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset,
-				 ctrl);
-		}
+
+		/* Configure and enable TC RL */
+		if (!req->tc_rate[tc])
+			continue;
+
+		/* Configure */
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
+			 reg_offset, NIG_RL_PERIOD_CLK_25M);
+		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
+			 reg_offset, inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
+			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
+
+		/* Enable */
+		ctrl |= 1 <<
+			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
+			 reg_offset, ctrl);
 	}
 }
 
@@ -957,20 +1132,23 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct init_nig_pri_tc_map_req *req)
 {
-	u8 pri, tc;
-	u32 pri_tc_mask = 0;
 	u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
+	u32 pri_tc_mask = 0;
+	u8 pri, tc;
+
 	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
-		if (req->pri[pri].valid) {
-			pri_tc_mask |=
-			    (req->pri[pri].
-			     tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS));
-			tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
-		}
+		if (!req->pri[pri].valid)
+			continue;
+
+		pri_tc_mask |= (req->pri[pri].tc_id <<
+				(pri * NIG_PRIORITY_MAP_TC_BITS));
+		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
 	}
-	/* write priority -> TC mask */
+
+	/* Write priority -> TC mask */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-	/* write TC -> priority mask */
+
+	/* Write TC -> priority mask */
 	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
 			 tc_pri_mask[tc]);
@@ -979,110 +1157,133 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
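
The packing above uses one 4-bit nibble per VLAN priority in the forward
map and one bit per priority in each TC's reverse map. A standalone sketch
mapping priority 3 to TC 2:

#include <assert.h>

#define NIG_PRIORITY_MAP_TC_BITS	4

int main(void)
{
	unsigned int pri_tc_mask = 0, tc_pri_mask[8] = { 0 };
	unsigned int pri = 3, tc_id = 2;

	pri_tc_mask |= tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS);
	tc_pri_mask[tc_id] |= 1u << pri;

	assert(pri_tc_mask == 0x2000);	/* nibble 3 holds TC 2 */
	assert(tc_pri_mask[2] == 0x8);	/* TC 2 serves priority 3 */
	return 0;
}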
 
+
 /* PRS: ETS configuration constants */
-#define PRS_ETS_MIN_WFQ_BYTES			1600
+#define PRS_ETS_MIN_WFQ_BYTES		1600
 #define PRS_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
+
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_ets_req *req)
 {
+	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
 	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
+	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
+			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
+	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
+			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
+
 	/* write SP map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
+
 	/* write WFQ map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
 		 wfq_tc_map);
+
 	/* write WFQ weights */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 +
-				 tc * tc_weight_addr_diff, byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-				 tc * tc_bound_addr_diff,
-				 PRS_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
+			 tc_weight_addr_diff, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
+			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
+								   req->mtu));
 	}
 }
 
+
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
 #define BRB_TOTAL_RAM_BLOCKS_K2	5632
-#define BRB_BLOCK_SIZE			128	/* in bytes */
+#define BRB_BLOCK_SIZE		128
 #define BRB_MIN_BLOCKS_PER_TC	9
-#define BRB_HYST_BYTES			10240
-#define BRB_HYST_BLOCKS			(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-/*
- * temporary big RAM allocation - should be updated
- */
+#define BRB_HYST_BYTES		10240
+#define BRB_HYST_BLOCKS		(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
+
+/* Temporary big RAM allocation - should be updated */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
 {
-	u8 port, active_ports = 0;
+	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
 	u32 active_port_blocks, reg_offset = 0;
-	u32 tc_headroom_blocks =
-	    (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE);
-	u32 min_pkt_size_blocks =
-	    (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE);
-	u32 total_blocks =
-	    ECORE_IS_K2(p_hwfn->
-			p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
-	    BRB_TOTAL_RAM_BLOCKS_BB;
-	/* find number of active ports */
+	u8 port, active_ports = 0;
+
+	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
+					       BRB_BLOCK_SIZE);
+	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
+						BRB_BLOCK_SIZE);
+	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
+						    BRB_TOTAL_RAM_BLOCKS_BB;
+
+	/* Find number of active ports */
 	for (port = 0; port < MAX_NUM_PORTS; port++)
 		if (req->num_active_tcs[port])
 			active_ports++;
+
 	active_port_blocks = (u32)(total_blocks / active_ports);
+
 	for (port = 0; port < req->max_ports_per_engine; port++) {
-		/* calculate per-port sizes */
-		u32 tc_guaranteed_blocks =
-		    (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE);
-		u32 port_blocks =
-		    req->num_active_tcs[port] ? active_port_blocks : 0;
-		u32 port_guaranteed_blocks =
-		    req->num_active_tcs[port] * tc_guaranteed_blocks;
-		u32 port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		u32 full_xoff_th =
-		    req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC;
-		u32 full_xon_th = full_xoff_th + min_pkt_size_blocks;
-		u32 pause_xoff_th = tc_headroom_blocks;
-		u32 pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+		u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
+		u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
+		u32 tc_guaranteed_blocks;
 		u8 tc;
-		/* init total size per port */
+
+		/* Calculate per-port sizes */
+		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
+							 BRB_BLOCK_SIZE);
+		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
+							  0;
+		port_guaranteed_blocks = req->num_active_tcs[port] *
+					 tc_guaranteed_blocks;
+		port_shared_blocks = port_blocks - port_guaranteed_blocks;
+		full_xoff_th = req->num_active_tcs[port] *
+			       BRB_MIN_BLOCKS_PER_TC;
+		full_xon_th = full_xoff_th + min_pkt_size_blocks;
+		pause_xoff_th = tc_headroom_blocks;
+		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+
+		/* Init total size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
 			 port_blocks);
-		/* init shared size per port */
+
+		/* Init shared size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
 			 port_shared_blocks);
+
 		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
-			/* clear init values for non-active TCs */
+			/* Clear init values for non-active TCs */
 			if (tc == req->num_active_tcs[port]) {
 				tc_guaranteed_blocks = 0;
 				full_xoff_th = 0;
@@ -1090,15 +1291,18 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 				pause_xoff_th = 0;
 				pause_xon_th = 0;
 			}
-			/* init guaranteed size per TC */
+
+			/* Init guaranteed size per TC */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
 				 tc_guaranteed_blocks);
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
 				 BRB_HYST_BLOCKS);
-/* init pause/full thresholds per physical TC - for loopback traffic */
 
+			/* Init pause/full thresholds per physical TC - for
+			 * loopback traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1111,7 +1315,10 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
-/* init pause/full thresholds per physical TC - for main traffic */
+
+			/* Init pause/full thresholds per physical TC - for
+			 * main traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1128,23 +1335,25 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
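
Worked numbers for the sizing above, assuming K2 (5632 blocks of 128 bytes),
two active ports, four TCs per port and a hypothetical 4096-byte guaranteed
area per TC:

#include <assert.h>

#define BRB_TOTAL_RAM_BLOCKS_K2	5632
#define BRB_BLOCK_SIZE		128
#define DIV_ROUND_UP(a, b)	(((a) + (b) - 1) / (b))

int main(void)
{
	unsigned int active_ports = 2, num_active_tcs = 4;
	unsigned int port_blocks, tc_guaranteed, port_guaranteed;

	port_blocks = BRB_TOTAL_RAM_BLOCKS_K2 / active_ports;
	tc_guaranteed = DIV_ROUND_UP(4096, BRB_BLOCK_SIZE);
	port_guaranteed = num_active_tcs * tc_guaranteed;

	assert(port_blocks == 2816);
	assert(tc_guaranteed == 32);
	assert(port_blocks - port_guaranteed == 2688);	/* shared area */
	return 0;
}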
 
-/*In MF should be called once per engine to set EtherType of OuterTag*/
+/* In MF should be called once per engine to set EtherType of OuterTag */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update NIG register */
+
+	/* Update NIG register */
 	STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update PBF register */
+
+	/* Update PBF register */
 	STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
 }
 
-/*In MF should be called once per port to set EtherType of OuterTag*/
+/* In MF should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update DORQ register */
+	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
@@ -1154,11 +1363,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
 }
 
@@ -1166,23 +1377,26 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt, bool vxlan_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 			   PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT,
 			   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 				   NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT,
 				   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ register */
+
+	/* Update DORQ register */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN,
 		 vxlan_enable ? 1 : 0);
 }
@@ -1192,7 +1406,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  bool eth_gre_enable, bool ip_gre_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1202,10 +1417,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1214,7 +1430,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ registers */
+
+	/* Update DORQ registers */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN,
 		 eth_gre_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN,
@@ -1224,11 +1441,13 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port);
 }
 
@@ -1237,7 +1456,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable, bool ip_geneve_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT,
@@ -1247,37 +1467,44 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		   ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
 		 eth_geneve_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE,
 		 ip_geneve_enable ? 1 : 0);
-	/* EDPM with geneve tunnel not supported in BB_B0 */
+
+	/* EDPM with geneve tunnel not supported in BB */
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev))
 		return;
-	/* update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN,
+
+	/* Update DORQ registers */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
 		 ip_geneve_enable ? 1 : 0);
 }
 
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
-#define PARSER_ETH_CONN_CM_HDR (0x0)
+#define PARSER_ETH_CONN_CM_HDR 0
 #define CAM_LINE_SIZE sizeof(u32)
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
-	/* set RFS event ID to be awakened i Tstorm By Prs */
-	u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+	u32 rfs_cm_hdr_event_id;
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
+	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
 	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
@@ -1298,39 +1525,48 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct gft_ram_line ramLine;
 	u32 *ramLinePointer = (u32 *)&ramLine;
 	int i;
+
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - ipv4 or ipv6");
+
 	if (!tcp && !udp)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - udp or tcp");
-	/* set RFS event ID to be awakened i Tstorm By Prs */
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
 	rfs_cm_hdr_event_id |=  T_ETH_PACKET_MATCH_RFS_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |=  PARSER_ETH_CONN_CM_HDR <<
 	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+
 	/* Configure Registers for RFS mode */
-/* enable gft search */
+
+	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0); /* do not load
 							     * context only cid
 							     * in PRS on match
 							     */
 	camLine.cam_line_mapped.camline = 0;
-	/* cam line is now valid!! */
+
+	/* Cam line is now valid!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_VALID, 1);
-	/* filters are per PF!! */
+
+	/* Filters are per PF!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1);
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
+
 	if (!(tcp && udp)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1);
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
 		if (tcp)
 			SET_FIELD(camLine.cam_line_mapped.camline,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
@@ -1340,6 +1576,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
 				  GFT_PROFILE_UDP_PROTOCOL);
 	}
+
 	if (!(ipv4 && ipv6)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
 			  GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1);
@@ -1352,44 +1589,53 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_IP_VERSION,
 				  GFT_PROFILE_IPV6);
 	}
-	/* write characteristics to cam */
+
+	/* Write characteristics to cam */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
 	    camLine.cam_line_mapped.camline);
 	camLine.cam_line_mapped.camline =
 	    ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
-	/* write line to RAM - compare to filter 4 tuple */
-	ramLine.low32bits = 0;
-	ramLine.high32bits = 0;
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1);
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1);
-	/* each iteration write to reg */
+
+	/* Write line to RAM - compare to filter 4 tuple */
+	ramLine.lo = 0;
+	ramLine.hi = 0;
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_DST_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_SRC_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_SRC_PORT, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_DST_PORT, 1);
+
+	/* Each iteration write to reg */
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * pf_id +
 			 i * REG_SIZE, *(ramLinePointer + i));
-	/* set default profile so that no filter match will happen */
-	ramLine.low32bits = 0xffff;
-	ramLine.high32bits = 0xffff;
+
+	/* Set default profile so that no filter match will happen */
+	ramLine.lo = 0xffff;
+	ramLine.hi = 0xffff;
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH +
 			 i * REG_SIZE, *(ramLinePointer + i));
 }
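
The CAM/RAM line setup above leans heavily on the SET_FIELD idiom. A
minimal stand-in, assuming the usual name##_MASK / name##_SHIFT pair
convention (the real macro and field definitions live in the ecore/FW
headers; the F_* fields below are hypothetical):

#include <stdint.h>
#include <assert.h>

#define F_VALID_MASK	0x1
#define F_VALID_SHIFT	0
#define F_PF_ID_MASK	0xF
#define F_PF_ID_SHIFT	1

#define SET_FIELD(val, name, fld)					\
	((val) = ((val) & ~((uint32_t)name##_MASK << name##_SHIFT)) |	\
		 (((uint32_t)(fld) & name##_MASK) << name##_SHIFT))

int main(void)
{
	uint32_t cam_line = 0;

	SET_FIELD(cam_line, F_VALID, 1);	/* mark line valid */
	SET_FIELD(cam_line, F_PF_ID, 5);	/* filter owned by PF 5 */

	assert(cam_line == 0xB);	/* 0b1011 */
	return 0;
}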
 
-/* Configure VF zone size mode*/
+/* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
 				    bool runtime_init)
 {
 	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
 	u32 msdm_vf_offset_mask;
+
 	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
 		msdm_vf_size_log += 1;
 	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
 		msdm_vf_size_log += 2;
+
 	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
+
 	if (runtime_init) {
 		STORE_RT_REG(p_hwfn,
 			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
@@ -1405,12 +1651,13 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* get mstorm statistics for offset by VF zone size mode*/
+/* Get mstorm statistics for offset by VF zone size mode */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id,
 				       u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
+
 	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
 	    (stat_cnt_id > MAX_NUM_PFS)) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
@@ -1420,16 +1667,18 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 			    (stat_cnt_id - MAX_NUM_PFS);
 	}
+
 	return offset;
 }
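
To make the scaling above concrete: in DOUBLE mode each VF zone past the PF
zones adds one extra default-sized zone to the offset, in QUAD mode three.
A sketch with a hypothetical default zone size log of 7 (the real
MSTORM_VF_ZONE_DEFAULT_SIZE_LOG comes from the FW HSI):

#include <assert.h>

int main(void)
{
	unsigned int size_log = 7;	/* hypothetical zone size log */
	unsigned int vf_index = 5;	/* stat_cnt_id - MAX_NUM_PFS */

	/* DOUBLE mode: one extra default-sized zone per earlier VF */
	assert((1u << size_log) * vf_index == 640);

	/* QUAD mode: three extra default-sized zones per earlier VF */
	assert(3u * (1u << size_log) * vf_index == 1920);
	return 0;
}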
 
-/* get mstorm VF producer offset by VF zone size mode*/
+/* Get mstorm VF producer offset by VF zone size mode */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 					 u8 vf_id,
 					 u8 vf_queue_id,
 					 u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
+
 	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
@@ -1438,5 +1687,166 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 				  vf_id;
 	}
+
 	return offset;
 }
+
+/* Calculate CRC8 of first 4 bytes in buf */
+static u8 ecore_calc_crc8(const u8 *buf)
+{
+	u32 i, j, crc = 0xff << 8;
+
+	/* CRC-8 polynomial */
+	#define POLY 0x1070
+
+	for (j = 0; j < 4; j++, buf++) {
+		crc ^= (*buf << 8);
+		for (i = 0; i < 8; i++) {
+			if (crc & 0x8000)
+				crc ^= (POLY << 3);
+
+			crc <<= 1;
+		}
+	}
+
+	return (u8)(crc >> 8);
+}
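
A standalone replica of the CRC-8 above, for experimenting outside the
driver (same 0x1070 polynomial and 0xff initial value; the input bytes are
hypothetical and it is not part of the patch):

#include <stdint.h>
#include <stdio.h>

static uint8_t crc8_0x1070(const uint8_t *buf, int len)
{
	uint32_t crc = 0xff << 8;
	int i, j;

	for (j = 0; j < len; j++) {
		crc ^= (uint32_t)buf[j] << 8;
		for (i = 0; i < 8; i++) {
			if (crc & 0x8000)
				crc ^= (0x1070 << 3);
			crc <<= 1;
		}
	}

	return (uint8_t)(crc >> 8);
}

int main(void)
{
	/* Big-endian image of a hypothetical validation string */
	const uint8_t data[4] = { 0xAB, 0xC1, 0x23, 0x42 };

	printf("crc8 = 0x%02x\n", crc8_0x1070(data, 4));
	return 0;
}
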
+
+/* Calculate and return CDU validation byte per connection type / region /
+ * cid.
+ */
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region,
+					 u32 cid)
+{
+	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
+	u8 crc, validation_byte = 0;
+	u32 validation_string = 0;
+	const u8 *data_to_crc_rev;
+	u8 data_to_crc[4];
+
+	data_to_crc_rev = (const u8 *)&validation_string;
+
+	/*
+	 * The CRC is calculated on the String-to-compress:
+	 * [31:8]  = {CID[31:20],CID[11:0]}
+	 * [7:4]   = Region
+	 * [3:0]   = Type
+	 */
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+		validation_string |= ((region & 0xF) << 4);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+		validation_string |= (conn_type & 0xF);
+
+	/* Convert to big-endian (ntoh()) */
+	data_to_crc[0] = data_to_crc_rev[3];
+	data_to_crc[1] = data_to_crc_rev[2];
+	data_to_crc[2] = data_to_crc_rev[1];
+	data_to_crc[3] = data_to_crc_rev[0];
+
+	crc = ecore_calc_crc8(data_to_crc);
+
+	validation_byte |= ((validation_cfg >>
+			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
+
+	if ((validation_cfg >>
+	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+	else
+		validation_byte |= crc & 0x7F;
+
+	return validation_byte;
+}
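
A worked example of the string-to-compress layout above, assuming the
USE_CID/USE_REGION/USE_TYPE bits are all set in the default config (cid,
region and type values are hypothetical):

#include <stdint.h>
#include <assert.h>

int main(void)
{
	uint32_t cid = 0xABCDE123, s = 0;
	uint8_t region = 4, conn_type = 2;

	s |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);	/* [31:8] */
	s |= (uint32_t)(region & 0xF) << 4;		/* [7:4]  */
	s |= conn_type & 0xF;				/* [3:0]  */

	/* CID[31:20] = 0xABC, CID[11:0] = 0x123, region 4, type 2 */
	assert(s == 0xABC12342);
	return 0;
}
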
+
+/* Calculate and set validation bytes for session context */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+}
+
+/* Calculate and set validation bytes for task context */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
+{
+	u8 *p_ctx, *region1_val_ptr;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+}
+
+/* Memset session context to 0 while preserving validation bytes */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+	u8 x_val, t_val, u_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	x_val = *x_val_ptr;
+	t_val = *t_val_ptr;
+	u_val = *u_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = x_val;
+	*t_val_ptr = t_val;
+	*u_val_ptr = u_val;
+}
+
+/* Memset task context to 0 while preserving validation bytes */
+void ecore_memset_task_ctx(void *p_ctx_mem, const u32 ctx_size,
+			   const u8 ctx_type)
+{
+	u8 *p_ctx, *region1_val_ptr;
+	u8 region1_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	region1_val = *region1_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = region1_val;
+}
+
+/* Enable and configure context validation */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	u32 ctx_validation;
+
+	/* Enable validation for connection region 3 - bits [31:24] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
+
+	/* Enable validation for connection region 5 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
+
+	/* Enable validation for task region 1 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
+}
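
The intended calling sequence for the new context-validation API, as a
minimal sketch (the buffer, connection type and cid below are placeholder
values, not taken from this patch):

	u8 ctx[128];		/* hypothetical context buffer */
	u8 conn_type = 1;	/* hypothetical connection type */
	u32 cid = 0x10;		/* hypothetical connection id */

	/* One-time HW enablement per hw-function */
	ecore_enable_context_validation(p_hwfn, p_ptt);

	/* Zero the context and stamp the X/T/U validation bytes */
	ecore_calc_session_ctx_validation(ctx, sizeof(ctx), conn_type, cid);

	/* On later re-use, clear the context but keep the validation bytes */
	ecore_memset_session_ctx(ctx, sizeof(ctx), conn_type);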
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 9df0e7d..2d1ab7c 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -8,20 +8,22 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* forward declarations */
+/* Forward declarations */
+
 struct init_qm_pq_params;
+
 /**
- * @brief ecore_qm_pf_mem_size - prepare QM ILT sizes
+ * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes
  *
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
- * @param pf_id			- physical function ID
- * @param num_pf_cids	- number of connections used by this PF
- * @param num_vf_cids	- number of connections used by VFs of this PF
- * @param num_tids		- number of tasks used by this PF
- * @param num_pf_pqs	- number of PQs used by this PF
- * @param num_vf_pqs	- number of PQs used by VFs of this PF
+ * @param pf_id -	physical function ID
+ * @param num_pf_cids - number of connections used by this PF
+ * @param num_vf_cids -	number of connections used by VFs of this PF
+ * @param num_tids -	number of tasks used by this PF
+ * @param num_pf_pqs -	number of PQs used by this PF
+ * @param num_vf_pqs -	number of PQs used by VFs of this PF
  *
  * @return The required host memory size in 4KB units.
  */
@@ -31,6 +33,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
 						 u16 num_vf_pqs);
+
 /**
  * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
  *                                  phase
@@ -38,10 +41,10 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
  * @param p_hwfn
  * @param max_ports_per_engine	- max number of ports per engine in HW
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param pf_rl_en				- enable per-PF rate limiters
- * @param pf_wfq_en				- enable per-PF WFQ
- * @param vport_rl_en			- enable per-VPORT rate limiters
- * @param vport_wfq_en			- enable per-VPORT WFQ
+ * @param pf_rl_en		- enable per-PF rate limiters
+ * @param pf_wfq_en		- enable per-PF WFQ
+ * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param vport_wfq_en		- enable per-VPORT WFQ
  * @param port_params - array of size MAX_NUM_PORTS with params for each port
  *
  * @return 0 on success, -1 on error.
@@ -54,22 +57,24 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			 bool vport_rl_en,
 			 bool vport_wfq_en,
 			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
  *
  * @param p_hwfn
  * @param p_ptt			- ptt window used for writing the registers
- * @param port_id				- port ID
- * @param pf_id					- PF ID
+ * @param port_id		- port ID
+ * @param pf_id			- PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param is_first_pf			- 1 = first PF in engine, 0 = othwerwise
- * @param num_pf_cids			- number of connections used by this PF
+ * @param is_first_pf		- 1 = first PF in engine, 0 = otherwise
+ * @param num_pf_cids		- number of connections used by this PF
  * @param num_vf_cids		- number of connections used by VFs of this PF
- * @param num_tids			- number of tasks used by this PF
- * @param start_pq			- first Tx PQ ID associated with this PF
- * @param num_pf_pqs	- number of Tx PQs associated with this PF (non-VF)
- * @param num_vf_pqs			- number of Tx PQs associated with a VF
- * @param start_vport			- first VPORT ID associated with this PF
+ * @param num_tids		- number of tasks used by this PF
+ * @param start_pq		- first Tx PQ ID associated with this PF
+ * @param num_pf_pqs		- number of Tx PQs associated with this PF
+ *                                (non-VF)
+ * @param num_vf_pqs		- number of Tx PQs associated with a VF
+ * @param start_vport		- first VPORT ID associated with this PF
  * @param num_vports - number of VPORTs associated with this PF
  * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must
  *		   be 0. otherwise, the weight must be non-zero.
@@ -100,6 +105,7 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 				u32 pf_rl,
 				struct init_qm_pq_params *pq_params,
 				struct init_qm_vport_params *vport_params);
+
 /**
  * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
  *
@@ -114,11 +120,12 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u8 pf_id,
 					  u16 pf_wfq);
+
 /**
- * @brief ecore_init_pf_rl  Initializes the rate limit of the specified PF
+ * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF
  *
  * @param p_hwfn
- * @param p_ptt	- ptt window used for writing the registers
+ * @param p_ptt - ptt window used for writing the registers
  * @param pf_id	- PF ID
  * @param pf_rl	- rate limit in Mb/sec units
  *
@@ -128,6 +135,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 u8 pf_id,
 					 u32 pf_rl);
+
 /**
  * @brief ecore_init_vport_wfq  Initializes the WFQ weight of specified VPORT
  *
@@ -144,10 +152,12 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
 						 u16 vport_wfq);
+
 /**
- * @brief ecore_init_vport_rl  Initializes the rate limit of the specified VPORT
+ * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
+ * VPORT.
  *
- * @param p_hwfn
+ * @param p_hwfn	- HW device data
  * @param p_ptt		- ptt window used for writing the registers
  * @param vport_id	- VPORT ID
  * @param vport_rl	- rate limit in Mb/sec units
@@ -158,6 +168,7 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u8 vport_id,
 						u32 vport_rl);
+
 /**
  * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
  *
@@ -178,6 +189,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 							u16 start_pq,
 							u16 num_pqs);
 #ifndef UNUSED_HSI_FUNC
+
 /**
  * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
  *
@@ -193,6 +205,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req,
 						bool is_lb);
+
 /**
  * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
  *
@@ -205,6 +218,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
 				  struct init_nig_lb_rl_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
  *
@@ -216,6 +230,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt,
 					   struct init_nig_pri_tc_map_req *req);
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
@@ -229,6 +244,7 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
@@ -242,6 +258,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_brb_ram_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
@@ -250,22 +267,24 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
  *                                             if engine
  *  is in BD mode.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
+
 /**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
  *                                           input ethType should Be called
  *                                           once per port.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
  *                                    port
@@ -276,15 +295,17 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       u16 dest_port);
+
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param vxlan_enable - vxlan enable flag.
+ * @param p_ptt		- ptt window used for writing the registers.
+ * @param vxlan_enable	- vxlan enable flag.
  */
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool vxlan_enable);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -296,6 +317,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  bool eth_gre_enable,
 			  bool ip_gre_enable);
+
 /**
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
@@ -306,6 +328,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
 				u16 dest_port);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -318,6 +341,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable,
 			     bool ip_geneve_enable);
 #ifndef UNUSED_HSI_FUNC
+
 /**
 * @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
 *
@@ -325,16 +349,16 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
+
 /**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
-*
-* @param p_ptt             - ptt window used for writing the registers.
-* @param pf_id - pf on which to enable RFS.
-* @param tcp -  set profile tcp packets.
-* @param udp -  set profile udp  packet.
-* @param ipv4 - set profile ipv4 packet.
-* @param ipv6 - set profile ipv6 packet.
+* @param p_ptt	- ptt window used for writing the registers.
+* @param pf_id	- pf on which to enable RFS.
+* @param tcp	- set profile tcp packets.
+* @param udp	- set profile udp packets.
+* @param ipv4	- set profile ipv4 packets.
+* @param ipv6	- set profile ipv6 packets.
 */
 void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt,
@@ -344,6 +368,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	bool ipv4,
 	bool ipv6);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
@@ -357,18 +382,20 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
+
 /**
-* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF
-*                                             zone size mode.
+ * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
+ * VF zone size mode.
 *
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id, u16 vf_zone_size_mode);
+
 /**
-* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
-*                                               size mode.
+ * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
+ * size mode.
 *
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
@@ -376,4 +403,58 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
 					 vf_queue_id, u16 vf_zone_size_mode);
+
+/**
+ * @brief ecore_enable_context_validation - Enable and configure context
+ *                                          validation.
+ *
+ * @param p_ptt - ptt window used for writing the registers.
+ */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt);
+
+/**
+ * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
+ *                                            session context.
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param cid                 -  context cid.
+ */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid);
+
+/**
+ * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
+ *                                         context.
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param tid                 -  context tid.
+ */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid);
+
+/**
+ * @brief ecore_memset_session_ctx - Memset session context to 0 while
+ *                                   preserving validation bytes.
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size,
+			      u8 ctx_type);
+
+/**
+ * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
+ *                                validation bytes.
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size,
+			   u8 ctx_type);
 #endif
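
For the VF zone helpers above, the offset arithmetic adds one (double mode)
or three (quad mode) default-sized zones per counter beyond the PF range; a
sketch of the double-mode case, with all identifiers from the HSI headers:

	/* For stat_cnt_id > MAX_NUM_PFS in VF_ZONE_SIZE_MODE_DOUBLE this
	 * returns MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id) +
	 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * (stat_cnt_id - MAX_NUM_PFS)
	 */
	u32 off = ecore_get_mstorm_queue_stat_offset(p_hwfn, stat_cnt_id,
						     VF_ZONE_SIZE_MODE_DOUBLE);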
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index aad9012..b4bfe89 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -185,5 +185,13 @@
 #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
 	((rdma_stat_counter_id) * IRO[46].m1))
 #define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+/* Xstorm iWARP rxmit stats */
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \
+	((pf_id) * IRO[47].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size)
+/* Tstorm RoCE Event Statistics */
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \
+	((roce_pf_id) * IRO[48].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size)
 
 #endif /* __IRO_H__ */
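
Each IRO accessor expands to base-plus-stride arithmetic over iro_arr (see
ecore_iro_values.h below); e.g. for the new iWARP entry, with a hypothetical
pf_id of 2:

	/* IRO[47] = { 0xc9b0, 0x30, ... } in the updated table */
	u32 off = XSTORM_IWARP_RXMIT_STATS_OFFSET(2);	/* 0xc9b0 + 2 * 0x30 */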
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 4ff7e95..6764bfa 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -9,13 +9,13 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[47] = {
+static const struct iro iro_arr[49] = {
 /* YSTORM_FLOW_CONTROL_MODE_OFFSET */
 	{      0x0,      0x0,      0x0,      0x0,      0x8},
 /* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb0,     0x78,      0x0,      0x0,     0x78},
+	{   0x4cb0,     0x80,      0x0,      0x0,     0x80},
 /* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6318,     0x20,      0x0,      0x0,     0x20},
+	{   0x6518,     0x20,      0x0,      0x0,     0x20},
 /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
 	{    0xb00,      0x8,      0x0,      0x0,      0x4},
 /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
@@ -41,7 +41,7 @@ static const struct iro iro_arr[47] = {
 /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
 	{    0xa28,      0x8,      0x0,      0x0,      0x8},
 /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x60f8,     0x10,      0x0,      0x0,     0x10},
+	{   0x61f8,     0x10,      0x0,      0x0,     0x10},
 /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
 	{   0xb820,     0x30,      0x0,      0x0,     0x30},
 /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
@@ -53,7 +53,7 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
 	{   0x53a0,     0x80,      0x4,      0x0,      0x4},
 /* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc8f0,      0x0,      0x0,      0x0,      0x4},
+	{   0xc7c8,      0x0,      0x0,      0x0,      0x4},
 /* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
 	{   0x4ba0,     0x80,      0x0,      0x0,     0x20},
 /* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
@@ -63,13 +63,13 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
 	{   0x2b48,     0x80,      0x0,      0x0,     0x38},
 /* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xf188,     0x78,      0x0,      0x0,     0x78},
+	{   0xf1b0,     0x78,      0x0,      0x0,     0x78},
 /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
 	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
 /* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xacf0,      0x0,      0x0,      0x0,     0xf0},
+	{   0xaef8,      0x0,      0x0,      0x0,     0xf0},
 /* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xade0,      0x8,      0x0,      0x0,      0x8},
+	{   0xafe8,      0x8,      0x0,      0x0,      0x8},
 /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
 	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
 /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
@@ -85,9 +85,9 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
 	{    0xb78,     0x10,      0x8,      0x0,      0x2},
 /* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd888,     0x38,      0x0,      0x0,     0x24},
+	{   0xd9a8,     0x38,      0x0,      0x0,     0x24},
 /* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12c38,     0x10,      0x0,      0x0,      0x8},
+	{  0x12988,     0x10,      0x0,      0x0,      0x8},
 /* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
 	{  0x11aa0,     0x38,      0x0,      0x0,     0x18},
 /* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
@@ -97,13 +97,17 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
 	{  0x101f8,     0x10,      0x0,      0x0,     0x10},
 /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xdd08,     0x48,      0x0,      0x0,     0x38},
+	{   0xde28,     0x48,      0x0,      0x0,     0x38},
 /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
 	{  0x10660,     0x20,      0x0,      0x0,     0x20},
 /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
 	{   0x2b80,     0x80,      0x0,      0x0,     0x10},
 /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5000,     0x10,      0x0,      0x0,     0x10},
+	{   0x5020,     0x10,      0x0,      0x0,     0x10},
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
+	{   0xc9b0,     0x30,      0x0,      0x0,     0x10},
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
+	{   0xeec0,     0x10,      0x0,      0x0,     0x10},
 };
 
 #endif /* __IRO_VALUES_H__ */
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 01a29e3..846dc6d 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -115,339 +115,338 @@
 #define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            28716
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
 #define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            29132
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29644
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29645
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29646
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29647
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29648
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29649
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29650
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29651
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29652
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29653
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29654
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29655
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29656
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29657
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29658
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29659
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29660
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29661
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29662
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29663
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29664
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29665
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29666
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29667
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29668
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29669
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29670
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29671
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29672
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29673
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29674
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29675
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29676
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29677
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29678
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29679
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29680
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29681
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29682
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29683
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29684
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29685
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29686
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29687
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29688
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29689
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29690
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29691
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29692
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29693
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29694
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29695
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29696
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29697
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29698
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29699
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29700
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29701
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29702
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29703
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29704
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29705
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29706
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29707
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29708
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29709
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29710
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29711
-#define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29839
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29859
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29879
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29880
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29881
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29882
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29883
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29884
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29885
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29886
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29887
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29888
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29889
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29890
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29891
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29892
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29893
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29894
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29895
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29896
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29897
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29898
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29899
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29900
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29901
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29902
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29903
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29904
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29905
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29906
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29907
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29908
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29909
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29910
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29911
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29912
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29913
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29914
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29915
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29916
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29917
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29918
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29919
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29920
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29921
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29922
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29923
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29924
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29925
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29926
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29927
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29928
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29929
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29930
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29931
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29932
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29933
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29934
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29935
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29936
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29937
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29938
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29939
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29940
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29941
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29942
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29943
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29944
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29945
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29946
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29947
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29948
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29949
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29950
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29951
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29952
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29953
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29954
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29955
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29956
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29957
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29958
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29959
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29960
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29961
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29962
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29963
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29964
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29965
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29966
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29967
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29968
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29969
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29970
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29971
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29972
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29973
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29974
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29975
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29976
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29977
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29978
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29979
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29980
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29981
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29982
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29983
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29984
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29985
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29986
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29987
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29988
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29989
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29990
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29991
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29992
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29993
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29994
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29995
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29996
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29997
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29998
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29740
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29741
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29742
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29743
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29744
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29745
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29746
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29747
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29748
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29749
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29750
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29751
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29752
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29753
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29754
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29755
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29756
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29757
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29758
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29759
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29760
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29761
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29762
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29763
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29764
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29765
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29766
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29767
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29768
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29769
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29770
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29771
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29772
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29773
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29774
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29775
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29776
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29777
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29778
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29779
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29780
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29781
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29782
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29783
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29784
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29785
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29786
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29787
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29788
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29789
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29790
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29791
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29792
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29793
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29794
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29795
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29796
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29797
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29798
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29799
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29800
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29801
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29802
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29803
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29804
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29805
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29806
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29807
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29935
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29936
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29937
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29938
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29939
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29940
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29941
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29942
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29943
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29944
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29945
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29946
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29947
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29948
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29949
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29950
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29951
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29952
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29953
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29954
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29955
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29956
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29957
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29958
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29959
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29960
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29961
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29962
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29963
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29964
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29965
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29966
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29967
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29968
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29969
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29970
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29971
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29972
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29973
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29974
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29975
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29976
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29977
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29978
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29979
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29980
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29981
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29982
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29983
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29984
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29985
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29986
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29987
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29988
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29989
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29990
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29991
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29992
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29993
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29994
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29995
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29996
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29997
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29998
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29999
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 30000
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 30001
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 30002
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 30003
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 30004
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 30005
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 30006
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 30007
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 30008
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 30009
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 30010
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 30011
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 30012
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 30013
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 30014
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 30015
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 30016
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 30017
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 30018
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 30019
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 30020
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 30021
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 30022
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 30023
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 30024
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 30025
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               30026
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               30027
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               30028
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               30029
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               30030
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               30031
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               30032
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               30033
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               30034
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               30035
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              30036
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              30037
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              30038
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              30039
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              30040
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              30041
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             30042
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             30043
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        30044
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        30045
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          30046
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          30047
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          30048
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          30049
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          30050
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          30051
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          30052
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          30053
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               30054
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30254
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30310
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30510
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30566
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30766
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30767
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30768
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30769
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30822
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30823
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30824
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30825
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30785
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30841
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    30801
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30857
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 30817
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30818
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30819
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30873
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30874
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30875
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30835
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30891
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   30851
-#define QM_REG_WFQPFCRD_RT_SIZE                                     160
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                31011
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                31012
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31013
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30907
+#define QM_REG_WFQPFCRD_RT_SIZE                                     256
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                31163
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                31164
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31165
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    31525
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31677
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32037
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32189
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   32549
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32701
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   33061
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   33213
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33573
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33733
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     33734
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     33735
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     33736
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     33737
-#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      33738
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  33739
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           33740
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33725
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 34045
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             34081
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34117
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34118
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34119
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34120
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34121
+#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      34122
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34123
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34124
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      33744
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      34128
 #define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE                        4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        33748
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34132
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           33752
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     33753
+#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           34136
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34137
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        33785
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34169
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      33801
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34185
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             33817
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34201
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   33833
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34217
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              33849
-#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    33850
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           33851
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           33852
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           33853
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       33854
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       33855
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       33856
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       33857
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    33858
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    33859
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    33860
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    33861
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        33862
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     33863
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33864
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      33865
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    33866
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       33867
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                33868
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    33869
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       33870
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                33871
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    33872
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       33873
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                33874
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    33875
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       33876
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                33877
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    33878
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       33879
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                33880
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    33881
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       33882
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                33883
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    33884
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       33885
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                33886
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    33887
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       33888
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                33889
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    33890
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       33891
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                33892
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    33893
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       33894
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                33895
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   33896
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      33897
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               33898
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   33899
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      33900
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               33901
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   33902
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      33903
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               33904
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   33905
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      33906
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               33907
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   33908
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      33909
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               33910
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   33911
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      33912
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               33913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   33914
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      33915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               33916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   33917
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      33918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               33919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   33920
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      33921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               33922
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   33923
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      33924
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               33925
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                33926
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34233
+#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    34234
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34235
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34236
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34237
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34238
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34239
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34240
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34241
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34242
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34243
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34244
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34245
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34246
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34247
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34248
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34249
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34250
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34251
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34252
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34253
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34254
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34255
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34256
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34257
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34258
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34259
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34260
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34261
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34262
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34263
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34264
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34265
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34266
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34267
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34268
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34269
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34270
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34271
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34272
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34273
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34274
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34275
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34276
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34277
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34278
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34279
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34280
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34281
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34282
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34283
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34284
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34285
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34286
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34287
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34288
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34289
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34290
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34291
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34292
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34293
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34294
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34295
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34296
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34297
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34298
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34299
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34300
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34301
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34302
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34303
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34304
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34305
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34306
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34307
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34308
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34309
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34310
 
-#define RUNTIME_ARRAY_SIZE 33927
+#define RUNTIME_ARRAY_SIZE 34311
 
 #endif /* __RT_DEFS_H__ */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index d2ebce8..6dc969b 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -182,7 +182,7 @@ struct eth_tx_1st_bd_flags {
 struct eth_tx_data_1st_bd {
 /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */
 	__le16 vlan;
-/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least
+/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least
  * 3 in LSO (or Tunnel with IPv6+ext) packet.
  */
 	u8 nbds;
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3cc7fd4..f9920f3 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1147,3 +1147,56 @@
 
 #define IGU_REG_PRODUCER_MEMORY 0x182000UL
 #define IGU_REG_CONSUMER_MEM 0x183000UL
+
+#define CDU_REG_CCFC_CTX_VALID0 0x580400UL
+#define CDU_REG_CCFC_CTX_VALID1 0x580404UL
+#define CDU_REG_TCFC_CTX_VALID0 0x580408UL
+
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
+#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
+#define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
+#define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
+#define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
+#define NWM_REG_MAC0_K2_E5 0x800400UL
+#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
+#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
+#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
+#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
+#define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
+#define XMAC_REG_MODE_BB 0x210008UL
+#define XMAC_REG_RX_MAX_SIZE_BB  0x210040UL
+#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL
+#define XMAC_REG_CTRL_BB 0x210000UL
+#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0)
+#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1)
+#define XMAC_REG_RX_CTRL_BB 0x210030UL
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12)
+
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
+#define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
+#define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
+#define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
+#define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
+#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+
+#define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a604a5b..332b1f8 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -21,7 +21,7 @@ static uint8_t npar_tx_switching = 1;
 char fw_file[PATH_MAX];
 
 const char *QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.14.6.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.18.9.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
-- 
1.7.10.3


* [PATCH v2 07/61] net/qede/base: decrease maximum HW func per device
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (6 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 06/61] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 08/61] net/qede/base: move mask constants defining NIC type Rasesh Mody
                     ` (54 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Decrease MAX_HWFNS_PER_DEVICE from 4 to 2.
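
For context, this constant bounds per-device hwfn storage and iteration of
roughly the following form (an illustrative sketch in the ecore style, not
code from this patch; the init helper is hypothetical), so shrinking it
from 4 to 2 trims the per-device footprint on hardware exposing at most
two HW functions:

	/* Illustrative: MAX_HWFNS_PER_DEVICE bounds per-device arrays
	 * and loops over the HW functions of one device.
	 */
	struct ecore_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
	int i;

	for (i = 0; i < MAX_HWFNS_PER_DEVICE; i++)
		ecore_hwfn_init(&hwfns[i]);	/* hypothetical helper */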

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b2f4910..d14f99c 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,7 +28,7 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
-#define MAX_HWFNS_PER_DEVICE	(4)
+#define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
-- 
1.7.10.3


* [PATCH v2 08/61] net/qede/base: move mask constants defining NIC type
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (7 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 07/61] net/qede/base: decrease maximum HW func per device Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 09/61] net/qede/base: remove attribute from update current config Rasesh Mody
                     ` (53 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move the mask constants defining the NIC type from ecore_dev.c to ecore.h.
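
For illustration, such masks are typically applied to the PCI device ID to
classify the NIC; the comparison below is a hedged sketch in the ecore
style, not code taken from this patch:

	/* Illustrative device-type classification using the moved masks */
	if ((p_dev->device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH)
		p_dev->type = ECORE_DEV_TYPE_AH;
	else
		p_dev->type = ECORE_DEV_TYPE_BB;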

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    4 ++++
 drivers/net/qede/base/ecore_dev.c |    4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d14f99c..a6cf52e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -625,6 +625,10 @@ struct ecore_dev {
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
 
+#define ECORE_DEV_ID_MASK	0xff00
+#define ECORE_DEV_ID_MASK_BB	0x1600
+#define ECORE_DEV_ID_MASK_AH	0x8000
+
 	u16 vendor_id;
 	u16 device_id;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f82f5e6..ee50090 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2888,10 +2888,6 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
 }
 
-#define ECORE_DEV_ID_MASK	0xff00
-#define ECORE_DEV_ID_MASK_BB	0x1600
-#define ECORE_DEV_ID_MASK_AH	0x8000
-
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-- 
1.7.10.3


* [PATCH v2 09/61] net/qede/base: remove attribute from update current config
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (8 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 08/61] net/qede/base: move mask constants defining NIC type Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 10/61] net/qede/base: add nvram options Rasesh Mody
                     ` (52 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove the attribute field from the update_current_config() API; the
Management FW needs to know only the last entity that configured the
device.
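
For illustration, a caller now passes only the client type (a hypothetical
call-site sketch; p_hwfn and p_ptt come from the caller's context):

	/* Hypothetical call site after this change */
	rc = ecore_mcp_ov_update_current_config(p_hwfn, p_ptt,
						ECORE_OV_CLIENT_DRV);
	if (rc != ECORE_SUCCESS)
		DP_NOTICE(p_hwfn, true,
			  "Failed to notify the MFW of the current config\n");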

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    5 ++---
 drivers/net/qede/base/ecore_mcp_api.h |    8 --------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index e236f39..245d478 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1709,14 +1709,13 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client)
 {
 	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
 
-	switch (config) {
+	switch (client) {
 	case ECORE_OV_CLIENT_DRV:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
 		break;
@@ -1727,7 +1726,7 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
+		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 614cf67..72a58e4 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -173,12 +173,6 @@ union ecore_mcp_protocol_stats {
 };
 #endif
 
-enum ecore_ov_config_method {
-	ECORE_OV_CONFIG_MTU,
-	ECORE_OV_CONFIG_MAC,
-	ECORE_OV_CONFIG_WOL
-};
-
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
 	ECORE_OV_CLIENT_USER,
@@ -453,7 +447,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param config - Configuation that has been updated
  *  @param client - ecore client type
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
@@ -461,7 +454,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client);
 
 /**
-- 
1.7.10.3


* [PATCH v2 10/61] net/qede/base: add nvram options
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (9 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 09/61] net/qede/base: remove attribute from update current config Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 11/61] net/qede/base: add comment Rasesh Mody
                     ` (51 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several NVRAM options, such as MCOT, FEC selection, temperature
threshold, and Reset on LAN.
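
As a minimal sketch of how a generated option is typically decoded
(illustrative; assumes the owning u32 of the Reset-on-LAN option has been
read from NVRAM into a local variable named opts):

	/* Illustrative decode using the generated MASK/OFFSET pair */
	u32 rol = (opts & NVM_CFG1_GLOB_RESET_ON_LAN_MASK) >>
		  NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET;
	bool rol_enabled = (rol == NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED);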

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |  465 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 461 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 68abc2d..4202337 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,13 +13,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     9/6/2016
+ * Created:     12/15/2016
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
+#define NVM_CFG_version 0x81805
+
+#define NVM_CFG_new_option_seq 15
+
+#define NVM_CFG_removed_option_seq 0
+
+#define NVM_CFG_updated_value_seq 1
+
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
 		#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
@@ -242,6 +250,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+	/*  ROL enable */
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1
 	u32 f_lane_cfg1; /* 0x38 */
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
@@ -470,6 +483,15 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
 		#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
 		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+	/*  Select package id method */
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1
 	u32 manufacture_time; /* 0x70 */
 		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
 		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
@@ -480,6 +502,11 @@ struct nvm_cfg1_glob {
 	/*  Max MSIX for Ethernet in default mode */
 		#define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000
 		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18
+	/*  PF Mapping */
+		#define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000
+		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
+		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
+		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -489,6 +516,47 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
 		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
 		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+	/*  Max. continuous operating temperature */
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16
+	/*  GPIO which triggers run-time port swap according to the map
+	 *  specified in option 205
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20
 	u32 generic_cont1; /* 0x78 */
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
@@ -508,6 +576,17 @@ struct nvm_cfg1_glob {
 	/*  PCIe Preset value - applies only if option 194 is enabled */
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19
+	/*  Port mapping to be used when the run-time GPIO for port-swap is
+	 *  defined and set.
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -515,6 +594,44 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
+	/*  If set to other than NA, 0 - Normal operation, 1 - Thermal event
+	 *  occurred
+	 */
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20
 	u32 mbi_date; /* 0x80 */
 	u32 misc_sig; /* 0x84 */
 	/*  Define the GPIO mapping to switch i2c mux */
@@ -555,6 +672,81 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+	/*  Interrupt signal used for SMBus/I2C management interface
+	 *  0 = Interrupt event occurred
+	 *  1 = Normal
+	 */
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20
+	/*  Set aLOM FAN on GPIO */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20
 	u32 device_capabilities; /* 0x88 */
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
@@ -591,11 +783,262 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
 			0x80
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100
-	u32 reserved[41]; /* 0x9C */
+	/* @DPDK */
+	u32 reserved1[12]; /* 0x9C */
+	u32 oem1_number[8]; /* 0xCC */
+	u32 oem2_number[8]; /* 0xEC */
+	u32 mps25_active_txfir_pre; /* 0x10C */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
+	u32 mps25_active_txfir_main; /* 0x110 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
+	u32 mps25_active_txfir_post; /* 0x114 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
+	u32 features; /* 0x118 */
+	/*  Set the Aux Fan on temperature  */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+	/*  Set NC-SI package ID */
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+	/*  PMBUS Clock GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+	/*  PMBUS Data GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
+	u32 tx_rx_eq_25g_hlpc; /* 0x11C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
+	u32 tx_rx_eq_25g_llpc; /* 0x120 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
+	u32 tx_rx_eq_25g_ac; /* 0x124 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
+	u32 tx_rx_eq_10g_pc; /* 0x128 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
+	u32 tx_rx_eq_10g_ac; /* 0x12C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
+	u32 tx_rx_eq_1g; /* 0x130 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
+	u32 tx_rx_eq_25g_bt; /* 0x134 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
+	u32 tx_rx_eq_10g_bt; /* 0x138 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
+	u32 generic_cont4; /* 0x13C */
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	u32 reserved[58]; /* 0x140 */
 };
 
 struct nvm_cfg1_path {
-	u32 reserved[30]; /* 0x0 */
+	u32 reserved[1]; /* 0x0 */
 };
 
 struct nvm_cfg1_port {
@@ -749,6 +1192,15 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
+		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
+		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
+		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
+		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1451,12 +1903,17 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
 	u32 reserved[8]; /* 0x30 */
 };
 
 struct nvm_cfg1 {
 	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
 	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
 	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
 };
-- 
1.7.10.3


* [PATCH v2 11/61] net/qede/base: add comment
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (10 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 10/61] net/qede/base: add nvram options Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 12/61] net/qede/base: use default MTU from shared memory Rasesh Mody
                     ` (50 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a comment for the endianness manipulation in
ecore_mcp_send_drv_version().
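
A short worked example of the conversion being documented (illustrative,
assuming a little-endian host): the four name bytes 'q','e','d','e' read
as a u32 give 0x65646571, and OSAL_CPU_TO_BE32() byte-swaps that to
0x71656465, i.e. the value of "qede" interpreted as a big-endian word,
which is the format the MFW expects:

	/* Illustrative only: big-endian packing of a 4-byte name chunk */
	const char name[4] = { 'q', 'e', 'd', 'e' };
	u32 val = OSAL_CPU_TO_BE32(*(const u32 *)name);	/* 0x71656465 */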

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 245d478..df6ebd2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1662,6 +1662,7 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	p_drv_version->version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
+		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
 		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
-- 
1.7.10.3


* [PATCH v2 12/61] net/qede/base: use default MTU from shared memory
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (11 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 11/61] net/qede/base: add comment Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
                     ` (49 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Read and use the default MTU value from shared memory.
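
A short illustrative consumer of the new field (hypothetical sketch; the
PMD obtains dev_info via qed_fill_dev_info() as shown below):

	/* Illustrative: seed the port MTU from the MFW default; the base
	 * code already falls back to 1500 when shared memory reports none.
	 */
	u16 mtu = dev_info.mtu ? dev_info.mtu : 1500;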

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    2 ++
 drivers/net/qede/base/ecore_dev.c     |    3 +++
 drivers/net/qede/base/ecore_mcp.c     |    5 +++++
 drivers/net/qede/base/ecore_mcp_api.h |    2 ++
 drivers/net/qede/qede_if.h            |    1 +
 drivers/net/qede/qede_main.c          |    2 ++
 6 files changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index a6cf52e..25c96f8 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -377,6 +377,8 @@ struct ecore_hw_info {
 
 	/* Default DCBX mode */
 	u8 dcbx_mode;
+
+	u16 mtu;
 };
 
 struct ecore_hw_cid_data {
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index ee50090..87c1c23 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2879,6 +2879,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_get_num_funcs(p_hwfn, p_ptt);
 
+	if (ecore_mcp_is_init(p_hwfn))
+		p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu;
+
 	/* In case of forcing the driver's default resource allocation, calling
 	 * ecore_hw_get_resc() should come after initializing the personality
 	 * and after getting the number of functions, since the calculation of
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index df6ebd2..8720ae7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1431,6 +1431,11 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
+	info->mtu = (u16)shmem_info.mtu_size;
+
+	if (info->mtu == 0)
+		info->mtu = 1500;
+
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 72a58e4..1be22dd 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -84,6 +84,8 @@ struct ecore_mcp_function_info {
 
 #define ECORE_MCP_VLAN_UNSET		(0xffff)
 	u16 ovlan;
+
+	u16 mtu;
 };
 
 struct ecore_mcp_nvm_common {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4b23bb9..18404fb 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -34,6 +34,7 @@ struct qed_dev_info {
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
+	u16 mtu;
 	/* To be added... */
 };
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 332b1f8..e76346e 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -365,6 +365,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 				      &dev_info->mfw_rev, NULL);
 	}
 
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	return 0;
 }
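
The fallback policy itself is simple; a compileable sketch with illustrative
names (not the driver's API) of what the shmem read amounts to:

#include <stdint.h>
#include <stdio.h>

#define QEDE_DEFAULT_MTU 1500	/* assumed classic-Ethernet fallback */

/* Illustrative helper: shared memory reports 0 when the MFW did not
 * configure an MTU, in which case the driver supplies its own default.
 */
static uint16_t resolve_default_mtu(uint32_t shmem_mtu_size)
{
	uint16_t mtu = (uint16_t)shmem_mtu_size;

	return mtu != 0 ? mtu : QEDE_DEFAULT_MTU;
}

int main(void)
{
	printf("%u %u\n", resolve_default_mtu(0), resolve_default_mtu(9000));
	return 0;	/* prints: 1500 9000 */
}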
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (12 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 12/61] net/qede/base: use default MTU from shared memory Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 14/61] net/qede/base: update MFW when default MTU is changed Rasesh Mody
                     ` (48 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the queue/sb-id values from 8 bit fields to 16 bit fields.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |    8 ++++----
 drivers/net/qede/base/ecore_dev_api.h |    4 ++--
 drivers/net/qede/base/ecore_l2.c      |    2 +-
 drivers/net/qede/base/ecore_l2_api.h  |    2 +-
 drivers/net/qede/base/ecore_sriov.c   |    4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 87c1c23..7a501bb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3876,7 +3876,7 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3897,7 +3897,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3919,7 +3919,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3941,7 +3941,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 0dee68a..e7332ac 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -535,7 +535,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
  */
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 /**
  * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
@@ -553,6 +553,6 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
  */
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 22bb43d..1379a1b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -212,7 +212,7 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
 		rc = ecore_fw_l2_queue(p_hwfn,
-				       (u8)p_rss->rss_ind_table[i],
+				       p_rss->rss_ind_table[i],
 				       &abs_l2_queue);
 		if (rc != ECORE_SUCCESS)
 			return rc;
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 247316b..8f7b614 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -37,7 +37,7 @@ struct ecore_queue_start_common_params {
 	/* q_zone_id is relative, may be different from queue id
 	 * currently used by Tx-only, upper-bounded by number of FW-queues
 	 */
-	u8 qzone_id;
+	u16 qzone_id;
 
 	/* stats_id is relative or absolute depends on function */
 	u8 stats_id;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b051678..6e86966 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2118,8 +2118,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
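
The motivation is worth spelling out: an 8-bit field silently truncates any
queue/sb id above 255. A tiny demonstration (not driver code) of the failure
mode the widening prevents:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t fw_qid = 300;			/* legal once queue counts exceed 255 */
	uint8_t old_field = (uint8_t)fw_qid;	/* silently becomes 44: wrong queue */
	uint16_t new_field = fw_qid;		/* preserved intact */

	printf("u8: %u, u16: %u\n", old_field, new_field);
	return 0;
}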
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 14/61] net/qede/base: update MFW when default MTU is changed
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (13 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 15/61] net/qede/base: prevent device init failure Rasesh Mody
                     ` (47 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Send a mailbox command to the management FW when the default MTU changes.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   11 +++++++++++
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a501bb..13e13ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1629,6 +1629,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	u32 load_code, param, drv_mb_param;
+	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
 	int i;
 
@@ -1648,6 +1649,12 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		/* If management didn't provide a default, set one of our own */
+		if (!p_hwfn->hw_info.mtu) {
+			p_hwfn->hw_info.mtu = 1500;
+			b_default_mtu = false;
+		}
+
 		if (IS_VF(p_dev)) {
 			p_hwfn->b_int_enabled = 1;
 			continue;
@@ -1776,6 +1783,10 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			return rc;
 		}
 
+		if (!b_default_mtu)
+			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						p_hwfn->hw_info.mtu);
+
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 8720ae7..0338576 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1438,9 +1438,6 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->mtu = (u16)shmem_info.mtu_size;
 
-	if (info->mtu == 0)
-		info->mtu = 1500;
-
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 15/61] net/qede/base: prevent device init failure
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (14 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 14/61] net/qede/base: update MFW when default MTU is changed Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 16/61] net/qede/base: read card personality via MFW commands Rasesh Mody
                     ` (46 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Device initialization should not fail just because a FW interface
command is unavailable.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 13e13ba..7494f93 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1778,18 +1778,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(p_hwfn, "Failed to send firmware version\n");
-			return rc;
-		}
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update firmware version\n");
 
 		if (!b_default_mtu)
-			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
-						p_hwfn->hw_info.mtu);
+			rc = ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						      p_hwfn->hw_info.mtu);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update default mtu\n");
 
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update driver state\n");
 	}
 
 	return rc;
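
In words: the post-load MFW updates become best-effort, logged on failure
rather than aborting device init. An illustrative helper (the driver logs via
DP_INFO instead; names here are hypothetical):

#include <stdio.h>

enum status { SUCCESS = 0, FAIL = 1 };

static enum status finish_init(enum status fw_ver_rc, enum status mtu_rc)
{
	if (fw_ver_rc != SUCCESS)
		printf("Failed to update firmware version (non-fatal)\n");
	if (mtu_rc != SUCCESS)
		printf("Failed to update default mtu (non-fatal)\n");
	return SUCCESS;	/* optional updates never abort initialization */
}

int main(void)
{
	finish_init(FAIL, SUCCESS);	/* logs a warning, init proceeds */
	return 0;
}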
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 16/61] net/qede/base: read card personality via MFW commands
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (15 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 15/61] net/qede/base: prevent device init failure Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 17/61] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
                     ` (45 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to read NIC personality via management FW for non-L2
protocols.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h       |   16 +++++++++++++-
 drivers/net/qede/base/ecore_dev.c   |   17 +++++----------
 drivers/net/qede/base/ecore_mcp.c   |   41 +++++++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_sriov.c |    1 +
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25c96f8..842a3b5 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -243,7 +243,8 @@ enum ecore_pci_personality {
 	ECORE_PCI_FCOE,
 	ECORE_PCI_ISCSI,
 	ECORE_PCI_ETH_ROCE,
-	ECORE_PCI_IWARP,
+	ECORE_PCI_ETH_IWARP,
+	ECORE_PCI_ETH_RDMA,
 	ECORE_PCI_DEFAULT /* default in shmem */
 };
 
@@ -328,6 +329,19 @@ enum ecore_hw_err_type {
 struct ecore_hw_info {
 	/* PCI personality */
 	enum ecore_pci_personality personality;
+#define ECORE_IS_RDMA_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE ||  \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_ROCE_PERSONALITY(dev)			   \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_IWARP_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_L2_PERSONALITY(dev)		      \
+	((dev)->hw_info.personality == ECORE_PCI_ETH || \
+	 ECORE_IS_RDMA_PERSONALITY(dev))
 
 	/* Resource Allocation scheme results */
 	u32 resc_start[ECORE_MAX_RESC];
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7494f93..1b033b7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -219,9 +219,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	 * don't have a good recycle flow. Non ethernet PFs require only a
 	 * single physical queue.
 	 */
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_ETH)
+	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
 		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
 	else
 		protocol_pqs = 1;
@@ -229,7 +227,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
 	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 		num_pqs++;	/* for RoCE queue */
 		init_rdma_offload_pq = true;
 		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
@@ -259,7 +257,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		qm_info->num_pf_rls = (u8)num_pf_rls;
 	}
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_IWARP) {
+	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
 		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
 		init_rdma_offload_pq = true;
 		init_pure_ack_pq = true;
@@ -335,9 +333,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		struct init_qm_pq_params *params =
 		    &qm_info->qm_pq_params[curr_queue++];
 
-		if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
+		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
 			params->vport_id = vport_id;
 			params->tc_id = i;
 			/* Note: this assumes that if we had a configuration
@@ -612,8 +608,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 		/* EQ */
 		n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
-		if ((p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) ||
-		    (p_hwfn->hw_info.personality == ECORE_PCI_IWARP)) {
+		if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
 			/* Calculate the EQ size
 			 * ---------------------
 			 * Each ICID may generate up to one event at a time i.e.
@@ -636,7 +631,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 *          smaller than RoCE's so we avoid exact
 			 *          calculation.
 			 */
-			if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
 				    ecore_cxt_get_proto_cid_count(
 						p_hwfn,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0338576..9f897b5 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1373,16 +1373,47 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+/* @DPDK */
+/* Old MFW has a global configuration for all PFs regarding RDMA support */
+static void
+ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
+				 enum ecore_pci_personality *p_proto)
+{
+	*p_proto = ECORE_PCI_ETH;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to Legacy capabilities, L2 personality is %08x\n",
+		   (u32)*p_proto);
+}
+
+/* @DPDK */
+static enum _ecore_status_t
+ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      enum ecore_pci_personality *p_proto)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
+		   (u32)*p_proto, resp, param);
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 			  struct public_func *p_info,
+			  struct ecore_ptt *p_ptt,
 			  enum ecore_pci_personality *p_proto)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	switch (p_info->config & FUNC_MF_CFG_PROTOCOL_MASK) {
 	case FUNC_MF_CFG_PROTOCOL_ETHERNET:
-		*p_proto = ECORE_PCI_ETH;
+		if (ecore_mcp_get_shmem_proto_mfw(p_hwfn, p_ptt, p_proto) !=
+		    ECORE_SUCCESS)
+			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
 	default:
 		rc = ECORE_INVAL;
@@ -1403,7 +1434,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	info->pause_on_host = (shmem_info.config &
 			       FUNC_MF_CFG_PAUSE_ON_HOST_RING) ? 1 : 0;
 
-	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, &info->protocol)) {
+	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+				      &info->protocol)) {
 		DP_ERR(p_hwfn, "Unknown personality %08x\n",
 		       (u32)(shmem_info.config & FUNC_MF_CFG_PROTOCOL_MASK));
 		return ECORE_INVAL;
@@ -1559,8 +1591,9 @@ int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 		if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
 			continue;
 
-		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info,
-					      &protocol) != ECORE_SUCCESS)
+		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+					      &protocol) !=
+		    ECORE_SUCCESS)
 			continue;
 
 		if ((1 << ((u32)protocol)) & personalities)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6e86966..578899c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -86,6 +86,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:
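
A stand-alone restatement (simplified names, not the driver's enums) of why
the new ECORE_PCI_ETH_RDMA value must appear in both the RoCE and the iWARP
macros: it acts as a wildcard personality that satisfies both checks.

#include <stdbool.h>
#include <stdio.h>

enum personality { ETH, ETH_ROCE, ETH_IWARP, ETH_RDMA };

static bool is_roce(enum personality p)
{
	return p == ETH_ROCE || p == ETH_RDMA;
}

static bool is_iwarp(enum personality p)
{
	return p == ETH_IWARP || p == ETH_RDMA;
}

int main(void)
{
	/* ETH_RDMA passes both tests */
	printf("%d %d\n", is_roce(ETH_RDMA), is_iwarp(ETH_RDMA)); /* 1 1 */
	return 0;
}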
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 17/61] net/qede/base: allow probe to succeed with minor HW-issues
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (16 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 16/61] net/qede/base: read card personality via MFW commands Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 18/61] net/qede/base: remove unneeded step in HW init Rasesh Mody
                     ` (44 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow probe to succeed, if requested, even when various 'minor' HW issues are detected.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   71 +++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_dev_api.h |   40 ++++++++++++++++---
 2 files changed, 94 insertions(+), 17 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1b033b7..907566c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2445,12 +2445,15 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
+static enum _ecore_status_t
+ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      struct ecore_hw_prepare_params *p_params)
 {
 	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
 	struct ecore_mcp_link_params *link;
+	enum _ecore_status_t rc;
 
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
@@ -2458,6 +2461,8 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	/* Verify MCP has initialized it */
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_NVM;
 		return ECORE_INVAL;
 	}
 
@@ -2643,7 +2648,13 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
 			     &p_hwfn->hw_info.device_capabilities);
 
-	return ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
@@ -2797,15 +2808,22 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		  enum ecore_pci_personality personality, bool drv_resc_alloc)
+		  enum ecore_pci_personality personality,
+		  struct ecore_hw_prepare_params *p_params)
 {
+	bool drv_resc_alloc = p_params->drv_resc_alloc;
 	enum _ecore_status_t rc;
 
 	/* Since all information is common, only first hwfns should do this */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_iov_hw_info(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_BAD_IOV;
+			else
+				return rc;
+		}
 	}
 
 	/* TODO In get_hw_info, amongst others:
@@ -2820,7 +2838,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
 #endif
-	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt);
+	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 #ifndef ASIC_ONLY
@@ -2828,8 +2846,12 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	if (rc != ECORE_SUCCESS) {
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_IGU;
+		else
+			return rc;
+	}
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) {
@@ -2896,7 +2918,13 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	 * the resources/features depends on them.
 	 * This order is not harmful if not forcing.
 	 */
-	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	rc = ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
@@ -3028,6 +3056,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
 		DP_ERR(p_hwfn,
 		       "Reading the ME register returns all Fs; Preventing further chip access\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME;
 		return ECORE_INVAL;
 	}
 
@@ -3037,6 +3067,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_ptt_pool_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to prepare hwfn's hw\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err0;
 	}
 
@@ -3046,8 +3078,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	/* First hwfn learns basic information, e.g., number of hwfns */
 	if (!p_hwfn->my_id) {
 		rc = ecore_get_dev_info(p_dev);
-		if (rc != ECORE_SUCCESS)
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+					ECORE_HW_PREPARE_FAILED_DEV;
 			goto err1;
+		}
 	}
 
 	ecore_hw_hwfn_prepare(p_hwfn);
@@ -3056,12 +3092,14 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_mcp_cmd_init(p_hwfn, p_hwfn->p_main_ptt);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed initializing mcp command\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err1;
 	}
 
 	/* Read the device configuration information from the HW and SHMEM */
 	rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
-			       p_params->personality, p_params->drv_resc_alloc);
+			       p_params->personality, p_params);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to get HW information\n");
 		goto err2;
@@ -3094,6 +3132,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_init_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate the init array\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err2;
 	}
 #ifndef ASIC_ONLY
@@ -3129,6 +3169,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
 
+	if (p_params->b_relaxed_probe)
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
+
 	/* Store the precompiled init data ptrs */
 	if (IS_PF(p_dev))
 		ecore_init_iro_array(p_dev);
@@ -3164,6 +3207,10 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 		 * initiliazed hwfn 0.
 		 */
 		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_FAILED_ENG2;
+
 			if (IS_PF(p_dev)) {
 				ecore_init_free(p_hwfn);
 				ecore_mcp_free(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index e7332ac..74a15ef 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -138,17 +138,47 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
  */
 enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
 
+enum ecore_hw_prepare_result {
+	ECORE_HW_PREPARE_SUCCESS,
+
+	/* FAILED results indicate probe has failed & cleaned up */
+	ECORE_HW_PREPARE_FAILED_ENG2,
+	ECORE_HW_PREPARE_FAILED_ME,
+	ECORE_HW_PREPARE_FAILED_MEM,
+	ECORE_HW_PREPARE_FAILED_DEV,
+	ECORE_HW_PREPARE_FAILED_NVM,
+
+	/* BAD results indicate probe is passed even though some wrongness
+	 * has occurred; trying to actually use it [i.e., hw_init()] might have
+	 * dire repercussions.
+	 */
+	ECORE_HW_PREPARE_BAD_IOV,
+	ECORE_HW_PREPARE_BAD_MCP,
+	ECORE_HW_PREPARE_BAD_IGU,
+};
+
 struct ecore_hw_prepare_params {
-	/* personality to initialize */
+	/* Personality to initialize */
 	int personality;
-	/* force the driver's default resource allocation */
+
+	/* Force the driver's default resource allocation */
 	bool drv_resc_alloc;
-	/* check the reg_fifo after any register access */
+
+	/* Check the reg_fifo after any register access */
 	bool chk_reg_fifo;
-	/* request the MFW to initiate PF FLR */
+
+	/* Request the MFW to initiate PF FLR */
 	bool initiate_pf_flr;
-	/* the OS Epoch time in seconds */
+
+	/* The OS Epoch time in seconds */
 	u32 epoch;
+
+	/* Allow prepare to pass even if some initializations are failing.
+	 * If set, the `p_prepare_res' field would be set with the return,
+	 * and might allow probe to pass even if there are certain issues.
+	 */
+	bool b_relaxed_probe;
+	enum ecore_hw_prepare_result p_relaxed_res;
 };
 
 /**
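
A sketch of how a caller could consume the new result field, using stand-in
types that mirror the header above (the flow is illustrative, not the PMD's
actual probe path): FAILED_* means probe aborted and cleaned up, while BAD_*
means probe passed in a degraded state that the caller must judge.

#include <stdbool.h>
#include <stdio.h>

enum hw_prepare_result {
	HW_PREPARE_SUCCESS,
	HW_PREPARE_FAILED_ME,	/* FAILED_*: probe aborted and cleaned up */
	HW_PREPARE_BAD_IGU,	/* BAD_*: probe passed, but with a wart */
};

struct hw_prepare_params {
	bool b_relaxed_probe;
	enum hw_prepare_result p_relaxed_res;
};

static void probe(struct hw_prepare_params *p)
{
	/* pretend an IGU problem was found during probe */
	p->p_relaxed_res = p->b_relaxed_probe ?
		HW_PREPARE_BAD_IGU :	/* tolerated under relaxed probe */
		HW_PREPARE_FAILED_ME;
}

int main(void)
{
	struct hw_prepare_params params = { .b_relaxed_probe = true };

	probe(&params);
	if (params.p_relaxed_res != HW_PREPARE_SUCCESS)
		printf("probe degraded: result %d\n", params.p_relaxed_res);
	return 0;
}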
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 18/61] net/qede/base: remove unneeded step in HW init
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (17 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 17/61] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 19/61] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
                     ` (43 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

There is no need to close the NIG OUT_EN registers, so remove that step.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 907566c..e2d4132 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -999,18 +999,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
-	/* Close gate from NIG to BRB/Storm; By default they are open, but
-	 * we close them to prevent NIG from passing data to reset blocks.
-	 * Should have been done in the ENGINE phase, but init-tool lacks
-	 * proper port-pretend capabilities.
-	 */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_pretend(p_hwfn, p_ptt, p_hwfn->port_id ^ 1);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_unpretend(p_hwfn, p_ptt);
-
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_ENGINE, ANY_PHASE_ID, hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 19/61] net/qede/base: allow only trusted VFs to be promisc
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (18 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 18/61] net/qede/base: remove unneeded step in HW init Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 20/61] net/qede/base: qm initialization revamp Rasesh Mody
                     ` (42 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow only trusted VFs to be promisc/multi-promisc. The reasonable
behavior is to honor a VF's 'trusted' designation instead of simply
allowing any VF to become promiscuous.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c    |    8 ++++----
 drivers/net/qede/base/ecore_sriov.c |    2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 1379a1b..d2e1719 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -274,8 +274,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->rx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->rx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 
 	/* Set Tx mode accept flags */
@@ -298,8 +298,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->tx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->tx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->tx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 578899c..a302e9e 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2626,7 +2626,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	 */
 	tlvs_accepted = tlvs_mask;
 
-#ifndef LINUX_REMOVE
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2634,7 +2633,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_NOT_SUPPORTED;
 		goto out;
 	}
-#endif
 
 	if (!tlvs_accepted) {
 		if (tlvs_mask)
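
The policy reduces to a simple gate; a trivial sketch with hypothetical
names (the real decision is taken in the OSAL_IOV_VF_VPORT_UPDATE callback):

#include <stdbool.h>
#include <stdio.h>

/* A VF's request to enter promiscuous mode is honored only if the PF
 * marked that VF as trusted.
 */
static bool may_set_promisc(bool vf_requested, bool vf_trusted)
{
	return vf_requested && vf_trusted;
}

int main(void)
{
	printf("untrusted VF: %d, trusted VF: %d\n",
	       may_set_promisc(true, false), may_set_promisc(true, true));
	return 0;	/* prints: untrusted VF: 0, trusted VF: 1 */
}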
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 20/61] net/qede/base: qm initialization revamp
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (19 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 19/61] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 21/61] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
                     ` (41 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch revamps QM (queue manager) initialization: the monolithic
ecore_init_qm_info() is split into small helpers, with explicit
per-PQ-type accounting and overflow checks.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h    |    2 +
 drivers/net/qede/base/ecore.h       |   34 +-
 drivers/net/qede/base/ecore_cxt.c   |   14 +-
 drivers/net/qede/base/ecore_dev.c   |  869 ++++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_hw.c    |   38 --
 drivers/net/qede/base/ecore_l2.c    |   12 +-
 drivers/net/qede/base/ecore_l2.h    |    2 +-
 drivers/net/qede/base/ecore_spq.c   |    9 +-
 drivers/net/qede/base/ecore_sriov.c |   13 +-
 9 files changed, 645 insertions(+), 348 deletions(-)
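
Before the diff, a compileable sketch of the new counting scheme may help;
the resource getters are replaced by illustrative constants, so this is an
aid to reading, not the driver's code:

#include <stdio.h>
#include <stdint.h>

#define PQ_FLAGS_RLS	(1 << 0)
#define PQ_FLAGS_MCOS	(1 << 1)
#define PQ_FLAGS_LB	(1 << 2)
#define PQ_FLAGS_OOO	(1 << 3)
#define PQ_FLAGS_ACK	(1 << 4)
#define PQ_FLAGS_OFLD	(1 << 5)
#define PQ_FLAGS_VFS	(1 << 6)

/* Same counting scheme as ecore_init_qm_get_num_pqs(): each feature
 * flag contributes its own PQs only when set.
 */
static uint16_t num_pqs(uint32_t flags, uint16_t num_tcs, uint16_t num_vfs,
			uint16_t num_pf_rls)
{
	return (!!(flags & PQ_FLAGS_RLS)) * num_pf_rls +
	       (!!(flags & PQ_FLAGS_MCOS)) * num_tcs +
	       (!!(flags & PQ_FLAGS_LB)) +
	       (!!(flags & PQ_FLAGS_OOO)) +
	       (!!(flags & PQ_FLAGS_ACK)) +
	       (!!(flags & PQ_FLAGS_OFLD)) +
	       (!!(flags & PQ_FLAGS_VFS)) * num_vfs;
}

int main(void)
{
	/* L2 PF with SR-IOV: per-TC PQs + one pure-LB PQ + one PQ per VF */
	uint32_t flags = PQ_FLAGS_LB | PQ_FLAGS_MCOS | PQ_FLAGS_VFS;

	printf("%u\n", num_pqs(flags, 8, 16, 0)); /* 8 + 1 + 16 = 25 */
	return 0;
}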

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 0d239c9..63ee6d5 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -320,6 +320,8 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
 #define OSAL_BUILD_BUG_ON(cond)		nothing
 #define ETH_ALEN			ETHER_ADDR_LEN
 
+#define OSAL_BITMAP_WEIGHT(bitmap, count) 0
+
 #define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 842a3b5..58c97a3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -445,11 +445,13 @@ struct ecore_qm_info {
 	struct init_qm_port_params  *qm_port_params;
 	u16			start_pq;
 	u8			start_vport;
-	u8			pure_lb_pq;
-	u8			offload_pq;
-	u8			pure_ack_pq;
-	u8			ooo_pq;
-	u8			vf_queues_offset;
+	u16			pure_lb_pq;
+	u16			offload_pq;
+	u16			pure_ack_pq;
+	u16			ooo_pq;
+	u16			first_vf_pq;
+	u16			first_mcos_pq;
+	u16			first_rl_pq;
 	u16			num_pqs;
 	u16			num_vf_pqs;
 	u8			num_vports;
@@ -828,6 +830,28 @@ int ecore_device_num_ports(struct ecore_dev *p_dev);
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 			   u8 *mac);
 
+/* Flags for indication of required queues */
+#define PQ_FLAGS_RLS	(1 << 0)
+#define PQ_FLAGS_MCOS	(1 << 1)
+#define PQ_FLAGS_LB	(1 << 2)
+#define PQ_FLAGS_OOO	(1 << 3)
+#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_OFLD	(1 << 5)
+#define PQ_FLAGS_VFS	(1 << 6)
+
+/* physical queue index for cm context initialization */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags);
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc);
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 qpid);
+
+/* amount of resources used in qm init */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
+
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 2635030..aeeabf1 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1372,18 +1372,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
 }
 
 /* CM PF */
-static enum _ecore_status_t ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
 {
-	union ecore_qm_pq_params pq_params;
-	u16 pq;
-
-	/* XCM pure-LB queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, pq);
-
-	return ECORE_SUCCESS;
+	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET,
+		     ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB));
 }
 
 /* DQ PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e2d4132..380c5ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -178,282 +178,575 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	}
 }
 
-static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
-					       bool b_sleepable)
+/******************** QM initialization *******************/
+
+/* bitmaps for indicating active traffic classes.
+ * Special case for Arrowhead 4 port
+ */
+/* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
+#define ACTIVE_TCS_BMAP 0x9f
+/* 0..3 actually used, OOO and high priority stuff all use 3 */
+#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
+
+/* determines the physical queue flags for a given PF. */
+static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 {
-	u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue;
-	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct init_qm_port_params *p_qm_port;
-	bool init_rdma_offload_pq = false;
-	bool init_pure_ack_pq = false;
-	bool init_ooo_pq = false;
-	u16 num_pqs, protocol_pqs;
-	u16 num_pf_rls = 0;
-	u16 num_vfs = 0;
-	u32 pf_rl;
-	u8 pf_wfq;
-
-	/* @TMP - saving the existing min/max bw config before resetting the
-	 * qm_info to restore them.
-	 */
-	pf_rl = qm_info->pf_rl;
-	pf_wfq = qm_info->pf_wfq;
+	u32 flags;
 
-#ifdef CONFIG_ECORE_SRIOV
-	if (p_hwfn->p_dev->p_iov_info)
-		num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
-#endif
-	OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
+	/* common flags */
+	flags = PQ_FLAGS_LB;
 
-#ifndef ASIC_ONLY
-	/* @TMP - Don't allocate QM queues for VFs on emulation */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "Emulation - skip configuring QM queues for VFs\n");
-		num_vfs = 0;
+	/* feature flags */
+	if (IS_ECORE_SRIOV(p_hwfn->p_dev))
+		flags |= PQ_FLAGS_VFS;
+
+	/* protocol flags */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_ETH:
+		flags |= PQ_FLAGS_MCOS;
+		break;
+	case ECORE_PCI_FCOE:
+		flags |= PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ISCSI:
+		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_ROCE:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_IWARP:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_ACK | PQ_FLAGS_OOO |
+			 PQ_FLAGS_OFLD;
+		break;
+	default:
+		DP_ERR(p_hwfn, "unknown personality %d\n",
+		       p_hwfn->hw_info.personality);
+		return 0;
 	}
-#endif
+	return flags;
+}
 
-	/* ethernet PFs require a pq per tc. Even if only a subset of the TCs
-	 * active, we want physical queues allocated for all of them, since we
-	 * don't have a good recycle flow. Non ethernet PFs require only a
-	 * single physical queue.
-	 */
-	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
-		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
-	else
-		protocol_pqs = 1;
-
-	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
-	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
-
-	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
-		num_pqs++;	/* for RoCE queue */
-		init_rdma_offload_pq = true;
-		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
-			/* Due to FW assumption that rl==vport, we limit the
-			 * number of rate limiters by the minimum between its
-			 * allocated number and the allocated number of vports.
-			 * Another limitation is the number of supported qps
-			 * with rate limiters in FW.
-			 */
-			num_pf_rls =
-			    (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
-					     RESC_NUM(p_hwfn, ECORE_VPORT));
+/* Getters for resource amounts necessary for qm initialization */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->hw_info.num_hw_tc;
+}
 
-			/* we subtract num_vfs because each one requires a rate
-			 * limiter, and one default rate limiter.
-			 */
-			if (num_pf_rls < num_vfs + 1) {
-				DP_ERR(p_hwfn, "No RL for DCQCN");
-				DP_ERR(p_hwfn, "[num_pf_rls %d num_vfs %d]\n",
-				       num_pf_rls, num_vfs);
-				return ECORE_INVAL;
-			}
-			num_pf_rls -= num_vfs + 1;
-		}
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn)
+{
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
+			p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+}
 
-		num_pqs += num_pf_rls;
-		qm_info->num_pf_rls = (u8)num_pf_rls;
-	}
+#define NUM_DEFAULT_RLS 1
 
-	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
-		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
-		init_rdma_offload_pq = true;
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
-		num_pqs += 2;	/* for iSCSI pure-ACK / OOO queue */
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+	/* @DPDK */
+	/* num RLs can't exceed resource amount of rls or vports or the
+	 * dcqcn qps
+	 */
+	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
+				     (u16)RESC_NUM(p_hwfn, ECORE_VPORT));
 
-	/* Sanity checking that setup requires legal number of resources */
-	if (num_pqs > RESC_NUM(p_hwfn, ECORE_PQ)) {
-		DP_ERR(p_hwfn,
-		       "Need too many Physical queues - 0x%04x avail %04x",
-		       num_pqs, RESC_NUM(p_hwfn, ECORE_PQ));
-		return ECORE_INVAL;
+	/* make sure after we reserve the default and VF rls we'll have
+	 * something left
+	 */
+	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) {
+		DP_NOTICE(p_hwfn, false,
+			  "no rate limiters left for PF rate limiting"
+			  " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs);
+		return 0;
 	}
 
-	/* PQs will be arranged as follows: First per-TC PQ, then pure-LB queue,
-	 * then special queues (iSCSI pure-ACK / RoCE), then per-VF PQ.
+	/* subtract rls necessary for VFs and one default one for the PF */
+	num_pf_rls -= num_vfs + NUM_DEFAULT_RLS;
+
+	return num_pf_rls;
+}
+
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	/* all pqs share the same vport (hence the 1 below), except for vfs
+	 * and pf_rl pqs
 	 */
-	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					    b_sleepable ? GFP_KERNEL :
-					    GFP_ATOMIC,
-					    sizeof(struct init_qm_pq_params) *
-					    num_pqs);
-	if (!qm_info->qm_pq_params)
-		goto alloc_err;
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+}
 
-	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					       b_sleepable ? GFP_KERNEL :
-					       GFP_ATOMIC,
-					       sizeof(struct
-						      init_qm_vport_params) *
-					       num_vports);
-	if (!qm_info->qm_vport_params)
-		goto alloc_err;
+/* calc amount of PQs according to the requested flags */
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_MCOS & pq_flags)) *
+		ecore_init_qm_get_num_tcs(p_hwfn) +
+	       (!!(PQ_FLAGS_LB & pq_flags)) +
+	       (!!(PQ_FLAGS_OOO & pq_flags)) +
+	       (!!(PQ_FLAGS_ACK & pq_flags)) +
+	       (!!(PQ_FLAGS_OFLD & pq_flags)) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn);
+}
 
-	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					      b_sleepable ? GFP_KERNEL :
-					      GFP_ATOMIC,
-					      sizeof(struct init_qm_port_params)
-					      * MAX_NUM_PORTS);
-	if (!qm_info->qm_port_params)
-		goto alloc_err;
+/* initialize the top level QM params */
+static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev,
-					b_sleepable ? GFP_KERNEL :
-					GFP_ATOMIC,
-					sizeof(struct ecore_wfq_data) *
-					num_vports);
+	/* pq and vport bases for this PF */
+	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
 
-	if (!qm_info->wfq_data)
-		goto alloc_err;
+	/* rate limiting and weighted fair queueing are always enabled */
+	qm_info->vport_rl_en = 1;
+	qm_info->vport_wfq_en = 1;
 
-	vport_id = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	/* in AH 4 port we have fewer TCs per port */
+	qm_info->max_phys_tcs_per_port =
+		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
+			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+}
 
-	/* First init rate limited queues ( Due to RoCE assumption of
-	 * qpid=rlid )
-	 */
-	for (curr_queue = 0; curr_queue < num_pf_rls; curr_queue++) {
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id++;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-	};
-
-	/* Protocol PQs */
-	for (i = 0; i < protocol_pqs; i++) {
-		struct init_qm_pq_params *params =
-		    &qm_info->qm_pq_params[curr_queue++];
-
-		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
-			params->vport_id = vport_id;
-			params->tc_id = i;
-			/* Note: this assumes that if we had a configuration
-			 * with N tcs and subsequently another configuration
-			 * With Fewer TCs, the in flight traffic (in QM queues,
-			 * in FW, from driver to FW) will still trickle out and
-			 * not get "stuck" in the QM. This is determined by the
-			 * NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ. Unused TCs are
-			 * supposed to be cleared in this map, allowing traffic
-			 * to flush out. If this is not the case, we would need
-			 * to set the TC of unused queues to 0, and reconfigure
-			 * QM every time num of TCs changes. Unused queues in
-			 * this context would mean those intended for TCs where
-			 * tc_id > hw_info.num_active_tcs.
-			 */
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		} else {
-			params->vport_id = vport_id;
-			params->tc_id = p_hwfn->hw_info.offload_tc;
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		}
-	}
+/* initialize qm vport params */
+static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 i;
 
-	/* Then init pure-LB PQ */
-	qm_info->pure_lb_pq = curr_queue;
-	qm_info->qm_pq_params[curr_queue].vport_id =
-	    (u8)RESC_START(p_hwfn, ECORE_VPORT);
-	qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
-	qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-	curr_queue++;
-
-	qm_info->offload_pq = 0;	/* Already initialized for iSCSI/FCoE */
-	if (init_rdma_offload_pq) {
-		qm_info->offload_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_pure_ack_pq) {
-		qm_info->pure_ack_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_ooo_pq) {
-		qm_info->ooo_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id = DCBX_ISCSI_OOO_TC;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	/* Then init per-VF PQs */
-	vf_offset = curr_queue;
-	for (i = 0; i < num_vfs; i++) {
-		/* First vport is used by the PF */
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
-		/* @@@TBD VF Multi-cos */
-		qm_info->qm_pq_params[curr_queue].tc_id = 0;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-		curr_queue++;
-	};
-
-	qm_info->vf_queues_offset = vf_offset;
-	qm_info->num_pqs = num_pqs;
-	qm_info->num_vports = num_vports;
+	/* all vports participate in weighted fair queueing */
+	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
+		qm_info->qm_vport_params[i].vport_wfq = 1;
+}
 
+/* initialize qm port params */
+static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
+{
 	/* Initialize qm port parameters */
-	num_ports = p_hwfn->p_dev->num_ports_in_engines;
+	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engines;
+
+	/* indicate how ooo and high pri traffic is dealt with */
+	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
+		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP;
+
 	for (i = 0; i < num_ports; i++) {
-		p_qm_port = &qm_info->qm_port_params[i];
+		struct init_qm_port_params *p_qm_port =
+			&p_hwfn->qm_info.qm_port_params[i];
+
 		p_qm_port->active = 1;
-		/* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
-		 * be in place
-		 */
-		if (num_ports == 4)
-			p_qm_port->active_phys_tcs = 0xf;
-		else
-			p_qm_port->active_phys_tcs = 0x9f;
+		p_qm_port->active_phys_tcs = active_phys_tcs;
 		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
+}
 
-	if (ECORE_IS_AH(p_hwfn->p_dev) && (num_ports == 4))
-		qm_info->max_phys_tcs_per_port = NUM_PHYS_TCS_4PORT_K2;
-	else
-		qm_info->max_phys_tcs_per_port = NUM_OF_PHYS_TCS;
+/* Reset the params which must be reset for qm init. QM init may be called as
+ * a result of flows other than driver load (e.g. dcbx renegotiation). Other
+ * params may be affected by the init but would simply recalculate to the same
+ * values. The allocations made for QM init, ports, vports, pqs and vfqs are not
+ * affected as these amounts stay the same.
+ */
+static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->num_pqs = 0;
+	qm_info->num_vports = 0;
+	qm_info->num_pf_rls = 0;
+	qm_info->num_vf_pqs = 0;
+	qm_info->first_vf_pq = 0;
+	qm_info->first_mcos_pq = 0;
+	qm_info->first_rl_pq = 0;
+}
+
+static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	qm_info->num_vports++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+}
+
+/* initialize a single pq and manage qm_info resources accounting.
+ * The pq_init_flags param determines whether the PQ is rate limited
+ * (for VF or PF)
+ * and whether a new vport is allocated to the pq or not (i.e. vport will be
+ * shared)
+ */
+
+/* flags for pq init */
+#define PQ_INIT_SHARE_VPORT	(1 << 0)
+#define PQ_INIT_PF_RL		(1 << 1)
+#define PQ_INIT_VF_RL		(1 << 2)
+
+/* defines for pq init */
+#define PQ_INIT_DEFAULT_WRR_GROUP	1
+#define PQ_INIT_DEFAULT_TC		0
+#define PQ_INIT_OFLD_TC			(p_hwfn->hw_info.offload_tc)
+
+static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
+			     struct ecore_qm_info *qm_info,
+			     u8 tc, u32 pq_init_flags)
+{
+	u16 pq_idx = qm_info->num_pqs, max_pq =
+					ecore_init_qm_get_num_pqs(p_hwfn);
+
+	if (pq_idx > max_pq)
+		DP_ERR(p_hwfn,
+		       "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+
+	/* init pq params */
+	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport +
+						 qm_info->num_vports;
+	qm_info->qm_pq_params[pq_idx].tc_id = tc;
+	qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP;
+	qm_info->qm_pq_params[pq_idx].rl_valid =
+		(pq_init_flags & PQ_INIT_PF_RL ||
+		 pq_init_flags & PQ_INIT_VF_RL);
+
+	/* qm params accounting */
+	qm_info->num_pqs++;
+	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
+		qm_info->num_vports++;
+
+	if (pq_init_flags & PQ_INIT_PF_RL)
+		qm_info->num_pf_rls++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+
+	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn))
+		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d,"
+		       " qm_init_get_num_pf_rls() %d\n",
+		       qm_info->num_pf_rls,
+		       ecore_init_qm_get_num_pf_rls(p_hwfn));
+}
+
+/* get pq index according to PQ_FLAGS */
+static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
+					     u32 pq_flags)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	/* Can't have multiple flags set here */
+	if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags,
+				sizeof(pq_flags)) > 1)
+		goto err;
+
+	switch (pq_flags) {
+	case PQ_FLAGS_RLS:
+		return &qm_info->first_rl_pq;
+	case PQ_FLAGS_MCOS:
+		return &qm_info->first_mcos_pq;
+	case PQ_FLAGS_LB:
+		return &qm_info->pure_lb_pq;
+	case PQ_FLAGS_OOO:
+		return &qm_info->ooo_pq;
+	case PQ_FLAGS_ACK:
+		return &qm_info->pure_ack_pq;
+	case PQ_FLAGS_OFLD:
+		return &qm_info->offload_pq;
+	case PQ_FLAGS_VFS:
+		return &qm_info->first_vf_pq;
+	default:
+		goto err;
+	}
+
+err:
+	DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
+	return OSAL_NULL;
+}
+
+/* save pq index in qm info */
+static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn,
+				  u32 pq_flags, u16 pq_val)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	*base_pq_idx = p_hwfn->qm_info.start_pq + pq_val;
+}
+
+/* get tx pq index, with the PQ TX base already set (ready for context init) */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	return *base_pq_idx + CM_TX_PQ_BASE;
+}
+
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
+{
+	u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn);
+
+	if (tc > max_tc)
+		DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+}
+
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (vf > max_vf)
+		DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+}
+
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 rl)
+{
+	u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
+
+	if (rl > max_rl)
+		DP_ERR(p_hwfn, "rl %d must be smaller than %d\n", rl, max_rl);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + rl;
+}
+
+/* Functions for creating specific types of pqs */
+static void ecore_init_qm_lb_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LB))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LB, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PURE_LB_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OOO))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
+			 PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_ACK))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OFLD))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 tc_idx;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_MCOS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_MCOS, qm_info->num_pqs);
+	for (tc_idx = 0; tc_idx < ecore_init_qm_get_num_tcs(p_hwfn); tc_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs);
 
 	qm_info->num_vf_pqs = num_vfs;
-	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC,
+				 PQ_INIT_VF_RL);
+}
 
-	for (i = 0; i < qm_info->num_vports; i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn);
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->vport_rl_en = 1;
-	qm_info->vport_wfq_en = 1;
-	qm_info->pf_rl = pf_rl;
-	qm_info->pf_wfq = pf_wfq;
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs);
+	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC,
+				 PQ_INIT_PF_RL);
+}
+
+static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
+{
+	/* rate limited pqs, must come first (FW assumption) */
+	ecore_init_qm_rl_pqs(p_hwfn);
+
+	/* pqs for multi cos */
+	ecore_init_qm_mcos_pqs(p_hwfn);
+
+	/* pure loopback pq */
+	ecore_init_qm_lb_pq(p_hwfn);
+
+	/* out of order pq */
+	ecore_init_qm_ooo_pq(p_hwfn);
+
+	/* pure ack pq */
+	ecore_init_qm_pure_ack_pq(p_hwfn);
+
+	/* pq for offloaded protocol */
+	ecore_init_qm_offload_pq(p_hwfn);
+
+	/* done sharing vports */
+	ecore_init_qm_advance_vport(p_hwfn);
+
+	/* pqs for vfs */
+	ecore_init_qm_vf_pqs(p_hwfn);
+}
+
+/* compare values of getters against resources amounts */
+static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn)
+{
+	if (ecore_init_qm_get_num_vports(p_hwfn) >
+	    RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n");
+		return ECORE_INVAL;
+	}
+
+	if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) {
+		DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n");
+		return ECORE_INVAL;
+	}
 
 	return ECORE_SUCCESS;
+}
 
- alloc_err:
-	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
-	ecore_qm_info_free(p_hwfn);
-	return ECORE_NOMEM;
+/*
+ * Function for verbose printing of the qm initialization results
+ */
+static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	struct init_qm_vport_params *vport;
+	struct init_qm_port_params *port;
+	struct init_qm_pq_params *pq;
+	int i, tc;
+
+	/* top level params */
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "qm init top level params: start_pq %d, start_vport %d,"
+		   " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n",
+		   qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq,
+		   qm_info->offload_pq, qm_info->pure_ack_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d,"
+		   " num_vports %d, max_phys_tcs_per_port %d\n",
+		   qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs,
+		   qm_info->num_vf_pqs, qm_info->num_vports,
+		   qm_info->max_phys_tcs_per_port);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d,"
+		   " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n",
+		   qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en,
+		   qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl,
+		   qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn));
+
+	/* port table */
+	for (i = 0; i < p_hwfn->p_dev->num_ports_in_engines; i++) {
+		port = &qm_info->qm_port_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "port idx %d, active %d, active_phys_tcs %d,"
+			   " num_pbf_cmd_lines %d, num_btb_blocks %d,"
+			   " reserved %d\n",
+			   i, port->active, port->active_phys_tcs,
+			   port->num_pbf_cmd_lines, port->num_btb_blocks,
+			   port->reserved);
+	}
+
+	/* vport table */
+	for (i = 0; i < qm_info->num_vports; i++) {
+		vport = &qm_info->qm_vport_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "vport idx %d, vport_rl %d, wfq %d,"
+			   " first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->vport_rl,
+			   vport->vport_wfq);
+		for (tc = 0; tc < NUM_OF_TCS; tc++)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
+				   vport->first_tx_pq_id[tc]);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n");
+	}
+
+	/* pq table */
+	for (i = 0; i < qm_info->num_pqs; i++) {
+		pq = &qm_info->qm_pq_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "pq idx %d, vport_id %d, tc %d, wrr_grp %d,"
+			   " rl_valid %d\n",
+			   qm_info->start_pq + i, pq->vport_id, pq->tc_id,
+			   pq->wrr_group, pq->rl_valid);
+	}
+}
+
+static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn)
+{
+	/* reset params required for init run */
+	ecore_init_qm_reset_params(p_hwfn);
+
+	/* init QM top level params */
+	ecore_init_qm_params(p_hwfn);
+
+	/* init QM port params */
+	ecore_init_qm_port_params(p_hwfn);
+
+	/* init QM vport params */
+	ecore_init_qm_vport_params(p_hwfn);
+
+	/* init QM physical queue params */
+	ecore_init_qm_pq_params(p_hwfn);
+
+	/* display all that init */
+	ecore_dp_init_qm_params(p_hwfn);
 }
 
 /* This function reconfigures the QM pf on the fly.
  * For this purpose we:
  * 1. reconfigure the QM database
- * 2. set new values to runtime arrat
+ * 2. set new values to runtime array
  * 3. send an sdm_qm_cmd through the rbc interface to stop the QM
  * 4. activate init tool in QM_PF stage
  * 5. send an sdm_qm_cmd through rbc interface to release the QM
@@ -462,20 +755,11 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	bool b_rc;
 	enum _ecore_status_t rc;
-
-	/* qm_info is allocated in ecore_init_qm_info() which is already called
-	 * from ecore_resc_alloc() or previous call of ecore_qm_reconf().
-	 * The allocated size may change each init, so we free it before next
-	 * allocation.
-	 */
-	ecore_qm_info_free(p_hwfn);
+	bool b_rc;
 
 	/* initialize ecore's qm data structure */
-	rc = ecore_init_qm_info(p_hwfn, false);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
 	OSAL_SPIN_LOCK(&qm_lock);
@@ -508,6 +792,48 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	enum _ecore_status_t rc;
+
+	rc = ecore_init_qm_sanity(p_hwfn);
+	if (rc != ECORE_SUCCESS)
+		goto alloc_err;
+
+	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					    sizeof(struct init_qm_pq_params) *
+					    ecore_init_qm_get_num_pqs(p_hwfn));
+	if (!qm_info->qm_pq_params)
+		goto alloc_err;
+
+	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				       sizeof(struct init_qm_vport_params) *
+				       ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->qm_vport_params)
+		goto alloc_err;
+
+	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				      sizeof(struct init_qm_port_params) *
+				      p_hwfn->p_dev->num_ports_in_engines);
+	if (!qm_info->qm_port_params)
+		goto alloc_err;
+
+	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					sizeof(struct ecore_wfq_data) *
+					ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->wfq_data)
+		goto alloc_err;
+
+	return ECORE_SUCCESS;
+
+alloc_err:
+	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
+	ecore_qm_info_free(p_hwfn);
+	return ECORE_NOMEM;
+}
+/******************** End QM initialization ***************/
+
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
 	struct ecore_consq *p_consq;
@@ -572,11 +898,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		/* Prepare and process QM requirements */
-		rc = ecore_init_qm_info(p_hwfn, true);
+		rc = ecore_alloc_qm_data(p_hwfn);
 		if (rc)
 			goto alloc_err;
 
+		/* init qm info */
+		ecore_init_qm_info(p_hwfn);
+
 		/* Compute the ILT client partition */
 		rc = ecore_cxt_cfg_ilt_compute(p_hwfn);
 		if (rc)
@@ -618,18 +946,18 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 * worst case:
 			 * - Core - according to SPQ.
 			 * - RoCE - per QP there are a couple of ICIDs, one
-			 *          responder and one requester, each can
-			 *          generate an EQE => n_eqes_qp = 2 * n_qp.
-			 *          Each CQ can generate an EQE. There are 2 CQs
-			 *          per QP => n_eqes_cq = 2 * n_qp.
-			 *          Hence the RoCE total is 4 * n_qp or
-			 *          2 * num_cons.
+			 *	  responder and one requester, each can
+			 *	  generate an EQE => n_eqes_qp = 2 * n_qp.
+			 *	  Each CQ can generate an EQE. There are 2 CQs
+			 *	  per QP => n_eqes_cq = 2 * n_qp.
+			 *	  Hence the RoCE total is 4 * n_qp or
+			 *	  2 * num_cons.
 			 * - ENet - There can be up to two events per VF. One
-			 *          for VF-PF channel and another for VF FLR
-			 *          initial cleanup. The number of VFs is
-			 *          bounded by MAX_NUM_VFS_BB, and is much
-			 *          smaller than RoCE's so we avoid exact
-			 *          calculation.
+			 *	  for VF-PF channel and another for VF FLR
+			 *	  initial cleanup. The number of VFs is
+			 *	  bounded by MAX_NUM_VFS_BB, and is much
+			 *	  smaller than RoCE's so we avoid exact
+			 *	  calculation.
 			 */
 			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
@@ -683,7 +1011,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for dmae_info structure\n");
+				  "Failed to allocate memory for dmae_info"
+				  " structure\n");
 			goto alloc_err;
 		}
 
@@ -705,9 +1034,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 	return ECORE_SUCCESS;
 
- alloc_no_mem:
+alloc_no_mem:
 	rc = ECORE_NOMEM;
- alloc_err:
+alloc_err:
 	ecore_resc_free(p_dev);
 	return rc;
 }
@@ -2353,7 +2682,7 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 			*p_resc_start = dflt_resc_start;
 		}
 	}
- out:
+out:
 	return ECORE_SUCCESS;
 }
 
@@ -3139,13 +3468,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 #endif
 
 	return rc;
- err2:
+err2:
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_iov_free_hw_info(p_dev);
 	ecore_mcp_free(p_hwfn);
- err1:
+err1:
 	ecore_hw_hwfn_free(p_hwfn);
- err0:
+err0:
 	return rc;
 }
 
@@ -3309,7 +3638,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 	if (!p_chain->pbl.external)
 		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
 				       p_chain->pbl.p_phys_table, pbl_size);
- out:
+out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3521,7 +3850,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 
 	return ECORE_SUCCESS;
 
- nomem:
+nomem:
 	ecore_chain_free(p_dev, p_chain);
 	return rc;
 }
@@ -3956,7 +4285,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
@@ -4000,7 +4329,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 49d52c0..396edc2 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -905,44 +905,6 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
-		    enum protocol_type proto,
-		    union ecore_qm_pq_params *p_params)
-{
-	u16 pq_id = 0;
-
-	if ((proto == PROTOCOLID_CORE ||
-	     proto == PROTOCOLID_ETH) && !p_params) {
-		DP_NOTICE(p_hwfn, true,
-			  "Protocol %d received NULL PQ params\n", proto);
-		return 0;
-	}
-
-	switch (proto) {
-	case PROTOCOLID_CORE:
-		if (p_params->core.tc == LB_TC)
-			pq_id = p_hwfn->qm_info.pure_lb_pq;
-		else if (p_params->core.tc == PKT_LB_TC)
-			pq_id = p_hwfn->qm_info.ooo_pq;
-		else
-			pq_id = p_hwfn->qm_info.offload_pq;
-		break;
-	case PROTOCOLID_ETH:
-		pq_id = p_params->eth.tc;
-		/* TODO - multi-CoS for VFs? */
-		if (p_params->eth.is_vf)
-			pq_id += p_hwfn->qm_info.vf_queues_offset +
-			    p_params->eth.vf_id;
-		break;
-	default:
-		pq_id = 0;
-	}
-
-	pq_id = CM_TX_PQ_BASE + pq_id + RESC_START(p_hwfn, ECORE_PQ);
-
-	return pq_id;
-}
-
 void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
 			 enum ecore_hw_err_type err_type)
 {
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index d2e1719..0220d19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -834,13 +834,13 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params)
+			      u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
 	struct ecore_hw_cid_data *p_tx_cid;
-	u16 pq_id, abs_tx_qzone_id = 0;
+	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 abs_vport_id;
 
@@ -882,7 +882,6 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
 
-	pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
 	p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
@@ -898,7 +897,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
 	struct ecore_hw_cid_data *p_tx_cid;
-	union ecore_qm_pq_params pq_params;
 	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
@@ -918,9 +916,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 
 	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
 	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-
-	pq_params.eth.tc = tc;
 
 	/* Allocate a CID for the queue */
 	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
@@ -944,7 +939,8 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 					   p_params,
 					   pbl_addr,
 					   pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_mcos(p_hwfn,
+								    tc));
 
 	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
 	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 9c1bd38..b598eda 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -81,7 +81,7 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params);
+			      u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 9035d3b..ba26d45 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -173,11 +173,10 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	u16 pq;
 	struct ecore_cxt_info cxt_info;
 	struct core_conn_context *p_cxt;
-	union ecore_qm_pq_params pq_params;
 	enum _ecore_status_t rc;
+	u16 physical_q;
 
 	cxt_info.iid = p_spq->cid;
 
@@ -206,10 +205,8 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(pq);
+	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
+	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
 	p_cxt->xstorm_st_context.spq_base_lo =
 	    DMA_LO_LE(p_spq->chain.p_phys_addr);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index a302e9e..365be25 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -632,8 +632,8 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
-				bool b_fail_malicious)
+static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
+				       bool b_fail_malicious)
 {
 	/* Check PF supports sriov */
 	if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
@@ -2103,15 +2103,9 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	union ecore_qm_pq_params pq_params;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
 
-	/* Prepare the parameters which would choose the right PQ */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.eth.is_vf = 1;
-	pq_params.eth.vf_id = vf->relative_vf_id;
-
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
@@ -2132,7 +2126,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 					   &params,
 					   req->pbl_addr,
 					   req->pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_vf(p_hwfn,
+							vf->relative_vf_id));
 
 	if (rc)
 		status = PFVF_STATUS_FAILURE;
-- 
1.7.10.3


* [PATCH v2 21/61] net/qede/base: print firmware MFW and MBI versions
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (20 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 20/61] net/qede/base: qm initialization revamp Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 22/61] net/qede/base: check active VF queues before stopping Rasesh Mody
                     ` (40 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a printout of the FW, Management FW and MBI versions.
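
For reference, a minimal sketch (not part of the patch) of how a caller
could unpack the packed mfw_rev word with the masks added below,
assuming the conventional most-significant-byte-first ordering:

	char buf[16]; /* hypothetical caller-provided buffer */

	snprintf(buf, sizeof(buf), "%d.%d.%d.%d",
		 (dev_info->mfw_rev & QED_MFW_VERSION_3_MASK) >>
		 QED_MFW_VERSION_3_OFFSET,
		 (dev_info->mfw_rev & QED_MFW_VERSION_2_MASK) >>
		 QED_MFW_VERSION_2_OFFSET,
		 (dev_info->mfw_rev & QED_MFW_VERSION_1_MASK) >>
		 QED_MFW_VERSION_1_OFFSET,
		 (dev_info->mfw_rev & QED_MFW_VERSION_0_MASK) >>
		 QED_MFW_VERSION_0_OFFSET);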

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_if.h   |    9 ++++++++-
 drivers/net/qede/qede_main.c |   14 ++++++--------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 18404fb..1e27428 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -30,12 +30,19 @@ struct qed_dev_info {
 
 	/* MFW version */
 	uint32_t mfw_rev;
+#define QED_MFW_VERSION_0_MASK		0x000000FF
+#define QED_MFW_VERSION_0_OFFSET	0
+#define QED_MFW_VERSION_1_MASK		0x0000FF00
+#define QED_MFW_VERSION_1_OFFSET	8
+#define QED_MFW_VERSION_2_MASK		0x00FF0000
+#define QED_MFW_VERSION_2_OFFSET	16
+#define QED_MFW_VERSION_3_MASK		0xFF000000
+#define QED_MFW_VERSION_3_OFFSET	24
 
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
-	/* To be added... */
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e76346e..1d4f336 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -327,6 +327,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
 
@@ -337,13 +339,7 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 		dev_info->fw_eng = FW_ENGINEERING_VERSION;
 		dev_info->mf_mode = edev->mf_mode;
 		dev_info->tx_switching = false;
-	} else {
-		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
-					&dev_info->fw_minor, &dev_info->fw_rev,
-					&dev_info->fw_eng);
-	}
 
-	if (IS_PF(edev)) {
 		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
 		if (ptt) {
 			ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
@@ -361,12 +357,14 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 			ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
 		}
 	} else {
+		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
+					&dev_info->fw_minor, &dev_info->fw_rev,
+					&dev_info->fw_eng);
+
 		ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
 				      &dev_info->mfw_rev, NULL);
 	}
 
-	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-
 	return 0;
 }
 
-- 
1.7.10.3


* [PATCH v2 22/61] net/qede/base: check active VF queues before stopping
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (21 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 21/61] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 23/61] net/qede/base: set driver type before sending load request Rasesh Mody
                     ` (39 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure VF queues are closed before stopping the vport.
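
The guard amounts to scanning the VF's per-queue active flags before
honoring the vport-stop request; in outline (names as in the diff
below):

	/* VF asked to stop its vport while queues are still active:
	 * flag it as malicious and skip the regular teardown.
	 */
	if (ecore_iov_validate_active_rxq(p_hwfn, vf) ||
	    ecore_iov_validate_active_txq(p_hwfn, vf))
		vf->b_malicious = true;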

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   37 ++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 365be25..73c4015 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -232,6 +232,30 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].rxq_active)
+			return true;
+
+	return false;
+}
+
+static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (p_vf->vf_queues[i].txq_active)
+			return true;
+
+	return false;
+}
+
 /* TODO - this is linux crc32; Need a way to ifdef it out for linux */
 u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
 {
@@ -1365,8 +1389,10 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 
 	p_vf->num_active_rxqs = 0;
 
-	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++)
+	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
 		p_vf->vf_queues[i].rxq_active = 0;
+		p_vf->vf_queues[i].txq_active = 0;
+	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
 	OSAL_MEMSET(&p_vf->acquire, 0, sizeof(p_vf->acquire));
@@ -1943,6 +1969,15 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->vport_instance--;
 	vf->spoof_chk = false;
 
+	if ((ecore_iov_validate_active_rxq(p_hwfn, vf)) ||
+	    (ecore_iov_validate_active_txq(p_hwfn, vf))) {
+		vf->b_malicious = true;
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] - considered malicious;"
+			  " Unable to stop RX/TX queuess\n",
+			  vf->abs_vf_id);
+	}
+
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
-- 
1.7.10.3


* [PATCH v2 23/61] net/qede/base: set driver type before sending load request
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (22 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 22/61] net/qede/base: check active VF queues before stopping Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 24/61] net/qede/base: prevent driver load with invalid resources Rasesh Mody
                     ` (38 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Set the drv_type before sending LOAD_REQ, and remove ver_str,
which is not used by the MFW.
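
With this change the LOAD_REQ mailbox parameter is built entirely from
state fixed at probe time; schematically (lines taken from the diff
below):

	/* set once at probe (qed_probe) */
	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;

	/* later, in ecore_mcp_load_req() */
	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
			  p_dev->drv_type;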

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    3 +--
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 drivers/net/qede/qede_ethdev.c    |    2 +-
 drivers/net/qede/qede_if.h        |    3 +--
 drivers/net/qede/qede_main.c      |   10 ++++------
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 58c97a3..b8c8bfd 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -30,7 +30,6 @@
 
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
-#define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
 #include "../qede_logs.h" /* @DPDK */
 
@@ -706,7 +705,7 @@ struct ecore_dev {
 
 	int				pcie_width;
 	int				pcie_speed;
-	u8				ver_str[NAME_SIZE]; /* @DPDK */
+
 	/* Add MF related configuration */
 	u8				mcp_rev;
 	u8				boot_mode;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 9f897b5..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -524,7 +524,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
@@ -538,8 +537,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
 			  p_dev->drv_type;
-	OSAL_MEMCPY(&union_data.ver_str, p_dev->ver_str, MCP_DRV_VER_STR_SIZE);
-	mb_params.p_data_src = &union_data;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c372181..d52e1be 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2175,7 +2175,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
-	adapter->ops->common->set_id(edev, edev->name, QEDE_PMD_VERSION);
+	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
 		adapter->dev_info.num_mac_filters =
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1e27428..0a1f7db 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -116,8 +116,7 @@ struct qed_common_ops {
 		     struct rte_pci_device *pci_dev,
 		     enum qed_protocol protocol,
 		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
-	void (*set_id)(struct ecore_dev *edev,
-		char name[], const char ver_str[]);
+	void (*set_name)(struct ecore_dev *edev, char name[]);
 	enum _ecore_status_t
 		(*chain_alloc)(struct ecore_dev *edev,
 			       enum ecore_chain_use_mode
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 1d4f336..a932c5f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -50,7 +50,9 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
 	int rc;
 
 	ecore_init_struct(edev);
+	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 	qdev->protocol = protocol;
+
 	if (is_vf)
 		edev->b_is_vf = true;
 
@@ -420,9 +422,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 	return 0;
 }
 
-static void
-qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
-	   const char ver_str[NAME_SIZE])
+static void qed_set_name(struct ecore_dev *edev, char name[NAME_SIZE])
 {
 	int i;
 
@@ -430,8 +430,6 @@ qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
 	for_each_hwfn(edev, i) {
 		snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
 	}
-	memcpy(edev->ver_str, ver_str, NAME_SIZE);
-	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 }
 
 static uint32_t
@@ -714,7 +712,7 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
 	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
-	INIT_STRUCT_FIELD(set_id, &qed_set_id),
+	INIT_STRUCT_FIELD(set_name, &qed_set_name),
 	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
 	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
 	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
-- 
1.7.10.3


* [PATCH v2 24/61] net/qede/base: prevent driver load with invalid resources
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (23 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 23/61] net/qede/base: set driver type before sending load request Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 25/61] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
                     ` (37 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent storage drivers from attempting to load with invalid resources.
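
The clamp itself is a plain min() of the two limiting resources, so a
storage personality can never be granted more CQs than it has status
blocks or command-queue CQs for; schematically:

	/* from the diff below -- FCoE shown, iSCSI is identical */
	feat_num[ECORE_FCOE_CQ] =
		OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
			   RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));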

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 380c5ba..7fce4fd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2437,13 +2437,19 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 			   FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 			   sb_cnt_info.sb_iov_cnt);
 
+	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #SBS=%d num_features=%d\n",
+		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
 		   (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
-		   RESC_NUM(p_hwfn, ECORE_SB),
-		   num_features);
+		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
+		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
+		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
 static enum resource_id_enum
-- 
1.7.10.3


* [PATCH v2 25/61] net/qede/base: add interfaces for MFW TLV request processing
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (24 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 24/61] net/qede/base: prevent driver load with invalid resources Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 26/61] net/qede/base: code refactoring of SP queues Rasesh Mody
                     ` (36 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new base driver interfaces for Management FW TLV request processing.
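
Every field in the new TLV structs is paired with a *_set flag so the
base driver can tell "value is zero" apart from "value not provided".
A hypothetical OSAL-side callback filling the generic group might look
like this (sketch only; osal_get_rx_frames() is made up):

	union ecore_mfw_tlv_data tlv_buf;

	OSAL_MEMSET(&tlv_buf, 0, sizeof(tlv_buf));
	tlv_buf.generic.rx_frames = osal_get_rx_frames(); /* hypothetical */
	tlv_buf.generic.rx_frames_set = true; /* mark the value as valid */
	/* fields whose *_set flag stays false are skipped when the
	 * TLV reply to the MFW is built
	 */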

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    6 +
 drivers/net/qede/base/ecore_mcp_api.h |  301 +++++++++++++++++++++++++++++++++
 2 files changed, 307 insertions(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..79a907b 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,9 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 1be22dd..8cad43d 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -232,6 +232,295 @@ struct ecore_mba_vers {
 	u32 mba_vers[ECORE_MAX_NUM_OF_ROMIMG];
 };
 
+enum ecore_mfw_tlv_type {
+	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x4,	/* iSCSI protocol TLVs */
+};
+
+struct ecore_mfw_tlv_generic {
+	u16 feat_flags;
+	bool feat_flags_set;
+	u64 local_mac;
+	bool local_mac_set;
+	u64 additional_mac1;
+	bool additional_mac1_set;
+	u64 additional_mac2;
+	bool additional_mac2_set;
+	u16 lso_maxoff_size;
+	bool lso_maxoff_size_set;
+	u16 lso_minseg_size;
+	bool lso_minseg_size_set;
+	u8 prom_mode;
+	bool prom_mode_set;
+	u16 tx_descr_size;
+	bool tx_descr_size_set;
+	u16 rx_descr_size;
+	bool rx_descr_size_set;
+	u16 netq_count;
+	bool netq_count_set;
+	u16 flex_vlan;
+	bool flex_vlan_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u32 tcp4_offloads;
+	bool tcp4_offloads_set;
+	u32 tcp6_offloads;
+	bool tcp6_offloads_set;
+	u16 tx_descr_qdepth;
+	bool tx_descr_qdepth_set;
+	u16 rx_descr_qdepth;
+	bool rx_descr_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u8 iov_offload;
+	bool iov_offload_set;
+	u8 txqs_empty;
+	bool txqs_empty_set;
+	u8 rxqs_empty;
+	bool rxqs_empty_set;
+	u8 num_txqs_full;
+	bool num_txqs_full_set;
+	u8 num_rxqs_full;
+	bool num_rxqs_full_set;
+};
+
+struct ecore_mfw_tlv_fcoe {
+	u8 scsi_timeout;
+	bool scsi_timeout_set;
+	u32 rt_tov;
+	bool rt_tov_set;
+	u32 ra_tov;
+	bool ra_tov_set;
+	u32 ed_tov;
+	bool ed_tov_set;
+	u32 cr_tov;
+	bool cr_tov_set;
+	u8 boot_type;
+	bool boot_type_set;
+	u8 npiv_state;
+	bool npiv_state_set;
+	u32 num_npiv_ids;
+	bool num_npiv_ids_set;
+	u8 switch_name[8];
+	bool switch_name_set;
+	u16 switch_portnum;
+	bool switch_portnum_set;
+	u8 switch_portid[3];
+	bool switch_portid_set;
+	u8 vendor_name[8];
+	bool vendor_name_set;
+	u8 switch_model[8];
+	bool switch_model_set;
+	u8 switch_fw_version[8];
+	bool switch_fw_version_set;
+	u8 qos_pri;
+	bool qos_pri_set;
+	u8 port_alias[3];
+	bool port_alias_set;
+	u8 port_state;
+	bool port_state_set;
+	u16 fip_tx_descr_size;
+	bool fip_tx_descr_size_set;
+	u16 fip_rx_descr_size;
+	bool fip_rx_descr_size_set;
+	u16 link_failures;
+	bool link_failures_set;
+	u8 fcoe_boot_progress;
+	bool fcoe_boot_progress_set;
+	u64 rx_bcast;
+	bool rx_bcast_set;
+	u64 tx_bcast;
+	bool tx_bcast_set;
+	u16 fcoe_txq_depth;
+	bool fcoe_txq_depth_set;
+	u16 fcoe_rxq_depth;
+	bool fcoe_rxq_depth_set;
+	u64 fcoe_rx_frames;
+	bool fcoe_rx_frames_set;
+	u64 fcoe_rx_bytes;
+	bool fcoe_rx_bytes_set;
+	u64 fcoe_tx_frames;
+	bool fcoe_tx_frames_set;
+	u64 fcoe_tx_bytes;
+	bool fcoe_tx_bytes_set;
+	u16 crc_count;
+	bool crc_count_set;
+	u32 crc_err_src_fcid[5];
+	bool crc_err_src_fcid_set[5];
+	u8 crc_err_tstamp[5][14];
+	bool crc_err_tstamp_set[5];
+	u16 losync_err;
+	bool losync_err_set;
+	u16 losig_err;
+	bool losig_err_set;
+	u16 primtive_err;
+	bool primtive_err_set;
+	u16 disparity_err;
+	bool disparity_err_set;
+	u16 code_violation_err;
+	bool code_violation_err_set;
+	u32 flogi_param[4];
+	bool flogi_param_set[4];
+	u8 flogi_tstamp[14];
+	bool flogi_tstamp_set;
+	u32 flogi_acc_param[4];
+	bool flogi_acc_param_set[4];
+	u8 flogi_acc_tstamp[14];
+	bool flogi_acc_tstamp_set;
+	u32 flogi_rjt;
+	bool flogi_rjt_set;
+	u8 flogi_rjt_tstamp[14];
+	bool flogi_rjt_tstamp_set;
+	u32 fdiscs;
+	bool fdiscs_set;
+	u8 fdisc_acc;
+	bool fdisc_acc_set;
+	u8 fdisc_rjt;
+	bool fdisc_rjt_set;
+	u8 plogi;
+	bool plogi_set;
+	u8 plogi_acc;
+	bool plogi_acc_set;
+	u8 plogi_rjt;
+	bool plogi_rjt_set;
+	u32 plogi_dst_fcid[5];
+	bool plogi_dst_fcid_set[5];
+	u8 plogi_tstamp[5][14];
+	bool plogi_tstamp_set[5];
+	u32 plogi_acc_src_fcid[5];
+	bool plogi_acc_src_fcid_set[5];
+	u8 plogi_acc_tstamp[5][14];
+	bool plogi_acc_tstamp_set[5];
+	u8 tx_plogos;
+	bool tx_plogos_set;
+	u8 plogo_acc;
+	bool plogo_acc_set;
+	u8 plogo_rjt;
+	bool plogo_rjt_set;
+	u32 plogo_src_fcid[5];
+	bool plogo_src_fcid_set[5];
+	u8 plogo_tstamp[5][14];
+	bool plogo_tstamp_set[5];
+	u8 rx_logos;
+	bool rx_logos_set;
+	u8 tx_accs;
+	bool tx_accs_set;
+	u8 tx_prlis;
+	bool tx_prlis_set;
+	u8 rx_accs;
+	bool rx_accs_set;
+	u8 tx_abts;
+	bool tx_abts_set;
+	u8 rx_abts_acc;
+	bool rx_abts_acc_set;
+	u8 rx_abts_rjt;
+	bool rx_abts_rjt_set;
+	u32 abts_dst_fcid[5];
+	bool abts_dst_fcid_set[5];
+	u8 abts_tstamp[5][14];
+	bool abts_tstamp_set[5];
+	u8 rx_rscn;
+	bool rx_rscn_set;
+	u32 rx_rscn_nport[4];
+	bool rx_rscn_nport_set[4];
+	u8 tx_lun_rst;
+	bool tx_lun_rst_set;
+	u8 abort_task_sets;
+	bool abort_task_sets_set;
+	u8 tx_tprlos;
+	bool tx_tprlos_set;
+	u8 tx_nos;
+	bool tx_nos_set;
+	u8 rx_nos;
+	bool rx_nos_set;
+	u8 ols;
+	bool ols_set;
+	u8 lr;
+	bool lr_set;
+	u8 llr;
+	bool llrt;
+	u8 tx_lip;
+	bool tx_lip_set;
+	u8 rx_lip;
+	bool rx_lip_set;
+	u8 eofa;
+	bool eofa_set;
+	u8 eofni;
+	bool eofni_set;
+	u8 scsi_chks;
+	bool scsi_chks_set;
+	u8 scsi_cond_met;
+	bool scsi_cond_met_set;
+	u8 scsi_busy;
+	bool scsi_busy_set;
+	u8 scsi_inter;
+	bool scsi_inter_set;
+	u8 scsi_inter_cond_met;
+	bool scsi_inter_cond_met_set;
+	u8 scsi_rsv_conflicts;
+	bool scsi_rsv_conflicts_set;
+	u8 scsi_tsk_full;
+	bool scsi_tsk_full_set;
+	u8 scsi_aca_active;
+	bool scsi_aca_active_set;
+	u8 scsi_tsk_abort;
+	bool scsi_tsk_abort_set;
+	u32 scsi_rx_chk[5];
+	bool scsi_rx_chk_set[5];
+	u8 scsi_chk_tstamp[5][14];
+	bool scsi_chk_tstamp_set[5];
+};
+
+struct ecore_mfw_tlv_iscsi {
+	u8 target_llmnr;
+	bool target_llmnr_set;
+	u8 header_digest;
+	bool header_digest_set;
+	u8 data_digest;
+	bool data_digest_set;
+	u8 auth_method;
+	bool auth_method_set;
+	u16 boot_taget_portal;
+	bool boot_taget_portal_set;
+	u16 frame_size;
+	bool frame_size_set;
+	u16 tx_desc_size;
+	bool tx_desc_size_set;
+	u16 rx_desc_size;
+	bool rx_desc_size_set;
+	u8 boot_progress;
+	bool boot_progress_set;
+	u16 tx_desc_qdepth;
+	bool tx_desc_qdepth_set;
+	u16 rx_desc_qdepth;
+	bool rx_desc_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u32 cpcp_spcp_map;
+	bool cpcp_spcp_map_set;
+};
+
+union ecore_mfw_tlv_data {
+	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_fcoe fcoe;
+	struct ecore_mfw_tlv_iscsi iscsi;
+};
+
 /**
  * @brief - returns the link params of the hw function
  *
@@ -820,4 +1109,16 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Processes the TLV request from MFW i.e., get the required TLV info
+ *          from the ecore client and send it to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt);
+
 #endif
-- 
1.7.10.3


* [PATCH v2 26/61] net/qede/base: code refactoring of SP queues
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (25 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 25/61] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 27/61] net/qede/base: make L2 queues handle based Rasesh Mody
                     ` (35 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Maintain the slowpath event queue and consumer queue within the HW
function structure, and update the corresponding alloc and free APIs
accordingly (sketched below). Clean up unused code under the
CONFIG_ECORE_LL2 ifdef.
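
Callers switch from a returned pointer to a status code, with the queue
kept inside the hwfn itself; roughly:

	/* before: p_eq = ecore_eq_alloc(...); if (!p_eq) goto no_mem; */
	rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
	if (rc != ECORE_SUCCESS)
		goto alloc_err;

	/* ... and teardown no longer passes the queue explicitly */
	ecore_eq_free(p_hwfn);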

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   43 +++++++----------------------
 drivers/net/qede/base/ecore_spq.c |   54 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_spq.h |   35 +++++++++---------------
 3 files changed, 52 insertions(+), 80 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7fce4fd..1ce7d8e 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -165,12 +165,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
-		ecore_eq_free(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_free(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_free(p_hwfn);
+		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
-#ifdef CONFIG_ECORE_LL2
-		ecore_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -836,11 +833,6 @@ alloc_err:
 
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
-	struct ecore_consq *p_consq;
-	struct ecore_eq *p_eq;
-#ifdef	CONFIG_ECORE_LL2
-	struct ecore_ll2_info *p_ll2_info;
-#endif
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
@@ -988,24 +980,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			goto alloc_no_mem;
 		}
 
-		p_eq = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
-		if (!p_eq)
-			goto alloc_no_mem;
-		p_hwfn->p_eq = p_eq;
+		rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
+		if (rc)
+			goto alloc_err;
 
-		p_consq = ecore_consq_alloc(p_hwfn);
-		if (!p_consq)
-			goto alloc_no_mem;
-		p_hwfn->p_consq = p_consq;
-
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2) {
-			p_ll2_info = ecore_ll2_alloc(p_hwfn);
-			if (!p_ll2_info)
-				goto alloc_no_mem;
-			p_hwfn->p_ll2_info = p_ll2_info;
-		}
-#endif
+		rc = ecore_consq_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
 
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
@@ -1053,8 +1034,8 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_cxt_mngr_setup(p_hwfn);
 		ecore_spq_setup(p_hwfn);
-		ecore_eq_setup(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_setup(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_setup(p_hwfn);
+		ecore_consq_setup(p_hwfn);
 
 		/* Read shadow of current MFW mailbox */
 		ecore_mcp_read_mb(p_hwfn, p_hwfn->p_main_ptt);
@@ -1065,10 +1046,6 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2)
-			ecore_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index ba26d45..016de74 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -355,7 +355,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 {
 	struct ecore_eq *p_eq;
 
@@ -364,7 +364,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	if (!p_eq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_eq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain*/
@@ -374,7 +374,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      num_elem,
 			      sizeof(union event_ring_element),
-			      &p_eq->chain, OSAL_NULL)) {
+			      &p_eq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain\n");
 		goto eq_allocate_fail;
 	}
@@ -383,24 +383,28 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	ecore_int_register_cb(p_hwfn, ecore_eq_completion,
 			      p_eq, &p_eq->eq_sb_index, &p_eq->p_fw_cons);
 
-	return p_eq;
+	p_hwfn->p_eq = p_eq;
+	return ECORE_SUCCESS;
 
 eq_allocate_fail:
-	ecore_eq_free(p_hwfn, p_eq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_eq);
+	return ECORE_NOMEM;
 }
 
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_eq->chain);
+	ecore_chain_reset(&p_hwfn->p_eq->chain);
 }
 
-void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_eq)
+	if (!p_hwfn->p_eq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_eq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_eq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_eq->chain);
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_eq);
+	p_hwfn->p_eq = OSAL_NULL;
 }
 
 /***************************************************************************
@@ -943,7 +947,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_consq *p_consq;
 
@@ -953,7 +957,7 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_consq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_consq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain */
@@ -963,27 +967,29 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      ECORE_CHAIN_PAGE_SIZE / 0x80,
 			      0x80,
-			      &p_consq->chain, OSAL_NULL)) {
+			      &p_consq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate consq chain");
 		goto consq_allocate_fail;
 	}
 
-	return p_consq;
+	p_hwfn->p_consq = p_consq;
+	return ECORE_SUCCESS;
 
 consq_allocate_fail:
-	ecore_consq_free(p_hwfn, p_consq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_consq);
+	return ECORE_NOMEM;
 }
 
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_consq->chain);
+	ecore_chain_reset(&p_hwfn->p_consq->chain);
 }
 
-void ecore_consq_free(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_consq)
+	if (!p_hwfn->p_consq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_consq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_consq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain);
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq);
 }
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 717ede3..e2468b7 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -194,28 +194,23 @@ void ecore_spq_return_entry(struct ecore_hwfn		*p_hwfn,
  * @param p_hwfn
  * @param num_elem number of elements in the eq
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn	*p_hwfn,
-				 u16			num_elem);
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn	*p_hwfn, u16 num_elem);
 
 /**
- * @brief ecore_eq_setup - Reset the SPQ to its start state.
+ * @brief ecore_eq_setup - Reset the EQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_eq   *p_eq);
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_eq_deallocate - deallocates the given EQ struct.
+ * @brief ecore_eq_free - deallocates the given EQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_eq   *p_eq);
+void ecore_eq_free(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_eq_prod_update - update the FW with default EQ producer
@@ -261,32 +256,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn	*p_hwfn,
 u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_alloc - Allocates & initializes an ConsQ
- *        struct
+ * @brief ecore_consq_alloc - Allocates & initializes an ConsQ struct
  *
  * @param p_hwfn
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn	*p_hwfn);
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_setup - Reset the ConsQ to its start
- *        state.
+ * @brief ecore_consq_setup - Reset the ConsQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_consq   *p_consq);
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_consq_free - deallocates the given ConsQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_consq   *p_consq);
+void ecore_consq_free(struct ecore_hwfn *p_hwfn);
 
 #endif /* __ECORE_SPQ_H__ */
-- 
1.7.10.3


* [PATCH v2 27/61] net/qede/base: make L2 queues handle based
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (26 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 26/61] net/qede/base: code refactoring of SP queues Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 28/61] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
                     ` (34 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

L2 handler changes:

This change removes the queue-id/qzone difference for Tx queues.

It does so mainly by:

a. No longer deriving VF queues from the SBs they use.
Instead, the ecore client needs to maintain those and choose the
values a VF will use when it is initialized.

b. Eliminating the HW-cid array in the hw-function.
To do that, all of the rx/tx functionality becomes handle based:
when a queue is started, the caller gets back a (void *) handle,
which it later passes to ecore for the various queue-related
operations [update, close]. See the sketch right after this list.
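
A rough sketch of the resulting flow -- parameter lists here are
abbreviated and approximate, the real prototypes live in
ecore_l2_api.h:

	void *p_rxq; /* opaque handle, owned by ecore */

	rc = ecore_eth_rx_queue_start(p_hwfn, opaque_fid, &params,
				      bd_max_bytes, bd_chain_phys_addr,
				      cqe_pbl_addr, cqe_pbl_size, &p_rxq);
	if (rc != ECORE_SUCCESS)
		return rc;

	/* every later operation takes the handle, not a queue-id */
	rc = ecore_eth_rx_queue_stop(p_hwfn, p_rxq,
				     false /* eq_completion_only */,
				     false /* cqe_completion */);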

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 -
 drivers/net/qede/base/ecore_dev.c     |   37 ---
 drivers/net/qede/base/ecore_int.c     |   24 --
 drivers/net/qede/base/ecore_int.h     |   10 -
 drivers/net/qede/base/ecore_iov_api.h |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  526 ++++++++++++++++++---------------
 drivers/net/qede/base/ecore_l2.h      |   84 +++---
 drivers/net/qede/base/ecore_l2_api.h  |  108 ++++---
 drivers/net/qede/base/ecore_sriov.c   |  262 ++++++++++------
 drivers/net/qede/base/ecore_sriov.h   |    4 +-
 drivers/net/qede/base/ecore_vf.c      |  119 +++++---
 drivers/net/qede/base/ecore_vf.h      |   55 ++--
 drivers/net/qede/qede_eth_if.c        |   50 ++--
 drivers/net/qede/qede_eth_if.h        |   22 +-
 drivers/net/qede/qede_rxtx.c          |   42 +--
 drivers/net/qede/qede_rxtx.h          |    2 +
 16 files changed, 723 insertions(+), 659 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b8c8bfd..de0f49a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -394,16 +394,6 @@ struct ecore_hw_info {
 	u16 mtu;
 };
 
-struct ecore_hw_cid_data {
-	u32	cid;
-	bool	b_cid_allocated;
-	u8	vfid; /* 1-based; 0 signals this is for a PF */
-
-	/* Additional identifiers */
-	u16	opaque_fid;
-	u8	vport_id;
-};
-
 /* maximun size of read/write commands (HW limit) */
 #define DMAE_MAX_RW_SIZE	0x2000
 
@@ -566,9 +556,6 @@ struct ecore_hwfn {
 	struct ecore_mcp_info		*mcp_info;
 	struct ecore_dcbx_info		*p_dcbx_info;
 
-	struct ecore_hw_cid_data	*p_tx_cids;
-	struct ecore_hw_cid_data	*p_rx_cids;
-
 	struct ecore_dmae_info		dmae_info;
 
 	/* QM init */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1ce7d8e..c895656 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -155,13 +155,6 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
-		OSAL_FREE(p_dev, p_hwfn->p_tx_cids);
-		OSAL_FREE(p_dev, p_hwfn->p_rx_cids);
-	}
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
@@ -844,36 +837,6 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	if (!p_dev->fw_data)
 		return ECORE_NOMEM;
 
-	/* Allocate Memory for the Queue->CID mapping */
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		u32 num_tx_conns = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-		int tx_size, rx_size;
-
-		/* @@@TMP - resc management, change to actual required size */
-		if (p_hwfn->pf_params.eth_pf_params.num_cons > num_tx_conns)
-			num_tx_conns = p_hwfn->pf_params.eth_pf_params.num_cons;
-		tx_size = sizeof(struct ecore_hw_cid_data) * num_tx_conns;
-		rx_size = sizeof(struct ecore_hw_cid_data) *
-		    RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-
-		p_hwfn->p_tx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						tx_size);
-		if (!p_hwfn->p_tx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Tx Cids\n");
-			goto alloc_no_mem;
-		}
-
-		p_hwfn->p_rx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						rx_size);
-		if (!p_hwfn->p_rx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Rx Cids\n");
-			goto alloc_no_mem;
-		}
-	}
-
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		u32 n_eqes, num_cons;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e5a4359..8dc4d15 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2182,30 +2182,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 	p_sb_cnt_info->sb_free_blk = info->free_blks;
 }
 
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
-{
-	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
-
-	/* Determine origin of SB id */
-	if ((sb_id >= p_info->igu_base_sb) &&
-	    (sb_id < p_info->igu_base_sb + p_info->igu_sb_cnt)) {
-		return sb_id - p_info->igu_base_sb;
-	} else if ((sb_id >= p_info->igu_base_sb_iov) &&
-		   (sb_id < p_info->igu_base_sb_iov +
-			    p_info->igu_sb_cnt_iov)) {
-		/* We want the first VF queue to be adjacent to the
-		 * last PF queue. Since L2 queues can be partial to
-		 * SBs, we'll use the feature instead.
-		 */
-		return sb_id - p_info->igu_base_sb_iov +
-		       FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
-	} else {
-		DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
-			  sb_id);
-		return 0;
-	}
-}
-
 void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
 {
 	int i;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 45358b9..0c8929e 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -172,16 +172,6 @@ void ecore_int_free(struct ecore_hwfn *p_hwfn);
 void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
 /**
- * @brief - Returns an Rx queue index appropriate for usage with given SB.
- *
- * @param p_hwfn
- * @param sb_id - absolute index of SB
- *
- * @return index of Rx queue
- */
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
-
-/**
  * @brief - Enable Interrupt & Attention for hw function
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 9775360..b8dc47b 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -88,6 +88,23 @@ struct ecore_public_vf_info {
 	u16 forced_vlan;
 };
 
+struct ecore_iov_vf_init_params {
+	u16 rel_vf_id;
+
+	/* Number of requested queues; Currently we don't support a
+	 * different number of Rx/Tx queues.
+	 */
+	/* TODO - remove this limitation */
+	u16 num_queues;
+
+	/* Allow the client to choose which qzones to use for Rx/Tx,
+	 * and which queue_base to use for Tx queues on a per-queue basis.
+	 * Notice values should be relative to the PF resources.
+	 */
+	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+};
+
 #ifdef CONFIG_ECORE_SW_CHANNEL
 /* This is SW channel related only... */
 enum mbx_state {
@@ -175,15 +192,14 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
  *
  * @param p_hwfn
  * @param p_ptt
- * @param rel_vf_id
- * @param num_rx_queues
+ * @param p_params
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id,
-					      u16 num_rx_queues);
+					      struct ecore_iov_vf_init_params
+						     *p_params);
 
 /**
  * @brief ecore_iov_process_mbx_req - process a request received
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 0220d19..352620a 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,6 +29,120 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid)
+{
+	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
+	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+}
+
+/* This internal variant is only meant to be called directly by PFs
+ * initializing CIDs for their VFs.
+ */
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params)
+{
+	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
+	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	if (p_cid == OSAL_NULL)
+		return OSAL_NULL;
+	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
+
+	p_cid->opaque_fid = opaque_fid;
+	p_cid->cid = cid;
+	p_cid->vf_qid = vf_qid;
+	p_cid->rel = *p_params;
+
+	/* Don't try calculating the absolute indices for VFs */
+	if (IS_VF(p_hwfn->p_dev)) {
+		p_cid->abs = p_cid->rel;
+		goto out;
+	}
+
+	/* Calculate the engine-absolute indices of the resources.
+	 * This would guarantee they're valid later on.
+	 * In some cases [SBs] we already have the right values.
+	 */
+	rc = ecore_fw_vport(p_hwfn, p_cid->rel.vport_id, &p_cid->abs.vport_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	rc = ecore_fw_l2_queue(p_hwfn, p_cid->rel.queue_id,
+			       &p_cid->abs.queue_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	/* In case of a PF configuring its VF's queues, the stats-id is already
+	 * absolute [since there's a single index that's suitable per-VF].
+	 */
+	if (b_is_same) {
+		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
+				    &p_cid->abs.stats_id);
+		if (rc != ECORE_SUCCESS)
+			goto fail;
+	} else {
+		p_cid->abs.stats_id = p_cid->rel.stats_id;
+	}
+
+	/* SBs relevant information was already provided as absolute */
+	p_cid->abs.sb = p_cid->rel.sb;
+	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
+
+	/* This is tricky - we're actually interested in whether this is a PF
+	 * entry meant for the VF.
+	 */
+	if (!b_is_same)
+		p_cid->is_vf = true;
+out:
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   p_cid->opaque_fid, p_cid->cid,
+		   p_cid->rel.vport_id, p_cid->abs.vport_id,
+		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.stats_id, p_cid->abs.stats_id,
+		   p_cid->abs.sb, p_cid->abs.sb_idx);
+
+	return p_cid;
+
+fail:
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+	return OSAL_NULL;
+}
+
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+		       u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params)
+{
+	struct ecore_queue_cid *p_cid;
+	u32 cid = 0;
+
+	/* Get a unique firmware CID for this queue, in case it's a PF.
+	 * VFs don't need a CID as the queue configuration will be done
+	 * by PF.
+	 */
+	if (IS_PF(p_hwfn->p_dev)) {
+		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					  &cid) != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+			return OSAL_NULL;
+		}
+	}
+
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, cid);
+
+	return p_cid;
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -558,57 +672,28 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 	return 0;
 }
 
-static void ecore_sp_release_queue_cid(struct ecore_hwfn *p_hwfn,
-				       struct ecore_hw_cid_data *p_cid_data)
-{
-	if (!p_cid_data->b_cid_allocated)
-		return;
-
-	ecore_cxt_release_cid(p_hwfn, p_cid_data->cid);
-	p_cid_data->b_cid_allocated = false;
-}
-
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod)
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size)
 {
 	struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 abs_rx_q_id = 0;
-	u8 abs_vport_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
-	/* Store information for the stop */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	p_rx_cid->cid = cid;
-	p_rx_cid->opaque_fid = opaque_fid;
-	p_rx_cid->vport_id = p_params->vport_id;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_rx_q_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		   opaque_fid, cid, p_params->queue_id,
-		   p_params->vport_id, p_params->sb);
+		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
+		   p_cid->abs.vport_id, p_cid->abs.sb);
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -619,11 +704,11 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 	p_ramrod = &p_ent->ramrod.rx_queue_start;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->vport_id = abs_vport_id;
-	p_ramrod->stats_counter_id = p_params->stats_id;
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 	p_ramrod->complete_cqe_flg = 0;
 	p_ramrod->complete_event_flg = 1;
 
@@ -633,92 +718,88 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_params->vf_qid || b_use_zone_a_prod) {
-		p_ramrod->vf_rx_prod_index = (u8)p_params->vf_qid;
+	if (p_cid->is_vf) {
+		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
-			   b_use_zone_a_prod ? " [legacy]" : "",
-			   p_params->vf_qid);
-		p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
+			   !!p_cid->b_legacy_vf ? " [legacy]" : "",
+			   p_cid->vf_qid);
+		p_ramrod->vf_rx_prod_use_zone_a = !!p_cid->b_legacy_vf;
 	}
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u16 bd_max_bytes,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod)
+			    void OSAL_IOMEM * *pp_producer)
 {
-	struct ecore_hw_cid_data *p_rx_cid;
 	u32 init_prod_val = 0;
-	u16 abs_l2_queue = 0;
-	u8 abs_stats_id = 0;
-	enum _ecore_status_t rc;
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_rxq_start(p_hwfn,
-					     (u8)p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     bd_max_bytes,
-					     bd_chain_phys_addr,
-					     cqe_pbl_addr,
-					     cqe_pbl_size, pp_prod);
-	}
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_l2_queue);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	    GTT_BAR0_MAP_REG_MSDM_RAM +
-	    MSTORM_ETH_PF_PRODS_OFFSET(abs_l2_queue);
+	*pp_producer = (u8 OSAL_IOMEM *)
+		       p_hwfn->regview +
+		       GTT_BAR0_MAP_REG_MSDM_RAM +
+		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
+	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
+					  bd_max_bytes,
+					  bd_chain_phys_addr,
+					  cqe_pbl_addr, cqe_pbl_size);
+}
+
+enum _ecore_status_t
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
 	/* Allocate a CID for the queue */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-				   &p_rx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_rx_cid->b_cid_allocated = true;
-	p_params->stats_id = abs_stats_id;
-	p_params->vf_qid = 0;
-
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_rx_cid->cid,
-					   p_params,
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_start(p_hwfn, p_cid,
+						 bd_max_bytes,
+						 bd_chain_phys_addr,
+						 cqe_pbl_addr, cqe_pbl_size,
+						 &p_ret_params->p_prod);
+	else
+		rc = ecore_vf_pf_rxq_start(p_hwfn, p_cid,
 					   bd_max_bytes,
 					   bd_chain_phys_addr,
 					   cqe_pbl_addr,
 					   cqe_pbl_size,
-					   false);
+					   &p_ret_params->p_prod);
 
+	/* Provide the caller with a reference to use as a handle */
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handles,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
@@ -728,14 +809,14 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 qid, abs_rx_q_id = 0;
+	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_rxqs_update(p_hwfn,
-					       rx_queue_id,
+					       (struct ecore_queue_cid **)
+					       pp_rxq_handles,
 					       num_rxqs,
 					       complete_cqe_flg,
 					       complete_event_flg);
@@ -745,12 +826,11 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	init_data.p_comp_data = p_comp_data;
 
 	for (i = 0; i < num_rxqs; i++) {
-		qid = rx_queue_id + i;
-		p_rx_cid = &p_hwfn->p_rx_cids[qid];
+		p_cid = ((struct ecore_queue_cid **)pp_rxq_handles)[i];
 
 		/* Get SPQ entry */
-		init_data.cid = p_rx_cid->cid;
-		init_data.opaque_fid = p_rx_cid->opaque_fid;
+		init_data.cid = p_cid->cid;
+		init_data.opaque_fid = p_cid->opaque_fid;
 
 		rc = ecore_sp_init_request(p_hwfn, &p_ent,
 					   ETH_RAMROD_RX_QUEUE_UPDATE,
@@ -759,41 +839,34 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 			return rc;
 
 		p_ramrod = &p_ent->ramrod.rx_queue_update;
+		p_ramrod->vport_id = p_cid->abs.vport_id;
 
-		ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-		ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
-		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 		p_ramrod->complete_cqe_flg = complete_cqe_flg;
 		p_ramrod->complete_event_flg = complete_event_flg;
 
 		rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-		if (rc)
+		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
 
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only, bool cqe_completion)
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   bool b_eq_completion_only,
+			   bool b_cqe_completion)
 {
-	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
 	struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	u16 abs_rx_q_id = 0;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
-					    cqe_completion);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_rx_cid->cid;
-	init_data.opaque_fid = p_rx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -803,64 +876,54 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.rx_queue_stop;
-
-	ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-	ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) &&
-				      !eq_completion_only) || cqe_completion;
-	p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) ||
-	    eq_completion_only;
+	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+				     b_cqe_completion;
+	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
 
-	ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+enum _ecore_status_t ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_rxq,
+					     bool eq_completion_only,
+					     bool cqe_completion)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_rxq;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_stop(p_hwfn, p_cid,
+						eq_completion_only,
+						cqe_completion);
+	else
+		rc = ecore_vf_pf_rxq_stop(p_hwfn, p_cid, cqe_completion);
 
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id)
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_tx_cid;
-	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
-	u8 abs_vport_id;
-
-	/* Store information for the stop */
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	p_tx_cid->cid = cid;
-	p_tx_cid->opaque_fid = opaque_fid;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->qzone_id, &abs_tx_qzone_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -870,14 +933,14 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.tx_queue_start;
-	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->stats_counter_id = p_params->stats_id;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 
-	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
-	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
+	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
+	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
@@ -887,90 +950,72 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
+			    dma_addr_t pbl_addr, u16 pbl_size,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
-	struct ecore_hw_cid_data *p_tx_cid;
-	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_txq_start(p_hwfn,
-					     p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     pbl_addr,
-					     pbl_size,
-					     pp_doorbell);
-	}
-
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
+	/* TODO - set tc in the pq_params for multi-cos */
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
+					pbl_addr, pbl_size,
+					ecore_get_cm_pq_idx_mcos(p_hwfn, tc));
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
+	/* Provide the caller with the necessary return values */
+	*pp_doorbell = (u8 OSAL_IOMEM *)
+		       p_hwfn->doorbells +
+		       DB_ADDR(p_cid->cid, DQ_DEMS_LEGACY);
 
-	/* Allocate a CID for the queue */
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_tx_cid->b_cid_allocated = true;
+	return ECORE_SUCCESS;
+}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		    opaque_fid, p_tx_cid->cid, p_params->queue_id,
-		    p_params->vport_id, p_params->sb);
+enum _ecore_status_t
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr, u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
 
-	p_params->stats_id = abs_stats_id;
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_INVAL;
 
-	/* TODO - set tc in the pq_params for multi-cos */
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_tx_cid->cid,
-					   p_params,
-					   pbl_addr,
-					   pbl_size,
-					   ecore_get_cm_pq_idx_mcos(p_hwfn,
-								    tc));
-
-	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_start(p_hwfn, p_cid, tc,
+						 pbl_addr, pbl_size,
+						 &p_ret_params->p_doorbell);
+	else
+		rc = ecore_vf_pf_txq_start(p_hwfn, p_cid,
+					   pbl_addr, pbl_size,
+					   &p_ret_params->p_doorbell);
 
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn)
-{
-	return ECORE_NOTIMPL;
-}
-
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id)
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid)
 {
-	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_tx_cid->cid;
-	init_data.opaque_fid = p_tx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -979,11 +1024,22 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_stop(p_hwfn, p_cid);
+	else
+		rc = ecore_vf_pf_txq_stop(p_hwfn, p_cid);
 
-	ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index b598eda..c136389 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,59 +15,66 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
-/**
- * @brief ecore_sp_eth_tx_queue_update -
- *
- * This ramrod updates a TX queue. It is used for setting the active
- * state of the queue.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn);
+struct ecore_queue_cid {
+	/* 'Relative' is a relative term ;-). Usually the indices [not counting
+	 * SBs] would be PF-relative, but there are some cases where that isn't
+	 * the case - specifically for a PF configuring its VF indices it's
+	 * possible some fields [E.g., stats-id] in 'rel' would already be abs.
+	 */
+	struct ecore_queue_start_common_params rel;
+	struct ecore_queue_start_common_params abs;
+	u32 cid;
+	u16 opaque_fid;
+
+	/* VF queues are mapped differently, so we need to know the
+	 * relative queue associated with them [0-based].
+	 * Notice this is relevant on the *PF* queue-cid of its VFs' queues,
+	 * and not on the VF itself.
+	 */
+	bool is_vf;
+	u8 vf_qid;
+
+	/* Legacy VFs might have Rx producer located elsewhere */
+	bool b_legacy_vf;
+};
+
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid);
+
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params);
 
 /**
- * @brief - Starts an Rx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts an Rx queue, when queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id, stats_id, sb, sb_idx, vf_qid]
-	  stats_id is absolute packed in p_params.
+ * @param p_cid
  * @param bd_max_bytes
  * @param bd_chain_phys_addr
  * @param cqe_pbl_addr
  * @param cqe_pbl_size
- * @param b_use_zone_a_prod - support legacy VF producers
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod);
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size);
 
 /**
- * @brief - Starts a Tx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts a Tx queue, where queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id,stats_id, sb, sb_idx, vf_qid]
+ * @param p_cid
  * @param pbl_addr
  * @param pbl_size
  * @param p_pq_params - parameters for choosing the PQ for this Tx queue
@@ -75,13 +82,10 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id);
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 8f7b614..af316d3 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -28,22 +28,26 @@ enum ecore_rss_caps {
 #endif
 
 struct ecore_queue_start_common_params {
-	/* Rx/Tx queue relative id to keep obtained cid in corresponding array
-	 * RX - upper-bounded by number of FW-queues
-	 */
-	u16 queue_id;
+	/* Should always be relative to entity sending this. */
 	u8 vport_id;
+	u16 queue_id;
 
-	/* q_zone_id is relative, may be different from queue id
-	 * currently used by Tx-only, upper-bounded by number of FW-queues
-	 */
-	u16 qzone_id;
-
-	/* stats_id is relative or absolute depends on function */
+	/* Relative, but relevant only for PFs */
 	u8 stats_id;
+
+	/* These are always absolute */
 	u16 sb;
-	u16 sb_idx;
-	u16 vf_qid;
+	u8 sb_idx;
+};
+
+struct ecore_rxq_start_ret_params {
+	void OSAL_IOMEM *p_prod;
+	void *p_handle;
+};
+
+struct ecore_txq_start_ret_params {
+	void OSAL_IOMEM *p_doorbell;
+	void *p_handle;
 };
 
 struct ecore_rss_params {
@@ -167,42 +171,37 @@ ecore_filter_accept_cmd(
 	struct ecore_spq_comp_cb	 *p_comp_data);
 
 /**
- * @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
+ * @brief ecore_eth_rx_queue_start - RX Queue Start Ramrod
  *
  * This ramrod initializes an RX Queue for a VPort. An Assert is generated if
  * the VPort ID is not currently initialized.
  *
  * @param p_hwfn
  * @param opaque_fid
- * @p_params			[stats_id is relative, packed in p_params]
+ * @p_params			Inputs; Relative for PF [SB being an exception]
  * @param bd_max_bytes		Maximum bytes that can be placed on a BD
  * @param bd_chain_phys_addr	Physical address of BDs for receive.
  * @param cqe_pbl_addr		Physical address of the CQE PBL Table.
  * @param cqe_pbl_size		Size of the CQE PBL Table
- * @param pp_prod		Pointer to place producer's
- *                              address for the Rx Q (May be
- *				NULL).
+ * @param p_ret_params		Pointer to struct to be filled with outputs.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u16 bd_max_bytes,
-			    dma_addr_t bd_chain_phys_addr,
-			    dma_addr_t cqe_pbl_addr,
-			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod);
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_rx_queue_stop -
- *
- * This ramrod closes an RX queue. It sends RX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_rx_queue_stop - This ramrod closes an Rx queue
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
+ * @param p_rxq			Handle of queue to close
  * @param eq_completion_only	If True completion will be on
  *				EQe, if False completion will be
  *				on EQe if p_hwfn opaque
@@ -213,13 +212,13 @@ ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only,
-			   bool cqe_completion);
+ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			void *p_rxq,
+			bool eq_completion_only,
+			bool cqe_completion);
 
 /**
- * @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
+ * @brief - TX Queue Start Ramrod
  *
  * This ramrod initializes a TX Queue for a VPort. An Assert is generated if
  * the VPort is not currently initialized.
@@ -230,34 +229,29 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
  * @param tc			traffic class to use with this L2 txq
  * @param pbl_addr		address of the pbl array
  * @param pbl_size		number of entries in pbl
- * @param pp_doorbell		Pointer to place doorbell pointer (May be NULL).
- *				This address should be used with the
- *				DIRECT_REG_WR macro.
+ * @param p_ret_params		Pointer to fill the return parameters in.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
-			    void OSAL_IOMEM * *pp_doorbell);
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr,
+			 u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_tx_queue_stop -
- *
- * This ramrod closes a TX queue. It sends TX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_tx_queue_stop - closes a Tx queue
  *
  * @param p_hwfn
- * @param tx_queue_id		TX Queue ID
+ * @param p_txq - handle of the Tx queue to be closed
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id);
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_txq);
 
 enum ecore_tpa_mode	{
 	ECORE_TPA_MODE_NONE,
@@ -389,19 +383,19 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
  * @note Final phase API.
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
- * @param num_rxqs              Allow to update multiple rx
- *				queues, from rx_queue_id to
- *				(rx_queue_id + num_rxqs)
+ * @param pp_rxq_handlers	An array of queue handles to be updated.
+ * @param num_rxqs              Number of queues to update.
  * @param complete_cqe_flg	Post completion to the CQE Ring if set
  * @param complete_event_flg	Post completion to the Event Ring if set
+ * @param comp_mode
+ * @param p_comp_data
  *
  * @return enum _ecore_status_t
  */
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handlers,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 73c4015..7378420 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -238,7 +238,7 @@ static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].rxq_active)
+		if (p_vf->vf_queues[i].p_rx_cid)
 			return true;
 
 	return false;
@@ -250,7 +250,7 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].txq_active)
+		if (p_vf->vf_queues[i].p_tx_cid)
 			return true;
 
 	return false;
@@ -953,17 +953,19 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
-enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id, u16 num_rx_queues)
+enum _ecore_status_t
+ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 struct ecore_iov_vf_init_params *p_params)
 {
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
+	u16 qid, num_irqs;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cids;
 	u8 i;
 
-	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false);
 	if (!vf) {
 		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
 		return ECORE_UNKNOWN_ERROR;
@@ -971,22 +973,52 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	if (vf->b_init) {
 		DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
-			  rel_vf_id);
+			  p_params->rel_vf_id);
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested queue_id */
+	for (i = 0; i < p_params->num_queues; i++) {
+		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
+		u16 max_vf_qzone = min_vf_qzone +
+				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;
+
+		qid = p_params->req_rx_queue[i];
+		if (qid < min_vf_qzone || qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0x%04x,...,0x%04x] available\n",
+				  qid, p_params->rel_vf_id,
+				  min_vf_qzone, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		qid = p_params->req_tx_queue[i];
+		if (qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Tx qid [%04x] for VF[%d]: max qid 0x%04x\n",
+				  qid, p_params->rel_vf_id, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		/* If client *really* wants, Tx qid can be shared with PF */
+		if (qid < min_vf_qzone)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] is using PF qid [0x%04x] for Txq[0x%02x]\n",
+				   p_params->rel_vf_id, qid, i);
+	}
+
 	/* Limit number of queues according to number of CIDs */
 	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d] - requesting to initialize for 0x%04x queues"
 		   " [0x%04x CIDs available]\n",
-		   vf->relative_vf_id, num_rx_queues, (u16)cids);
-	num_rx_queues = OSAL_MIN_T(u16, num_rx_queues, ((u16)cids));
+		   vf->relative_vf_id, p_params->num_queues, (u16)cids);
+	num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids));
 
 	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
 							       p_ptt,
 							       vf,
-							       num_rx_queues);
+							       num_irqs);
 	if (num_of_vf_available_chains == 0) {
 		DP_ERR(p_hwfn, "no available igu sbs\n");
 		return ECORE_NOMEM;
@@ -997,26 +1029,19 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		u16 queue_id = ecore_int_queue_id_from_sb_id(p_hwfn,
-							     vf->igu_sbs[i]);
+		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
 
-		if (queue_id > RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF[%d] will require utilizing of"
-				  " out-of-bounds queues - %04x\n",
-				  vf->relative_vf_id, queue_id);
-			/* TODO - cleanup the already allocate SBs */
-			return ECORE_INVAL;
-		}
+		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
+		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
 		/* CIDs are per-VF, so no problem having them 0-based. */
-		vf->vf_queues[i].fw_rx_qid = queue_id;
-		vf->vf_queues[i].fw_tx_qid = queue_id;
-		vf->vf_queues[i].fw_cid = i;
+		p_queue->fw_cid = i;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
-			   vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   vf->relative_vf_id, i, vf->igu_sbs[i],
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
+			   p_queue->fw_cid);
 	}
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
@@ -1390,8 +1415,19 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		p_vf->vf_queues[i].rxq_active = 0;
-		p_vf->vf_queues[i].txq_active = 0;
+		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+
+		if (p_queue->p_rx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_rx_cid);
+			p_queue->p_rx_cid = OSAL_NULL;
+		}
+
+		if (p_queue->p_tx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_tx_cid);
+			p_queue->p_tx_cid = OSAL_NULL;
+		}
 	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
@@ -1829,14 +1865,14 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			u16 qid;
+			struct ecore_queue_cid *p_cid;
 
-			if (!p_vf->vf_queues[i].rxq_active)
+			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			if (p_cid == OSAL_NULL)
 				continue;
 
-			qid = p_vf->vf_queues[i].fw_rx_qid;
-
-			rc = ecore_sp_eth_rx_queues_update(p_hwfn, qid,
+			rc = ecore_sp_eth_rx_queues_update(p_hwfn,
+							   (void **)&p_cid,
 						   1, 0, 1,
 						   ECORE_SPQ_MODE_EBLOCK,
 						   OSAL_NULL);
@@ -1844,7 +1880,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 				DP_NOTICE(p_hwfn, true,
 					  "Failed to send Rx update"
 					  " fo queue[0x%04x]\n",
-					  qid);
+					  p_cid->rel.queue_id);
 				return rc;
 			}
 		}
@@ -2038,6 +2074,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	bool b_legacy_vf = false;
 	enum _ecore_status_t rc;
@@ -2048,14 +2085,24 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->rx_qid];
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.queue_id = (u8)vf->vf_queues[req->rx_qid].fw_rx_qid;
-	params.vf_qid = req->rx_qid;
+	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
+	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->rx_qid,
+						    &params);
+	if (p_queue->p_rx_cid == OSAL_NULL)
+		goto out;
+
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
@@ -2067,27 +2114,27 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
+	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
-					   vf->vf_queues[req->rx_qid].fw_cid,
-					   &params,
-					   req->bd_max_bytes,
-					   req->rxq_addr,
-					   req->cqe_pbl_addr,
-					   req->cqe_pbl_size,
-					   b_legacy_vf);
 
-	if (rc) {
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
+					p_queue->p_rx_cid,
+					req->bd_max_bytes,
+					req->rxq_addr,
+					req->cqe_pbl_addr,
+					req->cqe_pbl_size);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
+		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
+		p_queue->p_rx_cid = OSAL_NULL;
 	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->rx_qid].rxq_active = true;
 		vf->num_active_rxqs++;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf,
-					status, b_legacy_vf);
+	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status,
+					b_legacy_vf);
 }
 
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
@@ -2138,8 +2185,10 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
+	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
@@ -2148,27 +2197,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->tx_qid];
+
+	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   vf->opaque_fid,
-					   vf->vf_queues[req->tx_qid].fw_cid,
-					   &params,
-					   req->pbl_addr,
-					   req->pbl_size,
-					   ecore_get_cm_pq_idx_vf(p_hwfn,
-							vf->relative_vf_id));
+	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->tx_qid,
+						    &params);
+	if (p_queue->p_tx_cid == OSAL_NULL)
+		goto out;
 
-	if (rc)
+	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
+				    vf->relative_vf_id);
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+					req->pbl_addr, req->pbl_size, pq);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-	else {
+		ecore_eth_queue_cid_release(p_hwfn,
+					    p_queue->p_tx_cid);
+		p_queue->p_tx_cid = OSAL_NULL;
+	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->tx_qid].txq_active = true;
 	}
 
 out:
@@ -2181,6 +2237,7 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
+	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int qid;
 
@@ -2188,16 +2245,18 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		if (vf->vf_queues[qid].rxq_active) {
-			rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_rx_qid, false,
-							cqe_completion);
+		p_queue = &vf->vf_queues[qid];
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].rxq_active = false;
+		if (!p_queue->p_rx_cid)
+			continue;
+
+		rc = ecore_eth_rx_queue_stop(p_hwfn,
+					     p_queue->p_rx_cid,
+					     false, cqe_completion);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2209,21 +2268,23 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_q_info *p_queue;
 	int qid;
 
 	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
-		if (vf->vf_queues[qid].txq_active) {
-			rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_tx_qid);
+		p_queue = &vf->vf_queues[qid];
+		if (!p_queue->p_tx_cid)
+			continue;
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].txq_active = false;
+		rc = ecore_eth_tx_queue_stop(p_hwfn,
+					     p_queue->p_tx_cid);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_queue->p_tx_cid = OSAL_NULL;
 	}
 	return rc;
 }
@@ -2279,10 +2340,11 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
 {
+	struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct vfpf_update_rxq_tlv *req;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
 	u16 qid;
@@ -2293,30 +2355,38 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
+	/* Validate inputs */
+	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
+	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
+		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
+		goto out;
+	}
+
 	for (i = 0; i < req->num_rxqs; i++) {
 		qid = req->rx_qid + i;
 
-		if (!vf->vf_queues[qid].rxq_active) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF rx_qid = %d isn`t active!\n", qid);
-			status = PFVF_STATUS_FAILURE;
-			break;
+		if (!vf->vf_queues[qid].p_rx_cid) {
+			DP_INFO(p_hwfn,
+				"VF[%d] rx_qid = %d isn`t active!\n",
+				vf->relative_vf_id, qid);
+			goto out;
 		}
 
-		rc = ecore_sp_eth_rx_queues_update(p_hwfn,
-						   vf->vf_queues[qid].fw_rx_qid,
-						   1,
-						   complete_cqe_flg,
-						   complete_event_flg,
-						   ECORE_SPQ_MODE_EBLOCK,
-						   OSAL_NULL);
-
-		if (rc) {
-			status = PFVF_STATUS_FAILURE;
-			break;
-		}
+		handlers[i] = vf->vf_queues[qid].p_rx_cid;
 	}
 
+	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
+					   req->num_rxqs,
+					   complete_cqe_flg,
+					   complete_event_flg,
+					   ECORE_SPQ_MODE_EBLOCK,
+					   OSAL_NULL);
+	if (rc)
+		goto out;
+
+	status = PFVF_STATUS_SUCCESS;
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
 			       length, status);
 }
@@ -2545,7 +2615,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				  "rss_ind_table[%d] = %d,"
 				  " rxq is out of range\n",
 				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].rxq_active)
+		else if (!vf->vf_queues[q_idx].p_rx_cid)
 			DP_NOTICE(p_hwfn, true,
 				  "rss_ind_table[%d] = %d, rxq is not active\n",
 				  i, q_idx);
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e9ccc79..d32f931 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -64,10 +64,10 @@ struct ecore_iov_vf_mbx {
 
 struct ecore_vf_q_info {
 	u16 fw_rx_qid;
+	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
+	struct ecore_queue_cid *p_tx_cid;
 	u8 fw_cid;
-	u8 rxq_active;
-	u8 txq_active;
 };
 
 enum vf_state {
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 05ceefd..60ecd16 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,19 +451,19 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
-enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_qid,
-					   u16 sb,
-					   u8 sb_index,
-					   u16 bd_max_bytes,
-					   dma_addr_t bd_chain_phys_addr,
-					   dma_addr_t cqe_pbl_addr,
-					   u16 cqe_pbl_size,
-					   void OSAL_IOMEM **pp_prod)
+enum _ecore_status_t
+ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      u16 bd_max_bytes,
+		      dma_addr_t bd_chain_phys_addr,
+		      dma_addr_t cqe_pbl_addr,
+		      u16 cqe_pbl_size,
+		      void OSAL_IOMEM **pp_prod)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_rxq_tlv *req;
+	u16 rx_qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -473,19 +473,20 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	req->cqe_pbl_addr = cqe_pbl_addr;
 	req->cqe_pbl_size = cqe_pbl_size;
 	req->rxq_addr = bd_chain_phys_addr;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 	req->bd_max_bytes = bd_max_bytes;
 	req->stat_id = -1; /* Keep initialized, for future compatibility */
 
 	/* If PF is legacy, we'll need to calculate producers ourselves
 	 * as well as clean them.
 	 */
-	if (pp_prod && p_iov->b_pre_fp_hsi) {
+	if (p_iov->b_pre_fp_hsi) {
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)
+			   p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
@@ -510,7 +511,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Learn the address of the producer from the response */
-	if (pp_prod && !p_iov->b_pre_fp_hsi) {
+	if (!p_iov->b_pre_fp_hsi) {
 		u32 init_prod_val = 0;
 
 		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview + resp->offset;
@@ -534,7 +535,8 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
-					  u16 rx_qid, bool cqe_completion)
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_rxqs_tlv *req;
@@ -544,7 +546,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
 
-	req->rx_qid = rx_qid;
+	req->rx_qid = p_cid->rel.queue_id;
 	req->num_rxqs = 1;
 	req->cqe_completion = cqe_completion;
 
@@ -569,29 +571,28 @@ exit:
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell)
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_txq_tlv *req;
+	u16 qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
 
-	req->tx_qid = tx_queue_id;
+	req->tx_qid = qid;
 
 	/* Tx */
 	req->pbl_addr = pbl_addr;
 	req->pbl_size = pbl_size;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -608,32 +609,30 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
-	if (pp_doorbell) {
-		/* Modern PFs provide the actual offsets, while legacy
-		 * provided only the queue id.
-		 */
-		if (!p_iov->b_pre_fp_hsi) {
-			*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						       resp->offset;
-		} else {
-			u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
-
+	/* Modern PFs provide the actual offsets, while legacy
+	 * provided only the queue id.
+	 */
+	if (!p_iov->b_pre_fp_hsi) {
 		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-				DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
-		}
+						resp->offset;
+	} else {
+		u8 cid = p_iov->acquire_resp.resc.cid[qid];
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
-			   tx_queue_id, *pp_doorbell, resp->offset);
+		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
+						DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 	}
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
+		   qid, *pp_doorbell, resp->offset);
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_txqs_tlv *req;
@@ -643,7 +642,7 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
 
-	req->tx_qid = tx_qid;
+	req->tx_qid = p_cid->rel.queue_id;
 	req->num_txqs = 1;
 
 	/* add list termination tlv */
@@ -668,20 +667,36 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
-					     u16 rx_queue_id,
+					     struct ecore_queue_cid **pp_cid,
 					     u8 num_rxqs,
-					     u8 comp_cqe_flg, u8 comp_event_flg)
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
 	struct vfpf_update_rxq_tlv *req;
 	enum _ecore_status_t rc;
 
+	/* TODO - API is limited to assuming contiguous regions of queues,
+	 * but VF queues might not fulfill this requirement.
+	 * Need to consider whether we need new TLVs for this, or whether
+	 * simply doing it iteratively is good enough.
+	 */
+	if (!num_rxqs)
+		return ECORE_INVAL;
+
+again:
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
 
-	req->rx_qid = rx_queue_id;
-	req->num_rxqs = num_rxqs;
+	/* Find the length of the current contiguous range of queues beginning
+	 * at the first queue's index.
+	 */
+	req->rx_qid = (*pp_cid)->rel.queue_id;
+	for (req->num_rxqs = 1; req->num_rxqs < num_rxqs; req->num_rxqs++)
+		if (pp_cid[req->num_rxqs]->rel.queue_id !=
+		    req->rx_qid + req->num_rxqs)
+			break;
 
 	if (comp_cqe_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
@@ -702,9 +717,17 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
+	/* Make sure we're done with all the queues */
+	if (req->num_rxqs < num_rxqs) {
+		num_rxqs -= req->num_rxqs;
+		pp_cid += req->num_rxqs;
+		/* TODO - should we give a non-locked variant instead? */
+		ecore_vf_pf_req_end(p_hwfn, rc);
+		goto again;
+	}
+
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
-
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 6077d60..1afd667 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -53,10 +53,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param cid			- zero based within the VF
- * @param rx_queue_id		- zero based within the VF
- * @param sb			- VF status block for this queue
- * @param sb_index		- Index within the status block
+ * @param p_cid			- Only relative fields are relevant
  * @param bd_max_bytes		- maximum number of bytes per bd
  * @param bd_chain_phys_addr	- physical address of bd chain
  * @param cqe_pbl_addr		- physical address of pbl
@@ -67,9 +64,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
+					   struct ecore_queue_cid *p_cid,
 					   u16 bd_max_bytes,
 					   dma_addr_t bd_chain_phys_addr,
 					   dma_addr_t cqe_pbl_addr,
@@ -81,46 +76,44 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
  *        PF.
  *
  * @param p_hwfn
- * @param tx_queue_id		- zero based within the VF
- * @param sb			- status block for this queue
- * @param sb_index		- index within the status block
+ * @param p_cid
  * @param bd_chain_phys_addr	- physical address of tx chain
  * @param pp_doorbell		- pointer to address to which to
 *				write the doorbell to.
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell);
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell);
 
 /**
  * @brief VF - stop the RX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param rx_qid
+ * @param p_cid
  * @param cqe_completion
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			rx_qid,
-					  bool			cqe_completion);
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion);
 
 /**
  * @brief VF - stop the TX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param tx_qid
+ * @param p_cid
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			tx_qid);
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid);
+
+/* TODO - fix all the !SRIOV prototypes */
 
 #ifndef LINUX_REMOVE
 /**
@@ -128,20 +121,18 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
  *        PF
  *
  * @param p_hwfn
- * @param rx_queue_id
+ * @param pp_cid - list of queue-cids which we want to update
  * @param num_rxqs
- * @param init_sge_ring
  * @param comp_cqe_flg
  * @param comp_event_flg
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxqs_update(
-			struct ecore_hwfn	*p_hwfn,
-			u16			rx_queue_id,
-			u8			num_rxqs,
-			u8			comp_cqe_flg,
-			u8			comp_event_flg);
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     struct ecore_queue_cid **pp_cid,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg);
 #endif
 
 /**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index d0f6e87..8e4290c 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -148,7 +148,8 @@ qed_start_rxq(struct ecore_dev *edev,
 	      uint16_t bd_max_bytes,
 	      dma_addr_t bd_chain_phys_addr,
 	      dma_addr_t cqe_pbl_addr,
-	      uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod)
+	      uint16_t cqe_pbl_size,
+	      struct ecore_rxq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -159,12 +160,14 @@ qed_start_rxq(struct ecore_dev *edev,
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_rx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 bd_max_bytes,
-					 bd_chain_phys_addr,
-					 cqe_pbl_addr, cqe_pbl_size, pp_prod);
+	rc = ecore_eth_rx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params,
+				      bd_max_bytes,
+				      bd_chain_phys_addr,
+				      cqe_pbl_addr,
+				      cqe_pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start RXQ#%d\n", p_params->queue_id);
@@ -180,19 +183,17 @@ qed_start_rxq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
+qed_stop_rxq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	int rc, hwfn_index;
 	struct ecore_hwfn *p_hwfn;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-					params->rx_queue_id / edev->num_hwfns,
-					params->eq_completion_only, false);
+	rc = ecore_eth_rx_queue_stop(p_hwfn, handle, true, false);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop RXQ#%d\n", params->rx_queue_id);
+		DP_ERR(edev, "Failed to stop RXQ#%02x\n", rss_id);
 		return rc;
 	}
 
@@ -204,7 +205,8 @@ qed_start_txq(struct ecore_dev *edev,
 	      uint8_t rss_num,
 	      struct ecore_queue_start_common_params *p_params,
 	      dma_addr_t pbl_addr,
-	      uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell)
+	      uint16_t pbl_size,
+	      struct ecore_txq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -213,14 +215,13 @@ qed_start_txq(struct ecore_dev *edev,
 	p_hwfn = &edev->hwfns[hwfn_index];
 
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
-	p_params->qzone_id = p_params->queue_id;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_tx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 0 /* tc */,
-					 pbl_addr, pbl_size, pp_doorbell);
+	rc = ecore_eth_tx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params, 0 /* tc */,
+				      pbl_addr, pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start TXQ#%d\n", p_params->queue_id);
@@ -236,18 +237,17 @@ qed_start_txq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_txq(struct ecore_dev *edev, struct qed_stop_txq_params *params)
+qed_stop_txq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-					params->tx_queue_id / edev->num_hwfns);
+	rc = ecore_eth_tx_queue_stop(p_hwfn, handle);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop TXQ#%d\n", params->tx_queue_id);
+		DP_ERR(edev, "Failed to stop TXQ#%02x\n", rss_id);
 		return rc;
 	}
 
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 37b1b74..12dd828 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -47,13 +47,6 @@ struct qed_dev_eth_info {
 	bool is_legacy;
 };
 
-struct qed_stop_rxq_params {
-	uint8_t rss_id;
-	uint8_t rx_queue_id;
-	uint8_t vport_id;
-	bool eq_completion_only;
-};
-
 struct qed_update_vport_params {
 	uint8_t vport_id;
 	uint8_t update_vport_active_flg;
@@ -78,11 +71,6 @@ struct qed_start_vport_params {
 	bool clear_stats;
 };
 
-struct qed_stop_txq_params {
-	uint8_t rss_id;
-	uint8_t tx_queue_id;
-};
-
 struct qed_eth_ops {
 	const struct qed_common_ops *common;
 
@@ -103,19 +91,21 @@ struct qed_eth_ops {
 			  uint16_t bd_max_bytes,
 			  dma_addr_t bd_chain_phys_addr,
 			  dma_addr_t cqe_pbl_addr,
-			  uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod);
+			  uint16_t cqe_pbl_size,
+			  struct ecore_rxq_start_ret_params *ret_params);
 
 	int (*q_rx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_rxq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*q_tx_start)(struct ecore_dev *edev,
 			  uint8_t rss_num,
 			  struct ecore_queue_start_common_params *p_params,
 			  dma_addr_t pbl_addr,
-			  uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell);
+			  uint16_t pbl_size,
+			  struct ecore_txq_start_ret_params *ret_params);
 
 	int (*q_tx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_txq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*eth_cqe_completion)(struct ecore_dev *edev,
 				  uint8_t rss_id,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01ea9b4..85134fb 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -527,11 +527,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	for_each_queue(i) {
 		fp = &qdev->fp_array[i];
 		if (fp->type & QEDE_FASTPATH_RX) {
+			struct ecore_rxq_start_ret_params ret_params;
+
 			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
 								rx_comp_ring);
 			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
 								rx_comp_ring);
 
+			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
 			q_params.queue_id = i;
 			q_params.vport_id = 0;
@@ -545,13 +548,17 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 					   fp->rxq->rx_bd_ring.p_phys_addr,
 					   p_phys_table,
 					   page_cnt,
-					   &fp->rxq->hw_rxq_prod_addr);
+					   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start rxq #%d failed %d\n",
 				       fp->rxq->queue_id, rc);
 				return rc;
 			}
 
+			/* Use the return parameters */
+			fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
+			fp->rxq->handle = ret_params.p_handle;
+
 			fp->rxq->hw_cons_ptr =
 					&fp->sb_info->sb_virt->pi_array[RX_PI];
 
@@ -561,6 +568,8 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (!(fp->type & QEDE_FASTPATH_TX))
 			continue;
 		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct ecore_txq_start_ret_params ret_params;
+
 			txq = fp->txqs[tc];
 			txq_index = tc * QEDE_RSS_COUNT(qdev) + i;
 
@@ -568,6 +577,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
 
 			memset(&q_params, 0, sizeof(q_params));
+			memset(&ret_params, 0, sizeof(ret_params));
 			q_params.queue_id = txq->queue_id;
 			q_params.vport_id = 0;
 			q_params.sb = fp->sb_info->igu_sb_id;
@@ -576,13 +586,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			rc = qdev->ops->q_tx_start(edev, i, &q_params,
 						   p_phys_table,
 						   page_cnt,
-						   &txq->doorbell_addr);
+						   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start txq %u failed %d\n",
 				       txq_index, rc);
 				return rc;
 			}
 
+			txq->doorbell_addr = ret_params.p_doorbell;
+			txq->handle = ret_params.p_handle;
+
 			txq->hw_cons_ptr =
 			    &fp->sb_info->sb_virt->pi_array[TX_PI(tc)];
 			SET_FIELD(txq->tx_db.data.params,
@@ -1399,6 +1412,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp;
 	int rc, tc, i;
 
 	/* Disable the vport */
@@ -1420,7 +1434,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Flush Tx queues. If needed, request drain from MCP */
 	for_each_queue(i) {
-		struct qede_fastpath *fp = &qdev->fp_array[i];
+		fp = &qdev->fp_array[i];
 
 		if (fp->type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
@@ -1435,23 +1449,17 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Stop all Queues in reverse order */
 	for (i = QEDE_QUEUE_CNT(qdev) - 1; i >= 0; i--) {
-		struct qed_stop_rxq_params rx_params;
+		fp = &qdev->fp_array[i];
 
 		/* Stop the Tx Queue(s) */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
-				struct qed_stop_txq_params tx_params;
-				u8 val;
-
-				tx_params.rss_id = i;
-				val = qdev->fp_array[i].txqs[tc]->queue_id;
-				tx_params.tx_queue_id = val;
-
+				struct qede_tx_queue *txq = fp->txqs[tc];
 				DP_INFO(edev, "Stopping tx queues\n");
-				rc = qdev->ops->q_tx_stop(edev, &tx_params);
+				rc = qdev->ops->q_tx_stop(edev, i, txq->handle);
 				if (rc) {
 					DP_ERR(edev, "Failed to stop TXQ #%d\n",
-					       tx_params.tx_queue_id);
+					       i);
 					return rc;
 				}
 			}
@@ -1459,14 +1467,8 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 		/* Stop the Rx Queue */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_RX) {
-			memset(&rx_params, 0, sizeof(rx_params));
-			rx_params.rss_id = i;
-			rx_params.rx_queue_id = qdev->fp_array[i].rxq->queue_id;
-			rx_params.eq_completion_only = 1;
-
 			DP_INFO(edev, "Stopping rx queues\n");
-
-			rc = qdev->ops->q_rx_stop(edev, &rx_params);
+			rc = qdev->ops->q_rx_stop(edev, i, fp->rxq->handle);
 			if (rc) {
 				DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
 				return rc;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 9a393e9..17a2f0c 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -156,6 +156,7 @@ struct qede_rx_queue {
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 /*
@@ -187,6 +188,7 @@ struct qede_tx_queue {
 	uint64_t xmit_pkts;
 	bool is_legacy;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 struct qede_fastpath {
-- 
1.7.10.3
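
The diff above completes the conversion of the Rx/Tx queue start/stop paths
from id-based to handle-based: the start routines now return an opaque queue
handle through a ret-params structure, and the stop routines consume that
handle instead of a queue id. A minimal, self-contained C sketch of that
calling pattern follows; all demo_* types and functions are invented
stand-ins for illustration and are not part of the ecore API.

/* Sketch of the handle-based start/stop pattern (stand-ins, not ecore). */
#include <stdio.h>
#include <stdlib.h>

struct demo_queue_cid {
	unsigned int queue_id;	/* stand-in for the opaque queue-cid */
};

/* Mirrors the shape of the new ret-params: producer address plus handle. */
struct demo_rxq_start_ret_params {
	void *p_prod;
	void *p_handle;
};

static int demo_rxq_start(unsigned int queue_id,
			  struct demo_rxq_start_ret_params *ret)
{
	struct demo_queue_cid *cid = malloc(sizeof(*cid));

	if (cid == NULL)
		return -1;
	cid->queue_id = queue_id;
	ret->p_prod = NULL;	/* in the driver, the HW producer address */
	ret->p_handle = cid;	/* caller stores it, never dereferences it */
	return 0;
}

static int demo_rxq_stop(void *handle)
{
	struct demo_queue_cid *cid = handle;

	printf("stopping rxq %u via its handle\n", cid->queue_id);
	free(cid);
	return 0;
}

int main(void)
{
	struct demo_rxq_start_ret_params ret = { NULL, NULL };

	if (demo_rxq_start(3, &ret) != 0)
		return 1;
	/* The PMD keeps ret.p_handle (cf. fp->rxq->handle above)... */
	return demo_rxq_stop(ret.p_handle);	/* ...and hands it to stop. */
}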


* [PATCH v2 28/61] net/qede/base: add support for handling TLV request from MFW
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (27 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 27/61] net/qede/base: make L2 queues handle based Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 29/61] net/qede/base: optimize cache-line access Rasesh Mody
                     ` (33 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for handling the TLV request from Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    3 +
 drivers/net/qede/base/ecore_mcp.c     |    6 -
 drivers/net/qede/base/ecore_mcp.h     |    8 +
 drivers/net/qede/base/ecore_mcp_api.h |   44 +-
 drivers/net/qede/base/ecore_mng_tlv.c | 1536 +++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_if.h            |   21 +
 6 files changed, 1591 insertions(+), 27 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
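
The new ecore_mng_tlv.c consumes a request buffer of dwords in which each TLV
leads with the one-dword header added to ecore_mcp.h above, with the length
counted in dwords and excluding the header itself. Below is a minimal,
self-contained sketch of that framing and the walk it implies; the buffer
contents and all demo_* names are invented for illustration, and the actual
buffer handling in ecore_mng_tlv.c differs in detail.

/* Sketch of walking a dword buffer of TLVs (stand-ins, not ecore). */
#include <stdint.h>
#include <stdio.h>

/* Same one-dword layout as ecore_drv_tlv_hdr in the patch. */
struct demo_tlv_hdr {
	uint8_t tlv_type;
	uint8_t tlv_length;	/* in dwords - not including this header */
	uint8_t tlv_reserved;
	uint8_t tlv_flags;
};

static void demo_walk_tlvs(uint32_t *buf, unsigned int size_dw)
{
	unsigned int offset = 0;

	while (offset < size_dw) {
		struct demo_tlv_hdr *hdr =
			(struct demo_tlv_hdr *)&buf[offset];

		if (hdr->tlv_length == 0)	/* guard against a stuck walk */
			break;
		printf("TLV type %u with %u value dword(s)\n",
		       (unsigned int)hdr->tlv_type,
		       (unsigned int)hdr->tlv_length);
		offset += 1 + hdr->tlv_length;	/* header dword + value dwords */
	}
}

int main(void)
{
	uint32_t buf[5] = { 0 };
	struct demo_tlv_hdr *hdr;

	hdr = (struct demo_tlv_hdr *)&buf[0];	/* TLV #1: one value dword */
	hdr->tlv_type = 1;
	hdr->tlv_length = 1;
	buf[1] = 0xdeadbeef;

	hdr = (struct demo_tlv_hdr *)&buf[2];	/* TLV #2: two value dwords */
	hdr->tlv_type = 2;
	hdr->tlv_length = 2;

	demo_walk_tlvs(buf, 5);
	return 0;
}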

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 63ee6d5..82e3ebd 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -419,5 +419,8 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 	qede_get_mcp_proto_stats(dev, type, stats)
 
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
+#define OSAL_MFW_TLV_REQ(p_hwfn) (0)
+#define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
+
 
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 79a907b..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,9 +2502,3 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
-
-enum _ecore_status_t
-ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	return ECORE_SUCCESS;
-}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index d77b5df..0708923 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -70,6 +70,14 @@ struct ecore_mcp_mb_params {
 	u32 mcp_param;
 };
 
+struct ecore_drv_tlv_hdr {
+	u8 tlv_type;	/* According to the enum below */
+	u8 tlv_length;	/* In dwords - not including this header */
+	u8 tlv_reserved;
+#define ECORE_DRV_TLV_FLAGS_CHANGED 0x01
+	u8 tlv_flags;
+};
+
 /**
  * @brief Initialize the interface with the MCP
  *
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 8cad43d..190c135 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -233,9 +233,11 @@ struct ecore_mba_vers {
 };
 
 enum ecore_mfw_tlv_type {
-	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
-	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
-	ECORE_MFW_TLV_ISCSI = 0x4,	/* SCSI protocol TLVs */
+	ECORE_MFW_TLV_GENERIC = 0x1, /* Core driver TLVs */
+	ECORE_MFW_TLV_ETH = 0x2, /* L2 driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x4, /* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x8, /* iSCSI protocol TLVs */
+	ECORE_MFW_TLV_MAX = 0x16,
 };
 
 struct ecore_mfw_tlv_generic {
@@ -247,6 +249,21 @@ struct ecore_mfw_tlv_generic {
 	bool additional_mac1_set;
 	u64 additional_mac2;
 	bool additional_mac2_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+};
+
+struct ecore_mfw_tlv_eth {
 	u16 lso_maxoff_size;
 	bool lso_maxoff_size_set;
 	u16 lso_minseg_size;
@@ -259,12 +276,6 @@ struct ecore_mfw_tlv_generic {
 	bool rx_descr_size_set;
 	u16 netq_count;
 	bool netq_count_set;
-	u16 flex_vlan;
-	bool flex_vlan_set;
-	u8 drv_state;
-	bool drv_state_set;
-	u8 pxe_progress;
-	bool pxe_progress_set;
 	u32 tcp4_offloads;
 	bool tcp4_offloads_set;
 	u32 tcp6_offloads;
@@ -273,14 +284,6 @@ struct ecore_mfw_tlv_generic {
 	bool tx_descr_qdepth_set;
 	u16 rx_descr_qdepth;
 	bool rx_descr_qdepth_set;
-	u64 rx_frames;
-	bool rx_frames_set;
-	u64 rx_bytes;
-	bool rx_bytes_set;
-	u64 tx_frames;
-	bool tx_frames_set;
-	u64 tx_bytes;
-	bool tx_bytes_set;
 	u8 iov_offload;
 	bool iov_offload_set;
 	u8 txqs_empty;
@@ -446,8 +449,8 @@ struct ecore_mfw_tlv_fcoe {
 	bool ols_set;
 	u8 lr;
 	bool lr_set;
-	u8 llr;
-	bool llrt;
+	u8 lrr;
+	bool lrr_set;
 	u8 tx_lip;
 	bool tx_lip_set;
 	u8 rx_lip;
@@ -511,12 +514,11 @@ struct ecore_mfw_tlv_iscsi {
 	bool tx_frames_set;
 	u64 tx_bytes;
 	bool tx_bytes_set;
-	u32 cpcp_spcp_map;
-	bool cpcp_spcp_map_set;
 };
 
 union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_eth eth;
 	struct ecore_mfw_tlv_fcoe fcoe;
 	struct ecore_mfw_tlv_iscsi iscsi;
 };
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
new file mode 100644
index 0000000..0065d12
--- /dev/null
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -0,0 +1,1536 @@
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_status.h"
+#include "ecore_mcp.h"
+#include "ecore_hw.h"
+#include "reg_addr.h"
+
+#define TLV_TYPE(p)	(p[0])
+#define TLV_LENGTH(p)	(p[1])
+#define TLV_FLAGS(p)	(p[3])
+
+static enum _ecore_status_t
+ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
+{
+	switch (tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+	case DRV_TLV_OS_DRIVER_STATES:
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+	case DRV_TLV_RX_BYTES_RECEIVED:
+	case DRV_TLV_TX_FRAMES_SENT:
+	case DRV_TLV_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_GENERIC;
+		break;
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+	case DRV_TLV_PROMISCUOUS_MODE:
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_IOV_OFFLOAD:
+	case DRV_TLV_TX_QUEUES_EMPTY:
+	case DRV_TLV_RX_QUEUES_EMPTY:
+	case DRV_TLV_TX_QUEUES_FULL:
+	case DRV_TLV_RX_QUEUES_FULL:
+		*tlv_group |= ECORE_MFW_TLV_ETH;
+		break;
+	case DRV_TLV_SCSI_TO:
+	case DRV_TLV_R_T_TOV:
+	case DRV_TLV_R_A_TOV:
+	case DRV_TLV_E_D_TOV:
+	case DRV_TLV_CR_TOV:
+	case DRV_TLV_BOOT_TYPE:
+	case DRV_TLV_NPIV_STATE:
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+	case DRV_TLV_SWITCH_NAME:
+	case DRV_TLV_SWITCH_PORT_NUM:
+	case DRV_TLV_SWITCH_PORT_ID:
+	case DRV_TLV_VENDOR_NAME:
+	case DRV_TLV_SWITCH_MODEL:
+	case DRV_TLV_SWITCH_FW_VER:
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+	case DRV_TLV_PORT_ALIAS:
+	case DRV_TLV_PORT_STATE:
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_LINK_FAILURE_COUNT:
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+	case DRV_TLV_CRC_ERROR_COUNT:
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_RJT:
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+	case DRV_TLV_FDISCS_SENT_COUNT:
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_SENT_COUNT:
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+	case DRV_TLV_LOGOS_ISSUED:
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+	case DRV_TLV_LOGOS_RECEIVED:
+	case DRV_TLV_ACCS_ISSUED:
+	case DRV_TLV_PRLIS_ISSUED:
+	case DRV_TLV_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_SENT_COUNT:
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+	case DRV_TLV_RSCNS_RECEIVED:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+	case DRV_TLV_LUN_RESETS_ISSUED:
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+	case DRV_TLV_TPRLOS_SENT:
+	case DRV_TLV_NOS_SENT_COUNT:
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+	case DRV_TLV_OLS_COUNT:
+	case DRV_TLV_LR_COUNT:
+	case DRV_TLV_LRR_COUNT:
+	case DRV_TLV_LIP_SENT_COUNT:
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+	case DRV_TLV_EOFA_COUNT:
+	case DRV_TLV_EOFNI_COUNT:
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		*tlv_group |= ECORE_MFW_TLV_FCOE;
+		break;
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_AUTHENTICATION_METHOD:
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+	case DRV_TLV_MAX_FRAME_SIZE:
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_ISCSI;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static int
+ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_generic *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+		if (p_drv_buf->feat_flags_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->feat_flags;
+			return sizeof(p_drv_buf->feat_flags);
+		}
+		break;
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+		if (p_drv_buf->local_mac_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->local_mac;
+			return sizeof(p_drv_buf->local_mac);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+		if (p_drv_buf->additional_mac1_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1;
+			return sizeof(p_drv_buf->additional_mac1);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+		if (p_drv_buf->additional_mac2_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2;
+			return sizeof(p_drv_buf->additional_mac2);
+		}
+		break;
+	case DRV_TLV_OS_DRIVER_STATES:
+		if (p_drv_buf->drv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->drv_state;
+			return sizeof(p_drv_buf->drv_state);
+		}
+		break;
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+		if (p_drv_buf->pxe_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress;
+			return sizeof(p_drv_buf->pxe_progress);
+		}
+		break;
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_eth *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+		if (p_drv_buf->lso_maxoff_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size;
+			return sizeof(p_drv_buf->lso_maxoff_size);
+		}
+		break;
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+		if (p_drv_buf->lso_minseg_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size;
+			return sizeof(p_drv_buf->lso_minseg_size);
+		}
+		break;
+	case DRV_TLV_PROMISCUOUS_MODE:
+		if (p_drv_buf->prom_mode_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->prom_mode;
+			return sizeof(p_drv_buf->prom_mode);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size;
+			return sizeof(p_drv_buf->tx_descr_size);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size;
+			return sizeof(p_drv_buf->rx_descr_size);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+		if (p_drv_buf->netq_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->netq_count;
+			return sizeof(p_drv_buf->netq_count);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+		if (p_drv_buf->tcp4_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads;
+			return sizeof(p_drv_buf->tcp4_offloads);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+		if (p_drv_buf->tcp6_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads;
+			return sizeof(p_drv_buf->tcp6_offloads);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth;
+			return sizeof(p_drv_buf->tx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth;
+			return sizeof(p_drv_buf->rx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_IOV_OFFLOAD:
+		if (p_drv_buf->iov_offload_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->iov_offload;
+			return sizeof(p_drv_buf->iov_offload);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_EMPTY:
+		if (p_drv_buf->txqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty;
+			return sizeof(p_drv_buf->txqs_empty);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_EMPTY:
+		if (p_drv_buf->rxqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty;
+			return sizeof(p_drv_buf->rxqs_empty);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_FULL:
+		if (p_drv_buf->num_txqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full;
+			return sizeof(p_drv_buf->num_txqs_full);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_FULL:
+		if (p_drv_buf->num_rxqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full;
+			return sizeof(p_drv_buf->num_rxqs_full);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			     struct ecore_mfw_tlv_fcoe *p_drv_buf,
+			     u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_SCSI_TO:
+		if (p_drv_buf->scsi_timeout_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout;
+			return sizeof(p_drv_buf->scsi_timeout);
+		}
+		break;
+	case DRV_TLV_R_T_TOV:
+		if (p_drv_buf->rt_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rt_tov;
+			return sizeof(p_drv_buf->rt_tov);
+		}
+		break;
+	case DRV_TLV_R_A_TOV:
+		if (p_drv_buf->ra_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ra_tov;
+			return sizeof(p_drv_buf->ra_tov);
+		}
+		break;
+	case DRV_TLV_E_D_TOV:
+		if (p_drv_buf->ed_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ed_tov;
+			return sizeof(p_drv_buf->ed_tov);
+		}
+		break;
+	case DRV_TLV_CR_TOV:
+		if (p_drv_buf->cr_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->cr_tov;
+			return sizeof(p_drv_buf->cr_tov);
+		}
+		break;
+	case DRV_TLV_BOOT_TYPE:
+		if (p_drv_buf->boot_type_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_type;
+			return sizeof(p_drv_buf->boot_type);
+		}
+		break;
+	case DRV_TLV_NPIV_STATE:
+		if (p_drv_buf->npiv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->npiv_state;
+			return sizeof(p_drv_buf->npiv_state);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+		if (p_drv_buf->num_npiv_ids_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids;
+			return sizeof(p_drv_buf->num_npiv_ids);
+		}
+		break;
+	case DRV_TLV_SWITCH_NAME:
+		if (p_drv_buf->switch_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_name;
+			return sizeof(p_drv_buf->switch_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_NUM:
+		if (p_drv_buf->switch_portnum_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum;
+			return sizeof(p_drv_buf->switch_portnum);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_ID:
+		if (p_drv_buf->switch_portid_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portid;
+			return sizeof(p_drv_buf->switch_portid);
+		}
+		break;
+	case DRV_TLV_VENDOR_NAME:
+		if (p_drv_buf->vendor_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->vendor_name;
+			return sizeof(p_drv_buf->vendor_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_MODEL:
+		if (p_drv_buf->switch_model_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_model;
+			return sizeof(p_drv_buf->switch_model);
+		}
+		break;
+	case DRV_TLV_SWITCH_FW_VER:
+		if (p_drv_buf->switch_fw_version_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version;
+			return sizeof(p_drv_buf->switch_fw_version);
+		}
+		break;
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+		if (p_drv_buf->qos_pri_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->qos_pri;
+			return sizeof(p_drv_buf->qos_pri);
+		}
+		break;
+	case DRV_TLV_PORT_ALIAS:
+		if (p_drv_buf->port_alias_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_alias;
+			return sizeof(p_drv_buf->port_alias);
+		}
+		break;
+	case DRV_TLV_PORT_STATE:
+		if (p_drv_buf->port_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_state;
+			return sizeof(p_drv_buf->port_state);
+		}
+		break;
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size;
+			return sizeof(p_drv_buf->fip_tx_descr_size);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size;
+			return sizeof(p_drv_buf->fip_rx_descr_size);
+		}
+		break;
+	case DRV_TLV_LINK_FAILURE_COUNT:
+		if (p_drv_buf->link_failures_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->link_failures;
+			return sizeof(p_drv_buf->link_failures);
+		}
+		break;
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+		if (p_drv_buf->fcoe_boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress;
+			return sizeof(p_drv_buf->fcoe_boot_progress);
+		}
+		break;
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+		if (p_drv_buf->rx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast;
+			return sizeof(p_drv_buf->rx_bcast);
+		}
+		break;
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+		if (p_drv_buf->tx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast;
+			return sizeof(p_drv_buf->tx_bcast);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_txq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth;
+			return sizeof(p_drv_buf->fcoe_txq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_rxq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth;
+			return sizeof(p_drv_buf->fcoe_rxq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames;
+			return sizeof(p_drv_buf->fcoe_rx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes;
+			return sizeof(p_drv_buf->fcoe_rx_bytes);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+		if (p_drv_buf->fcoe_tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames;
+			return sizeof(p_drv_buf->fcoe_tx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+		if (p_drv_buf->fcoe_tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes;
+			return sizeof(p_drv_buf->fcoe_tx_bytes);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_COUNT:
+		if (p_drv_buf->crc_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_count;
+			return sizeof(p_drv_buf->crc_count);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0];
+			return sizeof(p_drv_buf->crc_err_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1];
+			return sizeof(p_drv_buf->crc_err_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2];
+			return sizeof(p_drv_buf->crc_err_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3];
+			return sizeof(p_drv_buf->crc_err_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4];
+			return sizeof(p_drv_buf->crc_err_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0];
+			return sizeof(p_drv_buf->crc_err_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1];
+			return sizeof(p_drv_buf->crc_err_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2];
+			return sizeof(p_drv_buf->crc_err_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3];
+			return sizeof(p_drv_buf->crc_err_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4];
+			return sizeof(p_drv_buf->crc_err_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+		if (p_drv_buf->losync_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losync_err;
+			return sizeof(p_drv_buf->losync_err);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+		if (p_drv_buf->losig_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losig_err;
+			return sizeof(p_drv_buf->losig_err);
+		}
+		break;
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+		if (p_drv_buf->primtive_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->primtive_err;
+			return sizeof(p_drv_buf->primtive_err);
+		}
+		break;
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+		if (p_drv_buf->disparity_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->disparity_err;
+			return sizeof(p_drv_buf->disparity_err);
+		}
+		break;
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+		if (p_drv_buf->code_violation_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err;
+			return sizeof(p_drv_buf->code_violation_err);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0];
+			return sizeof(p_drv_buf->flogi_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1];
+			return sizeof(p_drv_buf->flogi_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2];
+			return sizeof(p_drv_buf->flogi_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3];
+			return sizeof(p_drv_buf->flogi_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+		if (p_drv_buf->flogi_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp;
+			return sizeof(p_drv_buf->flogi_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_acc_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0];
+			return sizeof(p_drv_buf->flogi_acc_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_acc_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1];
+			return sizeof(p_drv_buf->flogi_acc_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_acc_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2];
+			return sizeof(p_drv_buf->flogi_acc_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_acc_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3];
+			return sizeof(p_drv_buf->flogi_acc_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+		if (p_drv_buf->flogi_acc_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp;
+			return sizeof(p_drv_buf->flogi_acc_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT:
+		if (p_drv_buf->flogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt;
+			return sizeof(p_drv_buf->flogi_rjt);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+		if (p_drv_buf->flogi_rjt_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp;
+			return sizeof(p_drv_buf->flogi_rjt_tstamp);
+		}
+		break;
+	case DRV_TLV_FDISCS_SENT_COUNT:
+		if (p_drv_buf->fdiscs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdiscs;
+			return sizeof(p_drv_buf->fdiscs);
+		}
+		break;
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+		if (p_drv_buf->fdisc_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc;
+			return sizeof(p_drv_buf->fdisc_acc);
+		}
+		break;
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+		if (p_drv_buf->fdisc_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt;
+			return sizeof(p_drv_buf->fdisc_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_SENT_COUNT:
+		if (p_drv_buf->plogi_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi;
+			return sizeof(p_drv_buf->plogi);
+		}
+		break;
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+		if (p_drv_buf->plogi_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc;
+			return sizeof(p_drv_buf->plogi_acc);
+		}
+		break;
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+		if (p_drv_buf->plogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt;
+			return sizeof(p_drv_buf->plogi_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0];
+			return sizeof(p_drv_buf->plogi_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1];
+			return sizeof(p_drv_buf->plogi_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2];
+			return sizeof(p_drv_buf->plogi_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3];
+			return sizeof(p_drv_buf->plogi_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4];
+			return sizeof(p_drv_buf->plogi_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0];
+			return sizeof(p_drv_buf->plogi_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1];
+			return sizeof(p_drv_buf->plogi_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2];
+			return sizeof(p_drv_buf->plogi_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3];
+			return sizeof(p_drv_buf->plogi_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4];
+			return sizeof(p_drv_buf->plogi_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_ISSUED:
+		if (p_drv_buf->tx_plogos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos;
+			return sizeof(p_drv_buf->tx_plogos);
+		}
+		break;
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+		if (p_drv_buf->plogo_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc;
+			return sizeof(p_drv_buf->plogo_acc);
+		}
+		break;
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+		if (p_drv_buf->plogo_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt;
+			return sizeof(p_drv_buf->plogo_rjt);
+		}
+		break;
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0];
+			return sizeof(p_drv_buf->plogo_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1];
+			return sizeof(p_drv_buf->plogo_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2];
+			return sizeof(p_drv_buf->plogo_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3];
+			return sizeof(p_drv_buf->plogo_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4];
+			return sizeof(p_drv_buf->plogo_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0];
+			return sizeof(p_drv_buf->plogo_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1];
+			return sizeof(p_drv_buf->plogo_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2];
+			return sizeof(p_drv_buf->plogo_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3];
+			return sizeof(p_drv_buf->plogo_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4];
+			return sizeof(p_drv_buf->plogo_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_RECEIVED:
+		if (p_drv_buf->rx_logos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_logos;
+			return sizeof(p_drv_buf->rx_logos);
+		}
+		break;
+	case DRV_TLV_ACCS_ISSUED:
+		if (p_drv_buf->tx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_accs;
+			return sizeof(p_drv_buf->tx_accs);
+		}
+		break;
+	case DRV_TLV_PRLIS_ISSUED:
+		if (p_drv_buf->tx_prlis_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis;
+			return sizeof(p_drv_buf->tx_prlis);
+		}
+		break;
+	case DRV_TLV_ACCS_RECEIVED:
+		if (p_drv_buf->rx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_accs;
+			return sizeof(p_drv_buf->rx_accs);
+		}
+		break;
+	case DRV_TLV_ABTS_SENT_COUNT:
+		if (p_drv_buf->tx_abts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_abts;
+			return sizeof(p_drv_buf->tx_abts);
+		}
+		break;
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+		if (p_drv_buf->rx_abts_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc;
+			return sizeof(p_drv_buf->rx_abts_acc);
+		}
+		break;
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+		if (p_drv_buf->rx_abts_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt;
+			return sizeof(p_drv_buf->rx_abts_rjt);
+		}
+		break;
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0];
+			return sizeof(p_drv_buf->abts_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1];
+			return sizeof(p_drv_buf->abts_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2];
+			return sizeof(p_drv_buf->abts_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3];
+			return sizeof(p_drv_buf->abts_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4];
+			return sizeof(p_drv_buf->abts_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0];
+			return sizeof(p_drv_buf->abts_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1];
+			return sizeof(p_drv_buf->abts_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2];
+			return sizeof(p_drv_buf->abts_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3];
+			return sizeof(p_drv_buf->abts_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4];
+			return sizeof(p_drv_buf->abts_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_RSCNS_RECEIVED:
+		if (p_drv_buf->rx_rscn_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn;
+			return sizeof(p_drv_buf->rx_rscn);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+		if (p_drv_buf->rx_rscn_nport_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0];
+			return sizeof(p_drv_buf->rx_rscn_nport[0]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+		if (p_drv_buf->rx_rscn_nport_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1];
+			return sizeof(p_drv_buf->rx_rscn_nport[1]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+		if (p_drv_buf->rx_rscn_nport_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2];
+			return sizeof(p_drv_buf->rx_rscn_nport[2]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+		if (p_drv_buf->rx_rscn_nport_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3];
+			return sizeof(p_drv_buf->rx_rscn_nport[3]);
+		}
+		break;
+	case DRV_TLV_LUN_RESETS_ISSUED:
+		if (p_drv_buf->tx_lun_rst_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst;
+			return sizeof(p_drv_buf->tx_lun_rst);
+		}
+		break;
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+		if (p_drv_buf->abort_task_sets_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets;
+			return sizeof(p_drv_buf->abort_task_sets);
+		}
+		break;
+	case DRV_TLV_TPRLOS_SENT:
+		if (p_drv_buf->tx_tprlos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos;
+			return sizeof(p_drv_buf->tx_tprlos);
+		}
+		break;
+	case DRV_TLV_NOS_SENT_COUNT:
+		if (p_drv_buf->tx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_nos;
+			return sizeof(p_drv_buf->tx_nos);
+		}
+		break;
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+		if (p_drv_buf->rx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_nos;
+			return sizeof(p_drv_buf->rx_nos);
+		}
+		break;
+	case DRV_TLV_OLS_COUNT:
+		if (p_drv_buf->ols_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ols;
+			return sizeof(p_drv_buf->ols);
+		}
+		break;
+	case DRV_TLV_LR_COUNT:
+		if (p_drv_buf->lr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lr;
+			return sizeof(p_drv_buf->lr);
+		}
+		break;
+	case DRV_TLV_LRR_COUNT:
+		if (p_drv_buf->lrr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lrr;
+			return sizeof(p_drv_buf->lrr);
+		}
+		break;
+	case DRV_TLV_LIP_SENT_COUNT:
+		if (p_drv_buf->tx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lip;
+			return sizeof(p_drv_buf->tx_lip);
+		}
+		break;
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+		if (p_drv_buf->rx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_lip;
+			return sizeof(p_drv_buf->rx_lip);
+		}
+		break;
+	case DRV_TLV_EOFA_COUNT:
+		if (p_drv_buf->eofa_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofa;
+			return sizeof(p_drv_buf->eofa);
+		}
+		break;
+	case DRV_TLV_EOFNI_COUNT:
+		if (p_drv_buf->eofni_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofni;
+			return sizeof(p_drv_buf->eofni);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+		if (p_drv_buf->scsi_chks_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks;
+			return sizeof(p_drv_buf->scsi_chks);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met;
+			return sizeof(p_drv_buf->scsi_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+		if (p_drv_buf->scsi_busy_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy;
+			return sizeof(p_drv_buf->scsi_busy);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+		if (p_drv_buf->scsi_inter_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter;
+			return sizeof(p_drv_buf->scsi_inter);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_inter_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met;
+			return sizeof(p_drv_buf->scsi_inter_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+		if (p_drv_buf->scsi_rsv_conflicts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
+			return sizeof(p_drv_buf->scsi_rsv_conflicts);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+		if (p_drv_buf->scsi_tsk_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full;
+			return sizeof(p_drv_buf->scsi_tsk_full);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+		if (p_drv_buf->scsi_aca_active_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active;
+			return sizeof(p_drv_buf->scsi_aca_active);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+		if (p_drv_buf->scsi_tsk_abort_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort;
+			return sizeof(p_drv_buf->scsi_tsk_abort);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0];
+			return sizeof(p_drv_buf->scsi_rx_chk[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1];
+			return sizeof(p_drv_buf->scsi_rx_chk[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2];
+			return sizeof(p_drv_buf->scsi_rx_chk[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3];
+			return sizeof(p_drv_buf->scsi_rx_chk[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4];
+			return sizeof(p_drv_buf->scsi_rx_chk[4]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[4]);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			      struct ecore_mfw_tlv_iscsi *p_drv_buf,
+			      u8 **p_tlv_buf)
+{
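+	/* For a recognized TLV whose value has been populated (its _set flag
+	 * is true), point *p_tlv_buf at the field and return the field's
+	 * size in bytes; otherwise fall through and return -1.
+	 */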
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+		if (p_drv_buf->target_llmnr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr;
+			return sizeof(p_drv_buf->target_llmnr);
+		}
+		break;
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->header_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->header_digest;
+			return sizeof(p_drv_buf->header_digest);
+		}
+		break;
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->data_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->data_digest;
+			return sizeof(p_drv_buf->data_digest);
+		}
+		break;
+	case DRV_TLV_AUTHENTICATION_METHOD:
+		if (p_drv_buf->auth_method_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->auth_method;
+			return sizeof(p_drv_buf->auth_method);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+		if (p_drv_buf->boot_taget_portal_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal;
+			return sizeof(p_drv_buf->boot_taget_portal);
+		}
+		break;
+	case DRV_TLV_MAX_FRAME_SIZE:
+		if (p_drv_buf->frame_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->frame_size;
+			return sizeof(p_drv_buf->frame_size);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size;
+			return sizeof(p_drv_buf->tx_desc_size);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size;
+			return sizeof(p_drv_buf->rx_desc_size);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+		if (p_drv_buf->boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_progress;
+			return sizeof(p_drv_buf->boot_progress);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth;
+			return sizeof(p_drv_buf->tx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth;
+			return sizeof(p_drv_buf->rx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static enum _ecore_status_t
+ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt, u8 *p_mfw_buf, u32 size)
+{
+	union ecore_mfw_tlv_data *p_tlv_data;
+	struct ecore_drv_tlv_hdr tlv;
+	u8 *p_tlv_ptr = OSAL_NULL, *p_temp;
+	u32 offset;
+	int len;
+
+	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	if (!p_tlv_data)
+		return ECORE_NOMEM;
+
+	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
+	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
+		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+		return ECORE_INVAL;
+	}
+
+	offset = 0;
+	OSAL_MEMSET(&tlv, 0, sizeof(tlv));
+	while (offset < size) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		tlv.tlv_flags = TLV_FLAGS(p_temp);
+		DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n",
+			tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags);
+
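+		/* Step past the header; the value that follows occupies
+		 * tlv_length dwords, i.e. 4 * tlv_length bytes.
+		 */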
+		offset += sizeof(tlv);
+		if (tlv_group == ECORE_MFW_TLV_GENERIC)
+			len = ecore_mfw_get_gen_tlv_value(&tlv,
+					&p_tlv_data->generic, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_ETH)
+			len = ecore_mfw_get_eth_tlv_value(&tlv,
+					&p_tlv_data->eth, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_FCOE)
+			len = ecore_mfw_get_fcoe_tlv_value(&tlv,
+					&p_tlv_data->fcoe, &p_tlv_ptr);
+		else
+			len = ecore_mfw_get_iscsi_tlv_value(&tlv,
+					&p_tlv_data->iscsi, &p_tlv_ptr);
+
+		if (len > 0) {
+			OSAL_WARN(len > 4 * tlv.tlv_length,
+				  "Incorrect MFW TLV length");
+			len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length);
+			tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
+			/* TODO: Endianness handling? */
+			OSAL_MEMCPY(p_temp, &tlv, sizeof(tlv));
+			OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len);
+		}
+
+		offset += sizeof(u32) * tlv.tlv_length;
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	u32 addr, size, offset, resp, param, val;
+	u8 tlv_group = 0, id, *p_mfw_buf = OSAL_NULL, *p_temp;
+	u32 global_offsize, global_addr;
+	enum _ecore_status_t rc;
+	struct ecore_drv_tlv_hdr tlv;
+
+	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	addr = global_addr + OFFSETOF(struct public_global, data_ptr);
+	addr = ecore_rd(p_hwfn, p_ptt, addr);
+	size = ecore_rd(p_hwfn, p_ptt, global_addr +
+			OFFSETOF(struct public_global, data_size));
+
+	if (!size) {
+		DP_NOTICE(p_hwfn, false, "Invalid TLV req size = %d\n", size);
+		goto drv_done;
+	}
+
+	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	if (!p_mfw_buf) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate memory for p_mfw_buf\n");
+		goto drv_done;
+	}
+
+	/* Read the TLV request to local buffer */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		val = ecore_rd(p_hwfn, p_ptt, addr + offset);
+		OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32));
+	}
+
+	/* Parse the headers to enumerate the requested TLV groups */
+	for (offset = 0; offset < size;
+	     offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group))
+			goto drv_done;
+	}
+
+	/* Update the TLV values in the local buffer */
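+	/* tlv_group is a bitmask of the requested groups; the group IDs are
+	 * powers of two, hence walking them with a left shift.
+	 */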
+	for (id = ECORE_MFW_TLV_GENERIC; id < ECORE_MFW_TLV_MAX; id <<= 1) {
+		if (tlv_group & id) {
+			if (ecore_mfw_update_tlvs(id, p_hwfn, p_ptt, p_mfw_buf,
+						  size))
+				goto drv_done;
+		}
+	}
+
+	/* Write the TLV data to shared memory */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32));
+		ecore_wr(p_hwfn, p_ptt, addr + offset, val);
+	}
+
+drv_done:
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_TLV_DONE, 0, &resp,
+			   &param);
+
+	OSAL_VFREE(p_hwfn->p_dev, p_mfw_buf);
+
+	return rc;
+}
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 0a1f7db..bfd96d6 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -96,8 +96,29 @@ struct qed_slowpath_params {
 
 #define ILT_PAGE_SIZE_TCFC 0x8000	/* 32KB */
 
+struct qed_eth_tlvs {
+	u16 feat_flags;
+	u8 mac[3][ETH_ALEN];
+	u16 lso_maxoff;
+	u16 lso_minseg;
+	bool prom_mode;
+	u16 num_txqs;
+	u16 num_rxqs;
+	u16 num_netqs;
+	u16 flex_vlan;
+	u32 tcp4_offloads;
+	u32 tcp6_offloads;
+	u16 tx_avg_qdepth;
+	u16 rx_avg_qdepth;
+	u8 txqs_empty;
+	u8 rxqs_empty;
+	u8 num_txqs_full;
+	u8 num_rxqs_full;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
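+	/* Called by the base driver when the MFW requests TLV stats; the
+	 * callee fills @data with its current queue/offload state.
+	 */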
+	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
 };
 
 struct qed_selftest_ops {
-- 
1.7.10.3


* [PATCH v2 29/61] net/qede/base: optimize cache-line access
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (28 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 28/61] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 30/61] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
                     ` (32 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Optimize cache-line access in ecore_chain -
re-arrange the fields so that those needed on the fastpath
[mostly produce/consume and their derivatives] sit in the first cache
line, and the rest in the second.

This holds for both PBL and NEXT_PTR kinds of chains.
Advancing a page in a SINGLE_PAGE chain would still touch the second
cache line as well, but as far as we know only the SPQ uses that mode,
so it isn't considered fastpath.
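
For illustration only (not part of the patch), a compile-time check in
this spirit can guard the intended split; the 64-byte line size and the
simplified struct below are assumptions, not the driver's real layout:

	#include <stddef.h>

	/* Toy model of the split: fastpath fields first, the rest after. */
	struct chain_layout_demo {
		void *p_prod_elem;	/* fastpath */
		void *p_cons_elem;	/* fastpath */
		char slowpath[128];	/* init/destroy-only fields */
	};

	/* Assuming 64-byte cache lines, the slowpath portion must begin
	 * no later than the second line.
	 */
	_Static_assert(offsetof(struct chain_layout_demo, slowpath) <= 64,
		       "fastpath fields must fit in the first cache line");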

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_chain.h       |  143 ++++++++++++++++-------------
 drivers/net/qede/base/ecore_dev.c         |   14 +--
 drivers/net/qede/base/ecore_sp_commands.c |    4 +-
 3 files changed, 89 insertions(+), 72 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 61e39b5..ba272a9 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -59,25 +59,6 @@ struct ecore_chain_ext_pbl {
 	void *p_pbl_virt;
 };
 
-struct ecore_chain_pbl {
-	/* Base address of a pre-allocated buffer for pbl */
-	dma_addr_t p_phys_table;
-	void *p_virt_table;
-
-	/* Table for keeping the virtual addresses of the chain pages,
-	 * respectively to the physical addresses in the pbl table.
-	 */
-	void **pp_virt_addr_tbl;
-
-	/* Index to current used page by producer/consumer */
-	union {
-		struct ecore_chain_pbl_u16 pbl16;
-		struct ecore_chain_pbl_u32 pbl32;
-	} u;
-
-	bool external;
-};
-
 struct ecore_chain_u16 {
 	/* Cyclic index of next element to produce/consume */
 	u16 prod_idx;
@@ -91,40 +72,75 @@ struct ecore_chain_u32 {
 };
 
 struct ecore_chain {
-	/* Address of first page of the chain */
-	void *p_virt_addr;
-	dma_addr_t p_phys_addr;
-
+	/* fastpath portion of the chain - required for commands such
+	 * as produce / consume.
+	 */
 	/* Point to next element to produce/consume */
 	void *p_prod_elem;
 	void *p_cons_elem;
 
-	enum ecore_chain_mode mode;
-	enum ecore_chain_use_mode intended_use;
+	/* Fastpath portions of the PBL [if it exists] */
+
+	struct {
+		/* Table for keeping the virtual addresses of the chain pages,
+		 * respectively to the physical addresses in the pbl table.
+		 */
+		void		**pp_virt_addr_tbl;
+
+		union {
+			struct ecore_chain_pbl_u16	u16;
+			struct ecore_chain_pbl_u32	u32;
+		} c;
+	} pbl;
 
-	enum ecore_chain_cnt_type cnt_type;
 	union {
 		struct ecore_chain_u16 chain16;
 		struct ecore_chain_u32 chain32;
 	} u;
 
-	u32 page_cnt;
+	/* Capacity counts only usable elements */
+	u32				capacity;
+	u32				page_cnt;
 
-	/* Number of elements - capacity is for usable elements only,
-	 * while size will contain total number of elements [for entire chain].
+	/* A u8 would suffice for mode, but it would save us a lot of headaches
+	 * on castings & defaults.
 	 */
-	u32 capacity;
-	u32 size;
+	enum ecore_chain_mode		mode;
 
 	/* Elements information for fast calculations */
 	u16 elem_per_page;
 	u16 elem_per_page_mask;
-	u16 elem_unusable;
-	u16 usable_per_page;
 	u16 elem_size;
 	u16 next_page_mask;
+	u16 usable_per_page;
+	u8 elem_unusable;
 
-	struct ecore_chain_pbl pbl;
+	u8				cnt_type;
+
+	/* Slowpath of the chain - required for initialization and destruction,
+	 * but isn't involved in regular functionality.
+	 */
+
+	/* Base address of a pre-allocated buffer for pbl */
+	struct {
+		dma_addr_t		p_phys_table;
+		void			*p_virt_table;
+	} pbl_sp;
+
+	/* Address of first page of the chain - the address is required
+	 * for fastpath operation [consume/produce] but only for the SINGLE
+	 * flavour which isn't considered fastpath [== SPQ].
+	 */
+	void				*p_virt_addr;
+	dma_addr_t			p_phys_addr;
+
+	/* Total number of elements [for entire chain] */
+	u32				size;
+
+	u8				intended_use;
+
+	/* TBD - do we really need this? Couldn't find usage for it */
+	bool				b_external_pbl;
 
 	void *dp_ctx;
 };
@@ -135,8 +151,8 @@ struct ecore_chain {
 
 #define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	  ((mode == ECORE_CHAIN_MODE_NEXT_PTR) ?		\
-	   (1 + ((sizeof(struct ecore_chain_next) - 1) /		\
-	   (elem_size))) : 0)
+	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /	\
+		     (elem_size))) : 0)
 
 #define USABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	((u32)(ELEMS_PER_PAGE(elem_size) -			\
@@ -245,7 +261,7 @@ u16 ecore_chain_get_usable_per_page(struct ecore_chain *p_chain)
 }
 
 static OSAL_INLINE
-u16 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
+u8 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
 {
 	return p_chain->elem_unusable;
 }
@@ -263,7 +279,7 @@ static OSAL_INLINE u32 ecore_chain_get_page_cnt(struct ecore_chain *p_chain)
 static OSAL_INLINE
 dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
 {
-	return p_chain->pbl.p_phys_table;
+	return p_chain->pbl_sp.p_phys_table;
 }
 
 /**
@@ -288,9 +304,9 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
 		p_next = (struct ecore_chain_next *)(*p_next_elem);
 		*p_next_elem = p_next->next_virt;
 		if (is_chain_u16(p_chain))
-			*(u16 *)idx_to_inc += p_chain->elem_unusable;
+			*(u16 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		else
-			*(u32 *)idx_to_inc += p_chain->elem_unusable;
+			*(u32 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		break;
 	case ECORE_CHAIN_MODE_SINGLE:
 		*p_next_elem = p_chain->p_virt_addr;
@@ -391,7 +407,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -400,7 +416,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -465,7 +481,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -474,7 +490,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -518,25 +534,26 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.u.pbl16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.u.pbl16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.u.pbl32.prod_page_idx = reset_val;
-			p_chain->pbl.u.pbl32.cons_page_idx = reset_val;
+			p_chain->pbl.c.u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.u32.cons_page_idx = reset_val;
 		}
 	}
 
 	switch (p_chain->intended_use) {
-	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
-	case ECORE_CHAIN_USE_TO_PRODUCE:
-			/* Do nothing */
-			break;
-
 	case ECORE_CHAIN_USE_TO_CONSUME:
-			/* produce empty elements */
-			for (i = 0; i < p_chain->capacity; i++)
+		/* produce empty elements */
+		for (i = 0; i < p_chain->capacity; i++)
 			ecore_chain_recycle_consumed(p_chain);
-			break;
+		break;
+
+	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
+	case ECORE_CHAIN_USE_TO_PRODUCE:
+	default:
+		/* Do nothing */
+		break;
 	}
 }
 
@@ -563,9 +580,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->p_virt_addr = OSAL_NULL;
 	p_chain->p_phys_addr = 0;
 	p_chain->elem_size = elem_size;
-	p_chain->intended_use = intended_use;
+	p_chain->intended_use = (u8)intended_use;
 	p_chain->mode = mode;
-	p_chain->cnt_type = cnt_type;
+	p_chain->cnt_type = (u8)cnt_type;
 
 	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
 	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
@@ -577,9 +594,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->page_cnt = page_cnt;
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;
-	p_chain->pbl.external = false;
-	p_chain->pbl.p_phys_table = 0;
-	p_chain->pbl.p_virt_table = OSAL_NULL;
+	p_chain->b_external_pbl = false;
+	p_chain->pbl_sp.p_phys_table = 0;
+	p_chain->pbl_sp.p_virt_table = OSAL_NULL;
 	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 
 	p_chain->dp_ctx = dp_ctx;
@@ -623,8 +640,8 @@ static OSAL_INLINE void ecore_chain_init_pbl_mem(struct ecore_chain *p_chain,
 						 dma_addr_t p_phys_pbl,
 						 void **pp_virt_addr_tbl)
 {
-	p_chain->pbl.p_phys_table = p_phys_pbl;
-	p_chain->pbl.p_virt_table = p_virt_pbl;
+	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
+	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
 	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c895656..1c08d4a 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3559,13 +3559,13 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				 struct ecore_chain *p_chain)
 {
 	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
-	u8 *p_pbl_virt = (u8 *)p_chain->pbl.p_virt_table;
+	u8 *p_pbl_virt = (u8 *)p_chain->pbl_sp.p_virt_table;
 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
 
 	if (!pp_virt_addr_tbl)
 		return;
 
-	if (!p_chain->pbl.p_virt_table)
+	if (!p_pbl_virt)
 		goto out;
 
 	for (i = 0; i < page_cnt; i++) {
@@ -3581,10 +3581,10 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 
 	pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
 
-	if (!p_chain->pbl.external)
-		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
-				       p_chain->pbl.p_phys_table, pbl_size);
-out:
+	if (!p_chain->b_external_pbl)
+		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl_sp.p_virt_table,
+				       p_chain->pbl_sp.p_phys_table, pbl_size);
+out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3716,7 +3716,7 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	} else {
 		p_pbl_virt = ext_pbl->p_pbl_virt;
 		p_pbl_phys = ext_pbl->p_pbl_phys;
-		p_chain->pbl.external = true;
+		p_chain->b_external_pbl = true;
 	}
 
 	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 23ebab7..b831970 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -379,11 +379,11 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	/* Place EQ address in RAMROD */
 	DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
-		       p_hwfn->p_eq->chain.pbl.p_phys_table);
+		       p_hwfn->p_eq->chain.pbl_sp.p_phys_table);
 	page_cnt = (u8)ecore_chain_get_page_cnt(&p_hwfn->p_eq->chain);
 	p_ramrod->event_ring_num_pages = page_cnt;
 	DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
-		       p_hwfn->p_consq->chain.pbl.p_phys_table);
+		       p_hwfn->p_consq->chain.pbl_sp.p_phys_table);
 
 	ecore_tunn_set_pf_start_params(p_hwfn, p_tunn,
 				       &p_ramrod->tunnel_config);
-- 
1.7.10.3


* [PATCH v2 30/61] net/qede/base: infrastructure changes for VF tunnelling
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (29 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 29/61] net/qede/base: optimize cache-line access Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 31/61] net/qede/base: revise tunnel APIs/structs Rasesh Mody
                     ` (31 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Infrastructure changes required for VF tunnelling support: cache the
tunnel configuration in a new per-device ecore_tunnel_info struct, add
an OSAL_BIT() helper, and report per-tunnel (VXLAN/GRE/GENEVE)
enablement to the PMD through qed_dev_info.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore.h             |   14 ++++-
 drivers/net/qede/base/ecore_sp_commands.c |   87 +++++++++++++++++++----------
 drivers/net/qede/qede_if.h                |    5 ++
 drivers/net/qede/qede_main.c              |   18 ++++++
 5 files changed, 93 insertions(+), 34 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 82e3ebd..513d542 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -292,7 +292,8 @@ typedef struct osal_list_t {
 #define OSAL_WMB(dev)			rte_wmb()
 #define OSAL_DMA_SYNC(dev, addr, length, is_post) nothing
 
-#define OSAL_BITS_PER_BYTE		(8)
+#define OSAL_BIT(nr)            (1UL << (nr))
+#define OSAL_BITS_PER_BYTE	(8)
 #define OSAL_BITS_PER_UL	(sizeof(unsigned long) * OSAL_BITS_PER_BYTE)
 #define OSAL_BITS_PER_UL_MASK		(OSAL_BITS_PER_UL - 1)
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index de0f49a..5c12c1e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -470,6 +470,17 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
+struct ecore_tunnel_info {
+	u8		tunn_clss_vxlan;
+	u8		tunn_clss_l2geneve;
+	u8		tunn_clss_ipgeneve;
+	u8		tunn_clss_l2gre;
+	u8		tunn_clss_ipgre;
+	unsigned long	tunn_mode;
+	u16		port_vxlan_udp_port;
+	u16		port_geneve_udp_port;
+};
+
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
@@ -724,8 +735,7 @@ struct ecore_dev {
 	/* SRIOV */
 	struct ecore_hw_sriov_info	*p_iov_info;
 #define IS_ECORE_SRIOV(p_dev)		(!!(p_dev)->p_iov_info)
-	unsigned long			tunn_mode;
-
+	struct ecore_tunnel_info	tunnel;
 	bool				b_is_vf;
 
 	u32				drv_type;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index b831970..f5860a0 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -111,8 +111,9 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long cached_tunn_mode = p_hwfn->p_dev->tunn_mode;
 	unsigned long update_mask = p_src->tunn_mode_update_mask;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	unsigned long cached_tunn_mode = p_tun->tunn_mode;
 	unsigned long tunn_mode = p_src->tunn_mode;
 	unsigned long new_tunn_mode = 0;
 
@@ -149,9 +150,10 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
 	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
@@ -178,33 +180,39 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode = p_src->tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
+	p_tun->tunn_mode = p_src->tunn_mode;
+
 	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
 	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -215,21 +223,24 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
@@ -269,33 +280,37 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 			       struct ecore_tunn_start_params *p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	if (!p_src)
 		return;
 
-	tunn_mode = p_src->tunn_mode;
+	p_tun->tunn_mode = p_src->tunn_mode;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -306,21 +321,24 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
@@ -420,9 +438,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
 	if (p_tunn) {
+		if (p_tunn->update_vxlan_udp_port)
+			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						  p_tunn->vxlan_udp_port);
+
+		if (p_tunn->update_geneve_udp_port)
+			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						   p_tunn->geneve_udp_port);
+
 		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
 				       p_tunn->tunn_mode);
-		p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 	}
 
 	return rc;
@@ -529,12 +554,12 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (p_tunn->update_vxlan_udp_port)
 		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					  p_tunn->vxlan_udp_port);
+
 	if (p_tunn->update_geneve_udp_port)
 		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					   p_tunn->geneve_udp_port);
 
 	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
-	p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 
 	return rc;
 }
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index bfd96d6..baa8476 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -43,6 +43,11 @@ struct qed_dev_info {
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
+
+	/* Out param for qede */
+	bool vxlan_enable;
+	bool gre_enable;
+	bool geneve_enable;
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a932c5f..e7195b4 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -325,8 +325,26 @@ static int
 qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 {
 	struct ecore_ptt *ptt = NULL;
+	struct ecore_tunnel_info *tun = &edev->tunnel;
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
+
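+	/* Report a tunnel type as enabled only when its mode bit is set and
+	 * its class is the default MAC/VLAN classification.
+	 */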
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
+	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->vxlan_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
+	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->gre_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
+	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->geneve_enable = true;
+
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
 	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-- 
1.7.10.3


* [PATCH v2 31/61] net/qede/base: revise tunnel APIs/structs
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (30 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 30/61] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 32/61] net/qede/base: add tunnelling support for VFs Rasesh Mody
                     ` (30 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Revise tunnel APIs/structs.
 - Unite the tunnel start and update params in a single struct,
   "ecore_tunnel_info".
 - Remove A0 chip tunnelling support.
 - Add per-tunnel info and remove the bitmasks.
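
A minimal caller sketch, assuming the structs and the
ecore_sp_pf_update_tunn_cfg() prototype introduced below; p_hwfn is the
usual HW-function pointer, and the port value and completion mode here
are illustrative only:

	struct ecore_tunnel_info tunn;

	OSAL_MEMSET(&tunn, 0, sizeof(tunn));
	tunn.vxlan.b_update_mode = true;
	tunn.vxlan.b_mode_enabled = true;
	tunn.vxlan.tun_cls = ECORE_TUNN_CLSS_MAC_VLAN;
	tunn.vxlan_port.b_update_port = true;
	tunn.vxlan_port.port = 4789;	/* IANA-assigned VXLAN port */

	(void)ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
					  ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);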

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h             |   57 ++---
 drivers/net/qede/base/ecore_dev.c         |    2 +-
 drivers/net/qede/base/ecore_dev_api.h     |    2 +-
 drivers/net/qede/base/ecore_sp_api.h      |   19 ++
 drivers/net/qede/base/ecore_sp_commands.c |  384 +++++++++++++----------------
 drivers/net/qede/base/ecore_sp_commands.h |   23 +-
 drivers/net/qede/qede_ethdev.c            |   20 +-
 drivers/net/qede/qede_if.h                |   16 ++
 drivers/net/qede/qede_main.c              |   18 +-
 9 files changed, 248 insertions(+), 293 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 5c12c1e..f86f7ca 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -204,33 +204,29 @@ enum ecore_tunn_clss {
 	MAX_ECORE_TUNN_CLSS,
 };
 
-struct ecore_tunn_start_params {
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_type {
+	bool b_update_mode;
+	bool b_mode_enabled;
+	enum ecore_tunn_clss tun_cls;
 };
 
-struct ecore_tunn_update_params {
-	unsigned long tunn_mode_update_mask;
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_rx_pf_clss;
-	u8	update_tx_pf_clss;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_udp_port {
+	bool b_update_port;
+	u16 port;
+};
+
+struct ecore_tunnel_info {
+	struct ecore_tunn_update_type vxlan;
+	struct ecore_tunn_update_type l2_geneve;
+	struct ecore_tunn_update_type ip_geneve;
+	struct ecore_tunn_update_type l2_gre;
+	struct ecore_tunn_update_type ip_gre;
+
+	struct ecore_tunn_update_udp_port vxlan_port;
+	struct ecore_tunn_update_udp_port geneve_port;
+
+	bool b_update_rx_cls;
+	bool b_update_tx_cls;
 };
 
 /* The PCI personality is not quite synonymous to protocol ID:
@@ -470,17 +466,6 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
-struct ecore_tunnel_info {
-	u8		tunn_clss_vxlan;
-	u8		tunn_clss_l2geneve;
-	u8		tunn_clss_ipgeneve;
-	u8		tunn_clss_l2gre;
-	u8		tunn_clss_ipgre;
-	unsigned long	tunn_mode;
-	u16		port_vxlan_udp_port;
-	u16		port_geneve_udp_port;
-};
-
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1c08d4a..0d3971c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1696,7 +1696,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
-		 struct ecore_tunn_start_params *p_tunn,
+		 struct ecore_tunnel_info *p_tunn,
 		 int hw_mode,
 		 bool b_hw_start,
 		 enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 74a15ef..356c5e4 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -59,7 +59,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
 	/* tunnelling parameters */
-	struct ecore_tunn_start_params *p_tunn;
+	struct ecore_tunnel_info *p_tunn;
 	bool b_hw_start;
 	/* interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index a4cb507..c8e564f 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -41,5 +41,24 @@ struct ecore_spq_comp_cb {
  */
 enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
 					      struct eth_slow_path_rx_cqe *cqe);
+/**
+ * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
+ *					update  Ramrod
+ *
+ * This ramrod is sent to update a tunneling configuration
+ * for a physical function (PF).
+ *
+ * @param p_hwfn
+ * @param p_tunn - pf update tunneling parameters
+ * @param comp_mode - completion mode
+ * @param p_comp_data - callback function
+ *
+ * @return enum _ecore_status_t
+ */
 
+enum _ecore_status_t
+ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_tunnel_info *p_tunn,
+			    enum spq_mode comp_mode,
+			    struct ecore_spq_comp_cb *p_comp_data);
 #endif
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index f5860a0..4cacce8 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -88,7 +88,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
+static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type)
 {
 	switch (type) {
 	case ECORE_TUNN_CLSS_MAC_VLAN:
@@ -107,242 +107,207 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 }
 
 static void
-ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun,
+			      struct ecore_tunnel_info *p_src,
+			      bool b_pf_start)
 {
-	unsigned long update_mask = p_src->tunn_mode_update_mask;
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	unsigned long cached_tunn_mode = p_tun->tunn_mode;
-	unsigned long tunn_mode = p_src->tunn_mode;
-	unsigned long new_tunn_mode = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	}
-
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		p_src->tunn_mode = new_tunn_mode;
-		return;
-	}
+	if (p_src->vxlan.b_update_mode || b_pf_start)
+		p_tun->vxlan.b_mode_enabled = p_src->vxlan.b_mode_enabled;
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
+	if (p_src->l2_gre.b_update_mode || b_pf_start)
+		p_tun->l2_gre.b_mode_enabled = p_src->l2_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->ip_gre.b_update_mode || b_pf_start)
+		p_tun->ip_gre.b_mode_enabled = p_src->ip_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->l2_geneve.b_update_mode || b_pf_start)
+		p_tun->l2_geneve.b_mode_enabled =
+				p_src->l2_geneve.b_mode_enabled;
 
-	p_src->tunn_mode = new_tunn_mode;
+	if (p_src->ip_geneve.b_update_mode || b_pf_start)
+		p_tun->ip_geneve.b_mode_enabled =
+				p_src->ip_geneve.b_mode_enabled;
 }
 
-static void
-ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
+				    struct ecore_tunnel_info *p_src)
 {
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
-	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
-	p_tun->tunn_mode = p_src->tunn_mode;
-
-	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
-	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
-
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
+	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+
+	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
+	p_tun->vxlan.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
+	p_tun->l2_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
+	p_tun->ip_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
+	p_tun->l2_geneve.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
+	p_tun->ip_geneve.tun_cls = type;
+}
+
+static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
+				 struct ecore_tunnel_info *p_src)
+{
+	p_tun->geneve_port.b_update_port = p_src->geneve_port.b_update_port;
+	p_tun->vxlan_port.b_update_port = p_src->vxlan_port.b_update_port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
+	if (p_src->geneve_port.b_update_port)
+		p_tun->geneve_port.port = p_src->geneve_port.port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
+	if (p_src->vxlan_port.b_update_port)
+		p_tun->vxlan_port.port = p_src->vxlan_port.port;
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
+static void
+__ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+				struct ecore_tunn_update_type *tun_type)
+{
+	*p_tunn_cls = tun_type->tun_cls;
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		return;
-	}
+	if (tun_type->b_mode_enabled)
+		*p_enable_tx_clas = 1;
+}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
+static void
+ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+			      struct ecore_tunn_update_type *tun_type,
+			      u8 *p_update_port, __le16 *p_port,
+			      struct ecore_tunn_update_udp_port *p_udp_port)
+{
+	__ecore_set_ramrod_tunnel_param(p_tunn_cls, p_enable_tx_clas,
+					tun_type);
+	if (p_udp_port->b_update_port) {
+		*p_update_port = 1;
+		*p_port = OSAL_CPU_TO_LE16(p_udp_port->port);
 	}
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+static void
+ecore_tunn_set_pf_update_params(struct ecore_hwfn		*p_hwfn,
+				struct ecore_tunnel_info *p_src,
+				struct pf_update_tunnel_config	*p_tunn_cfg)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, false);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
+
+	p_tunn_cfg->update_rx_pf_clss = p_tun->b_update_rx_cls;
+	p_tunn_cfg->update_tx_pf_clss = p_tun->b_update_tx_cls;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   unsigned long tunn_mode)
+				   struct ecore_tunnel_info *p_tun)
 {
-	u8 l2gre_enable = 0, ipgre_enable = 0, vxlan_enable = 0;
-	u8 l2geneve_enable = 0, ipgeneve_enable = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-		l2gre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-		ipgre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-		vxlan_enable = 1;
+	ecore_set_gre_enable(p_hwfn, p_ptt, p_tun->l2_gre.b_mode_enabled,
+			     p_tun->ip_gre.b_mode_enabled);
+	ecore_set_vxlan_enable(p_hwfn, p_ptt, p_tun->vxlan.b_mode_enabled);
 
-	ecore_set_gre_enable(p_hwfn, p_ptt, l2gre_enable, ipgre_enable);
-	ecore_set_vxlan_enable(p_hwfn, p_ptt, vxlan_enable);
+	ecore_set_geneve_enable(p_hwfn, p_ptt, p_tun->l2_geneve.b_mode_enabled,
+				p_tun->ip_geneve.b_mode_enabled);
+}
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev))
+static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
+					struct ecore_tunnel_info *p_tunn)
+{
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel hw config is not supported\n");
 		return;
+	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-		l2geneve_enable = 1;
+	if (p_tunn->vxlan_port.b_update_port)
+		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					  p_tunn->vxlan_port.port);
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-		ipgeneve_enable = 1;
+	if (p_tunn->geneve_port.b_update_port)
+		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					   p_tunn->geneve_port.port);
 
-	ecore_set_geneve_enable(p_hwfn, p_ptt, l2geneve_enable,
-				ipgeneve_enable);
+	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn);
 }
 
 static void
 ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
-			       struct ecore_tunn_start_params *p_src,
+			       struct ecore_tunnel_info		*p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
 	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	enum tunnel_clss type;
-
-	if (!p_src)
-		return;
-
-	p_tun->tunn_mode = p_src->tunn_mode;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf start config is not supported\n");
 		return;
 	}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+	if (!p_src)
+		return;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, true);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch)
 {
@@ -437,18 +402,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
-	if (p_tunn) {
-		if (p_tunn->update_vxlan_udp_port)
-			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						  p_tunn->vxlan_udp_port);
-
-		if (p_tunn->update_geneve_udp_port)
-			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						   p_tunn->geneve_udp_port);
-
-		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
-				       p_tunn->tunn_mode);
-	}
+	if (p_tunn)
+		ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
@@ -523,7 +478,7 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
+			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data)
 {
@@ -531,6 +486,15 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf update config is not supported\n");
+		return rc;
+	}
+
+	if (!p_tunn)
+		return ECORE_INVAL;
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = ecore_spq_get_cid(p_hwfn);
@@ -551,15 +515,7 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_tunn->update_vxlan_udp_port)
-		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					  p_tunn->vxlan_udp_port);
-
-	if (p_tunn->update_geneve_udp_port)
-		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					   p_tunn->geneve_udp_port);
-
-	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+	ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 66c9a69..33e31e4 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -68,32 +68,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  */
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch);
 
 /**
- * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
- *					update  Ramrod
- *
- * This ramrod is sent to update a tunneling configuration
- * for a physical function (PF).
- *
- * @param p_hwfn
- * @param p_tunn - pf update tunneling parameters
- * @param comp_mode - completion mode
- * @param p_comp_data - callback function
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t
-ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
-			    enum spq_mode comp_mode,
-			    struct ecore_spq_comp_cb *p_comp_data);
-
-/**
  * @brief ecore_sp_pf_update - PF Function Update Ramrod
  *
  * This ramrod updates function-related parameters. Every parameter can be
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index d52e1be..4ef93d4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,10 +335,10 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct ecore_tunn_update_params *params,
-				     uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
+				    uint8_t clss, uint64_t mode, uint64_t mask)
 {
-	memset(params, 0, sizeof(struct ecore_tunn_update_params));
+	memset(params, 0, sizeof(struct qed_tunn_update_params));
 	params->tunn_mode = mode;
 	params->tunn_mode_update_mask = mask;
 	params->update_tx_pf_clss = 1;
@@ -1707,7 +1707,8 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
@@ -1720,7 +1721,7 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 					QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &params,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
@@ -1817,7 +1818,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1872,7 +1874,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				&params, ECORE_SPQ_MODE_CB, NULL);
+				p_tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
 					params.tunn_clss_vxlan);
@@ -1906,8 +1908,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 						(1 << ECORE_MODE_VXLAN_TUNN));
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-					&params, ECORE_SPQ_MODE_CB, NULL);
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index baa8476..09b6912 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -121,6 +121,22 @@ struct qed_eth_tlvs {
 	u8 num_rxqs_full;
 };
 
+struct qed_tunn_update_params {
+	unsigned long   tunn_mode_update_mask;
+	unsigned long   tunn_mode;
+	u16             vxlan_udp_port;
+	u16             geneve_udp_port;
+	u8              update_rx_pf_clss;
+	u8              update_tx_pf_clss;
+	u8              update_vxlan_udp_port;
+	u8              update_geneve_udp_port;
+	u8              tunn_clss_vxlan;
+	u8              tunn_clss_l2geneve;
+	u8              tunn_clss_ipgeneve;
+	u8              tunn_clss_l2gre;
+	u8              tunn_clss_ipgre;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
 	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e7195b4..5c79055 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -329,20 +329,18 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
-	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->vxlan.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->vxlan.b_mode_enabled)
 		dev_info->vxlan_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
-	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_gre.b_mode_enabled && tun->ip_gre.b_mode_enabled &&
+	    tun->l2_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->gre_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
-	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_geneve.b_mode_enabled && tun->ip_geneve.b_mode_enabled &&
+	    tun->l2_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->geneve_enable = true;
 
 	dev_info->num_hwfns = edev->num_hwfns;
-- 
1.7.10.3


* [PATCH v2 32/61] net/qede/base: add tunnelling support for VFs
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (31 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 31/61] net/qede/base: revise tunnel APIs/structs Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 33/61] net/qede/base: formatting changes Rasesh Mody
                     ` (29 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new tunnelling support for VFs. Tunnel configuration requests from a
VF are now forwarded to the PF over a new CHANNEL_TLV_UPDATE_TUNN_PARAM
mailbox message, and the PF returns the resulting tunnel state in its
response.
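
For illustration only (not part of the patch), a VF client could request
VXLAN offload through the new channel roughly as below; the helper name
example_vf_enable_vxlan() and the port value 4789 are assumptions made
for this sketch:

  /* Sketch: on a VF, ecore_sp_pf_update_tunn_cfg() now forwards the
   * request to the PF as a CHANNEL_TLV_UPDATE_TUNN_PARAM message.
   */
  static enum _ecore_status_t
  example_vf_enable_vxlan(struct ecore_hwfn *p_hwfn)
  {
          struct ecore_tunnel_info tunn;

          OSAL_MEM_ZERO(&tunn, sizeof(tunn));
          tunn.vxlan.b_update_mode = true;  /* bit in tun_mode_update_mask */
          tunn.vxlan.b_mode_enabled = true; /* bit in tunn_mode */
          tunn.vxlan.tun_cls = ECORE_TUNN_CLSS_MAC_VLAN;
          tunn.vxlan_port.b_update_port = true;
          tunn.vxlan_port.port = 4789;      /* example value only */
          tunn.b_update_rx_cls = true;
          tunn.b_update_tx_cls = true;

          return ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
                                             ECORE_SPQ_MODE_EBLOCK,
                                             OSAL_NULL);
  }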

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore_dev.c         |   15 ++-
 drivers/net/qede/base/ecore_sp_commands.c |   15 ++-
 drivers/net/qede/base/ecore_sriov.c       |  144 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c          |  154 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.h          |    5 +
 drivers/net/qede/base/ecore_vfpf_if.h     |   40 ++++++++
 drivers/net/qede/qede_ethdev.c            |   49 +++++----
 8 files changed, 390 insertions(+), 35 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 513d542..4c91dc0 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -422,6 +422,5 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
 #define OSAL_MFW_TLV_REQ(p_hwfn) (0)
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
-
-
+#define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0d3971c..21fec58 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1876,6 +1876,19 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 		    p_hwfn->mcp_info->mfw_mb_length);
 }
 
+enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
+				    struct ecore_hw_init_params *p_params)
+{
+	if (p_params->p_tunn) {
+		ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn);
+		ecore_vf_pf_tunnel_param_update(p_hwfn, p_params->p_tunn);
+	}
+
+	p_hwfn->b_int_enabled = 1;
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
@@ -1908,7 +1921,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		}
 
 		if (IS_VF(p_dev)) {
-			p_hwfn->b_int_enabled = 1;
+			ecore_vf_start(p_hwfn, p_params);
 			continue;
 		}
 
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 4cacce8..8fd64d7 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -22,6 +22,7 @@
 #include "ecore_hw.h"
 #include "ecore_dcbx.h"
 #include "ecore_sriov.h"
+#include "ecore_vf.h"
 
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
@@ -137,16 +138,17 @@ static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
 
+	/* @DPDK - typecast tunnel class */
 	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
-	p_tun->vxlan.tun_cls = type;
+	p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
-	p_tun->l2_gre.tun_cls = type;
+	p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
-	p_tun->ip_gre.tun_cls = type;
+	p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
-	p_tun->l2_geneve.tun_cls = type;
+	p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
-	p_tun->ip_geneve.tun_cls = type;
+	p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type;
 }
 
 static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
@@ -486,6 +488,9 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_tunnel_param_update(p_hwfn, p_tunn);
+
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
 		DP_NOTICE(p_hwfn, true,
 			  "A0 chip: tunnel pf update config is not supported\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7378420..6cec7b2 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -51,6 +51,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_RSS",
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
+	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -2137,6 +2138,146 @@ out:
 					b_legacy_vf);
 }
 
+static void
+ecore_iov_pf_update_tun_response(struct pfvf_update_tunn_param_tlv *p_resp,
+				 struct ecore_tunnel_info *p_tun,
+				 u16 tunn_feature_mask)
+{
+	p_resp->tunn_feature_mask = tunn_feature_mask;
+	p_resp->vxlan_mode = p_tun->vxlan.b_mode_enabled;
+	p_resp->l2geneve_mode = p_tun->l2_geneve.b_mode_enabled;
+	p_resp->ipgeneve_mode = p_tun->ip_geneve.b_mode_enabled;
+	p_resp->l2gre_mode = p_tun->l2_gre.b_mode_enabled;
+	p_resp->ipgre_mode = p_tun->ip_gre.b_mode_enabled;
+	p_resp->vxlan_clss = p_tun->vxlan.tun_cls;
+	p_resp->l2gre_clss = p_tun->l2_gre.tun_cls;
+	p_resp->ipgre_clss = p_tun->ip_gre.tun_cls;
+	p_resp->l2geneve_clss = p_tun->l2_geneve.tun_cls;
+	p_resp->ipgeneve_clss = p_tun->ip_geneve.tun_cls;
+	p_resp->geneve_udp_port = p_tun->geneve_port.port;
+	p_resp->vxlan_udp_port = p_tun->vxlan_port.port;
+}
+
+static void
+__ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+				struct ecore_tunn_update_type *p_tun,
+				enum ecore_tunn_mode mask, u8 tun_cls)
+{
+	if (p_req->tun_mode_update_mask & (1 << mask)) {
+		p_tun->b_update_mode = true;
+
+		if (p_req->tunn_mode & (1 << mask))
+			p_tun->b_mode_enabled = true;
+	}
+
+	p_tun->tun_cls = tun_cls;
+}
+
+static void
+ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+			      struct ecore_tunn_update_type *p_tun,
+			      struct ecore_tunn_update_udp_port *p_port,
+			      enum ecore_tunn_mode mask,
+			      u8 tun_cls, u8 update_port, u16 port)
+{
+	if (update_port) {
+		p_port->b_update_port = true;
+		p_port->port = port;
+	}
+
+	__ecore_iov_pf_update_tun_param(p_req, p_tun, mask, tun_cls);
+}
+
+static bool
+ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req)
+{
+	bool b_update_requested = false;
+
+	if (p_req->tun_mode_update_mask || p_req->update_tun_cls ||
+	    p_req->update_geneve_port || p_req->update_vxlan_port)
+		b_update_requested = true;
+
+	return b_update_requested;
+}
+
+static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       struct ecore_vf_info *p_vf)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u8 status = PFVF_STATUS_SUCCESS;
+	bool b_update_required = false;
+	struct ecore_tunnel_info tunn;
+	u16 tunn_feature_mask = 0;
+
+	mbx->offset = (u8 *)mbx->reply_virt;
+
+	OSAL_MEM_ZERO(&tunn, sizeof(tunn));
+	p_req = &mbx->req_virt->tunn_param_update;
+
+	if (!ecore_iov_pf_validate_tunn_param(p_req)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No tunnel update requested by VF\n");
+		status = PFVF_STATUS_FAILURE;
+		goto send_resp;
+	}
+
+	tunn.b_update_rx_cls = p_req->update_tun_cls;
+	tunn.b_update_tx_cls = p_req->update_tun_cls;
+
+	ecore_iov_pf_update_tun_param(p_req, &tunn.vxlan, &tunn.vxlan_port,
+				      ECORE_MODE_VXLAN_TUNN, p_req->vxlan_clss,
+				      p_req->update_vxlan_port,
+				      p_req->vxlan_port);
+	ecore_iov_pf_update_tun_param(p_req, &tunn.l2_geneve, &tunn.geneve_port,
+				      ECORE_MODE_L2GENEVE_TUNN,
+				      p_req->l2geneve_clss,
+				      p_req->update_geneve_port,
+				      p_req->geneve_port);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_geneve,
+					ECORE_MODE_IPGENEVE_TUNN,
+					p_req->ipgeneve_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.l2_gre,
+					ECORE_MODE_L2GRE_TUNN,
+					p_req->l2gre_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_gre,
+					ECORE_MODE_IPGRE_TUNN,
+					p_req->ipgre_clss);
+
+	/* If the PF modifies the VF's request, it should still return
+	 * an error when the resulting configuration is partial or
+	 * differs from the requested one.
+	 */
+	rc = OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, &tunn_feature_mask,
+						 &b_update_required, &tunn);
+
+	if (rc != ECORE_SUCCESS)
+		status = PFVF_STATUS_FAILURE;
+
+	/* Does the ECORE client want to update anything? */
+	if (b_update_required) {
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+						 ECORE_SPQ_MODE_EBLOCK,
+						 OSAL_NULL);
+		if (rc != ECORE_SUCCESS)
+			status = PFVF_STATUS_FAILURE;
+	}
+
+send_resp:
+	p_resp = ecore_add_tlv(p_hwfn, &mbx->offset,
+			       CHANNEL_TLV_UPDATE_TUNN_PARAM, sizeof(*p_resp));
+
+	ecore_iov_pf_update_tun_response(p_resp, p_tun, tunn_feature_mask);
+	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
+}
+
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
@@ -3405,6 +3546,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_RELEASE:
 			ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
+			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 60ecd16..3182621 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,6 +451,160 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+__ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			     struct ecore_tunn_update_type *p_src,
+			     enum ecore_tunn_mode mask, u8 *p_cls)
+{
+	if (p_src->b_update_mode) {
+		p_req->tun_mode_update_mask |= (1 << mask);
+
+		if (p_src->b_mode_enabled)
+			p_req->tunn_mode |= (1 << mask);
+	}
+
+	*p_cls = p_src->tun_cls;
+}
+
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			   struct ecore_tunn_update_type *p_src,
+			   enum ecore_tunn_mode mask, u8 *p_cls,
+			   struct ecore_tunn_update_udp_port *p_port,
+			   u8 *p_update_port, u16 *p_udp_port)
+{
+	if (p_port->b_update_port) {
+		*p_update_port = 1;
+		*p_udp_port = p_port->port;
+	}
+
+	__ecore_vf_prep_tunn_req_tlv(p_req, p_src, mask, p_cls);
+}
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun)
+{
+	if (p_tun->vxlan.b_mode_enabled)
+		p_tun->vxlan.b_update_mode = true;
+	if (p_tun->l2_geneve.b_mode_enabled)
+		p_tun->l2_geneve.b_update_mode = true;
+	if (p_tun->ip_geneve.b_mode_enabled)
+		p_tun->ip_geneve.b_update_mode = true;
+	if (p_tun->l2_gre.b_mode_enabled)
+		p_tun->l2_gre.b_update_mode = true;
+	if (p_tun->ip_gre.b_mode_enabled)
+		p_tun->ip_gre.b_update_mode = true;
+
+	p_tun->b_update_rx_cls = true;
+	p_tun->b_update_tx_cls = true;
+}
+
+static void
+__ecore_vf_update_tunn_param(struct ecore_tunn_update_type *p_tun,
+			     u16 feature_mask, u8 tunn_mode, u8 tunn_cls,
+			     enum ecore_tunn_mode val)
+{
+	if (feature_mask & (1 << val)) {
+		p_tun->b_mode_enabled = tunn_mode;
+		p_tun->tun_cls = tunn_cls;
+	} else {
+		p_tun->b_mode_enabled = false;
+	}
+}
+
+static void
+ecore_vf_update_tunn_param(struct ecore_hwfn *p_hwfn,
+			   struct ecore_tunnel_info *p_tun,
+			   struct pfvf_update_tunn_param_tlv *p_resp)
+{
+	/* Update mode and classes provided by PF */
+	u16 feat_mask = p_resp->tunn_feature_mask;
+
+	__ecore_vf_update_tunn_param(&p_tun->vxlan, feat_mask,
+				     p_resp->vxlan_mode, p_resp->vxlan_clss,
+				     ECORE_MODE_VXLAN_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_geneve, feat_mask,
+				     p_resp->l2geneve_mode,
+				     p_resp->l2geneve_clss,
+				     ECORE_MODE_L2GENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_geneve, feat_mask,
+				     p_resp->ipgeneve_mode,
+				     p_resp->ipgeneve_clss,
+				     ECORE_MODE_IPGENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_gre, feat_mask,
+				     p_resp->l2gre_mode, p_resp->l2gre_clss,
+				     ECORE_MODE_L2GRE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_gre, feat_mask,
+				     p_resp->ipgre_mode, p_resp->ipgre_clss,
+				     ECORE_MODE_IPGRE_TUNN);
+	p_tun->geneve_port.port = p_resp->geneve_udp_port;
+	p_tun->vxlan_port.port = p_resp->vxlan_udp_port;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "tunn mode: vxlan=0x%x, l2geneve=0x%x, ipgeneve=0x%x, l2gre=0x%x, ipgre=0x%x\n",
+		   p_tun->vxlan.b_mode_enabled, p_tun->l2_geneve.b_mode_enabled,
+		   p_tun->ip_geneve.b_mode_enabled,
+		   p_tun->l2_gre.b_mode_enabled,
+		   p_tun->ip_gre.b_mode_enabled);
+}
+
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_src)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc;
+
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_TUNN_PARAM,
+				 sizeof(*p_req));
+
+	if (p_src->b_update_rx_cls && p_src->b_update_tx_cls)
+		p_req->update_tun_cls = 1;
+
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->vxlan, ECORE_MODE_VXLAN_TUNN,
+				   &p_req->vxlan_clss, &p_src->vxlan_port,
+				   &p_req->update_vxlan_port,
+				   &p_req->vxlan_port);
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve,
+				   ECORE_MODE_L2GENEVE_TUNN,
+				   &p_req->l2geneve_clss, &p_src->geneve_port,
+				   &p_req->update_geneve_port,
+				   &p_req->geneve_port);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_geneve,
+				     ECORE_MODE_IPGENEVE_TUNN,
+				     &p_req->ipgeneve_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_gre,
+				     ECORE_MODE_L2GRE_TUNN, &p_req->l2gre_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_gre,
+				     ECORE_MODE_IPGRE_TUNN, &p_req->ipgre_clss);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->tunn_param_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+
+	if (rc)
+		goto exit;
+
+	if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to update tunnel parameters\n");
+		rc = ECORE_INVAL;
+	}
+
+	ecore_vf_update_tunn_param(p_hwfn, p_tun, p_resp);
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 enum _ecore_status_t
 ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		      struct ecore_queue_cid *p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 1afd667..0d67054 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -258,5 +258,10 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			      struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
 
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_tunn);
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 149d092..82ed4f5 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -416,6 +416,43 @@ struct vfpf_ucast_filter_tlv {
 	u16			padding[3];
 };
 
+/* tunnel update param tlv */
+struct vfpf_update_tunn_param_tlv {
+	struct vfpf_first_tlv   first_tlv;
+
+	u8			tun_mode_update_mask;
+	u8			tunn_mode;
+	u8			update_tun_cls;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u8			update_geneve_port;
+	u8			update_vxlan_port;
+	u16			geneve_port;
+	u16			vxlan_port;
+	u8			padding[2];
+};
+
+struct pfvf_update_tunn_param_tlv {
+	struct pfvf_tlv hdr;
+
+	u16			tunn_feature_mask;
+	u8			vxlan_mode;
+	u8			l2geneve_mode;
+	u8			ipgeneve_mode;
+	u8			l2gre_mode;
+	u8			ipgre_mode;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u16			vxlan_udp_port;
+	u16			geneve_udp_port;
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
@@ -431,6 +468,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_start_tlv		start_vport;
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
+	struct vfpf_update_tunn_param_tlv	tunn_param_update;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -439,6 +477,7 @@ union pfvf_tlvs {
 	struct pfvf_acquire_resp_tlv		acquire_resp;
 	struct tlv_buffer_size			tlv_buf_size;
 	struct pfvf_start_queue_resp_tlv	queue_start;
+	struct pfvf_update_tunn_param_tlv	tunn_param_resp;
 };
 
 /* This is a structure which is allocated in the VF, which the PF may update
@@ -552,6 +591,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_RSS,
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+	CHANNEL_TLV_UPDATE_TUNN_PARAM,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4ef93d4..257e5b2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,15 +335,15 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
-				    uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct ecore_tunnel_info *p_tunn,
+				    uint8_t clss, bool mode, bool mask)
 {
-	memset(params, 0, sizeof(struct qed_tunn_update_params));
-	params->tunn_mode = mode;
-	params->tunn_mode_update_mask = mask;
-	params->update_tx_pf_clss = 1;
-	params->update_rx_pf_clss = 1;
-	params->tunn_clss_vxlan = clss;
+	memset(p_tunn, 0, sizeof(struct ecore_tunnel_info));
+	p_tunn->vxlan.b_update_mode = mode;
+	p_tunn->vxlan.b_mode_enabled = mask;
+	p_tunn->b_update_rx_cls = true;
+	p_tunn->b_update_tx_cls = true;
+	p_tunn->vxlan.tun_cls = clss;
 }
 
 static int
@@ -1707,25 +1707,24 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn; /* @DPDK */
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	memset(&params, 0, sizeof(params));
+	memset(&tunn, 0, sizeof(tunn));
 	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
-		params.update_vxlan_udp_port = 1;
-		params.vxlan_udp_port = (add) ? tunnel_udp->udp_port :
-					QEDE_VXLAN_DEF_PORT;
+		tunn.vxlan_port.b_update_port = true;
+		tunn.vxlan_port.port = (add) ? tunnel_udp->udp_port :
+						  QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
-					params.vxlan_udp_port);
+				       tunn.vxlan_port.port);
 				return rc;
 			}
 		}
@@ -1818,8 +1817,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1868,16 +1866,14 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		qdev->vxlan_filter_type = filter_type;
 
 		DP_INFO(edev, "Enabling VXLAN tunneling\n");
-		qede_set_cmn_tunn_param(&params, clss,
-					(1 << ECORE_MODE_VXLAN_TUNN),
-					(1 << ECORE_MODE_VXLAN_TUNN));
+		qede_set_cmn_tunn_param(&tunn, clss, true, true);
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				p_tunn, ECORE_SPQ_MODE_CB, NULL);
+				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
-					params.tunn_clss_vxlan);
+				       tunn.vxlan.tun_cls);
 			}
 		}
 		qdev->num_tunn_filters++; /* Filter added successfully */
@@ -1904,16 +1900,15 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 			DP_INFO(edev, "Disabling VXLAN tunneling\n");
 
 			/* Use 0 as tunnel mode */
-			qede_set_cmn_tunn_param(&params, clss, 0,
-						(1 << ECORE_MODE_VXLAN_TUNN));
+			qede_set_cmn_tunn_param(&tunn, clss, false, true);
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
-						params.tunn_clss_vxlan);
+						tunn.vxlan.tun_cls);
 					break;
 				}
 			}
-- 
1.7.10.3


* [PATCH v2 33/61] net/qede/base: formatting changes
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (32 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 32/61] net/qede/base: add tunnelling support for VFs Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:05   ` [PATCH v2 34/61] net/qede/base: prevent transmitter stuck condition Rasesh Mody
                     ` (28 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |   14 +--
 drivers/net/qede/base/mcp_public.h |  176 ++++++++++++++++++------------------
 2 files changed, 96 insertions(+), 94 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index f86f7ca..479a991 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -157,8 +157,8 @@ enum DP_MODULE {
 	ECORE_MSG_CXT		= 0x800000,
 	ECORE_MSG_LL2		= 0x1000000,
 	ECORE_MSG_ILT		= 0x2000000,
-	ECORE_MSG_RDMA          = 0x4000000,
-	ECORE_MSG_DEBUG         = 0x8000000,
+	ECORE_MSG_RDMA		= 0x4000000,
+	ECORE_MSG_DEBUG		= 0x8000000,
 	/* to be added...up to 0x8000000 */
 };
 #endif
@@ -480,7 +480,7 @@ struct ecore_hwfn {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	bool				first_on_engine;
 	bool				hw_init_done;
@@ -535,8 +535,8 @@ struct ecore_hwfn {
 	u32				rdma_prs_search_reg;
 
 	/* Array of sb_info of all status blocks */
-	struct ecore_sb_info            *sbs_info[MAX_SB_PER_PF_MIMD];
-	u16                             num_sbs;
+	struct ecore_sb_info		*sbs_info[MAX_SB_PER_PF_MIMD];
+	u16				num_sbs;
 
 	struct ecore_cxt_mngr		*p_cxt_mngr;
 
@@ -608,7 +608,7 @@ struct ecore_dev {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	u8				type;
 #define ECORE_DEV_TYPE_BB	(0 << 0)
@@ -816,7 +816,7 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 #define PQ_FLAGS_MCOS	(1 << 1)
 #define PQ_FLAGS_LB	(1 << 2)
 #define PQ_FLAGS_OOO	(1 << 3)
-#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_ACK	(1 << 4)
 #define PQ_FLAGS_OFLD	(1 << 5)
 #define PQ_FLAGS_VFS	(1 << 6)
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 969dd5a..28909fb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -586,14 +586,14 @@ struct public_port {
 	u32 link_status;
 #define LINK_STATUS_LINK_UP				0x00000001
 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001e
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD			(1 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD			(2 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_10G			(3 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_20G			(4 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_40G			(5 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_50G			(6 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_100G			(7 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_25G			(8 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(1 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(2 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_10G		(3 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_20G		(4 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_40G		(5 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_50G		(6 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_100G		(7 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_25G		(8 << 1)
 #define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
 #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
 #define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
@@ -607,10 +607,10 @@ struct public_port {
 #define LINK_STATUS_LINK_PARTNER_100G_CAPABLE		0x00008000
 #define LINK_STATUS_LINK_PARTNER_25G_CAPABLE		0x00010000
 #define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
-#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE		(0 << 18)
-#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE		(1 << 18)
-#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE		(2 << 18)
-#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE			(3 << 18)
+#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0 << 18)
+#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1 << 18)
+#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2 << 18)
+#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3 << 18)
 #define LINK_STATUS_SFP_TX_FAULT			0x00100000
 #define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00200000
 #define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00400000
@@ -619,9 +619,9 @@ struct public_port {
 #define LINK_STATUS_MAC_REMOTE_FAULT			0x02000000
 #define LINK_STATUS_UNSUPPORTED_SPD_REQ			0x04000000
 #define LINK_STATUS_FEC_MODE_MASK			0x38000000
-#define LINK_STATUS_FEC_MODE_NONE				(0 << 27)
-#define LINK_STATUS_FEC_MODE_FIRECODE_CL74			(1 << 27)
-#define LINK_STATUS_FEC_MODE_RS_CL91				(2 << 27)
+#define LINK_STATUS_FEC_MODE_NONE			(0 << 27)
+#define LINK_STATUS_FEC_MODE_FIRECODE_CL74		(1 << 27)
+#define LINK_STATUS_FEC_MODE_RS_CL91			(2 << 27)
 #define LINK_STATUS_EXT_PHY_LINK_UP			0x40000000
 
 	u32 link_status1;
@@ -762,23 +762,23 @@ struct public_port {
 	 *          When 1'b1 those bits contains a value times 16 microseconds.
 	 */
 	u32 eee_status;
-	#define EEE_TIMER_MASK		0x000fffff
-	#define EEE_ADV_STATUS_MASK	0x00f00000
-		#define EEE_1G_ADV	(1 << 1)
-		#define EEE_10G_ADV	(1 << 2)
-	#define EEE_ADV_STATUS_SHIFT	20
-	#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
-	#define EEE_LP_ADV_STATUS_SHIFT	24
-	#define EEE_REQUESTED_BIT	0x10000000
-	#define EEE_LPI_REQUESTED_BIT	0x20000000
-	#define EEE_ACTIVE_BIT		0x40000000
-	#define EEE_TIME_OUTPUT_BIT	0x80000000
+#define EEE_TIMER_MASK		0x000fffff
+#define EEE_ADV_STATUS_MASK	0x00f00000
+#define EEE_1G_ADV	(1 << 1)
+#define EEE_10G_ADV	(1 << 2)
+#define EEE_ADV_STATUS_SHIFT	20
+#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
+#define EEE_LP_ADV_STATUS_SHIFT	24
+#define EEE_REQUESTED_BIT	0x10000000
+#define EEE_LPI_REQUESTED_BIT	0x20000000
+#define EEE_ACTIVE_BIT		0x40000000
+#define EEE_TIME_OUTPUT_BIT	0x80000000
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
-	#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
-	#define EEE_REMOTE_TW_TX_SHIFT	0
-	#define EEE_REMOTE_TW_RX_MASK	0xffff0000
-	#define EEE_REMOTE_TW_RX_SHIFT	16
+#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
+#define EEE_REMOTE_TW_TX_SHIFT	0
+#define EEE_REMOTE_TW_RX_MASK	0xffff0000
+#define EEE_REMOTE_TW_RX_SHIFT	16
 };
 
 /**************************************/
@@ -1157,15 +1157,17 @@ struct public_drv_mb {
  * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
  */
 #define DRV_MSG_CODE_GET_VMAC                   0x00120000
-	#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
-	#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
-	#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
+#define DRV_MSG_CODE_VMAC_TYPE_SHIFT            4
+#define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
+#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
+#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
+#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
 /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
 #define DRV_MSG_CODE_GET_STATS                  0x00130000
-	#define DRV_MSG_CODE_STATS_TYPE_LAN             1
-	#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
-	#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
-	#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
+#define DRV_MSG_CODE_STATS_TYPE_LAN             1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
 /* Host shall provide buffer and size for MFW  */
 #define DRV_MSG_CODE_PMD_DIAG_DUMP		0x00140000
 /* Host shall provide buffer and size for MFW  */
@@ -1193,8 +1195,8 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_MASK_PARITIES		0x001a0000
 /* param[0] - Simulate fan failure,  param[1] - simulate over temp. */
 #define DRV_MSG_CODE_INDUCE_FAILURE		0x001b0000
-	#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
-	#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
+#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
+#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_READ			0x001c0000
 /* Param: [0:15] - gpio number, [16:31] - gpio value */
@@ -1215,50 +1217,50 @@ struct public_drv_mb {
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
-	/* request resource ownership with default aging */
-	#define RESOURCE_OPCODE_REQ			1
-	/* request resource ownership without aging */
-	#define RESOURCE_OPCODE_REQ_WO_AGING		2
-	/* request resource ownership with specific aging timer (in seconds) */
-	#define RESOURCE_OPCODE_REQ_W_AGING		3
-	#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
-	/* force resource release */
-	#define RESOURCE_OPCODE_FORCE_RELEASE		5
-	/* resource is free and granted to requester */
-	#define RESOURCE_OPCODE_GNT			1
-	/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
-	 * 16 = MFW, 17 = diag over serial
-	 */
-	#define RESOURCE_OPCODE_BUSY			2
-	/* indicate release request was acknowledged */
-	#define RESOURCE_OPCODE_RELEASED		3
-	/* indicate release request was previously received by other owner */
-	#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
-	/* indicate wrong owner during release */
-	#define RESOURCE_OPCODE_WRONG_OWNER		5
-	#define RESOURCE_OPCODE_UNKNOWN_CMD		255
-	/* dedicate resource 0 for dump */
-	#define RESOURCE_DUMP				0
+/* request resource ownership with default aging */
+#define RESOURCE_OPCODE_REQ			1
+/* request resource ownership without aging */
+#define RESOURCE_OPCODE_REQ_WO_AGING		2
+/* request resource ownership with specific aging timer (in seconds) */
+#define RESOURCE_OPCODE_REQ_W_AGING		3
+#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
+/* force resource release */
+#define RESOURCE_OPCODE_FORCE_RELEASE		5
+/* resource is free and granted to requester */
+#define RESOURCE_OPCODE_GNT			1
+/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
+ * 16 = MFW, 17 = diag over serial
+ */
+#define RESOURCE_OPCODE_BUSY			2
+/* indicate release request was acknowledged */
+#define RESOURCE_OPCODE_RELEASED		3
+/* indicate release request was previously received by other owner */
+#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
+/* indicate wrong owner during release */
+#define RESOURCE_OPCODE_WRONG_OWNER		5
+#define RESOURCE_OPCODE_UNKNOWN_CMD		255
+/* dedicate resource 0 for dump */
+#define RESOURCE_DUMP				0
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-	#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
-	/* acknowledge reception of error indication */
-	#define DRV_MSG_CODE_MDUMP_ACK			0x01
-	/* set epoc and personality as follow: drv_data[3:0] - epoch,
-	 * drv_data[7:4] - personality
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
-	/* trigger crash dump procedure */
-	#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
-	/* Request valid logs and config words */
-	#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
-	/* Set triggers mask. drv_mb_param should indicate (bitwise) which
-	 * trigger enabled
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
-	/* Clear all logs */
-	#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
+/* acknowledge reception of error indication */
+#define DRV_MSG_CODE_MDUMP_ACK			0x01
+/* set epoc and personality as follow: drv_data[3:0] - epoch,
+ * drv_data[7:4] - personality
+ */
+#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
+/* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
+/* Request valid logs and config words */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
+/* Set triggers mask. drv_mb_param should indicate (bitwise) which
+ * trigger enabled
+ */
+#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
+/* Clear all logs */
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
 #define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_INFO			0x00270000
@@ -1266,12 +1268,12 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
 /* Value should be placed in union */
 #define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-	#define DRV_MB_PARAM_ADDR_SHIFT			0
-	#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
-	#define DRV_MB_PARAM_DEVAD_SHIFT		16
-	#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-	#define DRV_MB_PARAM_PORT_SHIFT			21
-	#define DRV_MB_PARAM_PORT_MASK			0x00600000
+#define DRV_MB_PARAM_ADDR_SHIFT			0
+#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
+#define DRV_MB_PARAM_DEVAD_SHIFT		16
+#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
+#define DRV_MB_PARAM_PORT_SHIFT			21
+#define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
@@ -1510,7 +1512,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
 
-/* mdump related response codes */
+	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
 #define FW_MSG_CODE_MDUMP_ALLOC_FAILED		0x00020000
 #define FW_MSG_CODE_MDUMP_INVALID_CMD		0x00030000
-- 
1.7.10.3


* [PATCH v2 34/61] net/qede/base: prevent transmitter stuck condition
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (33 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 33/61] net/qede/base: formatting changes Rasesh Mody
@ 2017-03-18  7:05   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 35/61] net/qede/base: add mask/shift defines for resource command Rasesh Mody
                     ` (27 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:05 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Configure the OOO (TCP out-of-order) TC properly to prevent a transmitter
stuck condition due to credit underruns: prefer the TC advertised by the
MFW via DCBX, and otherwise fall back to a chip-dependent default.
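
In code, that rule amounts to the sketch below (a restatement of the
patch, not additional logic):

  /* In ecore_dcbx_process_mib_info():
   *   qm_info->ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
   *   e.g. flags = 0x00000340 -> (0x340 & 0x00000f00) >> 8 = TC 3
   * In ecore_init_qm_params(), when the MFW left it at zero:
   */
  bool four_port = p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2;

  if (!qm_info->ooo_tc)
          qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC /* 3 */
                                      : DCBX_TCP_OOO_TC;         /* 4 */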

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    4 +---
 drivers/net/qede/base/ecore_dcbx.c |    6 ++----
 drivers/net/qede/base/ecore_dev.c  |   19 ++++++++++++++-----
 drivers/net/qede/base/mcp_public.h |   12 ++++++++----
 4 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 479a991..c9b1b5a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -358,9 +358,6 @@ struct ecore_hw_info {
 
 	u8 num_active_tc;
 
-	/* Traffic class used for tcp out of order traffic */
-	u8 ooo_tc;
-
 	/* The traffic class used by PF for it's offloaded protocol */
 	u8 offload_tc;
 
@@ -441,6 +438,7 @@ struct ecore_qm_info {
 	u16			num_vf_pqs;
 	u8			num_vports;
 	u8			max_phys_tcs_per_port;
+	u8			ooo_tc;
 	bool			pf_rl_en;
 	bool			pf_wfq_en;
 	bool			vport_rl_en;
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 102774d..0e11927 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -129,11 +129,8 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality) {
+	if (p_hwfn->hw_info.personality == personality)
 		p_hwfn->hw_info.offload_tc = tc;
-		if (personality == ECORE_PCI_ISCSI)
-			p_hwfn->hw_info.ooo_tc = DCBX_ISCSI_OOO_TC;
-	}
 }
 
 /* Update app protocol data and hw_info fields with the TLV info */
@@ -317,6 +314,7 @@ ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
 
 	p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
 						    DCBX_ETS_MAX_TCS);
+	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
 	data.pf_id = p_hwfn->rel_pf_id;
 	data.dcbx_enabled = !!dcbx_version;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 21fec58..0840d49 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -291,6 +291,7 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
 static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	bool four_port;
 
 	/* pq and vport bases for this PF */
 	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
@@ -300,10 +301,19 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 	qm_info->vport_rl_en = 1;
 	qm_info->vport_wfq_en = 1;
 
+	/* TC config is different for AH 4 port */
+	four_port = p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2;
+
 	/* in AH 4 port we have fewer TCs per port */
-	qm_info->max_phys_tcs_per_port =
-		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
-			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 :
+						     NUM_OF_PHYS_TCS;
+
+	/* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and
+	 * 4 otherwise
+	 */
+	if (!qm_info->ooo_tc)
+		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
+					      DCBX_TCP_OOO_TC;
 }
 
 /* initialize qm vport params */
@@ -532,8 +542,7 @@ static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
-			 PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_pq(p_hwfn, qm_info, qm_info->ooo_tc, PQ_INIT_SHARE_VPORT);
 }
 
 static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 28909fb..bd34557 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -294,16 +294,20 @@ struct dcbx_ets_feature {
 #define DCBX_ETS_CBS_SHIFT                      3
 #define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
 #define DCBX_ETS_MAX_TCS_SHIFT                  4
-#define DCBX_ISCSI_OOO_TC_MASK			0x00000f00
-#define DCBX_ISCSI_OOO_TC_SHIFT                 8
+#define DCBX_OOO_TC_MASK                        0x00000f00
+#define DCBX_OOO_TC_SHIFT                       8
 /* Entries in tc table are orginized that the left most is pri 0, right most is
  * prio 7
  */
 
 	u32  pri_tc_tbl[1];
-#define DCBX_ISCSI_OOO_TC			(4)
+/* Fixed TCP OOO TC usage is deprecated and used only for driver backward
+ * compatibility
+ */
+#define DCBX_TCP_OOO_TC				(4)
+#define DCBX_TCP_OOO_K2_4PORT_TC		(3)
 
-#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_ISCSI_OOO_TC + 1)
+#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_TCP_OOO_TC + 1)
 #define DCBX_CEE_STRICT_PRIORITY		0xf
 /* Entries in tc table are orginized that the left most is pri 0, right most is
  * prio 7
-- 
1.7.10.3


* [PATCH v2 35/61] net/qede/base: add mask/shift defines for resource command
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (34 preceding siblings ...)
  2017-03-18  7:05   ` [PATCH v2 34/61] net/qede/base: prevent transmitter stuck condition Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 36/61] net/qede/base: add API for using MFW resource lock Rasesh Mody
                     ` (26 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several mask/shift defines for the resource command
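
For example, a request for resource 0 (RESOURCE_DUMP) using the aging
opcode with a 10-second timer would be packed as follows (a sketch; the
10-second value is arbitrary):

  u32 param = 0;

  param |= (RESOURCE_DUMP << RESOURCE_CMD_REQ_RESC_SHIFT) &
           RESOURCE_CMD_REQ_RESC_MASK;                   /* resc 0,   [4:0]  */
  param |= (RESOURCE_OPCODE_REQ_W_AGING << RESOURCE_CMD_REQ_OPCODE_SHIFT) &
           RESOURCE_CMD_REQ_OPCODE_MASK;                 /* opcode 3, [7:5]  */
  param |= (10 << RESOURCE_CMD_REQ_AGE_SHIFT) &
           RESOURCE_CMD_REQ_AGE_MASK;                    /* age 10,   [15:8] */

  /* param == 0x00000a60, sent with DRV_MSG_CODE_RESOURCE_CMD; the MFW
   * returns its verdict in the RESOURCE_CMD_RSP_OPCODE/OWNER fields.
   */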

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index bd34557..1b1ecd2 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1217,10 +1217,16 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_TIMESTAMP                  0x00210000
 /* This is an empty mailbox just return OK*/
 #define DRV_MSG_CODE_EMPTY_MB			0x00220000
+
 /* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
+
+#define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
+#define RESOURCE_CMD_REQ_RESC_SHIFT		0
+#define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
+#define RESOURCE_CMD_REQ_OPCODE_SHIFT		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
 /* request resource ownership without aging */
@@ -1230,6 +1236,13 @@ struct public_drv_mb {
 #define RESOURCE_OPCODE_RELEASE			4 /* release resource */
 /* force resource release */
 #define RESOURCE_OPCODE_FORCE_RELEASE		5
+#define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
+#define RESOURCE_CMD_REQ_AGE_SHIFT		8
+
+#define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
+#define RESOURCE_CMD_RSP_OWNER_SHIFT		0
+#define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
+#define RESOURCE_CMD_RSP_OPCODE_SHIFT		8
 /* resource is free and granted to requester */
 #define RESOURCE_OPCODE_GNT			1
 /* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
@@ -1243,8 +1256,10 @@ struct public_drv_mb {
 /* indicate wrong owner during release */
 #define RESOURCE_OPCODE_WRONG_OWNER		5
 #define RESOURCE_OPCODE_UNKNOWN_CMD		255
+
 /* dedicate resource 0 for dump */
 #define RESOURCE_DUMP				0
+
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 36/61] net/qede/base: add API for using MFW resource lock
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (35 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 35/61] net/qede/base: add mask/shift defines for resource command Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 37/61] net/qede/base: remove clock slowdown option Rasesh Mody
                     ` (25 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add base driver API for using the Management FW resource lock
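
A minimal usage sketch (illustrative only - surrounding locals and error
handling are trimmed; RESOURCE_DUMP is the dedicated dump resource from
mcp_public.h):

	bool granted = false, released = false;
	u8 owner;

	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, RESOURCE_DUMP,
				 ECORE_MCP_RESC_LOCK_TO_DEFAULT,
				 &granted, &owner);
	if (rc == ECORE_SUCCESS && granted) {
		/* ... access the MFW-arbitrated resource ... */
		rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, RESOURCE_DUMP,
					   false /* force */, &released);
	}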

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    9 +++
 drivers/net/qede/base/ecore_dcbx.h |    3 -
 drivers/net/qede/base/ecore_mcp.c  |  143 ++++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_mcp.h  |   41 +++++++++++
 4 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index c9b1b5a..acf2244 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -86,6 +86,15 @@ do {									\
 	(((value) >> (name##_SHIFT)) & name##_MASK)
 #endif
 
+#define ECORE_MFW_GET_FIELD(name, field)				\
+	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
+#define ECORE_MFW_SET_FIELD(name, field, value)				\
+do {									\
+	(name) &= ~(field ## _MASK);					\
+	(name) |= (((value) << (field ## _SHIFT)) & (field ## _MASK));	\
+} while (0)
+
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 {
 	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 2ce4465..0830014 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -17,9 +17,6 @@
 #include "ecore_hsi_common.h"
 #include "ecore_dcbx_api.h"
 
-#define ECORE_MFW_GET_FIELD(name, field) \
-	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
-
 struct ecore_dcbx_info {
 	struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..30cb76e 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,146 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+			   p_mcp_resp, p_mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* A zero response implies that the resource command is not supported */
+	if (!*p_mcp_resp)
+		return ECORE_NOTIMPL;
+
+	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
+		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
+
+		DP_NOTICE(p_hwfn, false,
+			  "The resource command is unknown to the MFW [param 0x%08x, opcode %d]\n",
+			  param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	switch (timeout) {
+	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
+		opcode = RESOURCE_OPCODE_REQ;
+		timeout = 0;
+		break;
+	case ECORE_MCP_RESC_LOCK_TO_NONE:
+		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
+		timeout = 0;
+		break;
+	default:
+		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		break;
+	}
+
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
+		   param, timeout, opcode, resource_num);
+
+	/* Attempt to acquire the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
+		   mcp_param, opcode, *p_owner);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_GNT:
+		*p_granted = true;
+		break;
+	case RESOURCE_OPCODE_BUSY:
+		*p_granted = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource lock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
+		       : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
+		   param, opcode, resource_num);
+
+	/* Attempt to release the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock response: mcp_param 0x%08x [opcode %d]\n",
+		   mcp_param, opcode);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
+		DP_INFO(p_hwfn,
+			"Resource unlock request for an already released resource [resc_num %d]\n",
+			resource_num);
+		/* Fallthrough */
+	case RESOURCE_OPCODE_RELEASED:
+		*p_released = true;
+		break;
+	case RESOURCE_OPCODE_WRONG_OWNER:
+		*p_released = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource unlock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 0708923..7a81516 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -361,4 +361,45 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
+#define ECORE_MCP_RESC_LOCK_TO_NONE	255
+
+/**
+ * @brief Acquires MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num - valid values are 0..31
+ *  @param timeout - lock timeout value in seconds
+ *                   (1..254, '0' - default value, '255' - no timeout).
+ *  @param p_granted - will be filled as true if the resource is free and
+ *                     granted, or false if it is busy.
+ *  @param p_owner - A pointer to a variable to be filled with the resource
+ *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner);
+
+/**
+ * @brief Releases MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num
+ *  @param force - allows releasing a resource even if it belongs to another PF
+ *  @param p_released - will be filled as true if the resource is released (or
+ *			has been already released), and false if the resource is
+ *			acquired by another PF and the `force' flag was not set.
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released);
+
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 37/61] net/qede/base: remove clock slowdown option
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (36 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 36/61] net/qede/base: add API for using MFW resource lock Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 38/61] net/qede/base: add new image types Rasesh Mody
                     ` (24 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove clock slowdown NVM config option as this is not supported
for current chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 4202337..4e58835 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -72,10 +72,12 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
 		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
 		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \
+								0x80000000
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \
+								0x0
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1
 	u32 engineering_change[3]; /* 0x4 */
 	u32 manufacturing_id; /* 0x10 */
 	u32 serial_number[4]; /* 0x14 */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 38/61] net/qede/base: add new image types
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (37 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 37/61] net/qede/base: remove clock slowdown option Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 39/61] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
                     ` (23 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new image types - RECOVERY and PK (Public Key) - in preparation for
the second phase of NVRAM security support.
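
Illustrative only - a driver could react to the new response codes when
parsing an NVM/load mailbox response (FW_MSG_CODE_MASK is assumed to be
the usual 0xffff0000 response mask from mcp_public.h):

	switch (mcp_resp & FW_MSG_CODE_MASK) {
	case FW_MSG_CODE_NVM_FAILED_CALC_HASH:
	case FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING:
	case FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY:
		/* NVM image failed the security checks */
		rc = ECORE_INVAL;
		break;
	case FW_MSG_CODE_RECOVERY_MODE:
		/* MFW is running in recovery mode */
		break;
	}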

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 1b1ecd2..d3cbc96 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1502,6 +1502,10 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK	0x00400000
 /* MFW reject "mcp reset" command if one of the drivers is up */
 #define FW_MSG_CODE_MCP_RESET_REJECT		0x00600000
+#define FW_MSG_CODE_NVM_FAILED_CALC_HASH	0x00310000
+#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING	0x00320000
+#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY	0x00330000
+
 #define FW_MSG_CODE_PHY_OK			0x00110000
 #define FW_MSG_CODE_PHY_ERROR			0x00120000
 #define FW_MSG_CODE_SET_SECURE_MODE_ERROR	0x00130000
@@ -1530,6 +1534,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE	0x00710000
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
+#define FW_MSG_CODE_RECOVERY_MODE		0x00740000
 
 	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 39/61] net/qede/base: use L2-handles for RSS configuration
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (38 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 38/61] net/qede/base: add new image types Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 40/61] net/qede/base: change valloc to vzalloc Rasesh Mody
                     ` (22 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Switch the RSS configuration to use L2 queue handles instead of queue IDs.
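
Conceptually, callers now fill the indirection table with the opaque Rx
queue handles obtained at Rx-queue start time instead of raw queue IDs.
A sketch (rxq_handles[] and num_rxqs are made-up names for
caller-maintained state):

	struct ecore_rss_params params;
	int i;

	OSAL_MEMSET(&params, 0, sizeof(params));
	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
		params.rss_ind_table[i] = rxq_handles[i % num_rxqs];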

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c     |   48 ++++++++++++++++++-------
 drivers/net/qede/base/ecore_l2.h     |    2 ++
 drivers/net/qede/base/ecore_l2_api.h |    4 ++-
 drivers/net/qede/base/ecore_sriov.c  |   66 +++++++++++++++++++++-------------
 drivers/net/qede/base/ecore_vf.c     |   13 +++++--
 drivers/net/qede/qede_ethdev.c       |    4 +--
 6 files changed, 95 insertions(+), 42 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 352620a..2635213 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -59,6 +59,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->cid = cid;
 	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
+	p_cid->p_owner = p_hwfn;
 
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
@@ -267,10 +268,9 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 			  struct vport_update_ramrod_data *p_ramrod,
 			  struct ecore_rss_params *p_rss)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct eth_vport_rss_config *p_config;
-	u16 abs_l2_queue = 0;
-	int i;
+	int i, table_size;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (!p_rss) {
 		p_ramrod->common.update_rss_flg = 0;
@@ -324,16 +324,40 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 		   p_config->capabilities,
 		   p_config->update_rss_ind_table, p_config->update_rss_key);
 
-	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		rc = ecore_fw_l2_queue(p_hwfn,
-				       p_rss->rss_ind_table[i],
-				       &abs_l2_queue);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+	table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE,
+				1 << p_config->tbl_size);
+	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_queue = p_rss->rss_ind_table[i];
 
-		p_config->indirection_table[i] = OSAL_CPU_TO_LE16(abs_l2_queue);
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "i= %d, queue = %d\n",
-			   i, p_config->indirection_table[i]);
+		if (!p_queue)
+			return ECORE_INVAL;
+
+		p_config->indirection_table[i] =
+				OSAL_CPU_TO_LE16(p_queue->abs.queue_id);
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "Configured RSS indirection table [%d entries]:\n",
+		   table_size);
+	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i += 0x10) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "%04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x\n",
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 1]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 2]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 3]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 4]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 5]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 6]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
+			 OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
 	}
 
 	for (i = 0; i < 10; i++)
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index c136389..4b0ccb4 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -36,6 +36,8 @@ struct ecore_queue_cid {
 
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
+
+	struct ecore_hwfn *p_owner;
 };
 
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index af316d3..5a7db76 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -59,7 +59,9 @@ struct ecore_rss_params {
 	u8 update_rss_key;
 	u8 rss_caps;
 	u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
-	u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
+
+	/* Indirection table consists of Rx queue handles */
+	void *rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	u32 rss_key[ECORE_RSS_KEY_SIZE];
 };
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6cec7b2..280c992 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2704,12 +2704,14 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			      struct ecore_vf_info *vf,
 			      struct ecore_sp_vport_update_params *p_data,
 			      struct ecore_rss_params *p_rss,
-			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+			      struct ecore_iov_vf_mbx *p_mbx,
+			      u16 *tlvs_mask, u16 *tlvs_accepted)
 {
 	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
-	u16 i, q_idx, max_q_idx;
+	bool b_reject = false;
 	u16 table_size;
+	u16 i, q_idx;
 
 	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
 	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
@@ -2737,36 +2739,38 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 	p_rss->rss_eng_id = vf->relative_vf_id + 1;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
-	OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
-		    sizeof(p_rss->rss_ind_table));
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
 		    sizeof(p_rss->rss_key));
 
 	table_size = OSAL_MIN_T(u16, OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
 				(1 << p_rss_tlv->rss_table_size_log));
 
-	max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
-
 	for (i = 0; i < table_size; i++) {
-		u16 index = vf->vf_queues[0].fw_rx_qid;
+		q_idx = p_rss_tlv->rss_ind_table[i];
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
 
-		q_idx = p_rss->rss_ind_table[i];
-		if (q_idx >= max_q_idx)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d,"
-				  " rxq is out of range\n",
-				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].p_rx_cid)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d, rxq is not active\n",
-				  i, q_idx);
-		else
-			index = vf->vf_queues[q_idx].fw_rx_qid;
-		p_rss->rss_ind_table[i] = index;
+		if (!vf->vf_queues[q_idx].p_rx_cid) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
+
+		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
 	}
 
 	p_data->rss_params = p_rss;
+out:
 	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+	if (!b_reject)
+		*tlvs_accepted |= 1 << ECORE_IOV_VP_UPDATE_RSS;
 }
 
 static void
@@ -2822,11 +2826,11 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  struct ecore_vf_info *vf)
 {
+	struct ecore_rss_params *p_rss_params = OSAL_NULL;
 	struct ecore_sp_vport_update_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct ecore_sge_tpa_params sge_tpa_params;
 	u16 tlvs_mask = 0, tlvs_accepted = 0;
-	struct ecore_rss_params rss_params;
 	u8 status = PFVF_STATUS_SUCCESS;
 	u16 length;
 	enum _ecore_status_t rc;
@@ -2841,6 +2845,12 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
+	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	if (p_rss_params == OSAL_NULL) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.opaque_fid = vf->opaque_fid;
 	params.vport_id = vf->vport_id;
@@ -2854,19 +2864,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
-	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
-				      mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
 					  &sge_tpa_params, mbx, &tlvs_mask);
 
+	tlvs_accepted = tlvs_mask;
+
+	/* Some of the extended TLVs need to be validated first; In that case,
+	 * they can update the mask without updating the accepted [so that
+	 * PF could communicate to VF it has rejected request].
+	 */
+	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, p_rss_params,
+				      mbx, &tlvs_mask, &tlvs_accepted);
+
 	/* Just log a message if there is no single extended tlv in buffer.
 	 * When all features of vport update ramrod would be requested by VF
 	 * as extended TLVs in buffer then an error can be returned in response
 	 * if there is no extended TLV present in buffer.
 	 */
-	tlvs_accepted = tlvs_mask;
-
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2894,6 +2909,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_FAILURE;
 
 out:
+	OSAL_VFREE(p_hwfn->p_dev, p_rss_params);
 	length = ecore_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
 						    tlvs_mask, tlvs_accepted);
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 3182621..a072a81 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1132,6 +1132,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 	if (p_params->rss_params) {
 		struct ecore_rss_params *rss_params = p_params->rss_params;
 		struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+		int i, table_size;
 
 		size = sizeof(struct vfpf_vport_update_rss_tlv);
 		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -1153,8 +1154,16 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		p_rss_tlv->rss_enable = rss_params->rss_enable;
 		p_rss_tlv->rss_caps = rss_params->rss_caps;
 		p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
-		OSAL_MEMCPY(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
-			    sizeof(rss_params->rss_ind_table));
+
+		table_size = OSAL_MIN_T(int, T_ETH_INDIRECTION_TABLE_SIZE,
+					1 << p_rss_tlv->rss_table_size_log);
+		for (i = 0; i < table_size; i++) {
+			struct ecore_queue_cid *p_queue;
+
+			p_queue = rss_params->rss_ind_table[i];
+			p_rss_tlv->rss_ind_table[i] = p_queue->rel.queue_id;
+		}
+
 		OSAL_MEMCPY(p_rss_tlv->rss_key, rss_params->rss_key,
 			    sizeof(rss_params->rss_key));
 	}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 257e5b2..6fbd898 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1607,14 +1607,14 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = reta_conf[idx].reta[shift];
-			params.rss_ind_table[i] = entry;
+			params.rss_ind_table[i] = &entry;
 		}
 	}
 
 	/* Fix up RETA for CMT mode device */
 	if (edev->num_hwfns > 1)
 		qdev->rss_enable = qed_update_rss_parm_cmt(edev,
-					&params.rss_ind_table[0]);
+					params.rss_ind_table[0]);
 	params.update_rss_ind_table = 1;
 	params.rss_table_size_log = 7;
 	params.update_rss_config = 1;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 40/61] net/qede/base: change valloc to vzalloc
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (39 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 39/61] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
                     ` (21 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change OSAL_VALLOC() into OSAL_VZALLOC(), which also zeroes the allocated
memory.
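
The net effect on callers, sketched:

	/* before - explicit zeroing after the allocation */
	p = OSAL_VALLOC(p_dev, size);
	if (p)
		OSAL_MEM_ZERO(p, size);

	/* after - allocation and zeroing in one step */
	p = OSAL_VZALLOC(p_dev, size);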

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    2 +-
 drivers/net/qede/base/ecore_dev.c     |    3 +--
 drivers/net/qede/base/ecore_l2.c      |    3 +--
 drivers/net/qede/base/ecore_mng_tlv.c |    5 ++---
 drivers/net/qede/base/ecore_sriov.c   |    2 +-
 5 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 4c91dc0..052a0cf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -89,7 +89,7 @@ typedef int bool;
 #define OSAL_ALLOC(dev, GFP, size) rte_malloc("qede", size, 0)
 #define OSAL_ZALLOC(dev, GFP, size) rte_zmalloc("qede", size, 0)
 #define OSAL_CALLOC(dev, GFP, num, size) rte_calloc("qede", num, size, 0)
-#define OSAL_VALLOC(dev, size) rte_malloc("qede", size, 0)
+#define OSAL_VZALLOC(dev, size) rte_zmalloc("qede", size, 0)
 #define OSAL_FREE(dev, memory)		  \
 	do {				  \
 		rte_free((void *)memory); \
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0840d49..6d75e60 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3717,13 +3717,12 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	u32 page_cnt = p_chain->page_cnt, size, i;
 
 	size = page_cnt * sizeof(*pp_virt_addr_tbl);
-	pp_virt_addr_tbl = (void **)OSAL_VALLOC(p_dev, size);
+	pp_virt_addr_tbl = (void **)OSAL_VZALLOC(p_dev, size);
 	if (!pp_virt_addr_tbl) {
 		DP_NOTICE(p_dev, true,
 			  "Failed to allocate memory for the chain virtual addresses table\n");
 		return ECORE_NOMEM;
 	}
-	OSAL_MEM_ZERO(pp_virt_addr_tbl, size);
 
 	/* The allocation of the PBL table is done with its full size, since it
 	 * is expected to be successive.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 2635213..4d26e19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -50,10 +50,9 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
 	if (p_cid == OSAL_NULL)
 		return OSAL_NULL;
-	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index 0065d12..0bf1be8 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1413,11 +1413,10 @@ ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
 	u32 offset;
 	int len;
 
-	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
 	if (!p_tlv_data)
 		return ECORE_NOMEM;
 
-	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
 	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
 		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
 		return ECORE_INVAL;
@@ -1487,7 +1486,7 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		goto drv_done;
 	}
 
-	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size);
 	if (!p_mfw_buf) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed allocate memory for p_mfw_buf\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 280c992..aab9925 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2845,7 +2845,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	p_rss_params = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
 	if (p_rss_params == OSAL_NULL) {
 		status = PFVF_STATUS_FAILURE;
 		goto out;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 41/61] net/qede/base: add support for previous driver unload
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (40 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 40/61] net/qede/base: change valloc to vzalloc Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 42/61] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
                     ` (20 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a new driver/management FW load request sequence that handles the
unloading of a previous driver.
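
A minimal sketch of how a caller drives the new sequence (illustrative
only; it mirrors what ecore_hw_init() does in this patch):

	struct ecore_load_req_params load_req_params;
	u32 load_code;

	OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
	load_req_params.drv_role = ECORE_DRV_ROLE_OS;
	load_req_params.timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
	load_req_params.avoid_eng_reset = false;
	rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_req_params);
	if (rc == ECORE_SUCCESS)
		/* One of FW_MSG_CODE_DRV_LOAD_{ENGINE,PORT,FUNCTION} */
		load_code = load_req_params.load_code;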

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 ++
 drivers/net/qede/base/ecore_dev.c     |   43 ++--
 drivers/net/qede/base/ecore_dev_api.h |   30 ++-
 drivers/net/qede/base/ecore_mcp.c     |  369 ++++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.h     |   40 ++--
 drivers/net/qede/base/mcp_public.h    |   56 ++++-
 drivers/net/qede/qede_main.c          |    2 +
 7 files changed, 482 insertions(+), 71 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index acf2244..60a8a6b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,6 +28,19 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
+#define ECORE_MAJOR_VERSION		8
+#define ECORE_MINOR_VERSION		18
+#define ECORE_REVISION_VERSION		7
+#define ECORE_ENGINEERING_VERSION	0
+
+#define ECORE_VERSION							\
+	((ECORE_MAJOR_VERSION << 24) | (ECORE_MINOR_VERSION << 16) |	\
+	 (ECORE_REVISION_VERSION << 8) | ECORE_ENGINEERING_VERSION)
+
+#define STORM_FW_VERSION						\
+	((FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) |	\
+	 (FW_REVISION_VERSION << 8) | FW_ENGINEERING_VERSION)
+
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define ECORE_WFQ_UNIT	100
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 6d75e60..29dd292 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1901,10 +1901,11 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
+	bool b_default_mtu = true;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1943,17 +1944,25 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		/* @@@TBD need to add here:
-		 * Check for fan failure
-		 * Prev_unload
-		 */
-		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_code);
-		if (rc) {
+		OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
+		load_req_params.drv_role = p_params->is_crash_kernel ?
+					   ECORE_DRV_ROLE_KDUMP :
+					   ECORE_DRV_ROLE_OS;
+		load_req_params.timeout_val = p_params->mfw_timeout_val;
+		load_req_params.avoid_eng_reset = p_params->avoid_eng_reset;
+		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt,
+					&load_req_params);
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_REQ command\n");
+				  "Failed sending a LOAD_REQ command\n");
 			return rc;
 		}
 
+		load_code = load_req_params.load_code;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load request was sent. Load code: 0x%x\n",
+			   load_code);
+
 		/* CQ75580:
 		 * When coming back from hiberbate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
@@ -1966,10 +1975,6 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 */
 		ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "Load request was sent. Resp:0x%x, Load code: 0x%x\n",
-			   rc, load_code);
-
 		/* Only relevant for recovery:
 		 * Clear the indication after the LOAD_REQ command is responded
 		 * by the MFW.
@@ -1988,13 +1993,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		case FW_MSG_CODE_DRV_LOAD_ENGINE:
 			rc = ecore_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
 						  p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_PORT:
 			rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt,
 						p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_FUNCTION:
@@ -2006,6 +2011,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 					      p_params->allow_npar_tx_switch);
 			break;
 		default:
+			DP_NOTICE(p_hwfn, false,
+				  "Unexpected load code [0x%08x]", load_code);
 			rc = ECORE_NOTIMPL;
 			break;
 		}
@@ -2021,6 +2028,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				       0, &load_code, &param);
 		if (rc != ECORE_SUCCESS)
 			return rc;
+
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
 				  "Failed sending LOAD_DONE command\n");
@@ -2045,10 +2053,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 	if (IS_PF(p_dev)) {
 		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-		drv_mb_param = (FW_MAJOR_VERSION << 24) |
-			       (FW_MINOR_VERSION << 16) |
-			       (FW_REVISION_VERSION << 8) |
-			       (FW_ENGINEERING_VERSION);
+		drv_mb_param = STORM_FW_VERSION;
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 356c5e4..7e90778 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -58,16 +58,38 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev);
 void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
-	/* tunnelling parameters */
+	/* Tunnelling parameters */
 	struct ecore_tunnel_info *p_tunn;
+
 	bool b_hw_start;
-	/* interrupt mode [msix, inta, etc.] to use */
+
+	/* Interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
-/* npar tx switching to be used for vports configured for tx-switching */
 
+	/* NPAR tx switching to be used for vports configured for tx-switching
+	 */
 	bool allow_npar_tx_switch;
-	/* binary fw data pointer in binary fw file */
+
+	/* Binary fw data pointer in binary fw file */
 	const u8 *bin_fw_data;
+
+	/* Indicates whether the driver is running over a crash kernel.
+	 * As part of the load request, this will be used for providing the
+	 * driver role to the MFW.
+	 * In case of a crash kernel over PDA - this should be set to false.
+	 */
+	bool is_crash_kernel;
+
+	/* The timeout value that the MFW should use when locking the engine for
+	 * the driver load process.
+	 * A value of '0' means the default value, and '255' means no timeout.
+	 */
+	u8 mfw_timeout_val;
+#define ECORE_LOAD_REQ_LOCK_TO_DEFAULT	0
+#define ECORE_LOAD_REQ_LOCK_TO_NONE	255
+
+	/* Avoid engine reset when first PF loads on it */
+	bool avoid_eng_reset;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 30cb76e..6c5b5db 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -518,51 +518,368 @@ static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static bool ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role)
+{
+	return (drv_role == DRV_ROLE_OS &&
+		exist_drv_role == DRV_ROLE_PREBOOT) ||
+	       (drv_role == DRV_ROLE_KDUMP && exist_drv_role == DRV_ROLE_OS);
+}
+
+static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0,
+			   &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send cancel load request, rc = %d\n", rc);
+
+	return rc;
+}
+
+#define CONFIG_ECORE_L2_BITMAP_IDX	(0x1 << 0)
+#define CONFIG_ECORE_SRIOV_BITMAP_IDX	(0x1 << 1)
+#define CONFIG_ECORE_ROCE_BITMAP_IDX	(0x1 << 2)
+#define CONFIG_ECORE_IWARP_BITMAP_IDX	(0x1 << 3)
+#define CONFIG_ECORE_FCOE_BITMAP_IDX	(0x1 << 4)
+#define CONFIG_ECORE_ISCSI_BITMAP_IDX	(0x1 << 5)
+#define CONFIG_ECORE_LL2_BITMAP_IDX	(0x1 << 6)
+
+static u32 ecore_get_config_bitmap(void)
+{
+	u32 config_bitmap = 0x0;
+
+#ifdef CONFIG_ECORE_L2
+	config_bitmap |= CONFIG_ECORE_L2_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_SRIOV
+	config_bitmap |= CONFIG_ECORE_SRIOV_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ROCE
+	config_bitmap |= CONFIG_ECORE_ROCE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_IWARP
+	config_bitmap |= CONFIG_ECORE_IWARP_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_FCOE
+	config_bitmap |= CONFIG_ECORE_FCOE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ISCSI
+	config_bitmap |= CONFIG_ECORE_ISCSI_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_LL2
+	config_bitmap |= CONFIG_ECORE_LL2_BITMAP_IDX;
+#endif
+
+	return config_bitmap;
+}
+
+struct ecore_load_req_in_params {
+	u8 hsi_ver;
+#define ECORE_LOAD_REQ_HSI_VER_DEFAULT	0
+#define ECORE_LOAD_REQ_HSI_VER_1	1
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u8 drv_role;
+	u8 timeout_val;
+	u8 force_cmd;
+	bool avoid_eng_reset;
+};
+
+struct ecore_load_req_out_params {
+	u32 load_code;
+	u32 exist_drv_ver_0;
+	u32 exist_drv_ver_1;
+	u32 exist_fw_ver;
+	u8 exist_drv_role;
+	u8 mfw_hsi_ver;
+	bool drv_exists;
+};
+
+static enum _ecore_status_t
+__ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_load_req_in_params *p_in_params,
+		     struct ecore_load_req_out_params *p_out_params)
+{
+	union drv_union_data union_data_src, union_data_dst;
+	struct ecore_mcp_mb_params mb_params;
+	struct load_req_stc *p_load_req;
+	struct load_rsp_stc *p_load_rsp;
+	u32 hsi_ver;
+	enum _ecore_status_t rc;
+
+	p_load_req = &union_data_src.load_req;
+	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
+	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
+	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
+	p_load_req->fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+			    p_in_params->drv_role);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+			    p_in_params->timeout_val);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+			    p_in_params->force_cmd);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+			    p_in_params->avoid_eng_reset);
+
+	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
+		  DRV_ID_MCP_HSI_VER_CURRENT :
+		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_SHIFT);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
+	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
+	mb_params.p_data_src = &union_data_src;
+	mb_params.p_data_dst = &union_data_dst;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+		   mb_params.param,
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
+			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
+			   p_load_req->fw_ver, p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_LOCK_TO),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FLAGS0));
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send load request, rc = %d\n", rc);
+		return rc;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Response: resp 0x%08x\n", mb_params.mcp_resp);
+	p_out_params->load_code = mb_params.mcp_resp;
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		p_load_rsp = &union_data_dst.load_rsp;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
+			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
+			   p_load_rsp->fw_ver, p_load_rsp->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_FLAGS0));
+
+		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
+		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
+		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_role =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+		p_out_params->mfw_hsi_ver =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+		p_out_params->drv_exists =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					    LOAD_RSP_FLAGS0) &
+			LOAD_RSP_FLAGS0_DRV_EXISTS;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
+						   enum ecore_drv_role drv_role,
+						   u8 *p_mfw_drv_role)
+{
+	switch (drv_role) {
+	case ECORE_DRV_ROLE_OS:
+		*p_mfw_drv_role = DRV_ROLE_OS;
+		break;
+	case ECORE_DRV_ROLE_KDUMP:
+		*p_mfw_drv_role = DRV_ROLE_KDUMP;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected driver role %d\n", drv_role);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum ecore_load_req_force {
+	ECORE_LOAD_REQ_FORCE_NONE,
+	ECORE_LOAD_REQ_FORCE_PF,
+	ECORE_LOAD_REQ_FORCE_ALL,
+};
+
+static enum _ecore_status_t
+ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
+			enum ecore_load_req_force force_cmd,
+			u8 *p_mfw_force_cmd)
+{
+	switch (force_cmd) {
+	case ECORE_LOAD_REQ_FORCE_NONE:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_NONE;
+		break;
+	case ECORE_LOAD_REQ_FORCE_PF:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_PF;
+		break;
+	case ECORE_LOAD_REQ_FORCE_ALL:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_ALL;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected force value %d\n", force_cmd);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code)
+					struct ecore_load_req_params *p_params)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	struct ecore_mcp_mb_params mb_params;
+	struct ecore_load_req_out_params out_params;
+	struct ecore_load_req_in_params in_params;
+	u8 mfw_drv_role, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, p_load_code);
+		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
 		return ECORE_SUCCESS;
 	}
 #endif
 
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
-	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
-			  p_dev->drv_type;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
+	in_params.drv_ver_0 = ECORE_VERSION;
+	in_params.drv_ver_1 = ecore_get_config_bitmap();
+	in_params.fw_ver = STORM_FW_VERSION;
+	rc = ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	/* if mcp fails to respond we must abort */
-	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+	in_params.drv_role = mfw_drv_role;
+	in_params.timeout_val = p_params->timeout_val;
+	rc = ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
+				     &mfw_force_cmd);
+	if (rc != ECORE_SUCCESS)
 		return rc;
-	}
 
-	*p_load_code = mb_params.mcp_resp;
+	in_params.force_cmd = mfw_force_cmd;
+	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
-	/* If MFW refused (e.g. other port is in diagnostic mode) we
-	 * must abort. This can happen in the following cases:
-	 * - Other port is in diagnostic mode
-	 * - Previously loaded function on the engine is not compliant with
-	 *   the requester.
-	 * - MFW cannot cope with the requester's DRV_MFW_HSI_VERSION.
-	 *      -
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params, &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* First handle cases where another load request should/might be sent:
+	 * - MFW expects the old interface [HSI version = 1]
+	 * - MFW responds that a force load request is required
 	 */
-	if (!(*p_load_code) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_PDA) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG)) {
-		DP_ERR(p_hwfn, "MCP refused load request, aborting\n");
+	if (out_params.load_code == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		DP_INFO(p_hwfn,
+			"MFW refused a load request due to HSI > 1. Resending with HSI = 1.\n");
+
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_1;
+		OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+		rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+					  &out_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	} else if (out_params.load_code ==
+		   FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE) {
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		if (ecore_mcp_can_force_load(in_params.drv_role,
+					     out_params.exist_drv_role)) {
+			DP_INFO(p_hwfn,
+				"A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Sending a force load request.\n",
+				out_params.exist_drv_role,
+				out_params.exist_fw_ver,
+				out_params.exist_drv_ver_0,
+				out_params.exist_drv_ver_1);
+
+			rc = ecore_get_mfw_force_cmd(p_hwfn,
+						     ECORE_LOAD_REQ_FORCE_ALL,
+						     &mfw_force_cmd);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+
+			in_params.force_cmd = mfw_force_cmd;
+			OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+			rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+						  &out_params);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		} else {
+			DP_NOTICE(p_hwfn, false,
+				  "A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Avoiding it to prevent disruption of active PFs.\n",
+				  out_params.exist_drv_role,
+				  out_params.exist_fw_ver,
+				  out_params.exist_drv_ver_0,
+				  out_params.exist_drv_ver_1);
+
+			ecore_mcp_cancel_load_req(p_hwfn, p_ptt);
+			return ECORE_BUSY;
+		}
+	}
+
+	/* Now handle the other types of responses.
+	 * The "REFUSED_HSI_1" and "REFUSED_REQUIRES_FORCE" responses are not
+	 * expected here after the additional revised load requests were sent.
+	 */
+	switch (out_params.load_code) {
+	case FW_MSG_CODE_DRV_LOAD_ENGINE:
+	case FW_MSG_CODE_DRV_LOAD_PORT:
+	case FW_MSG_CODE_DRV_LOAD_FUNCTION:
+		if (out_params.mfw_hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+		    out_params.drv_exists) {
+			/* The role and fw/driver version match, but the PF is
+			 * already loaded and has not been unloaded gracefully.
+			 * This is unexpected since a quasi-FLR request was
+			 * previously sent as part of ecore_hw_prepare().
+			 */
+			DP_NOTICE(p_hwfn, false,
+				  "PF is already loaded - shouldn't have got here since a quasi-FLR request was previously sent!\n");
+			return ECORE_INVAL;
+		}
+		break;
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_PDA:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_HSI:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT:
+		DP_NOTICE(p_hwfn, false,
+			  "MFW refused a load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
 		return ECORE_BUSY;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected response to load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
+		break;
 	}
 
+	p_params->load_code = out_params.load_code;
+
 	return ECORE_SUCCESS;
 }
 
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 7a81516..4138a12 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -136,32 +136,36 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn - hw function
  * @param p_ptt - PTT required for register access
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation
- * was successul.
+ * was successful.
  */
 enum _ecore_status_t ecore_issue_pulse(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt);
 
+enum ecore_drv_role {
+	ECORE_DRV_ROLE_OS,
+	ECORE_DRV_ROLE_KDUMP,
+};
+
+struct ecore_load_req_params {
+	enum ecore_drv_role drv_role;
+	u8 timeout_val; /* 1..254, '0' - default value, '255' - no timeout */
+	bool avoid_eng_reset;
+	u32 load_code;
+};
+
 /**
- * @brief Sends a LOAD_REQ to the MFW, and in case operation
- *        succeed, returns whether this PF is the first on the
- *        chip/engine/port or function. This function should be
- *        called when driver is ready to accept MFW events after
- *        Storms initializations are done.
- *
- * @param p_hwfn       - hw function
- * @param p_ptt        - PTT required for register access
- * @param p_load_code  - The MCP response param containing one
- *      of the following:
- *      FW_MSG_CODE_DRV_LOAD_ENGINE
- *      FW_MSG_CODE_DRV_LOAD_PORT
- *      FW_MSG_CODE_DRV_LOAD_FUNCTION
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successul.
- *      ECORE_BUSY - Operation failed
+ * @brief Sends a LOAD_REQ to the MFW, and in case the operation succeeds,
+ *        returns whether this PF is the first on the engine/port or function.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_params
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
  */
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code);
+					struct ecore_load_req_params *p_params);
 
 /**
  * @brief Read the MFW mailbox into Current buffer.
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index d3cbc96..7f94ba1 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -878,9 +878,11 @@ struct public_func {
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
 #define DRV_ID_PDA_COMP_VER_SHIFT	0
 
+#define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
 #define DRV_ID_MCP_HSI_VER_SHIFT	16
-#define DRV_ID_MCP_HSI_VER_CURRENT	(1 << DRV_ID_MCP_HSI_VER_SHIFT)
+#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
+					 DRV_ID_MCP_HSI_VER_SHIFT)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
 #define DRV_ID_DRV_TYPE_SHIFT		24
@@ -1040,8 +1042,47 @@ struct resource_info {
 #define RESOURCE_ELEMENT_STRICT (1 << 0)
 };
 
+#define DRV_ROLE_NONE		0
+#define DRV_ROLE_PREBOOT	1
+#define DRV_ROLE_OS		2
+#define DRV_ROLE_KDUMP		3
+
+struct load_req_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_REQ_ROLE_MASK		0x000000FF
+#define LOAD_REQ_ROLE_SHIFT		0
+#define LOAD_REQ_LOCK_TO_MASK		0x0000FF00
+#define LOAD_REQ_LOCK_TO_SHIFT		8
+#define LOAD_REQ_LOCK_TO_DEFAULT	0
+#define LOAD_REQ_LOCK_TO_NONE		255
+#define LOAD_REQ_FORCE_MASK		0x000F0000
+#define LOAD_REQ_FORCE_SHIFT		16
+#define LOAD_REQ_FORCE_NONE		0
+#define LOAD_REQ_FORCE_PF		1
+#define LOAD_REQ_FORCE_ALL		2
+#define LOAD_REQ_FLAGS0_MASK		0x00F00000
+#define LOAD_REQ_FLAGS0_SHIFT		20
+#define LOAD_REQ_FLAGS0_AVOID_RESET	(0x1 << 0)
+};
+
+struct load_rsp_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_RSP_ROLE_MASK		0x000000FF
+#define LOAD_RSP_ROLE_SHIFT		0
+#define LOAD_RSP_HSI_MASK		0x0000FF00
+#define LOAD_RSP_HSI_SHIFT		8
+#define LOAD_RSP_FLAGS0_MASK		0x000F0000
+#define LOAD_RSP_FLAGS0_SHIFT		16
+#define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
+};
+
 union drv_union_data {
-	u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD];    /* LOAD_REQ */
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
 /* This configuration should be set by the driver for the LINK_SET command. */
@@ -1068,6 +1109,9 @@ union drv_union_data {
 	struct bist_nvm_image_att nvm_image_att;
 	struct mdump_config_stc mdump_config;
 	u32 dword;
+
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	/* ... */
 };
 
@@ -1077,6 +1121,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_LOAD_REQ                   0x10000000
 #define DRV_MSG_CODE_LOAD_DONE                  0x11000000
 #define DRV_MSG_CODE_INIT_HW                    0x12000000
+#define DRV_MSG_CODE_CANCEL_LOAD_REQ            0x13000000
 #define DRV_MSG_CODE_UNLOAD_REQ		        0x20000000
 #define DRV_MSG_CODE_UNLOAD_DONE                0x21000000
 #define DRV_MSG_CODE_INIT_PHY			0x22000000
@@ -1448,8 +1493,11 @@ struct public_drv_mb {
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA        0x10200000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10210000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1      0x10210000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG       0x10220000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10230000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT     0x10310000
 #define FW_MSG_CODE_DRV_LOAD_DONE               0x11100000
 #define FW_MSG_CODE_DRV_UNLOAD_ENGINE           0x20110000
 #define FW_MSG_CODE_DRV_UNLOAD_PORT             0x20120000
@@ -1547,7 +1595,7 @@ struct public_drv_mb {
 
 
 	u32 fw_mb_param;
-	/* Resource Allocation params - MFW  version support*/
+/* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 5c79055..326e56f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -276,6 +276,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
 	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
 	hw_init_params.bin_fw_data = data;
+	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
+	hw_init_params.avoid_eng_reset = false;
 	rc = ecore_hw_init(edev, &hw_init_params);
 	if (rc) {
 		DP_ERR(edev, "ecore_hw_init failed\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
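
The load_req_stc added in the hunk above carries its request attributes in a
single misc0 word. A minimal sketch of packing it, using only the masks
defined in that hunk; choosing DRV_ROLE_OS with the default lock timeout is
purely illustrative, not a choice taken from this series:

	/* Illustrative sketch: compose load_req_stc.misc0 from the role
	 * and lock-timeout fields defined above.
	 */
	u32 misc0 = 0;

	misc0 |= (DRV_ROLE_OS << LOAD_REQ_ROLE_SHIFT) & LOAD_REQ_ROLE_MASK;
	misc0 |= (LOAD_REQ_LOCK_TO_DEFAULT << LOAD_REQ_LOCK_TO_SHIFT) &
		 LOAD_REQ_LOCK_TO_MASK;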

* [PATCH v2 42/61] net/qede/base: add non-L2 dcbx tlv application support
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (41 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 43/61] net/qede/base: update bulletin board during VF init Rasesh Mody
                     ` (19 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add non-L2 DCBX TLV application support.
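
The new TLV matching keys on a TCP port that the protocol driver publishes
through the PF params; a minimal sketch of that plumbing (the port value
4444 is purely illustrative):

	/* Illustrative only: set the iWARP TCP port before DCBX info is
	 * allocated; ecore_dcbx_info_alloc() caches it for application
	 * TLV matching.
	 */
	p_hwfn->pf_params.rdma_pf_params.iwarp_port = 4444;

	rc = ecore_dcbx_info_alloc(p_hwfn);
	if (rc != ECORE_SUCCESS)
		return rc;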

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dcbx.c     |   30 ++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_dcbx.h     |    1 +
 drivers/net/qede/base/ecore_dcbx_api.h |    4 +++-
 drivers/net/qede/base/ecore_proto_if.h |    3 +++
 4 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 0e11927..5ecc6b0 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -72,6 +72,23 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
+static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
+				 u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (!p_hwfn->p_dcbx_info->iwarp_port)
+		return false;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_TCP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
+}
+
 static void
 ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
@@ -896,17 +913,18 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
 	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					  sizeof(*p_hwfn->p_dcbx_info));
 	if (!p_hwfn->p_dcbx_info) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_dcbx_info'");
-		rc = ECORE_NOMEM;
+		return ECORE_NOMEM;
 	}
 
-	return rc;
+	p_hwfn->p_dcbx_info->iwarp_port =
+		p_hwfn->pf_params.rdma_pf_params.iwarp_port;
+
+	return ECORE_SUCCESS;
 }
 
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn,
@@ -937,9 +955,13 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
 	p_dest->update_eth_dcb_data_mode = update_flag;
+	update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update;
+	p_dest->update_iwarp_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+	p_dcb_data = &p_dest->iwarp_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP);
 }
 
 enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 0830014..eba2d91 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -29,6 +29,7 @@ struct ecore_dcbx_info {
 	struct ecore_dcbx_set set;
 	struct ecore_dcbx_get get;
 	u8 dcbx_cap;
+	u16 iwarp_port;
 };
 
 struct ecore_dcbx_mib_meta_data {
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 3a1712f..2dc7679 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -37,6 +37,7 @@ enum dcbx_protocol_type {
 	DCBX_PROTOCOL_ROCE,
 	DCBX_PROTOCOL_ROCE_V2,
 	DCBX_PROTOCOL_ETH,
+	DCBX_PROTOCOL_IWARP,
 	DCBX_MAX_PROTOCOL_TYPE
 };
 
@@ -191,7 +192,8 @@ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
 	{DCBX_PROTOCOL_ROCE, "ROCE", ECORE_PCI_ETH_ROCE},
 	{DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", ECORE_PCI_ETH_ROCE},
-	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
+	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH},
+	{DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP}
 };
 
 #endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index e252d52..ed24019 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -76,6 +76,9 @@ struct ecore_rdma_pf_params {
 
 	/* Will allocate rate limiters to be used with QPs */
 	u8		enable_dcqcn;
+
+	/* TCP port number used for the iwarp traffic */
+	u16		iwarp_port;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 43/61] net/qede/base: update bulletin board during VF init
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (42 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 42/61] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 44/61] net/qede/base: add coalescing support for VFs Rasesh Mody
                     ` (18 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Update the bulletin board with the link state during VF initialization.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   88 ++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index aab9925..703c1e8 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -954,11 +954,51 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->req_autoneg = params->speed.autoneg;
+	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+	p_bulletin->req_forced_speed = params->speed.forced_speed;
+	p_bulletin->req_autoneg_pause = params->pause.autoneg;
+	p_bulletin->req_forced_rx = params->pause.forced_rx;
+	p_bulletin->req_forced_tx = params->pause.forced_tx;
+	p_bulletin->req_loopback = params->loopback_mode;
+
+	p_bulletin->link_up = link->link_up;
+	p_bulletin->speed = link->speed;
+	p_bulletin->full_duplex = link->full_duplex;
+	p_bulletin->autoneg = link->an;
+	p_bulletin->autoneg_complete = link->an_complete;
+	p_bulletin->parallel_detection = link->parallel_detection;
+	p_bulletin->pfc_enabled = link->pfc_enabled;
+	p_bulletin->partner_adv_speed = link->partner_adv_speed;
+	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+	p_bulletin->partner_adv_pause = link->partner_adv_pause;
+	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+	p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 struct ecore_iov_vf_init_params *p_params)
 {
+	struct ecore_mcp_link_capabilities link_caps;
+	struct ecore_mcp_link_params link_params;
+	struct ecore_mcp_link_state link_state;
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
 	u16 qid, num_irqs;
@@ -1045,6 +1085,17 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			   p_queue->fw_cid);
 	}
 
+	/* Update the link configuration in bulletin.
+	 */
+	OSAL_MEMCPY(&link_params, ecore_mcp_get_link_params(p_hwfn),
+		    sizeof(link_params));
+	OSAL_MEMCPY(&link_state, ecore_mcp_get_link_state(p_hwfn),
+		    sizeof(link_state));
+	OSAL_MEMCPY(&link_caps, ecore_mcp_get_link_capabilities(p_hwfn),
+		    sizeof(link_caps));
+	ecore_iov_set_link(p_hwfn, p_params->rel_vf_id,
+			   &link_params, &link_state, &link_caps);
+
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
 
 	if (rc == ECORE_SUCCESS) {
@@ -1059,43 +1110,6 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps)
-{
-	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
-	struct ecore_bulletin_content *p_bulletin;
-
-	if (!p_vf)
-		return;
-
-	p_bulletin = p_vf->bulletin.p_virt;
-	p_bulletin->req_autoneg = params->speed.autoneg;
-	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
-	p_bulletin->req_forced_speed = params->speed.forced_speed;
-	p_bulletin->req_autoneg_pause = params->pause.autoneg;
-	p_bulletin->req_forced_rx = params->pause.forced_rx;
-	p_bulletin->req_forced_tx = params->pause.forced_tx;
-	p_bulletin->req_loopback = params->loopback_mode;
-
-	p_bulletin->link_up = link->link_up;
-	p_bulletin->speed = link->speed;
-	p_bulletin->full_duplex = link->full_duplex;
-	p_bulletin->autoneg = link->an;
-	p_bulletin->autoneg_complete = link->an_complete;
-	p_bulletin->parallel_detection = link->parallel_detection;
-	p_bulletin->pfc_enabled = link->pfc_enabled;
-	p_bulletin->partner_adv_speed = link->partner_adv_speed;
-	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
-	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
-	p_bulletin->partner_adv_pause = link->partner_adv_pause;
-	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
-
-	p_bulletin->capability_speed = p_caps->speed_capabilities;
-}
-
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 rel_vf_id)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 44/61] net/qede/base: add coalescing support for VFs
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (43 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 43/61] net/qede/base: update bulletin board during VF init Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 45/61] net/qede/base: add macro for resource value message Rasesh Mody
                     ` (17 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add coalescing support for VFs.
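
The new single entry point replaces the PF-only per-queue helpers and hides
the PF/VF split from callers. A minimal usage sketch (p_queue_handle stands
for whatever queue-cid handle the caller obtained when starting the queue,
and the 24/48 usec values are purely illustrative):

	/* Illustrative only: 24 usec Rx / 48 usec Tx coalescing on one
	 * queue. A value of 0 leaves that direction unchanged; for a VF
	 * the request is forwarded to the PF over the mailbox internally.
	 */
	rc = ecore_set_queue_coalesce(p_hwfn, 24, 48, p_queue_handle);
	if (rc != ECORE_SUCCESS)
		DP_ERR(p_hwfn, "Failed to configure queue coalescing\n");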

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   83 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_dev_api.h |   43 ++++++-----------
 drivers/net/qede/base/ecore_sriov.c   |   66 +++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_vf.c      |   42 +++++++++++++++++
 drivers/net/qede/base/ecore_vf.h      |   24 ++++++++++
 drivers/net/qede/base/ecore_vfpf_if.h |   10 ++++
 6 files changed, 209 insertions(+), 59 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 29dd292..7a876bc 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -30,6 +30,7 @@
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
 #include "ecore_dcbx.h"
+#include "ecore_l2.h"
 
 /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
  * registers involved are not split and thus configuration is a race where
@@ -4198,11 +4199,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 {
 	struct coalescing_timeset *p_coal_timeset;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
-		return ECORE_INVAL;
-	}
-
 	if (p_hwfn->p_dev->int_coalescing_mode != ECORE_COAL_MODE_ENABLE) {
 		DP_NOTICE(p_hwfn, true,
 			  "Coalescing configuration not enabled\n");
@@ -4218,13 +4214,53 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_ptt *p_ptt;
+
+	/* TODO - Configuring a single queue's coalescing, but
+	 * claiming all queues abide by the same configuration
+	 * for both PF and VF.
+	 */
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
+						tx_coal, p_cid);
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	}
+
+	if (tx_coal) {
+		rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+	}
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4241,33 +4277,30 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, false);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, false);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_USDM_RAM + USTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_USDM_RAM +
+		  USTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct ustorm_eth_queue_zone), timeset);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4285,23 +4318,17 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, true);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, true);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_XSDM_RAM + XSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_XSDM_RAM +
+		  XSTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct xstorm_eth_queue_zone), timeset);
-	if (rc != ECORE_SUCCESS)
-		goto out;
-
-	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 7e90778..ce764d2 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -570,41 +570,24 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
 					 struct ecore_ptt	*p_ptt,
 					 u16			id,
 					 bool			is_vf);
-
-/**
- * @brief ecore_set_rxq_coalesce - Configure coalesce parameters for an Rx queue
- *    The fact that we can configure coalescing to up to 511, but on varying
- *    accuracy [the bigger the value the less accurate] up to a mistake of 3usec
- *    for the highest values.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
-
 /**
- * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
- *    While the API allows setting coalescing per-qid, all tx queues sharing a
- *    SB should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
+ * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
+ *    Tx queue. Coalescing can be set to values of up to 511 usec, but with
+ *    decreasing accuracy as the value grows [an error of up to 3usec for the
+ *    highest values].
+ *    While the API allows setting coalescing per-qid, all queues sharing a SB
+ *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
  *    otherwise configuration would break.
  *
  * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
+ * @param rx_coal - Rx Coalesce value in micro seconds.
+ * @param tx_coal - TX Coalesce value in micro seconds.
+ * @param p_handle
  *
  * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
+ **/
+enum _ecore_status_t
+ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
+			 u16 tx_coal, void *p_handle);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 703c1e8..4ffa8d0 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -52,6 +52,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
 	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
+	"CHANNEL_TLV_COALESCE_UPDATE",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -1939,6 +1940,8 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	vf->state = VF_ENABLED;
 	start = &mbx->req_virt->start_vport;
 
+	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
 	/* Initialize Status block in CAU */
 	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
 		if (!start->sb_addr[sb_id]) {
@@ -1953,7 +1956,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 				      vf->igu_sbs[sb_id],
 				      vf->abs_vf_id, 1);
 	}
-	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
 
 	vf->mtu = start->mtu;
 	vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
@@ -3226,6 +3228,65 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
 			       length, status);
 }
 
+static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct vfpf_update_coalesce *req;
+	u8 status = PFVF_STATUS_FAILURE;
+	struct ecore_queue_cid *p_cid;
+	u16 rx_coal, tx_coal;
+	u16  qid;
+
+	req = &mbx->req_virt->update_coalesce;
+
+	rx_coal = req->rx_coal;
+	tx_coal = req->tx_coal;
+	qid = req->qid;
+	p_cid = vf->vf_queues[qid].p_rx_cid;
+
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
+			goto out;
+		}
+	}
+	if (tx_coal) {
+		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
+			goto out;
+		}
+	}
+
+	status = PFVF_STATUS_SUCCESS;
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -3579,6 +3640,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
 			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_COALESCE_UPDATE:
+			ecore_iov_vf_pf_set_coalesce(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index a072a81..bf516cc 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1424,6 +1424,48 @@ exit:
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal,
+			 struct ecore_queue_cid     *p_cid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_update_coalesce *req;
+	struct pfvf_def_resp_tlv *resp;
+	enum _ecore_status_t rc;
+
+	/* clear mailbox and prep header tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(*req));
+
+	req->rx_coal = rx_coal;
+	req->tx_coal = tx_coal;
+	req->qid = p_cid->rel.queue_id;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Setting coalesce rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   rx_coal, tx_coal, req->qid);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	resp = &p_iov->pf2vf_reply->default_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+	if (rc != ECORE_SUCCESS)
+		goto exit;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		goto exit;
+
+	p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id)
 {
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 0d67054..228bbf0 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -50,6 +50,20 @@ struct ecore_vf_iov {
 enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
 /**
+ * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
+ *	Coalesce value '0' will omit the configuration.
+ *
+ *	@param p_hwfn
+ *	@param rx_coal - coalesce value in micro second for rx queue
+ *	@param tx_coal - coalesce value in micro second for tx queue
+ *	@param p_cid
+ *
+ **/
+enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      struct ecore_queue_cid *p_cid);
+
+/**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
@@ -263,5 +277,15 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
+
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 82ed4f5..e0b63bf 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -457,6 +457,14 @@ struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
 
+struct vfpf_update_coalesce {
+	struct vfpf_first_tlv first_tlv;
+	u16 rx_coal;
+	u16 tx_coal;
+	u16 qid;
+	u8 padding[2];
+};
+
 union vfpf_tlvs {
 	struct vfpf_first_tlv			first_tlv;
 	struct vfpf_acquire_tlv			acquire;
@@ -469,6 +477,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
 	struct vfpf_update_tunn_param_tlv	tunn_param_update;
+	struct vfpf_update_coalesce		update_coalesce;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -592,6 +601,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 	CHANNEL_TLV_UPDATE_TUNN_PARAM,
+	CHANNEL_TLV_COALESCE_UPDATE,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 45/61] net/qede/base: add macro for resource value message
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (44 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 44/61] net/qede/base: add coalescing support for VFs Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 46/61] net/qede/base: add mailbox for resource allocation Rasesh Mody
                     ` (16 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for the resource value message.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 7f94ba1..6f0e2f9 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1137,16 +1137,15 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
 #define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	0x28000000
 #define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	0x29000000
+#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	0x31000000
 #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
 #define DRV_MSG_CODE_OV_UPDATE_MTU		0x33000000
-
-#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
-
 /* DRV_MB Param: driver version supp, FW_MB param: MFW version supp,
  * data: struct resource_info
  */
 #define DRV_MSG_GET_RESOURCE_ALLOC_MSG		0x34000000
+#define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
 
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 46/61] net/qede/base: add mailbox for resource allocation
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (45 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 45/61] net/qede/base: add macro for resource value message Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 47/61] net/qede/base: add macro for unsupported command Rasesh Mody
                     ` (15 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the Management FW mailbox for getting non-L2 resource allocation
information.
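
For most resources, the default that ecore_hw_get_dflt_resc() falls back to
is an even split across the functions on the engine; BDQ is the exception
and defaults to zero. Condensed from the hunk below, for the L2-queue case:

	/* Even split of the engine's L2 queues; each function's range
	 * starts where the previous function's range ends.
	 */
	*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
		       MAX_NUM_L2_QUEUES_BB) / num_funcs;
	*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;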

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    1 +
 drivers/net/qede/base/ecore_dev.c  |   60 ++++++++++++++++++++++++------------
 drivers/net/qede/base/mcp_public.h |    1 +
 3 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 60a8a6b..25b6c4e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -291,6 +291,7 @@ enum ecore_resources {
 	ECORE_LL2_QUEUE,
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
+	ECORE_BDQ,
 	ECORE_MAX_RESC,			/* must be last */
 };
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a876bc..d5a8a90 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2463,6 +2463,9 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_RDMA_STATS_QUEUE:
 		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
 		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
 	default:
 		break;
 	}
@@ -2470,67 +2473,84 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	return mfw_res_id;
 }
 
-static u32 ecore_hw_get_dflt_resc_num(struct ecore_hwfn *p_hwfn,
-				      enum ecore_resources res_id)
+static
+enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
+					    enum ecore_resources res_id,
+					    u32 *p_resc_num,
+					    u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	struct ecore_sb_cnt_info sb_cnt_info;
-	u32 dflt_resc_num = 0;
 
 	switch (res_id) {
 	case ECORE_SB:
 		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
 		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-		dflt_resc_num = sb_cnt_info.sb_cnt;
+		*p_resc_num = sb_cnt_info.sb_cnt;
 		break;
 	case ECORE_L2_QUEUE:
-		dflt_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
 				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		dflt_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
+		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
 				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
 		break;
 	case ECORE_PQ:
-		dflt_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
 				 MAX_QM_TX_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_RL:
-		dflt_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
 		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		dflt_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
+		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		dflt_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
+		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
 				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		dflt_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
 		/* @DPDK */
-		dflt_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
 		/* @DPDK */
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
+	case ECORE_BDQ:
+		/* @DPDK */
+		*p_resc_num = 0;
+		break;
+	default:
+		break;
+	}
+
+
+	switch (res_id) {
+	case ECORE_BDQ:
+		if (!*p_resc_num)
+			*p_resc_start = 0;
+		break;
 	default:
+		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
 		break;
 	}
 
-	return dflt_resc_num;
+	return ECORE_SUCCESS;
 }
 
 static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
@@ -2562,6 +2582,8 @@ static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
 		return "RDMA_STATS_QUEUE";
+	case ECORE_BDQ:
+		return "BDQ";
 	default:
 		return "UNKNOWN_RESOURCE";
 	}
@@ -2579,14 +2601,14 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	dflt_resc_num = ecore_hw_get_dflt_resc_num(p_hwfn, res_id);
-	if (!dflt_resc_num) {
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
+				    &dflt_resc_num, &dflt_resc_start);
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
 			res_id, ecore_hw_get_resc_name(res_id));
-		return ECORE_INVAL;
+		return rc;
 	}
-	dflt_resc_start = dflt_resc_num * p_hwfn->enabled_func_idx;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6f0e2f9..333d147 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1025,6 +1025,7 @@ enum resource_id_enum {
 	RESOURCE_NUM_RSS_ENGINES_E	=	14,
 	RESOURCE_LL2_QUEUE_E		=	15,
 	RESOURCE_RDMA_STATS_QUEUE_E	=	16,
+	RESOURCE_BDQ_E			=	17,
 	RESOURCE_MAX_NUM,
 	RESOURCE_NUM_INVALID		=	0xFFFFFFFF
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 47/61] net/qede/base: add macro for unsupported command
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (46 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 46/61] net/qede/base: add mailbox for resource allocation Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 48/61] net/qede/base: set max values for soft resources Rasesh Mody
                     ` (14 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for the unsupported management FW command.
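
Since FW_MSG_CODE_UNSUPPORTED is defined as 0x00000000, the named check is
behaviorally identical to the old !mcp_resp test, but documents the intent
at the call site:

	/* An all-zero response means the MFW predates this command. */
	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
		return ECORE_NOTIMPL;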

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c  |    6 ++----
 drivers/net/qede/base/mcp_public.h |    1 +
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6c5b5db..15f3ea0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1424,8 +1424,7 @@ ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the mdump command is not supported */
-	if (!mcp_resp)
+	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (mcp_resp != FW_MSG_CODE_OK) {
@@ -2832,8 +2831,7 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the resource command is not supported */
-	if (!*p_mcp_resp)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 333d147..fcf9847 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1489,6 +1489,7 @@ struct public_drv_mb {
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
+#define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 48/61] net/qede/base: set max values for soft resources
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (47 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 47/61] net/qede/base: add macro for unsupported command Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 49/61] net/qede/base: add return code check Rasesh Mody
                     ` (13 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for the new Management FW interface for setting the max
values of "soft" resources.
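
A condensed sketch of the lock/set/unlock sequence this patch adds to
ecore_hw_get_resc(); the names and retry values all come from this patch,
and the error handling is trimmed for brevity:

	struct ecore_resc_unlock_params resc_unlock_params;
	struct ecore_resc_lock_params resc_lock_params;
	enum _ecore_status_t rc;

	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
	resc_lock_params.retry_interval =
		ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
	resc_lock_params.sleep_b4_retry = true;

	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt,
				 &resc_lock_params);
	if (rc == ECORE_SUCCESS && resc_lock_params.b_granted) {
		/* Set max values and query the allocation while no other
		 * PF can race with us.
		 */
		rc = ecore_hw_set_soft_resc_size(p_hwfn);

		OSAL_MEM_ZERO(&resc_unlock_params,
			      sizeof(resc_unlock_params));
		resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
		ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
				      &resc_unlock_params);
	}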

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    2 +
 drivers/net/qede/base/ecore_dev.c |  282 ++++++++++++++++++++++--------------
 drivers/net/qede/base/ecore_mcp.c |  287 +++++++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_mcp.h |  104 ++++++++++----
 4 files changed, 498 insertions(+), 177 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25b6c4e..7379b3f 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -856,4 +856,6 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
 
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d5a8a90..3191ee4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2420,64 +2420,109 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
-static enum resource_id_enum
-ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 {
-	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
-
 	switch (res_id) {
 	case ECORE_SB:
-		mfw_res_id = RESOURCE_NUM_SB_E;
-		break;
+		return "SB";
 	case ECORE_L2_QUEUE:
-		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
-		break;
+		return "L2_QUEUE";
 	case ECORE_VPORT:
-		mfw_res_id = RESOURCE_NUM_VPORT_E;
-		break;
+		return "VPORT";
 	case ECORE_RSS_ENG:
-		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
-		break;
+		return "RSS_ENG";
 	case ECORE_PQ:
-		mfw_res_id = RESOURCE_NUM_PQ_E;
-		break;
+		return "PQ";
 	case ECORE_RL:
-		mfw_res_id = RESOURCE_NUM_RL_E;
-		break;
+		return "RL";
 	case ECORE_MAC:
+		return "MAC";
 	case ECORE_VLAN:
-		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		mfw_res_id = RESOURCE_VFC_FILTER_E;
-		break;
+		return "VLAN";
+	case ECORE_RDMA_CNQ_RAM:
+		return "RDMA_CNQ_RAM";
 	case ECORE_ILT:
-		mfw_res_id = RESOURCE_ILT_E;
-		break;
+		return "ILT";
 	case ECORE_LL2_QUEUE:
-		mfw_res_id = RESOURCE_LL2_QUEUE_E;
-		break;
-	case ECORE_RDMA_CNQ_RAM:
+		return "LL2_QUEUE";
 	case ECORE_CMDQS_CQS:
-		/* CNQ/CMDQS are the same resource */
-		mfw_res_id = RESOURCE_CQS_E;
-		break;
+		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
-		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
-		break;
+		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
-		mfw_res_id = RESOURCE_BDQ_E;
-		break;
+		return "BDQ";
 	default:
-		break;
+		return "UNKNOWN_RESOURCE";
 	}
+}
 
-	return mfw_res_id;
+static enum _ecore_status_t
+__ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
+			      enum ecore_resources res_id, u32 resc_max_val,
+			      u32 *p_mcp_resp)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_hwfn->p_main_ptt, res_id,
+					resc_max_val, p_mcp_resp);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true,
+			  "MFW response failure for a max value setting of resource %d [%s]\n",
+			  res_id, ecore_hw_get_resc_name(res_id));
+		return rc;
+	}
+
+	if (*p_mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK)
+		DP_INFO(p_hwfn,
+			"Failed to set the max value of resource %d [%s]. mcp_resp = 0x%08x.\n",
+			res_id, ecore_hw_get_resc_name(res_id), *p_mcp_resp);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn)
+{
+	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	u32 resc_max_val, mcp_resp;
+	u8 res_id;
+	enum _ecore_status_t rc;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		/* @DPDK */
+		switch (res_id) {
+		case ECORE_LL2_QUEUE:
+		case ECORE_RDMA_CNQ_RAM:
+		case ECORE_RDMA_STATS_QUEUE:
+		case ECORE_BDQ:
+			resc_max_val = 0;
+			break;
+		default:
+			continue;
+		}
+
+		rc = __ecore_hw_set_soft_resc_size(p_hwfn, res_id,
+						   resc_max_val, &mcp_resp);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* There's no point to continue to the next resource if the
+		 * command is not supported by the MFW.
+		 * We do continue if the command is supported but the resource
+		 * is unknown to the MFW. Such a resource will be later
+		 * configured with the default allocation values.
+		 */
+		if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+			return ECORE_NOTIMPL;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static
 enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    enum ecore_resources res_id,
-					    u32 *p_resc_num,
-					    u32 *p_resc_start)
+					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
@@ -2553,56 +2598,19 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
-{
-	switch (res_id) {
-	case ECORE_SB:
-		return "SB";
-	case ECORE_L2_QUEUE:
-		return "L2_QUEUE";
-	case ECORE_VPORT:
-		return "VPORT";
-	case ECORE_RSS_ENG:
-		return "RSS_ENG";
-	case ECORE_PQ:
-		return "PQ";
-	case ECORE_RL:
-		return "RL";
-	case ECORE_MAC:
-		return "MAC";
-	case ECORE_VLAN:
-		return "VLAN";
-	case ECORE_RDMA_CNQ_RAM:
-		return "RDMA_CNQ_RAM";
-	case ECORE_ILT:
-		return "ILT";
-	case ECORE_LL2_QUEUE:
-		return "LL2_QUEUE";
-	case ECORE_CMDQS_CQS:
-		return "CMDQS_CQS";
-	case ECORE_RDMA_STATS_QUEUE:
-		return "RDMA_STATS_QUEUE";
-	case ECORE_BDQ:
-		return "BDQ";
-	default:
-		return "UNKNOWN_RESOURCE";
-	}
-}
-
-static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
-						   enum ecore_resources res_id,
-						   bool drv_resc_alloc)
+static enum _ecore_status_t
+__ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
+			 bool drv_resc_alloc)
 {
-	u32 dflt_resc_num = 0, dflt_resc_start = 0, mcp_resp, mcp_param;
-	u32 *p_resc_num, *p_resc_start;
-	struct resource_info resc_info;
+	u32 dflt_resc_num = 0, dflt_resc_start = 0;
+	u32 mcp_resp, *p_resc_num, *p_resc_start;
 	enum _ecore_status_t rc;
 
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
-				    &dflt_resc_num, &dflt_resc_start);
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id, &dflt_resc_num,
+				    &dflt_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
@@ -2618,17 +2626,8 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-	OSAL_MEM_ZERO(&resc_info, sizeof(resc_info));
-	resc_info.res_id = ecore_hw_get_mfw_res_id(res_id);
-	if (resc_info.res_id == RESOURCE_NUM_INVALID) {
-		DP_ERR(p_hwfn,
-		       "Failed to match resource %d with MFW resources\n",
-		       res_id);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, &resc_info,
-				     &mcp_resp, &mcp_param);
+	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id,
+				     &mcp_resp, p_resc_num, p_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true,
 			  "MFW response failure for an allocation request for"
@@ -2642,13 +2641,11 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	 * - There is an internal error in the MFW while processing the request
 	 * - The resource ID is unknown to the MFW
 	 */
-	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK &&
-	    mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED) {
-		/* @DPDK */
+	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: No allocation info was received"
-			" [mcp_resp 0x%x]. Applying default values"
-			" [num %d, start %d].\n",
+			"Failed to receive allocation info for resource %d [%s]."
+			" mcp_resp = 0x%x. Applying default values"
+			" [%d,%d].\n",
 			res_id, ecore_hw_get_resc_name(res_id), mcp_resp,
 			dflt_resc_num, dflt_resc_start);
 
@@ -2660,16 +2657,13 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	/* TBD - remove this when revising the handling of the SB resource */
 	if (res_id == ECORE_SB) {
 		/* Excluding the slowpath SB */
-		resc_info.size -= 1;
-		resc_info.offset -= p_hwfn->enabled_func_idx;
+		*p_resc_num -= 1;
+		*p_resc_start -= p_hwfn->enabled_func_idx;
 	}
 
-	*p_resc_num = resc_info.size;
-	*p_resc_start = resc_info.offset;
-
 	if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: MFW allocation [num %d, start %d] differs from default values [num %d, start %d]%s\n",
+			"MFW allocation for resource %d [%s] differs from default values [%d,%d vs. %d,%d]%s\n",
 			res_id, ecore_hw_get_resc_name(res_id), *p_resc_num,
 			*p_resc_start, dflt_resc_num, dflt_resc_start,
 			drv_resc_alloc ? " - Applying default values" : "");
@@ -2682,12 +2676,32 @@ out:
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
+						   bool drv_resc_alloc)
+{
+	enum _ecore_status_t rc;
+	u8 res_id;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		rc = __ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_RESC_ALLOC_LOCK_RETRY_CNT		10
+#define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
+
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 					      bool drv_resc_alloc)
 {
+	struct ecore_resc_unlock_params resc_unlock_params;
+	struct ecore_resc_lock_params resc_lock_params;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
-	enum _ecore_status_t rc;
 	u8 res_id;
+	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
 	u32 *resc_start = p_hwfn->hw_info.resc_start;
 	u32 *resc_num = p_hwfn->hw_info.resc_num;
@@ -2700,10 +2714,62 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	u32 roce_min_ilt_lines = PXP_NUM_ILT_RECORDS_BB / MAX_NUM_PFS_BB;
 #endif
 
-	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
-		rc = ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+	/* Setting the max values of the soft resources and the following
+	 * resources allocation queries should be atomic. Since several PFs can
+	 * run in parallel - a resource lock is needed.
+	 * If either the resource lock or resource set value commands are not
+	 * supported - skip the max values setting, release the lock if
+	 * needed, and proceed to the queries. Other failures, including a
+	 * failure to acquire the lock, will cause this function to fail.
+	 * Old drivers that don't acquire the lock can run in parallel, and
+	 * their allocation values won't be affected by the updated max values.
+	 */
+	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
+	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
+	resc_lock_params.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
+	resc_lock_params.sleep_b4_retry = true;
+	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
+	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+
+	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt, &resc_lock_params);
+	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+		return rc;
+	} else if (rc == ECORE_NOTIMPL) {
+		DP_INFO(p_hwfn,
+			"Skip the max values setting of the soft resources since the resource lock is not supported by the MFW\n");
+	} else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to acquire the resource lock for the resource allocation commands\n");
+		rc = ECORE_BUSY;
+		goto unlock_and_exit;
+	} else {
+		rc = ecore_hw_set_soft_resc_size(p_hwfn);
+		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to set the max values of the soft resources\n");
+			goto unlock_and_exit;
+		} else if (rc == ECORE_NOTIMPL) {
+			DP_INFO(p_hwfn,
+				"Skip the max values setting of the soft resources since it is not supported by the MFW\n");
+			rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+						   &resc_unlock_params);
+			if (rc != ECORE_SUCCESS)
+				DP_INFO(p_hwfn,
+					"Failed to release the resource lock for the resource allocation commands\n");
+		}
+	}
+
+	rc = ecore_hw_set_resc_info(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS)
+		goto unlock_and_exit;
+
+	if (resc_lock_params.b_granted && !resc_unlock_params.b_released) {
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+					   &resc_unlock_params);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			DP_INFO(p_hwfn,
+				"Failed to release the resource lock for the resource allocation commands\n");
 	}
 
 #ifndef ASIC_ONLY
@@ -2756,6 +2822,10 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			   RESC_START(p_hwfn, res_id));
 
 	return ECORE_SUCCESS;
+
+unlock_and_exit:
+	ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt, &resc_unlock_params);
+	return rc;
 }
 
 static enum _ecore_status_t
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 15f3ea0..3efe0a0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2768,7 +2768,60 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 			     0, &rsp, (u32 *)num_events);
 }
 
-#define ECORE_RESC_ALLOC_VERSION_MAJOR	1
+static enum resource_id_enum
+ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
+{
+	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
+
+	switch (res_id) {
+	case ECORE_SB:
+		mfw_res_id = RESOURCE_NUM_SB_E;
+		break;
+	case ECORE_L2_QUEUE:
+		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
+		break;
+	case ECORE_VPORT:
+		mfw_res_id = RESOURCE_NUM_VPORT_E;
+		break;
+	case ECORE_RSS_ENG:
+		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
+		break;
+	case ECORE_PQ:
+		mfw_res_id = RESOURCE_NUM_PQ_E;
+		break;
+	case ECORE_RL:
+		mfw_res_id = RESOURCE_NUM_RL_E;
+		break;
+	case ECORE_MAC:
+	case ECORE_VLAN:
+		/* Each VFC resource can accommodate both a MAC and a VLAN */
+		mfw_res_id = RESOURCE_VFC_FILTER_E;
+		break;
+	case ECORE_ILT:
+		mfw_res_id = RESOURCE_ILT_E;
+		break;
+	case ECORE_LL2_QUEUE:
+		mfw_res_id = RESOURCE_LL2_QUEUE_E;
+		break;
+	case ECORE_RDMA_CNQ_RAM:
+	case ECORE_CMDQS_CQS:
+		/* CNQ/CMDQS are the same resource */
+		mfw_res_id = RESOURCE_CQS_E;
+		break;
+	case ECORE_RDMA_STATS_QUEUE:
+		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
+		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
+	default:
+		break;
+	}
+
+	return mfw_res_id;
+}
+
+#define ECORE_RESC_ALLOC_VERSION_MAJOR	2
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
 #define ECORE_RESC_ALLOC_VERSION				\
 	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
@@ -2776,36 +2829,146 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
 	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
 
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param)
+struct ecore_resc_alloc_in_params {
+	u32 cmd;
+	enum ecore_resources res_id;
+	u32 resc_max_val;
+};
+
+struct ecore_resc_alloc_out_params {
+	u32 mcp_resp;
+	u32 mcp_param;
+	u32 resc_num;
+	u32 resc_start;
+	u32 vf_resc_num;
+	u32 vf_resc_start;
+	u32 flags;
+};
+
+static enum _ecore_status_t
+ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      struct ecore_resc_alloc_in_params *p_in_params,
+			      struct ecore_resc_alloc_out_params *p_out_params)
 {
+	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
+	p_mfw_resc_info = &union_data.resource;
+	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+
+	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+		DP_ERR(p_hwfn,
+		       "Failed to match resource %d [%s] with the MFW resources\n",
+		       p_in_params->res_id,
+		       ecore_hw_get_resc_name(p_in_params->res_id));
+		return ECORE_INVAL;
+	}
+
+	switch (p_in_params->cmd) {
+	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
+		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		/* Fallthrough */
+	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected resource alloc command [0x%08x]\n",
+		       p_in_params->cmd);
+		return ECORE_INVAL;
+	}
+
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	OSAL_MEMCPY(&union_data.resource, p_resc_info, sizeof(*p_resc_info));
 	mb_params.p_data_src = &union_data;
 	mb_params.p_data_dst = &union_data;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
+		   p_in_params->cmd, p_in_params->res_id,
+		   ecore_hw_get_resc_name(p_in_params->res_id),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_in_params->resc_max_val);
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	*p_mcp_param = mb_params.mcp_param;
-
-	OSAL_MEMCPY(p_resc_info, &union_data.resource, sizeof(*p_resc_info));
+	p_out_params->mcp_resp = mb_params.mcp_resp;
+	p_out_params->mcp_param = mb_params.mcp_param;
+	p_out_params->resc_num = p_mfw_resc_info->size;
+	p_out_params->resc_start = p_mfw_resc_info->offset;
+	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
+	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
+	p_out_params->flags = p_mfw_resc_info->flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x,"
-		   " offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
-		   *p_mcp_param, p_resc_info->res_id, p_resc_info->size,
-		   p_resc_info->offset, p_resc_info->vf_size,
-		   p_resc_info->vf_offset, p_resc_info->flags);
+		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_out_params->resc_num, p_out_params->resc_start,
+		   p_out_params->vf_resc_num, p_out_params->vf_resc_start,
+		   p_out_params->flags);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_SET_RESOURCE_VALUE_MSG;
+	in_params.res_id = res_id;
+	in_params.resc_max_val = resc_max_val;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	in_params.res_id = res_id;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	if (*p_mcp_resp == FW_MSG_CODE_RESOURCE_ALLOC_OK) {
+		*p_resc_num = out_params.resc_num;
+		*p_resc_start = out_params.resc_start;
+	}
 
 	return ECORE_SUCCESS;
 }
@@ -2831,8 +2994,11 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The resource command is unsupported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
 		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
@@ -2846,36 +3012,35 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner)
+enum _ecore_status_t
+__ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_lock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	switch (timeout) {
+	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
 		break;
 	}
 
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
-		   param, timeout, opcode, resource_num);
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
+		   param, p_params->timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2884,19 +3049,20 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Analyze the response */
-	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	p_params->owner = ECORE_MFW_GET_FIELD(mcp_param,
+					     RESOURCE_CMD_RSP_OWNER);
 	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
-		   mcp_param, opcode, *p_owner);
+		   mcp_param, opcode, p_params->owner);
 
 	switch (opcode) {
 	case RESOURCE_OPCODE_GNT:
-		*p_granted = true;
+		p_params->b_granted = true;
 		break;
 	case RESOURCE_OPCODE_BUSY:
-		*p_granted = false;
+		p_params->b_granted = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
@@ -2908,23 +3074,54 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released)
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params)
+{
+	u32 retry_cnt = 0;
+	enum _ecore_status_t rc;
+
+	do {
+		/* No need for an interval before the first iteration */
+		if (retry_cnt) {
+			if (p_params->sleep_b4_retry) {
+				u16 retry_interval_in_ms =
+					DIV_ROUND_UP(p_params->retry_interval,
+						     1000);
+
+				OSAL_MSLEEP(retry_interval_in_ms);
+			} else {
+				OSAL_UDELAY(p_params->retry_interval);
+			}
+		}
+
+		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		if (p_params->b_granted)
+			break;
+	} while (retry_cnt++ < p_params->retry_num);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
-		       : RESOURCE_OPCODE_RELEASE;
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	opcode = p_params->b_force ? RESOURCE_OPCODE_FORCE_RELEASE
+				   : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
-		   param, opcode, resource_num);
+		   "Resource unlock request: param 0x%08x [opcode %d, resource %d]\n",
+		   param, opcode, p_params->resource);
 
 	/* Attempt to release the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2942,14 +3139,14 @@ enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
 	switch (opcode) {
 	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
 		DP_INFO(p_hwfn,
-			"Resource unlock request for an already released resource [resc_num %d]\n",
-			resource_num);
+			"Resource unlock request for an already released resource [%d]\n",
+			p_params->resource);
 		/* Fallthrough */
 	case RESOURCE_OPCODE_RELEASED:
-		*p_released = true;
+		p_params->b_released = true;
 		break;
 	case RESOURCE_OPCODE_WRONG_OWNER:
-		*p_released = false;
+		p_params->b_released = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 4138a12..f5dac9d 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -11,6 +11,7 @@
 
 #include "bcm_osal.h"
 #include "mcp_public.h"
+#include "ecore.h"
 #include "ecore_mcp_api.h"
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
@@ -339,20 +340,37 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt);
 
 /**
+ * @brief - Sets the MFW's max value for the given resource
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param res_id
+ *  @param resc_max_val
+ *  @param p_mcp_resp
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp);
+
+/**
  * @brief - Gets the MFW allocation info for the given resource
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param p_resc_info
+ *  @param res_id
  *  @param p_mcp_resp
- *  @param p_mcp_param
+ *  @param p_resc_num
+ *  @param p_resc_start
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param);
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start);
 
 /**
  * @brief - Initiates PF FLR
@@ -365,45 +383,79 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_MIN_VAL	RESOURCE_DUMP /* 0 */
+#define ECORE_MCP_RESC_LOCK_MAX_VAL	31
+
+enum ecore_resc_lock {
+	ECORE_RESC_LOCK_DBG_DUMP = ECORE_MCP_RESC_LOCK_MIN_VAL,
+	/* Locks that the MFW is aware of should be added here downwards */
+
+	/* Ecore only locks should be added here upwards */
+	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL
+};
+
+struct ecore_resc_lock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Lock timeout value in seconds [default, none or 1..254] */
+	u8 timeout;
 #define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
 #define ECORE_MCP_RESC_LOCK_TO_NONE	255
 
+	/* Number of times to retry locking */
+	u8 retry_num;
+
+	/* The interval in usec between retries */
+	u16 retry_interval;
+
+	/* Use sleep or delay between retries */
+	bool sleep_b4_retry;
+
+	/* Will be set as true if the resource is free and granted */
+	bool b_granted;
+
+	/* Will be filled with the resource owner.
+	 * [0..15 = PF0-15, 16 = MFW, 17 = diag over serial]
+	 */
+	u8 owner;
+};
+
 /**
  * @brief Acquires MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num - valid values are 0..31
- *  @param timeout - lock timeout value in seconds
- *                   (1..254, '0' - default value, '255' - no timeout).
- *  @param p_granted - will be filled as true if the resource is free and
- *                     granted, or false if it is busy.
- *  @param p_owner - A pointer to a variable to be filled with the resource
- *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner);
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params);
+
+struct ecore_resc_unlock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Allow releasing a resource even if it belongs to another PF */
+	bool b_force;
+
+	/* Will be set as true if the resource is released */
+	bool b_released;
+};
 
 /**
  * @brief Releases MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num
- *  @param force -  allows to release a reeource even if belongs to another PF
- *  @param p_released - will be filled as true if the resource is released (or
- *			has been already released), and false if the resource is
- *			acquired by another PF and the `force' flag was not set.
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released);
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params);
 
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 49/61] net/qede/base: add return code check
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (48 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 48/61] net/qede/base: set max values for soft resoruces Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 50/61] net/qede/base: zero out MFW mailbox data Rasesh Mody
                     ` (12 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a check of the return code of ecore_mcp_cmd_and_union() in
ecore_mcp_send_protocol_stats(), and log an error when the mailbox
command fails.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 3efe0a0..0ebb5cd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1237,6 +1237,7 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	u32 hsi_param;
+	enum _ecore_status_t rc;
 
 	switch (type) {
 	case MFW_DRV_MSG_GET_LAN_STATS:
@@ -1255,7 +1256,9 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	mb_params.param = hsi_param;
 	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
 	mb_params.p_data_src = &union_data;
-	ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
 }
 
 static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 50/61] net/qede/base: zero out MFW mailbox data
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (49 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 49/61] net/qede/base: add return code check Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 51/61] net/qede/base: move code bits Rasesh Mody
                     ` (11 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Zero the whole union data of the Management FW mailbox before copying
in the actual union member, so that no stale data is sent to the MFW.
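
For context, a condensed caller-side sketch of the reworked mailbox
API, distilled from the ecore_mcp_set_link() hunk below: callers now
hand ecore_mcp_cmd_and_union() a raw buffer plus an explicit size, and
the helper zeroes the full union before copying the data in:

	struct ecore_mcp_mb_params mb_params;
	struct eth_phy_cfg phy_cfg;
	enum _ecore_status_t rc;

	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
	/* ... fill phy_cfg from the link parameters ... */

	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
	mb_params.cmd = DRV_MSG_CODE_INIT_PHY;
	mb_params.p_data_src = &phy_cfg;           /* raw buffer, not a union */
	mb_params.data_src_size = sizeof(phy_cfg); /* bounds the shmem copy */
	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);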

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    4 +-
 drivers/net/qede/base/ecore_mcp.c |  296 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_mcp.h |   19 ++-
 3 files changed, 181 insertions(+), 138 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 3191ee4..e584058 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2311,9 +2311,7 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
 			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
 		}
 
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_DONE,
-				   0, &unload_resp, &unload_param);
+		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn,
 				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0ebb5cd..b53210f 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -364,6 +364,7 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct ecore_mcp_mb_params *p_mb_params)
 {
+	union drv_union_data union_data;
 	u32 union_data_addr;
 	enum _ecore_status_t rc;
 
@@ -373,6 +374,15 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
+	if (p_mb_params->data_src_size > sizeof(union_data) ||
+	    p_mb_params->data_dst_size > sizeof(union_data)) {
+		DP_ERR(p_hwfn,
+		       "The provided size is larger than the union data size [src_size %u, dst_size %u, union_data_size %zu]\n",
+		       p_mb_params->data_src_size, p_mb_params->data_dst_size,
+		       sizeof(union_data));
+		return ECORE_INVAL;
+	}
+
 	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
 			  OFFSETOF(struct public_drv_mb, union_data);
 
@@ -383,19 +393,21 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_mb_params->p_data_src != OSAL_NULL)
-		ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
-				p_mb_params->p_data_src,
-				sizeof(*p_mb_params->p_data_src));
+	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
+	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
+		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
+			    p_mb_params->data_src_size);
+	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
+			sizeof(union_data));
 
 	rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
 			      p_mb_params->param, &p_mb_params->mcp_resp,
 			      &p_mb_params->mcp_param);
 
-	if (p_mb_params->p_data_dst != OSAL_NULL)
+	if (p_mb_params->p_data_dst != OSAL_NULL &&
+	    p_mb_params->data_dst_size)
 		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
-				  union_data_addr,
-				  sizeof(*p_mb_params->p_data_dst));
+				  union_data_addr, p_mb_params->data_dst_size);
 
 	ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
 
@@ -443,14 +455,13 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 i_txn_size, u32 *i_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = i_buf;
+	mb_params.data_src_size = (u8)i_txn_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -470,13 +481,17 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_txn_size, u32 *o_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = raw_data;
+
+	/* Use the maximal value since the actual one is part of the response */
+	mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN;
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -485,7 +500,7 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_param = mb_params.mcp_param;
 
 	*o_txn_size = *o_mcp_param;
-	OSAL_MEMCPY(o_buf, (u32 *)&union_data.raw_data, *o_txn_size);
+	OSAL_MEMCPY(o_buf, raw_data, *o_txn_size);
 
 	return ECORE_SUCCESS;
 }
@@ -605,26 +620,23 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		     struct ecore_load_req_in_params *p_in_params,
 		     struct ecore_load_req_out_params *p_out_params)
 {
-	union drv_union_data union_data_src, union_data_dst;
 	struct ecore_mcp_mb_params mb_params;
-	struct load_req_stc *p_load_req;
-	struct load_rsp_stc *p_load_rsp;
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	u32 hsi_ver;
 	enum _ecore_status_t rc;
 
-	p_load_req = &union_data_src.load_req;
-	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
-	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
-	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
-	p_load_req->fw_ver = p_in_params->fw_ver;
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+	OSAL_MEM_ZERO(&load_req, sizeof(load_req));
+	load_req.drv_ver_0 = p_in_params->drv_ver_0;
+	load_req.drv_ver_1 = p_in_params->drv_ver_1;
+	load_req.fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_ROLE,
 			    p_in_params->drv_role);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
 			    p_in_params->timeout_val);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
-			    p_in_params->force_cmd);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
-			    p_in_params->avoid_eng_reset);
+
+	/* @DPDK */
+	load_req.misc0 |= LOAD_REQ_FORCE_NONE;
 
 	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
 		  DRV_ID_MCP_HSI_VER_CURRENT :
@@ -633,8 +645,10 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
-	mb_params.p_data_src = &union_data_src;
-	mb_params.p_data_dst = &union_data_dst;
+	mb_params.p_data_src = &load_req;
+	mb_params.data_src_size = sizeof(load_req);
+	mb_params.p_data_dst = &load_rsp;
+	mb_params.data_dst_size = sizeof(load_rsp);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
@@ -647,15 +661,13 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
-			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
-			   p_load_req->fw_ver, p_load_req->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   load_req.drv_ver_0, load_req.drv_ver_1,
+			   load_req.fw_ver, load_req.misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_LOCK_TO),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_FORCE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_FLAGS0));
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
@@ -671,28 +683,24 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
 	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
-		p_load_rsp = &union_data_dst.load_rsp;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
-			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
-			   p_load_rsp->fw_ver, p_load_rsp->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_HSI),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+			   load_rsp.drv_ver_0, load_rsp.drv_ver_1,
+			   load_rsp.fw_ver, load_rsp.misc0,
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0,
 					       LOAD_RSP_FLAGS0));
 
-		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
-		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
-		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_ver_0 = load_rsp.drv_ver_0;
+		p_out_params->exist_drv_ver_1 = load_rsp.drv_ver_1;
+		p_out_params->exist_fw_ver = load_rsp.fw_ver;
 		p_out_params->exist_drv_role =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
 		p_out_params->mfw_hsi_ver =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
 		p_out_params->drv_exists =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					    LOAD_RSP_FLAGS0) &
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
 			LOAD_RSP_FLAGS0_DRV_EXISTS;
 	}
 
@@ -883,6 +891,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac wol_mac;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE;
+
+	return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+}
+
 static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
@@ -924,7 +944,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
 				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 	int i;
 
@@ -935,8 +954,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
-	OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = vfs_to_ack;
+	mb_params.data_src_size = VF_MAX_STATIC / 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1122,8 +1141,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
-	struct eth_phy_cfg *p_phy_cfg;
+	struct eth_phy_cfg phy_cfg;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cmd;
 
@@ -1133,30 +1151,30 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Set the shmem configuration according to params */
-	p_phy_cfg = &union_data.drv_phy_cfg;
-	OSAL_MEMSET(p_phy_cfg, 0, sizeof(*p_phy_cfg));
+	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
 	cmd = b_up ? DRV_MSG_CODE_INIT_PHY : DRV_MSG_CODE_LINK_RESET;
 	if (!params->speed.autoneg)
-		p_phy_cfg->speed = params->speed.forced_speed;
-	p_phy_cfg->pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
-	p_phy_cfg->pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
-	p_phy_cfg->pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
-	p_phy_cfg->adv_speed = params->speed.advertised_speeds;
-	p_phy_cfg->loopback_mode = params->loopback_mode;
+		phy_cfg.speed = params->speed.forced_speed;
+	phy_cfg.pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
+	phy_cfg.pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
+	phy_cfg.pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
+	phy_cfg.adv_speed = params->speed.advertised_speeds;
+	phy_cfg.loopback_mode = params->loopback_mode;
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Configuring Link: Speed 0x%08x, Pause 0x%08x,"
 			   " adv_speed 0x%08x, loopback 0x%08x\n",
-			   p_phy_cfg->speed, p_phy_cfg->pause,
-			   p_phy_cfg->adv_speed, p_phy_cfg->loopback_mode);
+			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
+			   phy_cfg.loopback_mode);
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &phy_cfg;
+	mb_params.data_src_size = sizeof(phy_cfg);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
@@ -1235,7 +1253,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	enum ecore_mcp_protocol_type stats_type;
 	union ecore_mcp_protocol_stats stats;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 hsi_param;
 	enum _ecore_status_t rc;
 
@@ -1254,8 +1271,8 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_STATS;
 	mb_params.param = hsi_param;
-	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &stats;
+	mb_params.data_src_size = sizeof(stats);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
@@ -1353,28 +1370,38 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
 	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
 }
 
+struct ecore_mdump_cmd_params {
+	u32 cmd;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
+	u32 mcp_resp;
+};
+
 static enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		    u32 mdump_cmd, union drv_union_data *p_data_src,
-		    union drv_union_data *p_data_dst, u32 *p_mcp_resp)
+		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_MDUMP_CMD;
-	mb_params.param = mdump_cmd;
-	mb_params.p_data_src = p_data_src;
-	mb_params.p_data_dst = p_data_dst;
+	mb_params.param = p_mdump_cmd_params->cmd;
+	mb_params.p_data_src = p_mdump_cmd_params->p_data_src;
+	mb_params.data_src_size = p_mdump_cmd_params->data_src_size;
+	mb_params.p_data_dst = p_mdump_cmd_params->p_data_dst;
+	mb_params.data_dst_size = p_mdump_cmd_params->data_dst_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	if (*p_mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
+	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_NOTICE(p_hwfn, false,
 			  "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
-			  mdump_cmd);
+			  p_mdump_cmd_params->cmd);
 		rc = ECORE_INVAL;
 	}
 
@@ -1384,62 +1411,68 @@ ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 static enum _ecore_status_t ecore_mcp_mdump_ack(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_ACK;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_ACK,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 epoch)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	OSAL_MEMCPY(&union_data.raw_data, &epoch, sizeof(epoch));
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_SET_VALUES;
+	mdump_cmd_params.p_data_src = &epoch;
+	mdump_cmd_params.data_src_size = sizeof(epoch);
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_SET_VALUES,
-				   &union_data, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	p_hwfn->p_dev->mdump_en = true;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_TRIGGER,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static enum _ecore_status_t
 ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct mdump_config_stc *p_mdump_config)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_GET_CONFIG,
-				 OSAL_NULL, &union_data, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GET_CONFIG;
+	mdump_cmd_params.p_data_dst = p_mdump_config;
+	mdump_cmd_params.data_dst_size = sizeof(*p_mdump_config);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The mdump command is not supported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
-	if (mcp_resp != FW_MSG_CODE_OK) {
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
-			  mcp_resp);
+			  mdump_cmd_params.mcp_resp);
 		rc = ECORE_UNKNOWN_ERROR;
 	}
 
-	OSAL_MEMCPY(p_mdump_config, &union_data.mdump_config,
-		    sizeof(*p_mdump_config));
-
 	return rc;
 }
 
@@ -1489,10 +1522,12 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_CLEAR_LOGS,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
@@ -2001,9 +2036,8 @@ enum _ecore_status_t
 ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mcp_drv_version *p_ver)
 {
-	struct drv_version_stc *p_drv_version;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct drv_version_stc drv_version;
 	u32 num_words, i;
 	void *p_name;
 	OSAL_BE32 val;
@@ -2014,19 +2048,20 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return ECORE_SUCCESS;
 #endif
 
-	p_drv_version = &union_data.drv_version;
-	p_drv_version->version = p_ver->version;
+	OSAL_MEM_ZERO(&drv_version, sizeof(drv_version));
+	drv_version.version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
 		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
-		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+		*(u32 *)&drv_version.name[i * sizeof(u32)] = val;
 	}
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_SET_VERSION;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &drv_version;
+	mb_params.data_src_size = sizeof(drv_version);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
@@ -2695,28 +2730,25 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
 			       struct ecore_temperature_info *p_temp_info)
 {
 	struct ecore_temperature_sensor *p_temp_sensor;
-	struct temperature_status_stc *p_mfw_temp_info;
+	struct temperature_status_stc mfw_temp_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 val;
 	enum _ecore_status_t rc;
 	u8 i;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = &mfw_temp_info;
+	mb_params.data_dst_size = sizeof(mfw_temp_info);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_mfw_temp_info = &union_data.temp_info;
-
 	OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
-	p_temp_info->num_sensors = OSAL_MIN_T(u32,
-					      p_mfw_temp_info->num_of_sensors,
+	p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
 					      ECORE_MAX_NUM_OF_SENSORS);
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
-		val = p_mfw_temp_info->sensor[i];
+		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
 		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
 						 SENSOR_LOCATION_SHIFT;
@@ -2854,16 +2886,14 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_resc_alloc_in_params *p_in_params,
 			      struct ecore_resc_alloc_out_params *p_out_params)
 {
-	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct resource_info mfw_resc_info;
 	enum _ecore_status_t rc;
 
-	p_mfw_resc_info = &union_data.resource;
-	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+	OSAL_MEM_ZERO(&mfw_resc_info, sizeof(mfw_resc_info));
 
-	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
-	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+	mfw_resc_info.res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (mfw_resc_info.res_id == RESOURCE_NUM_INVALID) {
 		DP_ERR(p_hwfn,
 		       "Failed to match resource %d [%s] with the MFW resources\n",
 		       p_in_params->res_id,
@@ -2873,7 +2903,7 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	switch (p_in_params->cmd) {
 	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
-		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		mfw_resc_info.size = p_in_params->resc_max_val;
 		/* Fallthrough */
 	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
 		break;
@@ -2886,8 +2916,10 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	mb_params.p_data_src = &union_data;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_src = &mfw_resc_info;
+	mb_params.data_src_size = sizeof(mfw_resc_info);
+	mb_params.p_data_dst = mb_params.p_data_src;
+	mb_params.data_dst_size = mb_params.data_src_size;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
@@ -2905,11 +2937,11 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	p_out_params->mcp_resp = mb_params.mcp_resp;
 	p_out_params->mcp_param = mb_params.mcp_param;
-	p_out_params->resc_num = p_mfw_resc_info->size;
-	p_out_params->resc_start = p_mfw_resc_info->offset;
-	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
-	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
-	p_out_params->flags = p_mfw_resc_info->flags;
+	p_out_params->resc_num = mfw_resc_info.size;
+	p_out_params->resc_start = mfw_resc_info.offset;
+	p_out_params->vf_resc_num = mfw_resc_info.vf_size;
+	p_out_params->vf_resc_start = mfw_resc_info.vf_offset;
+	p_out_params->flags = mfw_resc_info.flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index f5dac9d..350d8a2 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -65,8 +65,10 @@ struct ecore_mcp_info {
 struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
-	union drv_union_data *p_data_src;
-	union drv_union_data *p_data_dst;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
 };
@@ -159,7 +161,7 @@ struct ecore_load_req_params {
  *        returns whether this PF is the first on the engine/port or function.
  *
  * @param p_hwfn
- * @param p_pt
+ * @param p_ptt
  * @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
@@ -169,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_DONE message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt);
+
+/**
  * @brief Read the MFW mailbox into Current buffer.
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 51/61] net/qede/base: move code bits
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (50 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 50/61] net/qede/base: zero out MFW mailbox data Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 52/61] net/qede/base: add PF parameter Rasesh Mody
                     ` (10 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move the Rx/Tx coalesce prototypes in ecore_vf.h ahead of the
CONFIG_ECORE_SRIOV ifdef so that they are declared unconditionally;
no functional change.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_vf.h |   41 +++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 228bbf0..f471388 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -38,17 +38,15 @@ struct ecore_vf_iov {
 	bool b_pre_fp_hsi;
 };
 
-#ifdef CONFIG_ECORE_SRIOV
-/**
- * @brief hw preparation for VF
- * sends ACQUIRE message
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 /**
  * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
  *	Coalesce value '0' will omit the configuration.
@@ -56,13 +54,24 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  *	@param p_hwfn
  *	@param rx_coal - coalesce value in micro second for rx queue
  *	@param tx_coal - coalesce value in micro second for tx queue
- *	@param qid
+ *	@param queue_cid
  *
  **/
 enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 rx_coal, u16 tx_coal,
 					      struct ecore_queue_cid *p_cid);
 
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief hw preparation for VF
+ *	sends ACQUIRE message
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
@@ -277,15 +286,5 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
-
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
-
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 52/61] net/qede/base: add PF parameter
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (51 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 51/61] net/qede/base: move code bits Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 53/61] net/qede/base: allow PMD to control vport and RSS engine ids Rasesh Mody
                     ` (9 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a common RDMA protocol enum (default/RoCE/iWARP) to the RDMA PF
parameters.
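
As an illustration, a minimal sketch of how a client might select the
protocol; it assumes (as in the base driver) that struct ecore_pf_params
embeds an ecore_rdma_pf_params member named rdma_pf_params:

	struct ecore_pf_params pf_params;

	OSAL_MEM_ZERO(&pf_params, sizeof(pf_params));
	/* ECORE_RDMA_PROTOCOL_DEFAULT leaves the choice to the driver;
	 * here RoCE is requested explicitly.
	 */
	pf_params.rdma_pf_params.rdma_protocol = ECORE_RDMA_PROTOCOL_ROCE;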

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c      |    1 +
 drivers/net/qede/base/ecore_proto_if.h |    7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index aeeabf1..691d638 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -19,6 +19,7 @@
 #include "ecore_hw.h"
 #include "ecore_dev_api.h"
 #include "ecore_sriov.h"
+#include "ecore_mcp.h"
 
 /* Max number of connection types in HW (DQ/CDU etc.) */
 #define MAX_CONN_TYPES		PROTOCOLID_COMMON
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index ed24019..0ac153f 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -63,6 +63,12 @@ struct ecore_iscsi_pf_params {
 	u8		bdq_pbl_num_entries[2];
 };
 
+enum ecore_rdma_protocol {
+	ECORE_RDMA_PROTOCOL_DEFAULT,
+	ECORE_RDMA_PROTOCOL_ROCE,
+	ECORE_RDMA_PROTOCOL_IWARP,
+};
+
 struct ecore_rdma_pf_params {
 	/* Supplied to ECORE during resource allocation (may affect the ILT and
 	 * the doorbell BAR).
@@ -79,6 +85,7 @@ struct ecore_rdma_pf_params {
 
 	/* TCP port number used for the iwarp traffic */
 	u16		iwarp_port;
+	enum ecore_rdma_protocol rdma_protocol;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 53/61] net/qede/base: allow PMD to control vport and RSS engine ids
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (52 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 52/61] net/qede/base: add PF parameter Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 54/61] net/qede/base: add udp ports in bulletin board message Rasesh Mody
                     ` (8 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Let the PMD control the vport-id and rss-eng-id of a given VF during
initialization, instead of having the base driver derive them
internally.
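
For example, a hypothetical PF-side sketch (not part of this patch) of
a PMD picking these values when initializing a VF:

	struct ecore_iov_vf_init_params params;
	enum _ecore_status_t rc;

	OSAL_MEM_ZERO(&params, sizeof(params));
	params.rel_vf_id = vf_id;
	params.num_queues = num_queues;
	/* Vport 0 and RSS engine 0 are used by the PF itself, hence the
	 * +1; this mirrors what the base driver used to derive on its
	 * own (base_vport_id + idx and relative_vf_id + 1).
	 */
	params.vport_id = vf_id + 1;
	params.rss_eng_id = vf_id + 1;
	rc = ecore_iov_init_hw_for_vf(p_hwfn, p_ptt, &params);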

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |   15 ++++-------
 drivers/net/qede/base/ecore_sriov.c   |   46 +++++++++++++++++++++------------
 drivers/net/qede/base/ecore_sriov.h   |    2 +-
 3 files changed, 35 insertions(+), 28 deletions(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index b8dc47b..6a0fc5a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -103,6 +103,11 @@ struct ecore_iov_vf_init_params {
 	 */
 	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+
+	u8 vport_id;
+
+	/* Should be set in case RSS is going to be used for VF */
+	u8 rss_eng_id;
 };
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
@@ -417,16 +422,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 				  u16 *opaque_fid);
 
 /**
- * @brief Get VFs VPORT id.
- *
- * @param p_hwfn
- * @param vfid
- * @param vport id
- */
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vport_id);
-
-/**
  * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
  *        and configures FW/HW to support the configuration.
  *        Setting of pvid 0 would clear the feature.
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 4ffa8d0..20b51c4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -426,8 +426,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		return;
 	}
 
-	p_iov_info->base_vport_id = 1;	/* @@@TBD resource allocation */
-
 	for (idx = 0; idx < p_iov->total_vfs; idx++) {
 		struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
 		u32 concrete;
@@ -456,8 +454,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
 		    (vf->abs_vf_id << 8);
-		/* @@TBD MichalK - add base vport_id of VFs to equation */
-		vf->vport_id = p_iov_info->base_vport_id + idx;
 
 		vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 		vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -1019,6 +1015,34 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested vport/rss */
+	if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n",
+			  p_params->rel_vf_id, p_params->vport_id);
+		return ECORE_INVAL;
+	}
+
+	if ((p_params->num_queues > 1) &&
+	    (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n",
+			  p_params->rel_vf_id, p_params->rss_eng_id);
+		return ECORE_INVAL;
+	}
+
+	/* TODO - remove this once we get confidence of change */
+	if (!p_params->vport_id) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses vport0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	if ((!p_params->rss_eng_id) && (p_params->num_queues > 1)) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses RSS_eng0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	vf->vport_id = p_params->vport_id;
+	vf->rss_eng_id = p_params->rss_eng_id;
+
 	/* Perform sanity checking on the requested queue_id */
 	for (i = 0; i < p_params->num_queues; i++) {
 		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
@@ -2752,7 +2776,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 		VFPF_UPDATE_RSS_KEY_FLAG);
 
 	p_rss->rss_enable = p_rss_tlv->rss_enable;
-	p_rss->rss_eng_id = vf->relative_vf_id + 1;
+	p_rss->rss_eng_id = vf->rss_eng_id;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
@@ -3974,18 +3998,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 	*opaque_fid = vf_info->opaque_fid;
 }
 
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vort_id)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	*p_vort_id = vf_info->vport_id;
-}
-
 void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 					u16 pvid, int vfid)
 {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index d32f931..66e9271 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -111,6 +111,7 @@ struct ecore_vf_info {
 	u16			mtu;
 
 	u8			vport_id;
+	u8			rss_eng_id;
 	u8			relative_vf_id;
 	u8			abs_vf_id;
 #define ECORE_VF_ABS_ID(p_hwfn, p_vf)	(ECORE_PATH_ID(p_hwfn) ? \
@@ -155,7 +156,6 @@ struct ecore_pf_iov {
 	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
-	u16			base_vport_id;
 
 #ifndef REMOVE_DBG
 	/* This doesn't serve anything functionally, but it makes windows
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 54/61] net/qede/base: add udp ports in bulletin board message
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (53 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 53/61] net/qede/base: allow PMD to control vport and RSS engine ids Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 55/61] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
                     ` (7 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the VXLAN and GENEVE UDP ports to the bulletin board message, so
that the PF can publish the configured tunnel ports to its VFs.
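
On the VF side, the ports can then be read from the bulletin shadow via
the new helper; a minimal sketch:

	u16 vxlan_port, geneve_port;

	ecore_vf_bulletin_get_udp_ports(p_hwfn, &vxlan_port, &geneve_port);
	/* e.g. program the VF's tunnel UDP ports to match the PF's */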

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |    2 ++
 drivers/net/qede/base/ecore_sriov.c   |   33 +++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c      |   12 ++++++++++++
 drivers/net/qede/base/ecore_vf_api.h  |    2 ++
 drivers/net/qede/base/ecore_vfpf_if.h |    5 ++++-
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 6a0fc5a..870c57e 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -716,6 +716,8 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
+				      u16 vxlan_port, u16 geneve_port);
 #endif /* CONFIG_ECORE_SRIOV */
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 20b51c4..532c492 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2253,6 +2253,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	bool b_update_required = false;
 	struct ecore_tunnel_info tunn;
 	u16 tunn_feature_mask = 0;
+	int i;
 
 	mbx->offset = (u8 *)mbx->reply_virt;
 
@@ -2300,11 +2301,20 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 
 	/* If ECORE client is willing to update anything ? */
 	if (b_update_required) {
+		u16 geneve_port;
+
 		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						 ECORE_SPQ_MODE_EBLOCK,
 						 OSAL_NULL);
 		if (rc != ECORE_SUCCESS)
 			status = PFVF_STATUS_FAILURE;
+
+		geneve_port = p_tun->geneve_port.port;
+		ecore_for_each_vf(p_hwfn, i) {
+			ecore_iov_bulletin_set_udp_ports(p_hwfn, i,
+							 p_tun->vxlan_port.port,
+							 geneve_port);
+		}
 	}
 
 send_resp:
@@ -4028,6 +4038,29 @@ void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
 }
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
+				      int vfid, u16 vxlan_port, u16 geneve_port)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set udp ports, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	if (vf_info->b_malicious) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can not set udp ports to malicious VF [%d]\n",
+			   vfid);
+		return;
+	}
+
+	vf_info->bulletin.p_virt->vxlan_udp_port = vxlan_port;
+	vf_info->bulletin.p_virt->geneve_udp_port = geneve_port;
+}
+
 bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index bf516cc..8ce9340 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1652,6 +1652,18 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 	return true;
 }
 
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port,
+				     u16 *p_geneve_port)
+{
+	struct ecore_bulletin_content *p_bulletin;
+
+	p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+
+	*p_vxlan_port = p_bulletin->vxlan_udp_port;
+	*p_geneve_port = p_bulletin->geneve_udp_port;
+}
+
 bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
 {
 	struct ecore_bulletin_content *bulletin;
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 77b93ff..a6e5f32 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -152,5 +152,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 			     u16 *fw_minor,
 			     u16 *fw_rev,
 			     u16 *fw_eng);
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port, u16 *p_geneve_port);
 #endif
 #endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e0b63bf..6618442 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -554,9 +554,12 @@ struct ecore_bulletin_content {
 	u8 pfc_enabled;
 	u8 partner_tx_flow_ctrl_en;
 	u8 partner_rx_flow_ctrl_en;
+
 	u8 partner_adv_pause;
 	u8 sfp_tx_fault;
-	u8 padding4[6];
+	u16 vxlan_udp_port;
+	u16 geneve_udp_port;
+	u8 padding4[2];
 
 	u32 speed;
 	u32 partner_adv_speed;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 55/61] net/qede/base: prevent DMAE transactions during recovery
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (54 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 54/61] net/qede/base: add udp ports in bulletin board message Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 56/61] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
                     ` (6 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent DMA engine (DMAE) transactions during the recovery phase.
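
To make the guard concrete, below is a minimal, self-contained sketch of
the guard-and-succeed pattern the hunk adds. The names here (struct dev,
dmae_copy) are hypothetical stand-ins, not the real ecore API:

    #include <stdbool.h>
    #include <stdio.h>

    struct dev {
        bool recov_in_prog; /* set while recovery is running */
    };

    /* Skip the DMA engine entirely during recovery, but report success
     * so the calling flow completes without any error handling.
     */
    static int dmae_copy(struct dev *p_dev, const char *what)
    {
        if (p_dev->recov_in_prog) {
            printf("recovery in progress; skipping DMAE for %s\n", what);
            return 0;
        }

        printf("executing DMAE for %s\n", what);
        return 0;
    }

    int main(void)
    {
        struct dev dev = { .recov_in_prog = true };

        return dmae_copy(&dev, "stats");
    }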

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_hw.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 396edc2..280925f 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -773,6 +773,17 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t ecore_status = ECORE_SUCCESS;
 	u32 offset = 0;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
+			   src_addr, src_type, dst_addr, dst_type,
+			   size_in_dwords);
+		/* Return success to let the flow complete successfully
+		 * w/o any error handling.
+		 */
+		return ECORE_SUCCESS;
+	}
+
 	ecore_dmae_opcode(p_hwfn,
 			  (src_type == ECORE_DMAE_ADDRESS_GRC),
 			  (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 56/61] net/qede/base: multi-Txq support on same queue-zone for VFs
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (55 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 55/61] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 57/61] net/qede/base: prevent race condition during unload Rasesh Mody
                     ` (5 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

A step toward having multi-Txq support on the same queue-zone for VFs.

This change takes care of:

 - VFs assume a single CID per-queue, where queue X receives CID X.
   Switch to a model similar to that of the PF - i.e., use different
   CIDs for Rx/Tx, and use a mapping to acquire/release those. Each VF
   will currently have 32 CIDs available to it [for its possible 16
   Rx & 16 Tx queues].

 - To retain the same interface for PFs/VFs when initializing queues,
   the base driver would have to retain a unique number for each queue
   that would be communicated in some extended TLV [the current TLV
   interface allows the PF to send only the queue-id]. The new TLV isn't
   part of the current change, but the base driver now starts adding
   such unique keys internally to queue_cids. This also forces us to
   start having alloc/setup/free for L2 [we've refrained from doing so
   until now].
   The limit would be no more than 64 queues per qzone [this could be
   changed if needed, but hopefully no one needs so many queues].

 - In IOV, add infrastructure for up to 64 qids per-qzone, although
   at the moment hard-code '0' for Rx and '1' for Tx [since the VF
   still isn't communicating via the new TLV which index to associate
   with a given queue in its queue-zone]. A minimal sketch of the
   qid-usage bookkeeping follows this list.
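
To make the per-qzone bookkeeping concrete, here is a minimal,
self-contained sketch mirroring what ecore_eth_queue_qid_usage_add/del
below do: one usage bitmap per queue-zone, allocated first-free-bit.
The names qzone_qid_acquire/qzone_qid_release are invented for the
sketch, and locking is omitted (the real code serializes access via
p_l2_info->lock):

    #include <stdio.h>

    #define MAX_QUEUES_PER_QZONE (sizeof(unsigned long) * 8)

    /* Bit i set means qid_usage_idx i is taken by a queue-cid opened
     * on this queue-zone.
     */
    static int qzone_qid_acquire(unsigned long *p_usage)
    {
        unsigned int i;

        for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
            if (!(*p_usage & (1UL << i))) {
                *p_usage |= 1UL << i;
                return (int)i;
            }
        }

        return -1; /* queue-zone exhausted */
    }

    static void qzone_qid_release(unsigned long *p_usage, int idx)
    {
        *p_usage &= ~(1UL << idx);
    }

    int main(void)
    {
        unsigned long usage = 0;
        int rx = qzone_qid_acquire(&usage); /* '0' for Rx today */
        int tx = qzone_qid_acquire(&usage); /* '1' for Tx today */

        printf("rx idx %d, tx idx %d\n", rx, tx);
        qzone_qid_release(&usage, rx);
        qzone_qid_release(&usage, tx);
        return 0;
    }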

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    4 +
 drivers/net/qede/base/ecore_cxt.c     |  230 +++++++++++++++-----
 drivers/net/qede/base/ecore_cxt.h     |   53 ++++-
 drivers/net/qede/base/ecore_cxt_api.h |   13 --
 drivers/net/qede/base/ecore_dev.c     |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  248 ++++++++++++++++++---
 drivers/net/qede/base/ecore_l2.h      |   46 +++-
 drivers/net/qede/base/ecore_sriov.c   |  387 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_sriov.h   |   17 +-
 drivers/net/qede/base/ecore_vf.c      |    6 +
 drivers/net/qede/base/ecore_vf_api.h  |    9 +
 11 files changed, 794 insertions(+), 243 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 7379b3f..fab8193 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -200,6 +200,7 @@ struct ecore_cxt_mngr;
 struct ecore_dma_mem;
 struct ecore_sb_sp_info;
 struct ecore_ll2_info;
+struct ecore_l2_info;
 struct ecore_igu_info;
 struct ecore_mcp_info;
 struct ecore_dcbx_info;
@@ -598,6 +599,9 @@ struct ecore_hwfn {
 	/* If one of the following is set then EDPM shouldn't be used */
 	u8				dcbx_no_edpm;
 	u8				db_bar_no_edpm;
+
+	/* L2-related */
+	struct ecore_l2_info		*p_l2_info;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 691d638..f7b5672 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -8,6 +8,7 @@
 
 #include "bcm_osal.h"
 #include "reg_addr.h"
+#include "common_hsi.h"
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
 #include "ecore_rt_defs.h"
@@ -101,7 +102,6 @@ struct ecore_tid_seg {
 
 struct ecore_conn_type_cfg {
 	u32 cid_count;
-	u32 cid_start;
 	u32 cids_per_vf;
 	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
 };
@@ -197,6 +197,9 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
+	/* TBD - do we want this allocated to reserve space? */
+	struct ecore_cid_acquired_map
+		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1015,44 +1018,75 @@ ilt_shadow_fail:
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
+
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			OSAL_FREE(p_hwfn->p_dev,
+				  p_mngr->acquired_vf[type][vf].cid_map);
+			p_mngr->acquired_vf[type][vf].max_count = 0;
+			p_mngr->acquired_vf[type][vf].start_cid = 0;
+		}
 	}
 }
 
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+			   u32 cid_start, u32 cid_count,
+			   struct ecore_cid_acquired_map *p_map)
+{
+	u32 size;
+
+	if (!cid_count)
+		return ECORE_SUCCESS;
+
+	size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_count, BITS_PER_MAP_WORD);
+	p_map->cid_map = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
+	if (p_map->cid_map == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	p_map->max_count = cid_count;
+	p_map->start_cid = cid_start;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Type %08x start: %08x count %08x\n",
+		   type, p_map->start_cid, p_map->max_count);
+
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 start_cid = 0;
-	u32 type;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 size;
-
-		if (cid_cnt == 0)
-			continue;
+		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
+		struct ecore_cid_acquired_map *p_map;
 
-		size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD);
-		p_mngr->acquired[type].cid_map = OSAL_ZALLOC(p_hwfn->p_dev,
-							     GFP_KERNEL, size);
-		if (!p_mngr->acquired[type].cid_map)
+		/* Handle PF maps */
+		p_map = &p_mngr->acquired[type];
+		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					       p_cfg->cid_count, p_map))
 			goto cid_map_fail;
 
-		p_mngr->acquired[type].max_count = cid_cnt;
-		p_mngr->acquired[type].start_cid = start_cid;
-
-		p_hwfn->p_cxt_mngr->conn_cfg[type].cid_start = start_cid;
+		/* Handle VF maps */
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			if (ecore_cid_map_alloc_single(p_hwfn, type,
+						       vf_start_cid,
+						       p_cfg->cids_per_vf,
+						       p_map))
+				goto cid_map_fail;
+		}
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
-			   "Type %08x start: %08x count %08x\n",
-			   type, p_mngr->acquired[type].start_cid,
-			   p_mngr->acquired[type].max_count);
-		start_cid += cid_cnt;
+		start_cid += p_cfg->cid_count;
+		vf_start_cid += p_cfg->cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
@@ -1171,18 +1205,34 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
 	int type;
+	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 i;
+		u32 vf;
+
+		p_cfg = &p_mngr->conn_cfg[type];
+		if (p_cfg->cid_count) {
+			p_map = &p_mngr->acquired[type];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 
-		if (cid_cnt == 0)
+		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (i = 0; i < DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD); i++)
-			p_mngr->acquired[type].cid_map[i] = 0;
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 	}
 }
 
@@ -1723,93 +1773,150 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
 	ecore_prs_init_pf(p_hwfn);
 }
 
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
-					   enum protocol_type type, u32 *p_cid)
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
 	u32 rel_cid;
 
-	if (type >= MAX_CONN_TYPES || !p_mngr->acquired[type].cid_map) {
+	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_mngr->acquired[type].cid_map,
-					   p_mngr->acquired[type].max_count);
+	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Determine the right map to take this CID from */
+	if (vfid == ECORE_CXT_PF_CID)
+		p_map = &p_mngr->acquired[type];
+	else
+		p_map = &p_mngr->acquired_vf[type][vfid];
 
-	if (rel_cid >= p_mngr->acquired[type].max_count) {
+	if (p_map->cid_map == OSAL_NULL) {
+		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
+		return ECORE_INVAL;
+	}
+
+	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_map->cid_map,
+					   p_map->max_count);
+
+	if (rel_cid >= p_map->max_count) {
 		DP_NOTICE(p_hwfn, false, "no CID available for protocol %d\n",
 			  type);
 		return ECORE_NORESOURCES;
 	}
 
-	OSAL_SET_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	OSAL_SET_BIT(rel_cid, p_map->cid_map);
 
-	*p_cid = rel_cid + p_mngr->acquired[type].start_cid;
+	*p_cid = rel_cid + p_map->start_cid;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Acquired cid 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   *p_cid, rel_cid, vfid, type);
 
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid)
+{
+	return _ecore_cxt_acquire_cid(p_hwfn, type, p_cid, ECORE_CXT_PF_CID);
+}
+
 static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
-					u32 cid, enum protocol_type *p_type)
+					u32 cid, u8 vfid,
+					enum protocol_type *p_type,
+					struct ecore_cid_acquired_map **pp_map)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	struct ecore_cid_acquired_map *p_map;
-	enum protocol_type p;
 	u32 rel_cid;
 
 	/* Iterate over protocols and find matching cid range */
-	for (p = 0; p < MAX_CONN_TYPES; p++) {
-		p_map = &p_mngr->acquired[p];
+	for (*p_type = 0; *p_type < MAX_CONN_TYPES; (*p_type)++) {
+		if (vfid == ECORE_CXT_PF_CID)
+			*pp_map = &p_mngr->acquired[*p_type];
+		else
+			*pp_map = &p_mngr->acquired_vf[*p_type][vfid];
 
-		if (!p_map->cid_map)
+		if (!((*pp_map)->cid_map))
 			continue;
-		if (cid >= p_map->start_cid &&
-		    cid < p_map->start_cid + p_map->max_count) {
+		if (cid >= (*pp_map)->start_cid &&
+		    cid < (*pp_map)->start_cid + (*pp_map)->max_count) {
 			break;
 		}
 	}
-	*p_type = p;
-
-	if (p == MAX_CONN_TYPES) {
-		DP_NOTICE(p_hwfn, true, "Invalid CID %d", cid);
-		return false;
+	if (*p_type == MAX_CONN_TYPES) {
+		DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid);
+		goto fail;
 	}
-	rel_cid = cid - p_map->start_cid;
-	if (!OSAL_TEST_BIT(rel_cid, p_map->cid_map)) {
-		DP_NOTICE(p_hwfn, true, "CID %d not acquired", cid);
-		return false;
+
+	rel_cid = cid - (*pp_map)->start_cid;
+	if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) {
+		DP_NOTICE(p_hwfn, true,
+			  "CID %d [vifd %02x] not acquired", cid, vfid);
+		goto fail;
 	}
+
 	return true;
+fail:
+	*p_type = MAX_CONN_TYPES;
+	*pp_map = OSAL_NULL;
+	return false;
 }
 
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
 	u32 rel_cid;
 
+	if (vfid != ECORE_CXT_PF_CID && vfid > COMMON_MAX_NUM_VFS) {
+		DP_NOTICE(p_hwfn, true,
+			  "Trying to return incorrect CID belonging to VF %02x\n",
+			  vfid);
+		return;
+	}
+
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, vfid,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return;
 
-	rel_cid = cid - p_mngr->acquired[type].start_cid;
-	OSAL_CLEAR_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	rel_cid = cid - p_map->start_cid;
+	OSAL_CLEAR_BIT(rel_cid, p_map->cid_map);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Released CID 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   cid, rel_cid, vfid, type);
+}
+
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+{
+	_ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
 }
 
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	u32 conn_cxt_size, hw_p_size, cxts_per_p, line;
 	enum protocol_type type;
 	bool b_acquired;
 
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid,
+						 ECORE_CXT_PF_CID,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return ECORE_INVAL;
@@ -1865,9 +1972,14 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			struct ecore_eth_pf_params *p_params =
 			    &p_hwfn->pf_params.eth_pf_params;
 
+			/* TODO - we probably want to add VF number to the PF
+			 * params;
+			 * As of now, allocates 16 * 2 per-VF [to retain regular
+			 * functionality].
+			 */
 			ecore_cxt_set_proto_cid_count(p_hwfn,
 				PROTOCOLID_ETH,
-				p_params->num_cons, 1);	/* FIXME VF count... */
+				p_params->num_cons, 32);
 
 			break;
 		}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 5379d7b..1128051 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -130,14 +130,53 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn);
 enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
+#define ECORE_CXT_PF_CID (0xff)
+
+/**
+ * @brief ecore_cxt_release - Release a cid
+ *
+ * @param p_hwfn
+ * @param cid
+ */
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
+
 /**
-* @brief ecore_cxt_release - Release a cid
-*
-* @param p_hwfn
-* @param cid
-*/
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
-			   u32 cid);
+ * @brief _ecore_cxt_release - Release a cid belonging to a vf-queue
+ *
+ * @param p_hwfn
+ * @param cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ */
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
+			    u32 cid, u8 vfid);
+
+/**
+ * @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid);
+
+/**
+ * @brief _ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *                             for a vf-queue
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid);
 
 /**
  * @brief ecore_cxt_get_tid_mem_info - function checks if the
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 6a50412..f154e0d 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -26,19 +26,6 @@ struct ecore_tid_mem {
 };
 
 /**
-* @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
-*
-* @param p_hwfn
-* @param type
-* @param p_cid
-*
-* @return enum _ecore_status_t
-*/
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn  *p_hwfn,
-					   enum protocol_type type,
-					   u32 *p_cid);
-
-/**
 * @brief ecore_cxt_get_cid_info - Returns the context info for a specific cid
 *
 *
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e584058..2a621f7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -146,8 +146,11 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_free(&p_dev->hwfns[i]);
 		return;
+	}
 
 	OSAL_FREE(p_dev, p_dev->fw_data);
 
@@ -163,6 +166,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
 		ecore_iov_free(p_hwfn);
+		ecore_l2_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
@@ -839,8 +843,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i) {
+			rc = ecore_l2_alloc(&p_dev->hwfns[i]);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		}
 		return rc;
+	}
 
 	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 				     sizeof(*p_dev->fw_data));
@@ -961,6 +971,10 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_l2_alloc(p_hwfn);
+		if (rc != ECORE_SUCCESS)
+			goto alloc_err;
+
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
@@ -999,8 +1013,11 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_setup(&p_dev->hwfns[i]);
 		return;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1018,6 +1035,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
+		ecore_l2_setup(p_hwfn);
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
 	}
 }
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 4d26e19..adb5e47 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,24 +29,172 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+struct ecore_l2_info {
+	u32 queues;
+	unsigned long **pp_qid_usage;
+
+	/* The lock is meant to synchronize access to the qid usage */
+	osal_mutex_t lock;
+};
+
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_l2_info *p_l2_info;
+	unsigned long **pp_qids;
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return ECORE_SUCCESS;
+
+	p_l2_info = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_l2_info));
+	if (!p_l2_info)
+		return ECORE_NOMEM;
+	p_hwfn->p_l2_info = p_l2_info;
+
+	if (IS_PF(p_hwfn->p_dev)) {
+		p_l2_info->queues = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
+	} else {
+		u8 rx = 0, tx = 0;
+
+		ecore_vf_get_num_rxqs(p_hwfn, &rx);
+		ecore_vf_get_num_txqs(p_hwfn, &tx);
+
+		p_l2_info->queues = (u32)OSAL_MAX_T(u8, rx, tx);
+	}
+
+	pp_qids = OSAL_VZALLOC(p_hwfn->p_dev,
+			       sizeof(unsigned long *) *
+			       p_l2_info->queues);
+	if (pp_qids == OSAL_NULL)
+		return ECORE_NOMEM;
+	p_l2_info->pp_qid_usage = pp_qids;
+
+	for (i = 0; i < p_l2_info->queues; i++) {
+		pp_qids[i] = OSAL_VZALLOC(p_hwfn->p_dev,
+					  MAX_QUEUES_PER_QZONE / 8);
+		if (pp_qids[i] == OSAL_NULL)
+			return ECORE_NOMEM;
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_MUTEX_ALLOC(p_hwfn, &p_l2_info->lock);
+#endif
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn)
+{
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	OSAL_MUTEX_INIT(&p_hwfn->p_l2_info->lock);
+}
+
+void ecore_l2_free(struct ecore_hwfn *p_hwfn)
+{
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	if (p_hwfn->p_l2_info == OSAL_NULL)
+		return;
+
+	if (p_hwfn->p_l2_info->pp_qid_usage == OSAL_NULL)
+		goto out_l2_info;
+
+	/* Free until hit first uninitialized entry */
+	for (i = 0; i < p_hwfn->p_l2_info->queues; i++) {
+		if (p_hwfn->p_l2_info->pp_qid_usage[i] == OSAL_NULL)
+			break;
+		OSAL_VFREE(p_hwfn->p_dev,
+			   p_hwfn->p_l2_info->pp_qid_usage[i]);
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	/* Lock is last to initialize, if everything else was */
+	if (i == p_hwfn->p_l2_info->queues)
+		OSAL_MUTEX_DEALLOC(&p_hwfn->p_l2_info->lock);
+#endif
+
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info->pp_qid_usage);
+
+out_l2_info:
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info);
+	p_hwfn->p_l2_info = OSAL_NULL;
+}
+
+/* TODO - we'll need locking around these... */
+static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	struct ecore_l2_info *p_l2_info = p_hwfn->p_l2_info;
+	u16 queue_id = p_cid->rel.queue_id;
+	bool b_rc = true;
+	u8 first;
+
+	OSAL_MUTEX_ACQUIRE(&p_l2_info->lock);
+
+	if (queue_id > p_l2_info->queues) {
+		DP_NOTICE(p_hwfn, true,
+			  "Requested to increase usage for qzone %04x out of %08x\n",
+			  queue_id, p_l2_info->queues);
+		b_rc = false;
+		goto out;
+	}
+
+	first = (u8)OSAL_FIND_FIRST_ZERO_BIT(p_l2_info->pp_qid_usage[queue_id],
+					     MAX_QUEUES_PER_QZONE);
+	if (first >= MAX_QUEUES_PER_QZONE) {
+		b_rc = false;
+		goto out;
+	}
+
+	OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
+	p_cid->qid_usage_idx = first;
+
+out:
+	OSAL_MUTEX_RELEASE(&p_l2_info->lock);
+	return b_rc;
+}
+
+static void ecore_eth_queue_qid_usage_del(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_l2_info->lock);
+
+	OSAL_CLEAR_BIT(p_cid->qid_usage_idx,
+		       p_hwfn->p_l2_info->pp_qid_usage[p_cid->rel.queue_id]);
+
+	OSAL_MUTEX_RELEASE(&p_hwfn->p_l2_info->lock);
+}
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid)
 {
+	/* For VF-queues, stuff is a bit complicated as:
+	 *  - They always maintain the qid_usage on their own.
+	 *  - In legacy mode, they also maintain their CIDs.
+	 */
+
 	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
-	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	if (IS_PF(p_hwfn->p_dev) && !p_cid->b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, p_cid->cid, p_cid->vfid);
+	if (!p_cid->b_legacy_vf)
+		ecore_eth_queue_qid_usage_del(p_hwfn, p_cid);
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
 /* The internal is only meant to be directly called by PFs initializing CIDs
  * for their VFs.
  */
-struct ecore_queue_cid *
+static struct ecore_queue_cid *
 _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params)
+			u16 opaque_fid, u32 cid,
+			struct ecore_queue_start_common_params *p_params,
+			struct ecore_queue_cid_vf_params *p_vf_params)
 {
-	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
@@ -56,13 +204,22 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
-	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
 	p_cid->p_owner = p_hwfn;
 
+	/* Fill-in bits related to VFs' queues if information was provided */
+	if (p_vf_params != OSAL_NULL) {
+		p_cid->vfid = p_vf_params->vfid;
+		p_cid->vf_qid = p_vf_params->vf_qid;
+		p_cid->b_legacy_vf = p_vf_params->b_legacy;
+	} else {
+		p_cid->vfid = ECORE_QUEUE_CID_PF;
+	}
+
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
 		p_cid->abs = p_cid->rel;
+
 		goto out;
 	}
 
@@ -82,7 +239,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	/* In case of a PF configuring its VF's queues, the stats-id is already
 	 * absolute [since there's a single index that's suitable per-VF].
 	 */
-	if (b_is_same) {
+	if (p_cid->vfid == ECORE_QUEUE_CID_PF) {
 		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
 				    &p_cid->abs.stats_id);
 		if (rc != ECORE_SUCCESS)
@@ -95,17 +252,23 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->abs.sb = p_cid->rel.sb;
 	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
 
-	/* This is tricky - we're actually interested in whehter this is a PF
-	 * entry meant for the VF.
-	 */
-	if (!b_is_same)
-		p_cid->is_vf = true;
 out:
+	/* VF-images have provided the qid_usage_idx on their own.
+	 * Otherwise, we need to allocate a unique one.
+	 */
+	if (!p_vf_params) {
+		if (!ecore_eth_queue_qid_usage_add(p_hwfn, p_cid))
+			goto fail;
+	} else {
+		p_cid->qid_usage_idx = p_vf_params->qid_usage_idx;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x.%02x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
 		   p_cid->opaque_fid, p_cid->cid,
 		   p_cid->rel.vport_id, p_cid->abs.vport_id,
-		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.queue_id,	p_cid->qid_usage_idx,
+		   p_cid->abs.queue_id,
 		   p_cid->rel.stats_id, p_cid->abs.stats_id,
 		   p_cid->abs.sb, p_cid->abs.sb_idx);
 
@@ -116,33 +279,56 @@ fail:
 	return OSAL_NULL;
 }
 
-static struct ecore_queue_cid *
-ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-		       u16 opaque_fid,
-		       struct ecore_queue_start_common_params *p_params)
+struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params)
 {
 	struct ecore_queue_cid *p_cid;
+	u8 vfid = ECORE_CXT_PF_CID;
+	bool b_legacy_vf = false;
 	u32 cid = 0;
 
+	/* In case of legacy VFs, The CID can be derived from the additional
+	 * VF parameters - the VF assumes queue X uses CID X, so we can simply
+	 * use the vf_qid for this purpose as well.
+	 */
+	if (p_vf_params) {
+		vfid = p_vf_params->vfid;
+
+		if (p_vf_params->b_legacy) {
+			b_legacy_vf = true;
+			cid = p_vf_params->vf_qid;
+		}
+	}
+
 	/* Get a unique firmware CID for this queue, in case it's a PF.
 	 * VF's don't need a CID as the queue configuration will be done
 	 * by PF.
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
-		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-					  &cid) != ECORE_SUCCESS) {
+	if (IS_PF(p_hwfn->p_dev) && !b_legacy_vf) {
+		if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					   &cid, vfid) != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
 			return OSAL_NULL;
 		}
 	}
 
-	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
-	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, cid);
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid,
+					p_params, p_vf_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev) && !b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, cid, vfid);
 
 	return p_cid;
 }
 
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			  struct ecore_queue_start_common_params *p_params)
+{
+	return ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params, OSAL_NULL);
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -741,7 +927,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_cid->is_vf) {
+	if (p_cid->vfid != ECORE_QUEUE_CID_PF) {
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
@@ -793,7 +979,7 @@ ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 
 	/* Allocate a CID for the queue */
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_NOMEM;
 
@@ -905,9 +1091,11 @@ ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+	p_ramrod->complete_cqe_flg = ((p_cid->vfid == ECORE_QUEUE_CID_PF) &&
+				      !b_eq_completion_only) ||
 				     b_cqe_completion;
-	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
+	p_ramrod->complete_event_flg = (p_cid->vfid != ECORE_QUEUE_CID_PF) ||
+				       b_eq_completion_only;
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
@@ -1007,7 +1195,7 @@ ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_INVAL;
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 4b0ccb4..3f86eac 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,6 +15,34 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
+#define MAX_QUEUES_PER_QZONE	(sizeof(unsigned long) * 8)
+#define ECORE_QUEUE_CID_PF	(0xff)
+
+/* Additional parameters required for initialization of the queue_cid
+ * and are relevant only for a PF initializing one for its VFs.
+ */
+struct ecore_queue_cid_vf_params {
+	/* Should match the VF's relative index */
+	u8 vfid;
+
+	/* 0-based queue index. Should reflect the relative qzone the
+	 * VF thinks is associated with it [in its range].
+	 */
+	u8 vf_qid;
+
+	/* Indicates a VF is legacy, making it differ in several things:
+	 *  - Producers would be placed at a different location.
+	 *  - Makes assumptions regarding the CIDs.
+	 */
+	bool b_legacy;
+
+	/* For VFs, this index arrives via TLV to differentiate between
+	 * different queues opened on the same qzone, and is passed
+	 * [where the PF would have allocated it internally for its own].
+	 */
+	u8 qid_usage_idx;
+};
+
 struct ecore_queue_cid {
 	/* 'Relative' is a relative term ;-). Usually the indices [not counting
 	 * SBs] would be PF-relative, but there are some cases where that isn't
@@ -31,22 +59,32 @@ struct ecore_queue_cid {
 	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
 	 * and not on the VF itself.
 	 */
-	bool is_vf;
+	u8 vfid;
 	u8 vf_qid;
 
+	/* We need an additional index to differentiate between queues opened
+	 * for the same queue-zone, as VFs would have to communicate the info
+	 * to the PF [otherwise the PF has no way to differentiate].
+	 */
+	u8 qid_usage_idx;
+
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
 
 	struct ecore_hwfn *p_owner;
 };
 
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn);
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn);
+void ecore_l2_free(struct ecore_hwfn *p_hwfn);
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid);
 
 struct ecore_queue_cid *
-_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params);
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 532c492..39d3e88 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -192,28 +192,90 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 	return vf;
 }
 
+static struct ecore_queue_cid *
+ecore_iov_get_vf_rx_queue_cid(struct ecore_hwfn *p_hwfn,
+			      struct ecore_vf_info *p_vf,
+			      struct ecore_vf_queue *p_queue)
+{
+	int i;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		if (p_queue->cids[i].p_cid &&
+		    !p_queue->cids[i].b_is_tx)
+			return p_queue->cids[i].p_cid;
+	}
+
+	return OSAL_NULL;
+}
+
+enum ecore_iov_validate_q_mode {
+	ECORE_IOV_VALIDATE_Q_NA,
+	ECORE_IOV_VALIDATE_Q_ENABLE,
+	ECORE_IOV_VALIDATE_Q_DISABLE,
+};
+
+static bool ecore_iov_validate_queue_mode(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf,
+					  u16 qid,
+					  enum ecore_iov_validate_q_mode mode,
+					  bool b_is_tx)
+{
+	int i;
+
+	if (mode == ECORE_IOV_VALIDATE_Q_NA)
+		return true;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		struct ecore_vf_queue_cid *p_qcid;
+
+		p_qcid = &p_vf->vf_queues[qid].cids[i];
+
+		if (p_qcid->p_cid == OSAL_NULL)
+			continue;
+
+		if (p_qcid->b_is_tx != b_is_tx)
+			continue;
+
+		/* Found. It's enabled. */
+		return (mode == ECORE_IOV_VALIDATE_Q_ENABLE);
+	}
+
+	/* In case we haven't found any valid cid, then it's disabled */
+	return (mode == ECORE_IOV_VALIDATE_Q_DISABLE);
+}
+
 static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 rx_qid)
+				   u16 rx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (rx_qid >= p_vf->num_rxqs)
+	if (rx_qid >= p_vf->num_rxqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Rx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
-	return rx_qid < p_vf->num_rxqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, rx_qid,
+					     mode, false);
 }
 
 static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 tx_qid)
+				   u16 tx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (tx_qid >= p_vf->num_txqs)
+	if (tx_qid >= p_vf->num_txqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Tx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
-	return tx_qid < p_vf->num_txqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, tx_qid,
+					     mode, true);
 }
 
 static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
@@ -234,13 +296,16 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+/* Is there at least 1 queue open? */
 static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_rx_cid)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  false))
 			return true;
 
 	return false;
@@ -251,8 +316,10 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 {
 	u8 i;
 
-	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_tx_cid)
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  true))
 			return true;
 
 	return false;
@@ -1095,19 +1162,15 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[i];
 
 		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
 		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
-		/* CIDs are per-VF, so no problem having them 0-based. */
-		p_queue->fw_cid = i;
-
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]\n",
 			   vf->relative_vf_id, i, vf->igu_sbs[i],
-			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
-			   p_queue->fw_cid);
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid);
 	}
 
 	/* Update the link configuration in bulletin.
@@ -1443,7 +1506,7 @@ struct ecore_public_vf_info
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
-	u32 i;
+	u32 i, j;
 	p_vf->vf_bulletin = 0;
 	p_vf->vport_instance = 0;
 	p_vf->configured_features = 0;
@@ -1455,18 +1518,15 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 
-		if (p_queue->p_rx_cid) {
-			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_rx_cid);
-			p_queue->p_rx_cid = OSAL_NULL;
-		}
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (!p_queue->cids[j].p_cid)
+				continue;
 
-		if (p_queue->p_tx_cid) {
 			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_tx_cid);
-			p_queue->p_tx_cid = OSAL_NULL;
+						    p_queue->cids[j].p_cid);
+			p_queue->cids[j].p_cid = OSAL_NULL;
 		}
 	}
 
@@ -1481,7 +1541,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 					struct vf_pf_resc_request *p_req,
 					struct pf_vf_resc *p_resp)
 {
-	int i;
+	u8 i;
 
 	/* Queue related information */
 	p_resp->num_rxqs = p_vf->num_rxqs;
@@ -1502,7 +1562,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_resp->num_rxqs; i++) {
 		ecore_fw_l2_queue(p_hwfn, p_vf->vf_queues[i].fw_rx_qid,
 				  (u16 *)&p_resp->hw_qid[i]);
-		p_resp->cid[i] = p_vf->vf_queues[i].fw_cid;
+		p_resp->cid[i] = i;
 	}
 
 	/* Filter related information */
@@ -1905,9 +1965,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			struct ecore_queue_cid *p_cid;
+			struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
+			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
-			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			/* There can be at most 1 Rx queue on a qzone. Find it */
+			p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, p_vf,
+							      p_queue);
 			if (p_cid == OSAL_NULL)
 				continue;
 
@@ -2113,19 +2176,32 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
+	struct ecore_queue_cid *p_cid;
 	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid,
+				    ECORE_IOV_VALIDATE_Q_DISABLE) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Legacy VFs made assumptions about the CID their queues connected to,
+	 * assuming queue X used CID X.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->rx_qid];
 
@@ -2136,39 +2212,42 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->rx_qid,
-						    &params);
-	if (p_queue->p_rx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '0' for Rx.
+	 */
+	qid_usage_idx = 0;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->rx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
-	else
+	if (!b_legacy_vf)
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
-	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-
-	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
-					p_queue->p_rx_cid,
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
 					req->bd_max_bytes,
 					req->rxq_addr,
 					req->cqe_pbl_addr,
 					req->cqe_pbl_size);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
-		p_queue->p_rx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = false;
 		status = PFVF_STATUS_SUCCESS;
 		vf->num_active_rxqs++;
 	}
@@ -2331,6 +2410,7 @@ send_resp:
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
+					    u32 cid,
 					    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
@@ -2359,12 +2439,8 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
-	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
-		u16 qid = mbx->req_virt->start_txq.tx_qid;
-
-		p_tlv->offset = DB_ADDR_VF(p_vf->vf_queues[qid].fw_cid,
-					   DQ_DEMS_LEGACY);
-	}
+	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy)
+		p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
 }
@@ -2374,20 +2450,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
+	struct ecore_queue_cid *p_cid;
+	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
+	u32 cid = 0;
 	enum _ecore_status_t rc;
 	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
+	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* In case this is a legacy VF - we need this to use the right cids.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->tx_qid];
 
@@ -2397,29 +2487,42 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->tx_qid,
-						    &params);
-	if (p_queue->p_tx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '1' for Tx.
+	 */
+	qid_usage_idx = 1;
+
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->tx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
 				    vf->relative_vf_id);
-	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
 					req->pbl_addr, req->pbl_size, pq);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn,
-					    p_queue->p_tx_cid);
-		p_queue->p_tx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
 		status = PFVF_STATUS_SUCCESS;
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = true;
+		cid = p_cid->cid;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf, status);
+	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf,
+					cid, status);
 }
 
 static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -2428,26 +2531,38 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
-	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int qid;
+	int qid, i;
 
+	/* TODO - improve validation [wrap around] */
 	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		p_queue = &vf->vf_queues[qid];
-
-		if (!p_queue->p_rx_cid)
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+		struct ecore_queue_cid **pp_cid = OSAL_NULL;
+
+		/* There can be at most a single Rx per qzone. Find it */
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid &&
+			    !p_queue->cids[i].b_is_tx) {
+				pp_cid = &p_queue->cids[i].p_cid;
+				break;
+			}
+		}
+		if (pp_cid == OSAL_NULL) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Ignoring VF[%02x] request of closing Rx queue %04x - closed\n",
+				   vf->relative_vf_id, qid);
 			continue;
+		}
 
-		rc = ecore_eth_rx_queue_stop(p_hwfn,
-					     p_queue->p_rx_cid,
+		rc = ecore_eth_rx_queue_stop(p_hwfn, *pp_cid,
 					     false, cqe_completion);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
+		*pp_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2459,24 +2574,33 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_vf_q_info *p_queue;
-	int qid;
+	struct ecore_vf_queue *p_queue;
+	int qid, j;
 
-	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+	if (!ecore_iov_validate_txq(p_hwfn, vf, txq_id,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
+	    !ecore_iov_validate_txq(p_hwfn, vf, txq_id + num_txqs,
+				    ECORE_IOV_VALIDATE_Q_NA))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
 		p_queue = &vf->vf_queues[qid];
-		if (!p_queue->p_tx_cid)
-			continue;
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (p_queue->cids[j].p_cid == OSAL_NULL)
+				continue;
 
-		rc = ecore_eth_tx_queue_stop(p_hwfn,
-					     p_queue->p_tx_cid);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+			if (!p_queue->cids[j].b_is_tx)
+				continue;
+
+			rc = ecore_eth_tx_queue_stop(p_hwfn,
+						     p_queue->cids[j].p_cid);
+			if (rc != ECORE_SUCCESS)
+				return rc;
 
-		p_queue->p_tx_cid = OSAL_NULL;
+			p_queue->cids[j].p_cid = OSAL_NULL;
+		}
 	}
+
 	return rc;
 }
 
@@ -2538,33 +2662,32 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
-	u16 qid;
 	enum _ecore_status_t rc;
-	u8 i;
+	u16 i;
 
 	req = &mbx->req_virt->update_rxq;
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
-	/* Validaute inputs */
-	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
-	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
-		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
-			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
-		goto out;
+	/* Validate inputs */
+	for (i = req->rx_qid; i < req->rx_qid + req->num_rxqs; i++) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, i,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+				   vf->relative_vf_id, req->rx_qid,
+				   req->num_rxqs);
+			goto out;
+		}
 	}
 
 	for (i = 0; i < req->num_rxqs; i++) {
-		qid = req->rx_qid + i;
-
-		if (!vf->vf_queues[qid].p_rx_cid) {
-			DP_INFO(p_hwfn,
-				"VF[%d] rx_qid = %d isn`t active!\n",
-				vf->relative_vf_id, qid);
-			goto out;
-		}
+		struct ecore_vf_queue *p_queue;
+		u16 qid = req->rx_qid + i;
 
-		handlers[i] = vf->vf_queues[qid].p_rx_cid;
+		p_queue = &vf->vf_queues[qid];
+		handlers[i] = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+							    p_queue);
 	}
 
 	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
@@ -2796,8 +2919,11 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				(1 << p_rss_tlv->rss_table_size_log));
 
 	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_cid;
+
 		q_idx = p_rss_tlv->rss_ind_table[i];
-		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
 				   vf->relative_vf_id, q_idx);
@@ -2805,15 +2931,9 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 
-		if (!vf->vf_queues[q_idx].p_rx_cid) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
-				   vf->relative_vf_id, q_idx);
-			b_reject = true;
-			goto out;
-		}
-
-		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[q_idx]);
+		p_rss->rss_ind_table[i] = p_cid;
 	}
 
 	p_data->rss_params = p_rss;
@@ -3272,22 +3392,26 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	struct ecore_queue_cid *p_cid;
 	u16 rx_coal, tx_coal;
-	u16  qid;
+	u16 qid;
+	int i;
 
 	req = &mbx->req_virt->update_coalesce;
 
 	rx_coal = req->rx_coal;
 	tx_coal = req->tx_coal;
 	qid = req->qid;
-	p_cid = vf->vf_queues[qid].p_rx_cid;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    rx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
 	}
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    tx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
@@ -3296,7 +3420,11 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+
 	if (rx_coal) {
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[qid]);
+
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -3305,13 +3433,28 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 	}
+
+	/* TODO - in future, it might be possible to pass this in a per-cid
+	 * granularity. For now, do this for all Tx queues.
+	 */
 	if (tx_coal) {
-		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
-		if (rc != ECORE_SUCCESS) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
-				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
-			goto out;
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid == OSAL_NULL)
+				continue;
+
+			if (!p_queue->cids[i].b_is_tx)
+				continue;
+
+			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
+						    p_queue->cids[i].p_cid);
+			if (rc != ECORE_SUCCESS) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+					   "VF[%d]: Unable to set tx queue coalesce\n",
+					   vf->abs_vf_id);
+				goto out;
+			}
 		}
 	}
 
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 66e9271..3c2f58b 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -13,6 +13,7 @@
 #include "ecore_vfpf_if.h"
 #include "ecore_iov_api.h"
 #include "ecore_hsi_common.h"
+#include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
 	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
@@ -62,12 +63,18 @@ struct ecore_iov_vf_mbx {
 					 */
 };
 
-struct ecore_vf_q_info {
+struct ecore_vf_queue_cid {
+	bool b_is_tx;
+	struct ecore_queue_cid *p_cid;
+};
+
+/* Describes a qzone associated with the VF */
+struct ecore_vf_queue {
+	/* Input from upper-layer, mapping relative queue to queue-zone */
 	u16 fw_rx_qid;
-	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
-	struct ecore_queue_cid *p_tx_cid;
-	u8 fw_cid;
+
+	struct ecore_vf_queue_cid cids[MAX_QUEUES_PER_QZONE];
 };
 
 enum vf_state {
@@ -127,7 +134,7 @@ struct ecore_vf_info {
 	u8			num_mac_filters;
 	u8			num_vlan_filters;
 
-	struct ecore_vf_q_info	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_vf_queue	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
 	/* TODO - Only windows is using it - should be removed */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 8ce9340..ac72681 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1582,6 +1582,12 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
 	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
 }
 
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs)
+{
+	*num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs;
+}
+
 void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
 {
 	OSAL_MEMCPY(port_mac,
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index a6e5f32..be3a326 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -61,6 +61,15 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
 			   u8 *num_rxqs);
 
 /**
+ * @brief Get number of Tx queues allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_txqs - allocated TX queues
+ */
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs);
+
+/**
  * @brief Get port mac address for VF
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 57/61] net/qede/base: prevent race condition during unload
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (56 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 56/61] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 58/61] net/qede/base: semantic changes Rasesh Mody
                     ` (4 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Merge hw_stop and hw_reset into one function.
Prevent a race condition between MFW attentions and the pf stop command
during the unload flow, which causes an ASSERT.
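
To make the reordering easier to follow, below is a minimal standalone C
sketch (stub functions with hypothetical names - not driver code) of the
unload sequence the merged function enforces:

/* Standalone sketch - stubbed, hypothetical helpers, not driver code. */
#include <stdio.h>

static int mfw_unload_req(void)  { puts("UNLOAD_REQ -> MFW");  return 0; }
static int pf_stop_ramrod(void)  { puts("PF_STOP ramrod");     return 0; }
static int mfw_unload_done(void) { puts("UNLOAD_DONE -> MFW"); return 0; }

static int hw_stop(int recov_in_prog)
{
	int rc2 = 0;

	/* 1. Tell the MFW we are unloading; after it acks, it raises no
	 *    further attentions towards this function.
	 */
	if (!recov_in_prog && mfw_unload_req())
		rc2 = -1;

	/* 2. Only now is it safe to close the PF against the FW. */
	if (pf_stop_ramrod())
		rc2 = -1;

	/* 3. Complete the handshake with the MFW. */
	if (!recov_in_prog && mfw_unload_done())
		rc2 = -1;

	return rc2;	/* remember the first failure, finish the flow */
}

int main(void) { return hw_stop(0); }

The key point is step 1 before step 2: once the MFW has acknowledged the
UNLOAD_REQ it raises no further attentions, so the pf stop ramrod can no
longer race with a DCBx PF update.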

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    1 +
 drivers/net/qede/base/ecore_dev.c     |  175 ++++++++++++++++-----------------
 drivers/net/qede/base/ecore_dev_api.h |    9 --
 drivers/net/qede/base/ecore_mcp.c     |   12 +++
 drivers/net/qede/base/ecore_mcp.h     |   11 +++
 drivers/net/qede/base/ecore_spq.c     |    3 +
 drivers/net/qede/qede_main.c          |   18 +---
 7 files changed, 116 insertions(+), 113 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 052a0cf..32c9b25 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -168,6 +168,7 @@ typedef pthread_mutex_t osal_mutex_t;
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
 #define OSAL_DPC_INIT(dpc, hwfn) nothing
 #define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2a621f7..d8e4ca2 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2050,7 +2050,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_DONE command\n");
+				  "Failed sending a LOAD_DONE command\n");
 			return mfw_rc;
 		}
 
@@ -2139,32 +2139,77 @@ void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
 	}
 }
 
+static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u32 addr, u32 expected_val)
+{
+	u32 val = ecore_rd(p_hwfn, p_ptt, addr);
+
+	if (val != expected_val) {
+		DP_NOTICE(p_hwfn, true,
+			  "Value at address 0x%08x is 0x%08x while the expected value is 0x%08x\n",
+			  addr, val, expected_val);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, t_rc;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	enum _ecore_status_t rc, rc2 = ECORE_SUCCESS;
 	int j;
 
 	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		p_hwfn = &p_dev->hwfns[j];
+		p_ptt = p_hwfn->p_main_ptt;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Stopping hw/fw\n");
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_int_cleanup(p_hwfn);
+			rc = ecore_vf_pf_reset(p_hwfn);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "ecore_vf_pf_reset failed. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
 			continue;
 		}
 
 		/* mark the hw as uninitialized... */
 		p_hwfn->hw_init_done = false;
 
+		/* Send unload command to MCP */
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_req(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_REQ command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+
+		OSAL_DPC_SYNC(p_hwfn);
+
+		/* After this point no MFW attentions are expected, e.g. prevent
+		 * race between pf stop and dcbx pf update.
+		 */
+
 		rc = ecore_sp_pf_stop(p_hwfn);
-		if (rc)
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to close PF against FW. Continue to stop HW to prevent illegal host access by the device\n");
+				  "Failed to close PF against FW [rc = %d]. Continue to stop HW to prevent illegal host access by the device.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 
 		/* perform debug action after PF stop was sent */
-		OSAL_AFTER_PF_STOP((void *)p_hwfn->p_dev, p_hwfn->my_id);
+		OSAL_AFTER_PF_STOP((void *)p_dev, p_hwfn->my_id);
 
 		/* close NIG to BRB gate */
 		ecore_wr(p_hwfn, p_ptt,
@@ -2191,20 +2236,48 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, true);
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
-	}
+
+		if (!p_dev->recov_in_prog) {
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_TX, 0);
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_OTHER, 0);
+			/* @@@TBD - assert on incorrect xCFC values (10.b) */
+		}
+
+		/* Disable PF in HW blocks */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
+		ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
+
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_done(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_DONE command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+	} /* hwfn loop */
 
 	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		p_ptt = ECORE_LEADING_HWFN(p_dev)->p_main_ptt;
+
 		/* Disable DMAE in PXP - in CMT, this should only be done for
 		 * first hw-function, and only after all transactions have
 		 * stopped for all active hw-functions.
 		 */
-		t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
-					     p_dev->hwfns[0].p_main_ptt, false);
-		if (t_rc != ECORE_SUCCESS)
-			rc = t_rc;
+		rc = ecore_change_pci_hwfn(p_hwfn, p_ptt, false);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "ecore_change_pci_hwfn failed. rc = %d.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 	}
 
-	return rc;
+	return rc2;
 }
 
 void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
@@ -2265,82 +2338,6 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
 }
 
-static enum _ecore_status_t ecore_reg_assert(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt, u32 reg,
-					     bool expected)
-{
-	u32 assert_val = ecore_rd(p_hwfn, p_ptt, reg);
-
-	if (assert_val != expected) {
-		DP_NOTICE(p_hwfn, true, "Value at address 0x%08x != 0x%08x\n",
-			  reg, expected);
-		return ECORE_UNKNOWN_ERROR;
-	}
-
-	return 0;
-}
-
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 unload_resp, unload_param;
-	int i;
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
-		if (IS_VF(p_dev)) {
-			rc = ecore_vf_pf_reset(p_hwfn);
-			if (rc)
-				return rc;
-			continue;
-		}
-
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Resetting hw/fw\n");
-
-		/* Check for incorrect states */
-		if (!p_dev->recov_in_prog) {
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_TX, 0);
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_OTHER, 0);
-			/* @@@TBD - assert on incorrect xCFC values (10.b) */
-		}
-
-		/* Disable PF in HW blocks */
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, DORQ_REG_PF_DB_ENABLE, 0);
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, QM_REG_PF_EN, 0);
-
-		if (p_dev->recov_in_prog) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-				   "Recovery is in progress -> skip sending unload_req/done\n");
-			break;
-		}
-
-		/* Send unload command to MCP */
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_REQ,
-				   DRV_MB_PARAM_UNLOAD_WOL_MCP,
-				   &unload_resp, &unload_param);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, true,
-				  "ecore_hw_reset: UNLOAD_REQ failed\n");
-			/* @@TBD - what to do? for now, assume ENG. */
-			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
-		}
-
-		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn,
-				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
-			/* @@@TBD - Should it really ASSERT here ? */
-			return rc;
-		}
-	}
-
-	return rc;
-}
-
 /* Free hwfn memory and resources acquired in hw_hwfn_prepare */
 static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index ce764d2..e64a768 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -151,15 +151,6 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev);
  */
 void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
 
-/**
- * @brief ecore_hw_reset -
- *
- * @param p_dev
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
-
 enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_SUCCESS,
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index b53210f..1c5f24c 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -891,6 +891,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	u32 wol_param, mcp_resp, mcp_param;
+
+	/* @DPDK */
+	wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+
+	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+			     &mcp_resp, &mcp_param);
+}
+
 enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt)
 {
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 350d8a2..37d1835 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -171,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_REQ message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt);
+
+/**
  * @brief Sends a UNLOAD_DONE message to the MFW
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 016de74..3c1d05b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -190,6 +190,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
+	/* @@@TBD we zero the context until we have ilt_reset implemented. */
+	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
+
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 326e56f..74856c5 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -636,19 +636,6 @@ static int qed_nic_stop(struct ecore_dev *edev)
 	return rc;
 }
 
-static int qed_nic_reset(struct ecore_dev *edev)
-{
-	int rc;
-
-	rc = ecore_hw_reset(edev);
-	if (rc)
-		return rc;
-
-	ecore_resc_free(edev);
-
-	return 0;
-}
-
 static int qed_slowpath_stop(struct ecore_dev *edev)
 {
 #ifdef CONFIG_QED_SRIOV
@@ -667,10 +654,11 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
 		if (IS_QED_ETH_IF(edev))
 			qed_sriov_disable(edev, true);
 #endif
-		qed_nic_stop(edev);
 	}
 
-	qed_nic_reset(edev);
+	qed_nic_stop(edev);
+
+	ecore_resc_free(edev);
 	qed_stop_iov_task(edev);
 
 	return 0;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 58/61] net/qede/base: semantic changes
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (57 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 57/61] net/qede/base: prevent race condition during unload Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 59/61] net/qede/base: add support for arfs mode Rasesh Mody
                     ` (3 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make APIs static and other semantic changes.
A step toward cleaning 'make C=1' with GCC 4.8.3.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c  |    5 +-
 drivers/net/qede/base/ecore_cxt.h  |   11 ----
 drivers/net/qede/base/ecore_dcbx.c |    2 +-
 drivers/net/qede/base/ecore_dev.c  |  109 ++++++++++++++++++------------------
 drivers/net/qede/base/ecore_l2.c   |   12 ++--
 drivers/net/qede/base/ecore_vf.c   |    2 +-
 6 files changed, 66 insertions(+), 75 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index f7b5672..1a2a701 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -327,7 +327,8 @@ static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
 	}
 }
 
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids)
+static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_qm_iids *iids)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_tid_seg *segs;
@@ -1945,7 +1946,7 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
+static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
 
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1128051..e678118 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -35,17 +35,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type);
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
 
-#ifndef LINUX_REMOVE
-/**
- * @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
- *
- * @param p_hwfn
- * @param iids [out], a structure holding all the counters
- */
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
-		       struct ecore_qm_iids *iids);
-#endif
-
 /**
  * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
  *
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 5ecc6b0..4f1b069 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -114,7 +114,7 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-void
+static void
 ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		      struct ecore_hwfn *p_hwfn,
 		      bool enable, u8 prio, u8 tc,
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d8e4ca2..865103c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -759,8 +759,8 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	enum _ecore_status_t rc;
 	bool b_rc;
+	enum _ecore_status_t rc;
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
@@ -1507,54 +1507,6 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
-					       struct ecore_ptt *p_ptt,
-					       int hw_mode)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
-			    hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
-
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (p_hwfn->p_dev->num_hwfns > 1) {
-			/* Activate OPTE in CMT */
-			u32 val;
-
-			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
-			val |= 0x10;
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
-				 0x55555555);
-		}
-
-		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
-	}
-#endif
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -1623,7 +1575,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
-	int rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
 	db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
@@ -1678,8 +1630,9 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
 	}
 
-	cond = ((rc) && (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
-	    (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
+	cond = ((rc != ECORE_SUCCESS) &&
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
 	if (cond || p_hwfn->dcbx_no_edpm) {
 		/* Either EDPM is disabled from user configuration, or it is
 		 * disabled via DCBx, or it is not mandatory and we failed to
@@ -1703,7 +1656,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		"disabled" : "enabled");
 
 	/* Check return codes from above calls */
-	if (rc) {
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to allocate enough DPIs\n");
 		return ECORE_NORESOURCES;
@@ -1721,6 +1674,54 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       int hw_mode)
+{
+	enum _ecore_status_t rc	= ECORE_SUCCESS;
+
+	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
+			    hw_mode);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		if (ECORE_IS_AH(p_hwfn->p_dev))
+			return ECORE_SUCCESS;
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
+	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (p_hwfn->p_dev->num_hwfns > 1) {
+			/* Activate OPTE in CMT */
+			u32 val;
+
+			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
+			val |= 0x10;
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
+				 0x55555555);
+		}
+
+		ecore_emul_link_init(p_hwfn, p_ptt);
+	} else {
+		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
+	}
+#endif
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
@@ -1922,8 +1923,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	struct ecore_hwfn *p_hwfn;
 	bool b_default_mtu = true;
+	struct ecore_hwfn *p_hwfn;
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index adb5e47..c4af895 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -946,17 +946,17 @@ ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_producer)
+			    void OSAL_IOMEM * *pp_prod)
 {
 	u32 init_prod_val = 0;
 
-	*pp_producer = (u8 OSAL_IOMEM *)
-		       p_hwfn->regview +
-		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
+	*pp_prod = (u8 OSAL_IOMEM *)
+		    p_hwfn->regview +
+		    GTT_BAR0_MAP_REG_MSDM_RAM +
+		    MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
 	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index ac72681..f4d331c 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1285,8 +1285,8 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
 	struct vfpf_first_tlv *req;
-	enum _ecore_status_t rc;
 	u32 size;
+	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 59/61] net/qede/base: add support for arfs mode
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (58 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 58/61] net/qede/base: semantic changes Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 60/61] net/qede: add ntuple and flow director filter support Rasesh Mody
                     ` (2 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add base driver APIs to enable accelerated RFS [aRFS] mode and a ramrod
to configure RFS and ntuple filters.
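
For orientation, a hedged usage sketch of the two new entry points (not
part of the patch; p_hwfn/p_ptt/frame_phys/frame_len are assumed to be
caller-provided, and frame_phys must point to a DMA-mapped L2 frame
describing the flow to match):

/* Usage sketch only - builds on the ecore headers added in this patch;
 * all inputs are caller-provided assumptions.
 */
#include "ecore.h"
#include "ecore_l2_api.h"
#include "ecore_l2.h"

static enum _ecore_status_t
arfs_add_tcp4_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
		     dma_addr_t frame_phys, u16 frame_len, u16 rxq_id)
{
	struct ecore_arfs_config_params cfg;

	OSAL_MEMSET(&cfg, 0, sizeof(cfg));
	cfg.tcp = true;		/* match TCP ...        */
	cfg.ipv4 = true;	/* ... over IPv4        */
	cfg.arfs_enable = true;	/* turn the searcher on */

	/* Program the parser/searcher for the requested flow types */
	ecore_arfs_mode_configure(p_hwfn, p_ptt, &cfg);

	/* Post the GFT_UPDATE_FILTER ramrod for one 4-tuple */
	return ecore_configure_rfs_ntuple_filter(p_hwfn, p_ptt, OSAL_NULL,
						 frame_phys, frame_len,
						 rxq_id, 0 /* vport */,
						 true /* add */);
}

Passing OSAL_NULL for the callback selects ECORE_SPQ_MODE_EBLOCK, i.e.
the call blocks until the ramrod completes.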

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 drivers/net/qede/base/ecore_cxt.c           |   49 +++++++++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.c |   31 ++++++++++
 drivers/net/qede/base/ecore_init_fw_funcs.h |   11 ++++
 drivers/net/qede/base/ecore_l2.c            |   84 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_l2.h            |   27 +++++++++
 drivers/net/qede/base/ecore_l2_api.h        |   22 +++++++
 drivers/net/qede/base/ecore_proto_if.h      |    6 ++
 drivers/net/qede/base/ecore_spq.h           |    1 +
 8 files changed, 218 insertions(+), 13 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 1a2a701..80ad102 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -192,9 +192,6 @@ struct ecore_cxt_mngr {
 	 */
 	u32 vf_count;
 
-	/* total number of SRQ's for this hwfn */
-	u32				srq_count;
-
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
 	/* TBD - do we want this allocated to reserve space? */
@@ -213,10 +210,29 @@ struct ecore_cxt_mngr {
 	u32 t2_num_pages;
 	u64 first_free;
 	u64 last_free;
+
+	/* The infrastructure originally was very generic and context/task
+	 * oriented - per connection-type we would set how many of those
+	 * are needed, and later, when determining how much memory we
+	 * need for a given block, we'd iterate over all the relevant
+	 * connection-types.
+	 * But since then we've had some additional resources, some of which
+	 * require memory which is independent of the general context/task
+	 * scheme. We add those here explicitly per-feature.
+	 */
+
+	/* total number of SRQ's for this hwfn */
+	u32				srq_count;
+
+	/* Maximal number of L2 steering filters */
+	u32				arfs_count;
+
+	/* TODO - VF arfs filters ? */
 };
 
 /* check if resources/configuration is required according to protocol type */
-static OSAL_INLINE bool src_proto(enum protocol_type type)
+static OSAL_INLINE bool src_proto(struct ecore_hwfn *p_hwfn,
+				  enum protocol_type type)
 {
 	return type == PROTOCOLID_TOE;
 }
@@ -254,18 +270,22 @@ struct ecore_src_iids {
 	u32 per_vf_cids;
 };
 
-static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
+static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_hwfn *p_hwfn,
+					   struct ecore_cxt_mngr *p_mngr,
 					   struct ecore_src_iids *iids)
 {
 	u32 i;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		if (!src_proto(i))
+		if (!src_proto(p_hwfn, i))
 			continue;
 
 		iids->pf_cids += p_mngr->conn_cfg[i].cid_count;
 		iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf;
 	}
+
+	/* Add L2 filtering filters in addition */
+	iids->pf_cids += p_mngr->arfs_count;
 }
 
 /* counts the iids for the Timers block configuration */
@@ -686,7 +706,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	/* SRC */
 	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 
 	/* Both the PF and VFs searcher connections are stored in the per PF
 	 * database. Thus sum the PF searcher cids and all the VFs searcher
@@ -800,7 +820,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_src->active)
 		return ECORE_SUCCESS;
 
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	total_size = conn_num * sizeof(struct src_ent);
 
@@ -1619,7 +1639,7 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 	struct ecore_src_iids src_iids;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	if (!conn_num)
 		return;
@@ -1635,6 +1655,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 			 p_hwfn->p_cxt_mngr->first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
 			 p_hwfn->p_cxt_mngr->last_free);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+		   "Configured SEARCHER for 0x%08x connections\n",
+		   conn_num);
 }
 
 /* Timers PF */
@@ -1978,10 +2001,10 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			 * As of now, allocates 16 * 2 per-VF [to retain regular
 			 * functionality].
 			 */
-			ecore_cxt_set_proto_cid_count(p_hwfn,
-				PROTOCOLID_ETH,
-				p_params->num_cons, 32);
-
+			ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+						      p_params->num_cons, 32);
+			p_hwfn->p_cxt_mngr->arfs_count =
+						p_params->num_arfs_filters;
 			break;
 		}
 	default:
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index af0deaa..004ab35 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1497,6 +1497,37 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+	struct ecore_ptt *p_ptt,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id)
+	union gft_cam_line_union cam_line;
+	struct gft_ram_line ram_line;
+	u32 i, *ram_line_ptr;
+
+	ram_line_ptr = (u32 *)&ram_line;
+
+	/* Stop using gft logic, disable gft search */
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, 0x0);
+
+	/* Clean ram & cam for next rfs/gft session */
+
+	/* Zero camline */
+	OSAL_MEMSET(&cam_line, 0, sizeof(cam_line));
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
+					cam_line.cam_line_mapped.camline);
+
+	/* Zero ramline */
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
+	/* Each iteration write to reg */
+	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+			 RAM_LINE_SIZE * pf_id +
+			 i * REG_SIZE, *(ram_line_ptr + i));
+}
+
 
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 2d1ab7c..4da3fc2 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -351,6 +351,17 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
+ * @brief ecore_set_rfs_mode_disable - Disable and configure HW for RFS
+ *
+ * @param p_hwfn -   HW device data
+ * @param p_ptt -   ptt window used for writing the registers.
+ * @param pf_id - pf on which to disable RFS.
+ */
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id);
+
+/**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
 * @param p_ptt	- ptt window used for writing the registers.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index c4af895..3f75467 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2018,3 +2018,87 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 	else
 		_ecore_get_vport_stats(p_dev, p_dev->reset_stats);
 }
+
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params)
+{
+	if (p_cfg_params->arfs_enable) {
+		ecore_set_rfs_mode_enable(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+					  p_cfg_params->tcp,
+					  p_cfg_params->udp,
+					  p_cfg_params->ipv4,
+					  p_cfg_params->ipv6);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "tcp = %s, udp = %s, ipv4 = %s, ipv6 = %s\n",
+			   p_cfg_params->tcp ? "Enable" : "Disable",
+			   p_cfg_params->udp ? "Enable" : "Disable",
+			   p_cfg_params->ipv4 ? "Enable" : "Disable",
+			   p_cfg_params->ipv6 ? "Enable" : "Disable");
+	} else {
+		ecore_set_rfs_mode_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode : %s\n",
+		   p_cfg_params->arfs_enable ? "Enable" : "Disable");
+}
+
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add)
+{
+	struct rx_update_gft_filter_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+	u8 abs_vport_id = 0;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+	if (p_cb) {
+		init_data.comp_mode = ECORE_SPQ_MODE_CB;
+		init_data.p_comp_data = p_cb;
+	} else {
+		init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+	}
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_GFT_UPDATE_FILTER,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_update_gft;
+
+	DMA_REGPAIR_LE(p_ramrod->pkt_hdr_addr, p_addr);
+	p_ramrod->pkt_hdr_length = OSAL_CPU_TO_LE16(length);
+	p_ramrod->rx_qid_or_action_icid = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->filter_type = RFS_FILTER_TYPE;
+	p_ramrod->filter_action = b_is_add ? GFT_ADD_FILTER
+					   : GFT_DELETE_FILTER;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "V[%0x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n",
+		   abs_vport_id, abs_rx_q_id,
+		   b_is_add ? "Adding" : "Removing",
+		   (u64)p_addr, length);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 3f86eac..7fe4cbc 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -129,4 +129,31 @@ ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
+/**
+ * @brief - ecore_configure_rfs_ntuple_filter
+ *
+ * This ramrod should be used to add or remove arfs hw filter
+ *
+ * @params p_hwfn
+ * @params p_ptt
+ * @params p_cb		Used for ECORE_SPQ_MODE_CB, where the client
+ *			initializes it with a cookie and a callback function
+ *			address; if not using this mode, pass NULL.
+ * @params p_addr	p_addr is the actual packet header that needs to be
+ *			filtered. It has to be mapped for IO read prior to
+ *			calling this [contains 4 tuples: src ip, dest ip,
+ *			src port, dest port].
+ * @params length	length of the p_addr header, including the transport header.
+ * @params qid		receive packet will be directed to this queue.
+ * @params vport_id
+ * @params b_is_add	flag to add or remove filter.
+ *
+ */
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 5a7db76..d09f3c4 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -141,6 +141,14 @@ struct ecore_filter_accept_flags {
 #define ECORE_ACCEPT_BCAST		0x20
 };
 
+struct ecore_arfs_config_params {
+	bool tcp;
+	bool udp;
+	bool ipv4;
+	bool ipv6;
+	bool arfs_enable;	/* Enable or disable arfs mode */
+};
+
 /* Add / remove / move / remove-all unicast MAC-VLAN filters.
  * FW will assert in the following cases, so driver should take care...:
  * 1. Adding a filter to a full table.
@@ -414,4 +422,18 @@ void ecore_get_vport_stats(struct ecore_dev *p_dev,
 
 void ecore_reset_vport_stats(struct ecore_dev *p_dev);
 
+/**
+ *@brief ecore_arfs_mode_configure -
+ *
+ *Enable or disable rfs mode. At least one of tcp or udp must be true,
+ *and at least one of ipv4 or ipv6 must be true, to enable rfs mode.
+ *
+ *@param p_hwfn
+ *@param p_ptt
+ *@param p_cfg_params		arfs mode configuration parameters.
+ *
+ */
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params);
 #endif
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 0ac153f..226e3d2 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -21,6 +21,12 @@ struct ecore_eth_pf_params {
 	 * to update_pf_params routine invoked before slowpath start
 	 */
 	u16	num_cons;
+
+	/* To enable arfs, prior to HW-init a positive number needs to be
+	 * set [as filters require allocated searcher ILT memory].
+	 * This will set the maximal number of configured steering-filters.
+	 */
+	u32	num_arfs_filters;
 };
 
 /* Most of the parameters below are described in the FW iSCSI / TCP HSI */
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index e2468b7..e530f83 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -26,6 +26,7 @@ union ramrod_data {
 	struct tx_queue_stop_ramrod_data		tx_queue_stop;
 	struct vport_start_ramrod_data			vport_start;
 	struct vport_stop_ramrod_data			vport_stop;
+	struct rx_update_gft_filter_data		rx_update_gft;
 	struct vport_update_ramrod_data			vport_update;
 	struct core_rx_start_ramrod_data		core_rx_queue_start;
 	struct core_rx_stop_ramrod_data			core_rx_queue_stop;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 60/61] net/qede: add ntuple and flow director filter support
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (59 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 59/61] net/qede/base: add support for arfs mode Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-18  7:06   ` [PATCH v2 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
  2017-03-18  7:18   ` [PATCH 00/61] net/qede/base: qede PMD enhancements Mody, Rasesh
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add limited support for ntuple filter and flow director configuration.
The filtering is based on the 4-tuple: src-ip, dst-ip, src-port and
dst-port. The mask fields, tcp_flags, flex masks, priority fields,
Rx queue drop, etc. are not supported.
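
From the application side, a hedged example of adding one such filter
through the 17.05-era generic filter API (placeholder port, addresses
and ports; error handling trimmed):

/* Example only - assumes the DPDK 17.05-era filter_ctrl API. */
#include <string.h>
#include <rte_ethdev.h>
#include <rte_ip.h>

static int add_tcp4_fdir(uint8_t port_id, uint16_t rx_queue)
{
	struct rte_eth_fdir_filter fdir;

	memset(&fdir, 0, sizeof(fdir));
	fdir.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_TCP;
	fdir.input.flow.tcp4_flow.ip.src_ip =
		rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
	fdir.input.flow.tcp4_flow.ip.dst_ip =
		rte_cpu_to_be_32(IPv4(192, 168, 0, 2));
	fdir.input.flow.tcp4_flow.src_port = rte_cpu_to_be_16(1024);
	fdir.input.flow.tcp4_flow.dst_port = rte_cpu_to_be_16(80);
	fdir.action.rx_queue = rx_queue;

	/* Dispatched to qede_fdir_filter_conf() via RTE_ETH_FILTER_FDIR */
	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
				       RTE_ETH_FILTER_ADD, &fdir);
}

Note the port must be configured with fdir_conf.mode set to
RTE_FDIR_MODE_PERFECT; per qede_check_fdir_support() in this patch,
other flow director modes and 100G (dual-hwfn) devices are rejected
with -ENOTSUP.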

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini |    2 +
 doc/guides/nics/qede.rst          |    7 +-
 drivers/net/qede/Makefile         |    1 +
 drivers/net/qede/base/ecore.h     |    3 +
 drivers/net/qede/qede_ethdev.c    |   16 +-
 drivers/net/qede/qede_ethdev.h    |   39 +++
 drivers/net/qede/qede_fdir.c      |  486 +++++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_main.c      |   19 +-
 8 files changed, 563 insertions(+), 10 deletions(-)
 create mode 100644 drivers/net/qede/qede_fdir.c

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 8858e5d..b688914 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -34,3 +34,5 @@ Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
 Usage doc            = Y
+N-tuple filter       = Y
+Flow director        = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 1cf5501..5f65bde 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -60,6 +60,7 @@ Supported Features
 - Multiprocess aware
 - Scatter-Gather
 - VXLAN tunneling offload
+- N-tuple filter and flow director (limited support)
 
 Non-supported Features
 ----------------------
@@ -77,10 +78,10 @@ Supported QLogic Adapters
 Prerequisites
 -------------
 
-- Requires firmware version **8.14.x.** and management firmware
-  version **8.14.x or higher**. Firmware may be available
+- Requires firmware version **8.18.x** and management firmware
+  version **8.18.x or higher**. Firmware may be available
   inbox in certain newer Linux distros under the standard directory
-  ``E.g. /lib/firmware/qed/qed_init_values-8.14.6.0.bin``
+  ``E.g. /lib/firmware/qed/qed_init_values-8.18.9.0.bin``
 
 - If the required firmware files are not available then visit
   `QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 29b443d..aae6bd2 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -99,6 +99,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_fdir.c
 
 # dependent libs:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index fab8193..31470b6 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -602,6 +602,9 @@ struct ecore_hwfn {
 
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
+
+	/* @DPDK */
+	struct ecore_ptt		*p_arfs_ptt;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6fbd898..2b91a10 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -924,6 +924,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		return -EINVAL;
 	}
 
+	/* Flow director mode check */
+	rc = qede_check_fdir_support(eth_dev);
+	if (rc) {
+		qdev->ops->vport_stop(edev, 0);
+		qede_dealloc_fp_resc(eth_dev);
+		return -EINVAL;
+	}
+	SLIST_INIT(&qdev->fdir_info.fdir_list_head);
+
 	SLIST_INIT(&qdev->vlan_list_head);
 
 	/* Add primary mac for PF */
@@ -1124,6 +1133,8 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
+	qede_fdir_dealloc_resc(eth_dev);
+
 	/* dev_stop() shall cleanup fp resources in hw but without releasing
 	 * dma memories and sw structures so that dev_start() can be called
 	 * by the app without reconfiguration. However, in dev_close() we
@@ -1957,11 +1968,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
 		}
 		break;
 	case RTE_ETH_FILTER_FDIR:
+		return qede_fdir_filter_conf(eth_dev, filter_op, arg);
+	case RTE_ETH_FILTER_NTUPLE:
+		return qede_ntuple_filter_conf(eth_dev, filter_op, arg);
 	case RTE_ETH_FILTER_MACVLAN:
 	case RTE_ETH_FILTER_ETHERTYPE:
 	case RTE_ETH_FILTER_FLEXIBLE:
 	case RTE_ETH_FILTER_SYN:
-	case RTE_ETH_FILTER_NTUPLE:
 	case RTE_ETH_FILTER_HASH:
 	case RTE_ETH_FILTER_L2_TUNNEL:
 	case RTE_ETH_FILTER_MAX:
@@ -2052,6 +2065,7 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 
 	memset(&pf_params, 0, sizeof(struct ecore_pf_params));
 	pf_params.eth_pf_params.num_cons = QEDE_PF_NUM_CONNS;
+	pf_params.eth_pf_params.num_arfs_filters = QEDE_RFS_MAX_FLTR;
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index be54f31..8342b99 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -34,6 +34,8 @@
 #include "base/nvm_cfg.h"
 #include "base/ecore_iov_api.h"
 #include "base/ecore_sp_commands.h"
+#include "base/ecore_l2.h"
+#include "base/ecore_dev_api.h"
 
 #include "qede_logs.h"
 #include "qede_if.h"
@@ -131,6 +133,9 @@ extern char fw_file[];
 /* Number of PF connections - 32 RX + 32 TX */
 #define QEDE_PF_NUM_CONNS		(64)
 
+/* Maximum number of flowdir filters */
+#define QEDE_RFS_MAX_FLTR		(256)
+
 /* Port/function states */
 enum qede_dev_state {
 	QEDE_DEV_INIT, /* Init the chip and Slowpath */
@@ -156,6 +161,21 @@ struct qede_ucast_entry {
 	SLIST_ENTRY(qede_ucast_entry) list;
 };
 
+struct qede_fdir_entry {
+	uint32_t soft_id; /* unused for now */
+	uint16_t pkt_len; /* actual packet length to match */
+	uint16_t rx_queue; /* queue to be steered to */
+	const struct rte_memzone *mz; /* mz used to hold L2 frame */
+	SLIST_ENTRY(qede_fdir_entry) list;
+};
+
+struct qede_fdir_info {
+	struct ecore_arfs_config_params arfs;
+	uint16_t filter_count;
+	SLIST_HEAD(fdir_list_head, qede_fdir_entry) fdir_list_head;
+};
+
+
 /*
  *  Structure to store private data for each port.
  */
@@ -190,6 +210,7 @@ struct qede_dev {
 	bool handle_hw_err;
 	uint16_t num_tunn_filters;
 	uint16_t vxlan_filter_type;
+	struct qede_fdir_info fdir_info;
 	char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 };
 
@@ -208,6 +229,11 @@ static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
 static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags);
 
+static uint16_t qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+					struct rte_eth_fdir_filter *fdir,
+					void *buff,
+					struct ecore_arfs_config_params *param);
+
 /* Non-static functions */
 void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
@@ -215,4 +241,17 @@ int qed_fill_eth_dev_info(struct ecore_dev *edev,
 				 struct qed_dev_eth_info *info);
 int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
 
+int qede_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type type,
+			 enum rte_filter_op op, void *arg);
+
+int qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+			  enum rte_filter_op filter_op, void *arg);
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op, void *arg);
+
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev);
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev);
+
 #endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_fdir.c b/drivers/net/qede/qede_fdir.c
new file mode 100644
index 0000000..6d9a99b
--- /dev/null
+++ b/drivers/net/qede/qede_fdir.c
@@ -0,0 +1,486 @@
+/*
+ * Copyright (c) 2017 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include <rte_udp.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_errno.h>
+
+#include "qede_ethdev.h"
+
+#define IP_VERSION				(0x40)
+#define IP_HDRLEN				(0x5)
+#define QEDE_FDIR_IP_DEFAULT_VERSION_IHL	(IP_VERSION | IP_HDRLEN)
+#define QEDE_FDIR_TCP_DEFAULT_DATAOFF		(0x50)
+#define QEDE_FDIR_IPV4_DEF_TTL			(64)
+
+/* Sum of length of header types of L2, L3, L4.
+ * L2 : ether_hdr + vlan_hdr + vxlan_hdr
+ * L3 : ipv6_hdr
+ * L4 : tcp_hdr
+ */
+#define QEDE_MAX_FDIR_PKT_LEN			(86)
+
+#ifndef IPV6_ADDR_LEN
+#define IPV6_ADDR_LEN				(16)
+#endif
+
+#define QEDE_VALID_FLOW(flow_type) \
+	((flow_type) == RTE_ETH_FLOW_FRAG_IPV4		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP	|| \
+	(flow_type) == RTE_ETH_FLOW_FRAG_IPV6		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP)
+
+/* Note: Flowdir support is only partial.
+ * For ex: drop_queue, FDIR masks, flex_conf are not supported.
+ * Parameters like pballoc/status fields are irrelevant here.
+ */
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+
+	/* check FDIR modes */
+	switch (fdir->mode) {
+	case RTE_FDIR_MODE_NONE:
+		qdev->fdir_info.arfs.arfs_enable = false;
+		DP_INFO(edev, "flowdir is disabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT:
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			qdev->fdir_info.arfs.arfs_enable = false;
+			return -ENOTSUP;
+		}
+		qdev->fdir_info.arfs.arfs_enable = true;
+		DP_INFO(edev, "flowdir is enabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT_TUNNEL:
+	case RTE_FDIR_MODE_SIGNATURE:
+	case RTE_FDIR_MODE_PERFECT_MAC_VLAN:
+		DP_ERR(edev, "Unsupported flowdir mode %d\n", fdir->mode);
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_fdir_entry *tmp = NULL;
+	struct qede_fdir_entry *fdir;
+
+	/* Pop entries off the head so a node is never dereferenced
+	 * after rte_free() (SLIST_FOREACH would do exactly that). */
+	while ((tmp = SLIST_FIRST(&qdev->fdir_info.fdir_list_head))) {
+		if (tmp->mz)
+			rte_memzone_free(tmp->mz);
+		SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+			     qede_fdir_entry, list);
+		rte_free(tmp);
+	}
+}
+
+static int
+qede_config_cmn_fdir_filter(struct rte_eth_dev *eth_dev,
+			    struct rte_eth_fdir_filter *fdir_filter,
+			    bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	char mz_name[RTE_MEMZONE_NAMESIZE] = {0};
+	struct qede_fdir_entry *tmp = NULL;
+	struct qede_fdir_entry *fdir;
+	const struct rte_memzone *mz;
+	struct ecore_hwfn *p_hwfn;
+	enum _ecore_status_t rc;
+	uint16_t pkt_len;
+	uint16_t len;
+	void *pkt;
+
+	if (add) {
+		if (qdev->fdir_info.filter_count == QEDE_RFS_MAX_FLTR - 1) {
+			DP_ERR(edev, "Reached max flowdir filter limit\n");
+			return -EINVAL;
+		}
+		fdir = rte_malloc(NULL, sizeof(struct qede_fdir_entry),
+				  RTE_CACHE_LINE_SIZE);
+		if (!fdir) {
+			DP_ERR(edev, "Failed to allocate memory for fdir\n");
+			return -ENOMEM;
+		}
+	}
+	/* soft_id could have been used as memzone string, but soft_id is
+	 * not currently used so it has no significance.
+	 */
+	snprintf(mz_name, sizeof(mz_name) - 1, "%lx", rte_get_timer_cycles());
+	mz = rte_memzone_reserve_aligned(mz_name, QEDE_MAX_FDIR_PKT_LEN,
+					 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+	if (!mz) {
+		DP_ERR(edev, "Failed to allocate memzone for fdir, err = %s\n",
+		       rte_strerror(rte_errno));
+		rc = -rte_errno;
+		goto err1;
+	}
+
+	pkt = mz->addr;
+	memset(pkt, 0, QEDE_MAX_FDIR_PKT_LEN);
+	pkt_len = qede_fdir_construct_pkt(eth_dev, fdir_filter, pkt,
+					  &qdev->fdir_info.arfs);
+	if (pkt_len == 0) {
+		rc = -EINVAL;
+		goto err2;
+	}
+	DP_INFO(edev, "pkt_len = %u memzone = %s\n", pkt_len, mz_name);
+	if (add) {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0) {
+				DP_ERR(edev, "flowdir filter exists\n");
+				rc = -EEXIST;
+				goto err2;
+			}
+		}
+	} else {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0)
+				break;
+		}
+		if (!tmp) {
+			DP_ERR(edev, "flowdir filter does not exist\n");
+			rc = -EEXIST;
+			goto err2;
+		}
+	}
+	p_hwfn = ECORE_LEADING_HWFN(edev);
+	if (add) {
+		if (!qdev->fdir_info.arfs.arfs_enable) {
+			/* Force update */
+			eth_dev->data->dev_conf.fdir_conf.mode =
+						RTE_FDIR_MODE_PERFECT;
+			qdev->fdir_info.arfs.arfs_enable = true;
+			DP_INFO(edev, "Force enable flowdir in perfect mode\n");
+		}
+		/* Enable ARFS searcher with updated flow_types */
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	/* configure filter with ECORE_SPQ_MODE_EBLOCK */
+	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_hwfn->p_arfs_ptt, NULL,
+					       (dma_addr_t)mz->phys_addr,
+					       pkt_len,
+					       fdir_filter->action.rx_queue,
+					       0, add);
+	if (rc == ECORE_SUCCESS) {
+		if (add) {
+			fdir->rx_queue = fdir_filter->action.rx_queue;
+			fdir->pkt_len = pkt_len;
+			fdir->mz = mz;
+			SLIST_INSERT_HEAD(&qdev->fdir_info.fdir_list_head,
+					  fdir, list);
+			qdev->fdir_info.filter_count++;
+			DP_INFO(edev, "flowdir filter added, count = %d\n",
+				qdev->fdir_info.filter_count);
+		} else {
+			rte_memzone_free(tmp->mz);
+			SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+				     qede_fdir_entry, list);
+			rte_free(tmp); /* the node deleted */
+			rte_memzone_free(mz); /* temp node allocated */
+			qdev->fdir_info.filter_count--;
+			DP_INFO(edev, "Fdir filter deleted, count = %d\n",
+				qdev->fdir_info.filter_count);
+		}
+	} else {
+		DP_ERR(edev, "flowdir filter failed, rc=%d filter_count=%d\n",
+		       rc, qdev->fdir_info.filter_count);
+	}
+
+	/* Disable ARFS searcher if there are no more filters */
+	if (qdev->fdir_info.filter_count == 0) {
+		memset(&qdev->fdir_info.arfs, 0,
+		       sizeof(struct ecore_arfs_config_params));
+		DP_INFO(edev, "Disabling flowdir\n");
+		qdev->fdir_info.arfs.arfs_enable = false;
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	return 0;
+
+err2:
+	rte_memzone_free(mz);
+err1:
+	if (add)
+		rte_free(fdir);
+	return rc;
+}
+
+static int
+qede_fdir_filter_add(struct rte_eth_dev *eth_dev,
+		     struct rte_eth_fdir_filter *fdir,
+		     bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	if (!QEDE_VALID_FLOW(fdir->input.flow_type)) {
+		DP_ERR(edev, "invalid flow_type input\n");
+		return -EINVAL;
+	}
+
+	if (fdir->action.rx_queue >= QEDE_RSS_COUNT(qdev)) {
+		DP_ERR(edev, "invalid queue number %u\n",
+		       fdir->action.rx_queue);
+		return -EINVAL;
+	}
+
+	if (fdir->input.flow_ext.is_vf) {
+		DP_ERR(edev, "flowdir is not supported over VF\n");
+		return -EINVAL;
+	}
+
+	return qede_config_cmn_fdir_filter(eth_dev, fdir, add);
+}
+
+/* Fills the L3/L4 headers and returns the actual length of the flowdir packet */
+static uint16_t
+qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+			struct rte_eth_fdir_filter *fdir,
+			void *buff,
+			struct ecore_arfs_config_params *params)
+
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	uint16_t *ether_type;
+	uint8_t *raw_pkt;
+	struct rte_eth_fdir_input *input;
+	static uint8_t vlan_frame[] = {0x81, 0, 0, 0};
+	struct ipv4_hdr *ip;
+	struct ipv6_hdr *ip6;
+	struct udp_hdr *udp;
+	struct tcp_hdr *tcp;
+	struct sctp_hdr *sctp;
+	uint8_t size, dst = 0;
+	uint16_t len;
+	static const uint8_t next_proto[] = {
+		[RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP,
+		[RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP,
+	};
+	raw_pkt = (uint8_t *)buff;
+	input = &fdir->input;
+	DP_INFO(edev, "flow_type %d\n", input->flow_type);
+
+	len =  2 * sizeof(struct ether_addr);
+	raw_pkt += 2 * sizeof(struct ether_addr);
+	if (input->flow_ext.vlan_tci) {
+		DP_INFO(edev, "adding VLAN header\n");
+		rte_memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
+		rte_memcpy(raw_pkt + sizeof(uint16_t),
+			   &input->flow_ext.vlan_tci,
+			   sizeof(uint16_t));
+		raw_pkt += sizeof(vlan_frame);
+		len += sizeof(vlan_frame);
+	}
+	ether_type = (uint16_t *)raw_pkt;
+	raw_pkt += sizeof(uint16_t);
+	len += sizeof(uint16_t);
+
+	/* fill the common ip header */
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV4:
+		ip = (struct ipv4_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		ip->version_ihl = QEDE_FDIR_IP_DEFAULT_VERSION_IHL;
+		ip->total_length = sizeof(struct ipv4_hdr);
+		ip->next_proto_id = input->flow.ip4_flow.proto ?
+				    input->flow.ip4_flow.proto :
+				    next_proto[input->flow_type];
+		ip->time_to_live = input->flow.ip4_flow.ttl ?
+				   input->flow.ip4_flow.ttl :
+				   QEDE_FDIR_IPV4_DEF_TTL;
+		ip->type_of_service = input->flow.ip4_flow.tos;
+		ip->dst_addr = input->flow.ip4_flow.dst_ip;
+		ip->src_addr = input->flow.ip4_flow.src_ip;
+		len += sizeof(struct ipv4_hdr);
+		params->ipv4 = true;
+		break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV6:
+		ip6 = (struct ipv6_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		ip6->proto = input->flow.ipv6_flow.proto ?
+					input->flow.ipv6_flow.proto :
+					next_proto[input->flow_type];
+		rte_memcpy(&ip6->src_addr, &input->flow.ipv6_flow.src_ip,
+			   IPV6_ADDR_LEN);
+		rte_memcpy(&ip6->dst_addr, &input->flow.ipv6_flow.dst_ip,
+			   IPV6_ADDR_LEN);
+		len += sizeof(struct ipv6_hdr);
+		break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %u\n",
+		       input->flow_type);
+		return 0;
+	}
+
+	/* fill the L4 header */
+	raw_pkt = (uint8_t *)buff;
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->dst_port = input->flow.udp4_flow.dst_port;
+		udp->src_port = input->flow.udp4_flow.src_port;
+		udp->dgram_len = sizeof(struct udp_hdr);
+		len += sizeof(struct udp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->src_port = input->flow.tcp4_flow.src_port;
+		tcp->dst_port = input->flow.tcp4_flow.dst_port;
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		len += sizeof(struct tcp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		tcp->src_port = input->flow.tcp6_flow.src_port;
+		tcp->dst_port = input->flow.tcp6_flow.dst_port;
+		/* no IPv4-style total_length to adjust for IPv6 */
+		len += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->src_port = input->flow.udp6_flow.src_port;
+		udp->dst_port = input->flow.udp6_flow.dst_port;
+		/* no IPv4-style total_length to adjust for IPv6 */
+		len += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %d\n", input->flow_type);
+		return 0;
+	}
+	return len;
+}
+
+int
+qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+		      enum rte_filter_op filter_op,
+		      void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_fdir_filter *fdir;
+	int ret;
+
+	fdir = (struct rte_eth_fdir_filter *)arg;
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query flowdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		ret = qede_fdir_filter_add(eth_dev, fdir, true);
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = qede_fdir_filter_add(eth_dev, fdir, false);
+	break;
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_INFO:
+		return -ENOTSUP;
+	default:
+		DP_ERR(edev, "unknown operation %u", filter_op);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op,
+			    void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_ntuple_filter *ntuple;
+	struct rte_eth_fdir_filter fdir_entry;
+	struct rte_eth_tcpv4_flow *tcpv4_flow;
+	struct rte_eth_udpv4_flow *udpv4_flow;
+	bool add;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query fdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		add = true;
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		add = false;
+	break;
+	case RTE_ETH_FILTER_INFO:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_SET:
+	case RTE_ETH_FILTER_STATS:
+	case RTE_ETH_FILTER_OP_MAX:
+		DP_ERR(edev, "Unsupported filter_op %d\n", filter_op);
+		return -ENOTSUP;
+	}
+	ntuple = (struct rte_eth_ntuple_filter *)arg;
+	/* Internally convert ntuple to fdir entry */
+	memset(&fdir_entry, 0, sizeof(fdir_entry));
+	if (ntuple->proto == IPPROTO_TCP) {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_TCP;
+		tcpv4_flow = &fdir_entry.input.flow.tcp4_flow;
+		tcpv4_flow->ip.src_ip = ntuple->src_ip;
+		tcpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		tcpv4_flow->ip.proto = IPPROTO_TCP;
+		tcpv4_flow->src_port = ntuple->src_port;
+		tcpv4_flow->dst_port = ntuple->dst_port;
+	} else {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
+		udpv4_flow = &fdir_entry.input.flow.udp4_flow;
+		udpv4_flow->ip.src_ip = ntuple->src_ip;
+		udpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		udpv4_flow->ip.proto = IPPROTO_UDP;
+		udpv4_flow->src_port = ntuple->src_port;
+		udpv4_flow->dst_port = ntuple->dst_port;
+	}
+	return qede_config_cmn_fdir_filter(eth_dev, &fdir_entry, add);
+}
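
(For illustration only, not part of the patch: a minimal sketch of how an
application could exercise the ntuple path above through the filter-ctrl API
of this release. The function name, port id, addresses, ports and queue are
hypothetical.)

	#include <string.h>
	#include <netinet/in.h>
	#include <rte_ethdev.h>
	#include <rte_ip.h>

	static int
	add_tcp4_ntuple_filter(uint8_t port_id)
	{
		struct rte_eth_ntuple_filter ntuple;

		memset(&ntuple, 0, sizeof(ntuple));
		ntuple.flags = RTE_5TUPLE_FLAGS;
		ntuple.proto = IPPROTO_TCP;	/* qede converts this to an fdir TCPv4 entry */
		ntuple.src_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 2));
		ntuple.dst_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
		ntuple.src_port = rte_cpu_to_be_16(5002);
		ntuple.dst_port = rte_cpu_to_be_16(5001);
		ntuple.queue = 0;

		return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
					       RTE_ETH_FILTER_ADD, &ntuple);
	}
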
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 74856c5..5548b0f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -12,8 +12,6 @@
 
 #include "qede_ethdev.h"
 
-static uint8_t npar_tx_switching = 1;
-
 /* Alarm timeout. */
 #define QEDE_ALARM_TIMEOUT_US 100000
 
@@ -224,12 +222,12 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 static int qed_slowpath_start(struct ecore_dev *edev,
 			      struct qed_slowpath_params *params)
 {
-	bool allow_npar_tx_switching;
 	const uint8_t *data = NULL;
 	struct ecore_hwfn *hwfn;
 	struct ecore_mcp_drv_version drv_version;
 	struct ecore_hw_init_params hw_init_params;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
+	struct ecore_ptt *p_ptt;
 	int rc;
 
 #ifdef CONFIG_ECORE_BINARY_FW
@@ -241,6 +239,17 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		}
 	}
 #endif
+	hwfn = ECORE_LEADING_HWFN(edev);
+	if (edev->num_hwfns == 1) { /* skip aRFS for 100G device */
+		p_ptt = ecore_ptt_acquire(hwfn);
+		if (p_ptt) {
+			ECORE_LEADING_HWFN(edev)->p_arfs_ptt = p_ptt;
+		} else {
+			DP_ERR(edev, "Failed to acquire PTT for flowdir\n");
+			rc = -ENOMEM;
+			goto err;
+		}
+	}
 
 	rc = qed_nic_setup(edev);
 	if (rc)
@@ -268,13 +277,11 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		data = (const uint8_t *)edev->firmware + sizeof(u32);
 #endif
 
-	allow_npar_tx_switching = npar_tx_switching ? true : false;
-
 	/* Start the slowpath */
 	memset(&hw_init_params, 0, sizeof(hw_init_params));
 	hw_init_params.b_hw_start = true;
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
-	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
+	hw_init_params.allow_npar_tx_switch = true;
 	hw_init_params.bin_fw_data = data;
 	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
 	hw_init_params.avoid_eng_reset = false;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v2 61/61] net/qede: add LRO/TSO offloads support
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (60 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 60/61] net/qede: add ntuple and flow director filter support Rasesh Mody
@ 2017-03-18  7:06   ` Rasesh Mody
  2017-03-24 11:58     ` Ferruh Yigit
  2017-03-18  7:18   ` [PATCH 00/61] net/qede/base: qede PMD enhancements Mody, Rasesh
  62 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-03-18  7:06 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

This patch includes the slowpath configuration and fastpath changes
needed to support LRO and TSO. A bit of revamping is needed in order
to reuse the existing packet classification scheme in the Rx fastpath
and the scatter-gather element processing in the Tx fastpath.
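
(Illustrative sketch, not part of the commit: how an application might enable
LRO at configure time and transmit a TSO packet through the new tx_pkt_prepare
hook. The port/queue ids, mbuf 'm', header lengths and MSS below are
hypothetical, and queue setup/start is omitted.)

	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.enable_lro = 1;	/* PMD turns on TPA for the vport */
	rte_eth_dev_configure(port_id, 1, 1, &conf);

	/* TSO: flag the mbuf, then let tx_prepare fix up pseudo checksums */
	m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->l4_len = sizeof(struct tcp_hdr);
	m->tso_segsz = 1448;		/* MSS for each segmented frame */

	if (rte_eth_tx_prepare(port_id, 0, &m, 1) == 1)
		rte_eth_tx_burst(port_id, 0, &m, 1);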

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini    |    2 +
 doc/guides/nics/features/qede_vf.ini |    2 +
 doc/guides/nics/qede.rst             |    2 +-
 drivers/net/qede/qede_eth_if.c       |    6 +-
 drivers/net/qede/qede_eth_if.h       |    3 +-
 drivers/net/qede/qede_ethdev.c       |   29 +-
 drivers/net/qede/qede_ethdev.h       |    3 +-
 drivers/net/qede/qede_rxtx.c         |  635 +++++++++++++++++++++++++++-------
 drivers/net/qede/qede_rxtx.h         |   30 ++
 9 files changed, 561 insertions(+), 151 deletions(-)

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index b688914..fba5dc3 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -36,3 +36,5 @@ x86-64               = Y
 Usage doc            = Y
 N-tuple filter       = Y
 Flow director        = Y
+LRO                  = Y
+TSO                  = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index acb1b99..21ec40f 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -31,4 +31,6 @@ Stats per queue      = Y
 Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
+LRO                  = Y
+TSO                  = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 5f65bde..9023b7f 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -61,13 +61,13 @@ Supported Features
 - Scatter-Gather
 - VXLAN tunneling offload
 - N-tuple filter and flow director (limited support)
+- LRO/TSO
 
 Non-supported Features
 ----------------------
 
 - SR-IOV PF
 - GENEVE and NVGRE Tunneling offloads
-- LRO/TSO
 - NPAR
 
 Supported QLogic Adapters
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index 8e4290c..86bb129 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -18,8 +18,8 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		u8 tx_switching = 0;
 		struct ecore_sp_vport_start_params start = { 0 };
 
-		start.tpa_mode = p_params->gro_enable ? ECORE_TPA_MODE_GRO :
-		    ECORE_TPA_MODE_NONE;
+		start.tpa_mode = p_params->enable_lro ? ECORE_TPA_MODE_RSC :
+				ECORE_TPA_MODE_NONE;
 		start.remove_inner_vlan = p_params->remove_inner_vlan;
 		start.tx_switching = tx_switching;
 		start.only_untagged = false;	/* untagged only */
@@ -29,7 +29,6 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		start.concrete_fid = p_hwfn->hw_info.concrete_fid;
 		start.handle_ptp_pkts = p_params->handle_ptp_pkts;
 		start.vport_id = p_params->vport_id;
-		start.max_buffers_per_cqe = 16;	/* TODO-is this right */
 		start.mtu = p_params->mtu;
 		/* @DPDK - Disable FW placement */
 		start.zero_placement_offset = 1;
@@ -120,6 +119,7 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
 	sp_params.update_accept_any_vlan_flg =
 	    params->update_accept_any_vlan_flg;
 	sp_params.mtu = params->mtu;
+	sp_params.sge_tpa_params = params->sge_tpa_params;
 
 	for_each_hwfn(edev, i) {
 		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 12dd828..d845bac 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -59,12 +59,13 @@ struct qed_update_vport_params {
 	uint8_t accept_any_vlan;
 	uint8_t update_rss_flg;
 	uint16_t mtu;
+	struct ecore_sge_tpa_params *sge_tpa_params;
 };
 
 struct qed_start_vport_params {
 	bool remove_inner_vlan;
 	bool handle_ptp_pkts;
-	bool gro_enable;
+	bool enable_lro;
 	bool drop_ttl0;
 	uint8_t vport_id;
 	uint16_t mtu;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 2b91a10..d709097 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -769,7 +769,7 @@ static int qede_init_vport(struct qede_dev *qdev)
 	int rc;
 
 	start.remove_inner_vlan = 1;
-	start.gro_enable = 0;
+	start.enable_lro = qdev->enable_lro;
 	start.mtu = ETHER_MTU + QEDE_ETH_OVERHEAD;
 	start.vport_id = 0;
 	start.drop_ttl0 = false;
@@ -866,11 +866,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	if (rxmode->enable_scatter == 1)
 		eth_dev->data->scattered_rx = 1;
 
-	if (rxmode->enable_lro == 1) {
-		DP_ERR(edev, "LRO is not supported\n");
-		return -EINVAL;
-	}
-
 	if (!rxmode->hw_strip_crc)
 		DP_INFO(edev, "L2 CRC stripping is always enabled in hw\n");
 
@@ -878,6 +873,13 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
 			      "in hw\n");
 
+	if (rxmode->enable_lro) {
+		qdev->enable_lro = true;
+		/* Enable scatter mode for LRO */
+		if (!rxmode->enable_scatter)
+			eth_dev->data->scattered_rx = 1;
+	}
+
 	/* Check for the port restart case */
 	if (qdev->state != QEDE_DEV_INIT) {
 		rc = qdev->ops->vport_stop(edev, 0);
@@ -957,13 +959,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 static const struct rte_eth_desc_lim qede_rx_desc_lim = {
 	.nb_max = NUM_RX_BDS_MAX,
 	.nb_min = 128,
-	.nb_align = 128	/* lowest common multiple */
+	.nb_align = 128 /* lowest common multiple */
 };
 
 static const struct rte_eth_desc_lim qede_tx_desc_lim = {
 	.nb_max = NUM_TX_BDS_MAX,
 	.nb_min = 256,
-	.nb_align = 256
+	.nb_align = 256,
+	.nb_seg_max = ETH_TX_MAX_BDS_PER_LSO_PACKET,
+	.nb_mtu_seg_max = ETH_TX_MAX_BDS_PER_NON_LSO_PACKET
 };
 
 static void
@@ -1005,12 +1009,16 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 				     DEV_RX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_RX_OFFLOAD_UDP_CKSUM	|
 				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_RX_OFFLOAD_TCP_LRO);
+
 	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
 				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_TX_OFFLOAD_UDP_CKSUM	|
 				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_TX_OFFLOAD_TCP_TSO |
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
 
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
@@ -2102,6 +2110,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->rx_pkt_burst = qede_recv_pkts;
 	eth_dev->tx_pkt_burst = qede_xmit_pkts;
+	eth_dev->tx_pkt_prepare = qede_xmit_prep_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		DP_NOTICE(edev, false,
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 8342b99..799a3ba 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -193,8 +193,7 @@ struct qede_dev {
 	uint16_t rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	uint64_t rss_hf;
 	uint8_t rss_key_len;
-	uint32_t flags;
-	bool gro_disable;
+	bool enable_lro;
 	uint16_t num_queues;
 	uint8_t fp_num_tx;
 	uint8_t fp_num_rx;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 85134fb..5943ef2 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -6,10 +6,9 @@
  * See LICENSE.qede_pmd for copyright and licensing details.
  */
 
+#include <rte_net.h>
 #include "qede_rxtx.h"
 
-static bool gro_disable = 1;	/* mod_param */
-
 static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 {
 	struct rte_mbuf *new_mb = NULL;
@@ -352,7 +351,6 @@ static void qede_init_fp(struct qede_dev *qdev)
 		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
 	}
 
-	qdev->gro_disable = gro_disable;
 }
 
 void qede_free_fp_arrays(struct qede_dev *qdev)
@@ -509,6 +507,30 @@ qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
 	PMD_RX_LOG(DEBUG, rxq, "bd_prod %u  cqe_prod %u", bd_prod, cqe_prod);
 }
 
+static void
+qede_update_sge_tpa_params(struct ecore_sge_tpa_params *sge_tpa_params,
+			   uint16_t mtu, bool enable)
+{
+	/* Enable LRO in split mode */
+	sge_tpa_params->tpa_ipv4_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_en_flg = enable;
+	sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
+	/* set if tpa enable changes */
+	sge_tpa_params->update_tpa_en_flg = 1;
+	/* set if tpa parameters should be handled */
+	sge_tpa_params->update_tpa_param_flg = enable;
+
+	sge_tpa_params->max_buffers_per_cqe = 20;
+	sge_tpa_params->tpa_pkt_split_flg = 1;
+	sge_tpa_params->tpa_hdr_data_split_flg = 0;
+	sge_tpa_params->tpa_gro_consistent_flg = 0;
+	sge_tpa_params->tpa_max_aggs_num = ETH_TPA_MAX_AGGS_NUM;
+	sge_tpa_params->tpa_max_size = 0x7FFF;
+	sge_tpa_params->tpa_min_size_to_start = mtu / 2;
+	sge_tpa_params->tpa_min_size_to_cont = mtu / 2;
+}
+
 static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 {
 	struct qede_dev *qdev = eth_dev->data->dev_private;
@@ -516,6 +538,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	struct ecore_queue_start_common_params q_params;
 	struct qed_dev_info *qed_info = &qdev->dev_info.common;
 	struct qed_update_vport_params vport_update_params;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_tx_queue *txq;
 	struct qede_fastpath *fp;
 	dma_addr_t p_phys_table;
@@ -625,6 +648,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		vport_update_params.tx_switching_flg = 1;
 	}
 
+	/* TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Enabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, true);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
+
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
@@ -761,6 +792,94 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
 		return RTE_PTYPE_UNKNOWN;
 }
 
+static inline void
+qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
+			     struct qede_rx_queue *rxq,
+			     struct eth_fast_path_rx_tpa_cont_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA cont[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Replenish the RX BD ring: one new mbuf per buffer consumed above */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for LRO cont\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+}
+
+static inline void
+qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
+			    struct qede_rx_queue *rxq,
+			    struct eth_fast_path_rx_tpa_end_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	struct rte_mbuf *rx_mb;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA End[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Replenish the RX BD ring: one new mbuf per buffer consumed above */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for lro end\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+
+	/* Update total length and frags based on end TPA */
+	rx_mb = rxq->tpa_info[cqe->tpa_agg_index].mbuf;
+	/* TBD: Add sanity checks here */
+	rx_mb->nb_segs = cqe->num_of_bds;
+	rx_mb->pkt_len = cqe->total_packet_len;
+	tpa_info->state = QEDE_AGG_STATE_NONE;
+}
+
 static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 {
 	uint32_t val;
@@ -882,6 +1001,14 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	enum rss_hash_type htype;
 	uint8_t tunn_parse_flag;
 	uint8_t j;
+	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
+	uint64_t ol_flags;
+	uint32_t packet_type;
+	uint16_t vlan_tci;
+	bool tpa_start_flg;
+	uint8_t bitfield_val;
+	uint8_t offset;
+	struct qede_agg_info *tpa_info;
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
 	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -892,16 +1019,55 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return 0;
 
 	while (sw_comp_cons != hw_comp_cons) {
+		ol_flags = 0;
+		packet_type = RTE_PTYPE_UNKNOWN;
+		vlan_tci = 0;
+		tpa_start_flg = false;
+
 		/* Get the CQE from the completion ring */
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-
-		if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
-			PMD_RX_LOG(DEBUG, rxq, "Got a slowath CQE");
-
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+
+		switch (cqe_type) {
+		case ETH_RX_CQE_TYPE_REGULAR:
+			fp_cqe = &cqe->fast_path_regular;
+		break;
+		case ETH_RX_CQE_TYPE_TPA_START:
+			cqe_start_tpa = &cqe->fast_path_tpa_start;
+			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
+			tpa_start_flg = true;
+			PMD_RX_LOG(INFO, rxq,
+				   "TPA start[%u] - len %04x [header %02x]"
+				   " [bd_list[0] %04x], [seg_len %04x]\n",
+				    cqe_start_tpa->tpa_agg_index,
+				    rte_le_to_cpu_16(cqe_start_tpa->
+						     len_on_first_bd),
+				    cqe_start_tpa->header_len,
+				    rte_le_to_cpu_16(cqe_start_tpa->
+							ext_bd_len_list[0]),
+				    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+
+		break;
+		case ETH_RX_CQE_TYPE_TPA_CONT:
+			qede_rx_process_tpa_cont_cqe(qdev, rxq,
+						     &cqe->fast_path_tpa_cont);
+			continue;
+		case ETH_RX_CQE_TYPE_TPA_END:
+			qede_rx_process_tpa_end_cqe(qdev, rxq,
+						    &cqe->fast_path_tpa_end);
+			rx_mb = rxq->tpa_info[
+				cqe->fast_path_tpa_end.tpa_agg_index].mbuf;
+			PMD_RX_LOG(INFO, rxq, "TPA end reason %d\n",
+				   cqe->fast_path_tpa_end.end_reason);
+			goto tpa_end;
+		case ETH_RX_CQE_TYPE_SLOW_PATH:
+			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
 			qdev->ops->eth_cqe_completion(edev, fp->id,
 				(struct eth_slow_path_rx_cqe *)cqe);
+			/* fall-thru */
+		default:
 			goto next_cqe;
 		}
 
@@ -910,69 +1076,93 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rx_mb = rxq->sw_rx_ring[sw_rx_index].mbuf;
 		assert(rx_mb != NULL);
 
-		/* non GRO */
-		fp_cqe = &cqe->fast_path_regular;
-
-		len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
-		pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
-		pad = fp_cqe->placement_offset;
-		assert((len + pad) <= rx_mb->buf_len);
-
-		PMD_RX_LOG(DEBUG, rxq,
-			   "CQE type = 0x%x, flags = 0x%x, vlan = 0x%x"
-			   " len = %u, parsing_flags = %d",
-			   cqe_type, fp_cqe->bitfields,
-			   rte_le_to_cpu_16(fp_cqe->vlan_tag),
-			   len, rte_le_to_cpu_16(fp_cqe->pars_flags.flags));
-
-		/* If this is an error packet then drop it */
-		parse_flag =
-		    rte_le_to_cpu_16(cqe->fast_path_regular.pars_flags.flags);
-
-		rx_mb->ol_flags = 0;
-
+		/* Handle regular CQE or TPA start CQE */
+		if (!tpa_start_flg) {
+			parse_flag = rte_le_to_cpu_16(fp_cqe->pars_flags.flags);
+			bitfield_val = fp_cqe->bitfields;
+			offset = fp_cqe->placement_offset;
+			len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
+			pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
+		} else {
+			parse_flag = rte_le_to_cpu_16(cqe_start_tpa->
+							pars_flags.flags);
+			bitfield_val = cqe_start_tpa->bitfields;
+			offset = cqe_start_tpa->placement_offset;
+			/* seg_len = len_on_first_bd */
+			len = rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd);
+			tpa_info->start_cqe_bd_len = len +
+						cqe_start_tpa->header_len;
+			tpa_info->mbuf = rx_mb;
+		}
 		if (qede_tunn_exist(parse_flag)) {
-			PMD_RX_LOG(DEBUG, rxq, "Rx tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			} else {
-				tunn_parse_flag =
-						fp_cqe->tunnel_pars_flags.flags;
-				rx_mb->packet_type =
-					qede_rx_cqe_to_tunn_pkt_type(
-							tunn_parse_flag);
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				if (tpa_start_flg)
+					tunn_parse_flag = cqe_start_tpa->
+							tunnel_pars_flags.flags;
+				else
+					tunn_parse_flag = fp_cqe->
+							tunnel_pars_flags.flags;
+				packet_type =
+				qede_rx_cqe_to_tunn_pkt_type(tunn_parse_flag);
 			}
 		} else {
-			PMD_RX_LOG(DEBUG, rxq, "Rx non-tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx non-tunneled packet\n");
 			if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-			} else if (unlikely(qede_check_notunn_csum_l3(rx_mb,
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			} else {
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			}
+			if (unlikely(qede_check_notunn_csum_l3(rx_mb,
 							parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					   "IP csum failed, flags = 0x%x",
+					   "IP csum failed, flags = 0x%x\n",
 					   parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				ol_flags |= PKT_RX_IP_CKSUM_BAD;
 			} else {
-				rx_mb->packet_type =
+				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				packet_type =
 					qede_rx_cqe_to_pkt_type(parse_flag);
 			}
 		}
 
-		PMD_RX_LOG(INFO, rxq, "packet_type 0x%x", rx_mb->packet_type);
+		if (CQE_HAS_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_VLAN_PKT;
+		}
+
+		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_QINQ_PKT;
+			rx_mb->vlan_tci_outer = 0;
+		}
+
+		/* RSS Hash */
+		htype = (uint8_t)GET_FIELD(bitfield_val,
+					ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
+		if (qdev->rss_enable && htype) {
+			ol_flags |= PKT_RX_RSS_HASH;
+			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
+			PMD_RX_LOG(INFO, rxq, "Hash result 0x%x\n",
+				   rx_mb->hash.rss);
+		}
 
 		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
 			PMD_RX_LOG(ERR, rxq,
 				   "New buffer allocation failed,"
-				   "dropping incoming packet");
+				   "dropping incoming packet\n");
 			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
 			rte_eth_devices[rxq->port_id].
 			    data->rx_mbuf_alloc_failed++;
@@ -980,7 +1170,8 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 		}
 		qede_rx_bd_ring_consume(rxq);
-		if (fp_cqe->bd_num > 1) {
+
+		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
 			PMD_RX_LOG(DEBUG, rxq, "Jumbo-over-BD packet: %02x BDs"
 				   " len on first: %04x Total Len: %04x",
 				   fp_cqe->bd_num, len, pkt_len);
@@ -1009,39 +1200,23 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 		/* Update rest of the MBUF fields */
 		rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
-		rx_mb->nb_segs = fp_cqe->bd_num;
-		rx_mb->data_len = len;
-		rx_mb->pkt_len = pkt_len;
 		rx_mb->port = rxq->port_id;
-
-		htype = (uint8_t)GET_FIELD(fp_cqe->bitfields,
-				ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
-		if (qdev->rss_enable && htype) {
-			rx_mb->ol_flags |= PKT_RX_RSS_HASH;
-			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
-			PMD_RX_LOG(DEBUG, rxq, "Hash result 0x%x",
-				   rx_mb->hash.rss);
+		rx_mb->ol_flags = ol_flags;
+		rx_mb->data_len = len;
+		rx_mb->vlan_tci = vlan_tci;
+		rx_mb->packet_type = packet_type;
+		PMD_RX_LOG(INFO, rxq, "pkt_type %04x len %04x flags %04lx\n",
+			   packet_type, len, ol_flags);
+		if (!tpa_start_flg) {
+			rx_mb->nb_segs = fp_cqe->bd_num;
+			rx_mb->pkt_len = pkt_len;
 		}
-
 		rte_prefetch1(rte_pktmbuf_mtod(rx_mb, void *));
-
-		if (CQE_HAS_VLAN(parse_flag)) {
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_VLAN_PKT;
-		}
-
-		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
-			/* FW does not provide indication of Outer VLAN tag,
-			 * which is always stripped, so vlan_tci_outer is set
-			 * to 0. Here vlan_tag represents inner VLAN tag.
-			 */
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_QINQ_PKT;
-			rx_mb->vlan_tci_outer = 0;
+tpa_end:
+		if (!tpa_start_flg) {
+			rx_pkts[rx_pkt] = rx_mb;
+			rx_pkt++;
 		}
-
-		rx_pkts[rx_pkt] = rx_mb;
-		rx_pkt++;
 next_cqe:
 		ecore_chain_recycle_consumed(&rxq->rx_comp_ring);
 		sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -1120,43 +1295,44 @@ qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
 /* Populate scatter gather buffer descriptor fields */
 static inline uint8_t
 qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
-		  struct eth_tx_1st_bd *bd1)
+		  struct eth_tx_2nd_bd **bd2, struct eth_tx_3rd_bd **bd3)
 {
 	struct qede_tx_queue *txq = p_txq;
-	struct eth_tx_2nd_bd *bd2 = NULL;
-	struct eth_tx_3rd_bd *bd3 = NULL;
 	struct eth_tx_bd *tx_bd = NULL;
 	dma_addr_t mapping;
-	uint8_t nb_segs = 1; /* min one segment per packet */
+	uint8_t nb_segs = 0;
 
 	/* Check for scattered buffers */
 	while (m_seg) {
-		if (nb_segs == 1) {
-			bd2 = (struct eth_tx_2nd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd2, 0, sizeof(*bd2));
+		if (nb_segs == 0) {
+			if (!*bd2) {
+				*bd2 = (struct eth_tx_2nd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd2, 0, sizeof(struct eth_tx_2nd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd2, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x",
-				   m_seg->data_len);
-		} else if (nb_segs == 2) {
-			bd3 = (struct eth_tx_3rd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd3, 0, sizeof(*bd3));
+			QEDE_BD_SET_ADDR_LEN(*bd2, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x", m_seg->data_len);
+		} else if (nb_segs == 1) {
+			if (!*bd3) {
+				*bd3 = (struct eth_tx_3rd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd3, 0, sizeof(struct eth_tx_3rd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd3, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x",
-				   m_seg->data_len);
+			QEDE_BD_SET_ADDR_LEN(*bd3, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x", m_seg->data_len);
 		} else {
 			tx_bd = (struct eth_tx_bd *)
 				ecore_chain_produce(&txq->tx_pbl);
 			memset(tx_bd, 0, sizeof(*tx_bd));
+			nb_segs++;
 			mapping = rte_mbuf_data_dma_addr(m_seg);
 			QEDE_BD_SET_ADDR_LEN(tx_bd, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD len %04x",
-				   m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD len %04x", m_seg->data_len);
 		}
-		nb_segs++;
 		m_seg = m_seg->next;
 	}
 
@@ -1164,6 +1340,96 @@ qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
 	return nb_segs;
 }
 
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+static inline void
+print_tx_bd_info(struct qede_tx_queue *txq,
+		 struct eth_tx_1st_bd *bd1,
+		 struct eth_tx_2nd_bd *bd2,
+		 struct eth_tx_3rd_bd *bd3,
+		 uint64_t tx_ol_flags)
+{
+	char ol_buf[256] = { 0 }; /* for verbose prints */
+
+	if (bd1)
+		PMD_TX_LOG(INFO, txq,
+			   "BD1: nbytes=%u nbds=%u bd_flags=04%x bf=%04x",
+			   rte_cpu_to_le_16(bd1->nbytes), bd1->data.nbds,
+			   bd1->data.bd_flags.bitfields,
+			   rte_cpu_to_le_16(bd1->data.bitfields));
+	if (bd2)
+		PMD_TX_LOG(INFO, txq,
+			   "BD2: nbytes=%u bf=%04x\n",
+			   rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1);
+	if (bd3)
+		PMD_TX_LOG(INFO, txq,
+			   "BD3: nbytes=%u bf=%04x mss=%u\n",
+			   rte_cpu_to_le_16(bd3->nbytes),
+			   rte_cpu_to_le_16(bd3->data.bitfields),
+			   rte_cpu_to_le_16(bd3->data.lso_mss));
+
+	rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
+	PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+}
+#endif
+
+/* TX prepare to check packets meets TX conditions */
+uint16_t
+qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	struct qede_tx_queue *txq = p_txq;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+	uint16_t i;
+	int ret;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+		if (ol_flags & PKT_TX_TCP_SEG) {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+			/* TBD: confirm it is ~9700B for both? */
+			if (m->tso_segsz > ETH_TX_MAX_NON_LSO_PKT_LEN) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		} else {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_NON_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		}
+		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			break;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+#endif
+		/* TBD: pseudo csum calculation required iff
+		 * ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE not set?
+		 */
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	if (unlikely(i != nb_pkts))
+		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+			   nb_pkts - i);
+	return i;
+}
+
 uint16_t
 qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -1171,15 +1437,22 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct qede_dev *qdev = txq->qdev;
 	struct ecore_dev *edev = &qdev->edev;
 	struct qede_fastpath *fp;
-	struct eth_tx_1st_bd *bd1;
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *m_seg = NULL;
 	uint16_t nb_tx_pkts;
 	uint16_t bd_prod;
 	uint16_t idx;
-	uint16_t tx_count;
 	uint16_t nb_frags;
 	uint16_t nb_pkt_sent = 0;
+	uint8_t nbds;
+	bool ipv6_ext_flg;
+	bool lso_flg;
+	bool tunn_flg;
+	struct eth_tx_1st_bd *bd1;
+	struct eth_tx_2nd_bd *bd2;
+	struct eth_tx_3rd_bd *bd3;
+	uint64_t tx_ol_flags;
+	uint16_t hdr_size;
 
 	fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
 
@@ -1189,34 +1462,86 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		(void)qede_process_tx_compl(edev, txq);
 	}
 
-	nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail /
-			ETH_TX_MAX_BDS_PER_NON_LSO_PACKET));
-	if (unlikely(nb_tx_pkts == 0)) {
-		PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u",
-			   nb_pkts, txq->nb_tx_avail);
-		return 0;
-	}
-
-	tx_count = nb_tx_pkts;
+	nb_tx_pkts  = nb_pkts;
+	bd_prod = rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
 	while (nb_tx_pkts--) {
+		/* Init flags/values */
+		ipv6_ext_flg = false;
+		tunn_flg = false;
+		lso_flg = false;
+		nbds = 0;
+		bd1 = NULL;
+		bd2 = NULL;
+		bd3 = NULL;
+		hdr_size = 0;
+
 		/* Fill the entry in the SW ring and the BDs in the FW ring */
 		idx = TX_PROD(txq);
 		mbuf = *tx_pkts++;
 		txq->sw_tx_ring[idx].mbuf = mbuf;
+		tx_ol_flags = mbuf->ol_flags;
+
+#define RTE_ETH_IS_IPV6_HDR_EXT(ptype) ((ptype) & RTE_PTYPE_L3_IPV6_EXT)
+		if (RTE_ETH_IS_IPV6_HDR_EXT(mbuf->packet_type))
+			ipv6_ext_flg = true;
+
+		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type))
+			tunn_flg = true;
+
+		if (tx_ol_flags & PKT_TX_TCP_SEG)
+			lso_flg = true;
+
+		/* Check minimum TX BDS availability against available BDs */
+		if (unlikely(txq->nb_tx_avail < mbuf->nb_segs))
+			break;
+
+		if (lso_flg) {
+			if (unlikely(txq->nb_tx_avail <
+						ETH_TX_MIN_BDS_PER_LSO_PKT))
+				break;
+		} else {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_NON_LSO_PKT))
+				break;
+		}
+
+		if (tunn_flg && ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+				ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		if (ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		/* BD1 */
 		bd1 = (struct eth_tx_1st_bd *)ecore_chain_produce(&txq->tx_pbl);
-		bd1->data.bd_flags.bitfields =
+		nbds++;
+		bd1->data.bd_flags.bitfields = 0;
+		bd1->data.bitfields = 0;
+
+		bd1->data.bd_flags.bitfields |=
 			1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
 		/* FW 8.10.x specific change */
-		bd1->data.bitfields =
+		if (!lso_flg) {
+			bd1->data.bitfields |=
 			(mbuf->pkt_len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK)
 				<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
-		/* Map MBUF linear data for DMA and set in the first BD */
-		QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
-				     mbuf->data_len);
-		PMD_TX_LOG(INFO, txq, "BD1 len %04x", mbuf->data_len);
+			/* Map MBUF linear data for DMA and set in the BD1 */
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     mbuf->data_len);
+		} else {
+			/* For LSO, packet header and payload must reside on
+			 * buffers pointed by different BDs. Using BD1 for HDR
+			 * and BD2 onwards for data.
+			 */
+			hdr_size = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     hdr_size);
+		}
 
-		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type)) {
-			PMD_TX_LOG(INFO, txq, "Tx tunnel packet");
+		if (tunn_flg) {
 			/* First indicate its a tunnel pkt */
 			bd1->data.bd_flags.bitfields |=
 				ETH_TX_DATA_1ST_BD_TUNN_FLAG_MASK <<
@@ -1231,8 +1556,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 					1 << ETH_TX_DATA_1ST_BD_TUNN_FLAG_SHIFT;
 
 			/* Outer IP checksum offload */
-			if (mbuf->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
-				PMD_TX_LOG(INFO, txq, "OuterIP csum offload");
+			if (tx_ol_flags & PKT_TX_OUTER_IP_CKSUM) {
 				bd1->data.bd_flags.bitfields |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -1245,43 +1569,79 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
-			PMD_TX_LOG(INFO, txq, "Insert VLAN 0x%x",
-				   mbuf->vlan_tci);
+		if (tx_ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
 			bd1->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
+		if (lso_flg)
+			bd1->data.bd_flags.bitfields |=
+				1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
+
 		/* Offload the IP checksum in the hardware */
-		if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
-			PMD_TX_LOG(INFO, txq, "IP csum offload");
+		if ((lso_flg) || (tx_ol_flags & PKT_TX_IP_CKSUM))
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			PMD_TX_LOG(INFO, txq, "L4 csum offload");
+		if ((lso_flg) || (tx_ol_flags & (PKT_TX_TCP_CKSUM |
+						PKT_TX_UDP_CKSUM)))
+			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
-			/* IPv6 + extn. -> later */
+
+		/* BD2 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd2 = (struct eth_tx_2nd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd2, 0, sizeof(struct eth_tx_2nd_bd));
+			nbds++;
+			QEDE_BD_SET_ADDR_LEN(bd2,
+					    (hdr_size +
+					    rte_mbuf_data_dma_addr(mbuf)),
+					    mbuf->data_len - hdr_size);
+			/* TBD: check pseudo csum iff tx_prepare not called? */
+			if (ipv6_ext_flg) {
+				bd2->data.bitfields1 |=
+				ETH_L4_PSEUDO_CSUM_ZERO_LENGTH <<
+				ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT;
+			}
+		}
+
+		/* BD3 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd3 = (struct eth_tx_3rd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd3, 0, sizeof(struct eth_tx_3rd_bd));
+			nbds++;
+			if (lso_flg) {
+				bd3->data.lso_mss =
+					rte_cpu_to_le_16(mbuf->tso_segsz);
+				/* Using one header BD */
+				bd3->data.bitfields |=
+					rte_cpu_to_le_16(1 <<
+					ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT);
+			}
 		}
 
 		/* Handle fragmented MBUF */
 		m_seg = mbuf->next;
 		/* Encode scatter gather buffer descriptors if required */
-		nb_frags = qede_encode_sg_bd(txq, m_seg, bd1);
-		bd1->data.nbds = nb_frags;
-		txq->nb_tx_avail -= nb_frags;
+		nb_frags = qede_encode_sg_bd(txq, m_seg, &bd2, &bd3);
+		bd1->data.nbds = nbds + nb_frags;
+		txq->nb_tx_avail -= bd1->data.nbds;
 		txq->sw_tx_prod++;
 		rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
 		bd_prod =
 		    rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+		print_tx_bd_info(txq, bd1, bd2, bd3, tx_ol_flags);
+		PMD_TX_LOG(INFO, txq, "lso=%d tunn=%d ipv6_ext=%d\n",
+			   lso_flg, tunn_flg, ipv6_ext_flg);
+#endif
 		nb_pkt_sent++;
 		txq->xmit_pkts++;
-		PMD_TX_LOG(INFO, txq, "nbds = %d pkt_len = %04x",
-			   bd1->data.nbds, mbuf->pkt_len);
 	}
 
 	/* Write value of prod idx into bd_prod */
@@ -1294,8 +1654,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	/* Check again for Tx completions */
 	(void)qede_process_tx_compl(edev, txq);
 
-	PMD_TX_LOG(DEBUG, txq, "to_send=%u can_send=%u sent=%u core=%d",
-		   nb_pkts, tx_count, nb_pkt_sent, rte_lcore_id());
+	PMD_TX_LOG(DEBUG, txq, "to_send=%u sent=%u bd_prod=%u core=%d",
+		   nb_pkts, nb_pkt_sent, TX_PROD(txq), rte_lcore_id());
 
 	return nb_pkt_sent;
 }
@@ -1412,6 +1772,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_fastpath *fp;
 	int rc, tc, i;
 
@@ -1421,9 +1782,15 @@ static int qede_stop_queues(struct qede_dev *qdev)
 	vport_update_params.update_vport_active_flg = 1;
 	vport_update_params.vport_active_flg = 0;
 	vport_update_params.update_rss_flg = 0;
+	/* Disable TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Disabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, false);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
 
 	DP_INFO(edev, "Deactivate vport\n");
-
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Failed to update vport\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 17a2f0c..c27632e 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -126,6 +126,19 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
+#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
+				   PKT_TX_TCP_CKSUM             | \
+				   PKT_TX_UDP_CKSUM             | \
+				   PKT_TX_OUTER_IP_CKSUM        | \
+				   PKT_TX_TCP_SEG)
+
+#define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
+			      PKT_TX_QINQ_PKT           | \
+			      PKT_TX_VLAN_PKT)
+
+#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
+	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+
 /*
  * RX BD descriptor ring
  */
@@ -135,6 +148,19 @@ struct qede_rx_entry {
 	/* allows expansion .. */
 };
 
+/* TPA related structures */
+enum qede_agg_state {
+	QEDE_AGG_STATE_NONE  = 0,
+	QEDE_AGG_STATE_START = 1,
+	QEDE_AGG_STATE_ERROR = 2
+};
+
+struct qede_agg_info {
+	struct rte_mbuf *mbuf;
+	uint16_t start_cqe_bd_len;
+	uint8_t state; /* for sanity check */
+};
+
 /*
  * Structure associated with each RX queue.
  */
@@ -155,6 +181,7 @@ struct qede_rx_queue {
 	uint64_t rx_segs;
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
+	struct qede_agg_info tpa_info[ETH_TPA_MAX_AGGS_NUM];
 	struct qede_dev *qdev;
 	void *handle;
 };
@@ -232,6 +259,9 @@ void qede_free_mem_load(struct rte_eth_dev *eth_dev);
 uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 
+uint16_t qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts);
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* Re: [PATCH 00/61] net/qede/base: qede PMD enhancements
  2017-03-03 10:25 ` [PATCH 00/61] net/qede/base: qede PMD enhancements Ferruh Yigit
                     ` (61 preceding siblings ...)
  2017-03-18  7:06   ` [PATCH v2 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
@ 2017-03-18  7:18   ` Mody, Rasesh
  62 siblings, 0 replies; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-18  7:18 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Dept-Eng DPDK Dev

> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Friday, March 03, 2017 2:25 AM
> 
> On 2/27/2017 7:56 AM, Rasesh Mody wrote:
> > Hi,
> >
> > This patch set adds support for new firmware 8.18.9.0, new features
> > and bug fixes.
> 
> This looks like depends other qede driver patchset [1], can you please
> confirm? If so, it helps to mention from it here.

Yes, this patch set depends on [1]. A note has been added in the v2 submission.

> 
> Also I am getting following build errors [2].

A part of an if..else got into our final submission unintentionally, sorry about that.

[2] addressed in v2 submission.

> 
> And there are some checkpatch and check-git-log.sh [3] errors.

[3] addressed in v2 submission.

Thanks!
-Rasesh
> 
> Thanks,
> ferruh
> 
> [1]
> http://dpdk.org/dev/patchwork/patch/20816/ [patchset with 21 patches]
> 
> 
> 
> [2]
> .../drivers/net/qede/base/ecore_dev.c:1703:4: error: use of undeclared
> identifier 'ECORE_E5_MISSING_CODE'
>                         ECORE_E5_MISSING_CODE;
>                         ^
> 1 error generated.
> make[7]: *** [base/ecore_dev.o] Error 1
> make[7]: *** Waiting for unfinished jobs....
> .../drivers/net/qede/qede_rxtx.c:1202:21: error: variable 'pad' is uninitialized
> when used here [-Werror,-Wuninitialized]
>                 rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
>                                   ^~~
> .../drivers/net/qede/qede_rxtx.c:997:14: note: initialize the variable 'pad' to
> silence this warning
>         uint16_t pad;
>                     ^
>                      = 0
> 1 error generated.
> 
> .../drivers/net/qede/qede_fdir.c: In function 'qede_config_cmn_fdir_filter':
> .../drivers/net/qede/qede_fdir.c:126:44: error: format '%lx' expects
> argument of type 'long unsigned int', but argument 4 has type 'uint64_t {aka
> long long unsigned int}' [-Werror=format=]
>   snprintf(mz_name, sizeof(mz_name) - 1, "%lx", rte_get_timer_cycles());
> 
> 
> 
> [3]
> Wrong headline format:
>         send FW version driver state to MFW
>         net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2
>         net/qede/base: add a printout of the FW, MFW and MBI versions
>         net/qede/base: set the drv_type before sending load request Wrong
> headline prefix:
>         send FW version driver state to MFW
>         drivers/net/qede: upgrade the FW to 8.18.9.0 Wrong headline
> uppercase:
>         net/qede/base: L2 handler changes
>         net/qede/base: Add support to set max values of soft resoruces Wrong
> headline lowercase:
>         net/qede/base: use default mtu from shared memory
>         net/qede/base: update MFW when default mtu is changed
>         net/qede/base: add non-l2 dcbx tlv application support
>         net/qede/base: allow PMD to control vport-id and rss-eng-id Headline
> too long:
>         net/qede/base: remove attribute field from update current config
>         net/qede/base: add support to read personality via MFW commands
>         net/qede/base: allow only trusted VFs to be promisc/multi-promisc
>         net/qede/base: add a printout of the FW, MFW and MBI versions
>         net/qede/base: update bulletin board with link state during init
>         net/qede/base: Add support to set max values of soft resoruces
>         net/qede/base: add multi-Txq support on same queue-zone for VFs
>         net/qede/base: fix race cond between MFW attentions and PF stop
> Missing 'Fixes' tag:
>         net/qede/base: fix to set pointers to NULL after freeing
>         net/qede/base: fix race cond between MFW attentions and PF stop
> 
> 
> 
> >
> > Please apply to dpdk-net-next for 17.05 release.
> >
> > Thanks!
> > Rasesh
> >
> > Harish Patil (3):
> >   net/qede/base: add support for arfs mode
> >   net/qede: add ntuple and flow director filter support
> >   net/qede: add LRO/TSO offloads support
> >
> > Rasesh Mody (58):
> >   net/qede/base: return an initialized return value
> >   send FW version driver state to MFW
> >   net/qede/base: mask Rx buffer attention bits
> >   net/qede/base: print various indication on Tx-timeouts
> >   net/qede/base: utilize FW 8.18.9.0
> >   drivers/net/qede: upgrade the FW to 8.18.9.0
> >   net/qede/base: decrease MAX_HWFNS_PER_DEVICE from 4 to 2
> >   net/qede/base: move mask constants defining NIC type
> >   net/qede/base: remove attribute field from update current config
> >   net/qede/base: add nvram options
> >   net/qede/base: add comment
> >   net/qede/base: use default mtu from shared memory
> >   net/qede/base: change queue/sb-id from 8 bit to 16 bit
> >   net/qede/base: update MFW when default mtu is changed
> >   net/qede/base: prevent device init failure
> >   net/qede/base: add support to read personality via MFW commands
> >   net/qede/base: allow probe to succeed with minor HW-issues
> >   net/qede/base: remove unneeded step in HW init
> >   net/qede/base: allow only trusted VFs to be promisc/multi-promisc
> >   net/qede/base: qm initialization revamp
> >   net/qede/base: add a printout of the FW, MFW and MBI versions
> >   net/qede/base: check active VF queues before stopping
> >   net/qede/base: set the drv_type before sending load request
> >   net/qede/base: prevent driver laod with invalid resources
> >   net/qede/base: add interfaces for MFW TLV request processing
> >   net/qede/base: fix to set pointers to NULL after freeing
> >   net/qede/base: L2 handler changes
> >   net/qede/base: add support for handling TLV request from MFW
> >   net/qede/base: optimize cache-line access
> >   net/qede/base: infrastructure changes for VF tunnelling
> >   net/qede/base: revise tunnel APIs/structs
> >   net/qede/base: add tunnelling support for VFs
> >   net/qede/base: formatting changes
> >   net/qede/base: prevent transmitter stuck condition
> >   net/qede/base: add mask/shift defines for resource command
> >   net/qede/base: add API for using MFW resource lock
> >   net/qede/base: remove clock slowdown option
> >   net/qede/base: add new image types
> >   net/qede/base: use L2-handles for RSS configuration
> >   net/qede/base: change valloc to vzalloc
> >   net/qede/base: add support for previous driver unload
> >   net/qede/base: add non-l2 dcbx tlv application support
> >   net/qede/base: update bulletin board with link state during init
> >   net/qede/base: add coalescing support for VFs
> >   net/qede/base: add macro got resource value message
> >   net/qede/base: add mailbox for resource allocation
> >   net/qede/base: add macro for unsupported command
> >   net/qede/base: Add support to set max values of soft resoruces
> >   net/qede/base: add return code check
> >   net/qede/base: zero out MFW mailbox data
> >   net/qede/base: move code bits
> >   net/qede/base: add PF parameter
> >   net/qede/base: allow PMD to control vport-id and rss-eng-id
> >   net/qede/base: add udp ports in bulletin board message
> >   net/qede/base: prevent DMAE transactions during recovery
> >   net/qede/base: add multi-Txq support on same queue-zone for VFs
> >   net/qede/base: fix race cond between MFW attentions and PF stop
> >   net/qede/base: semantic changes
> 
> <...>

^ permalink raw reply	[flat|nested] 329+ messages in thread

* Re: [PATCH v2 00/61] net/qede/base: qede PMD enhancements
  2017-03-18  7:05   ` [PATCH v2 " Rasesh Mody
@ 2017-03-20 16:59     ` Ferruh Yigit
  2017-03-24  7:27       ` [PATCH v3 " Rasesh Mody
                         ` (62 more replies)
  0 siblings, 63 replies; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-20 16:59 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 3/18/2017 7:05 AM, Rasesh Mody wrote:
> Hi,
> 
> This patch set adds support for new firmware 8.18.9.0, new features and
> bug fixes.
> 
> Please apply to dpdk-net-next for 17.05 release. Note that this patch set
> depends on http://dpdk.org/dev/patchwork/patch/21896.
> 
> v1..v2
>  - address all the review comments received so far
> 
> Thanks!
> Rasesh
> 
> Harish Patil (3):
>   net/qede/base: add support for arfs mode
>   net/qede: add ntuple and flow director filter support
>   net/qede: add LRO/TSO offloads support
> 
> Rasesh Mody (58):
>   net/qede/base: return an initialized return value
>   net/qede/base: send FW version driver state to MFW
>   net/qede/base: mask Rx buffer attention bits
>   net/qede/base: print various indication on Tx-timeouts
>   net/qede/base: utilize FW 8.18.9.0
>   net/qede: upgrade the FW to 8.18.9.0
>   net/qede/base: decrease maximum HW func per device
>   net/qede/base: move mask constants defining NIC type
>   net/qede/base: remove attribute from update current config
>   net/qede/base: add nvram options
>   net/qede/base: add comment
>   net/qede/base: use default MTU from shared memory
>   net/qede/base: change queue/sb-id from 8 bit to 16 bit
>   net/qede/base: update MFW when default MTU is changed
>   net/qede/base: prevent device init failure
>   net/qede/base: read card personality via MFW commands
>   net/qede/base: allow probe to succeed with minor HW-issues
>   net/qede/base: remove unneeded step in HW init
>   net/qede/base: allow only trusted VFs to be promisc
>   net/qede/base: qm initialization revamp
>   net/qede/base: print firmware MFW and MBI versions
>   net/qede/base: check active VF queues before stopping
>   net/qede/base: set driver type before sending load request
>   net/qede/base: prevent driver laod with invalid resources
>   net/qede/base: add interfaces for MFW TLV request processing
>   net/qede/base: code refactoring of SP queues
>   net/qede/base: make L2 queues handle based
>   net/qede/base: add support for handling TLV request from MFW
>   net/qede/base: optimize cache-line access
>   net/qede/base: infrastructure changes for VF tunnelling
>   net/qede/base: revise tunnel APIs/structs
>   net/qede/base: add tunnelling support for VFs
>   net/qede/base: formatting changes
>   net/qede/base: prevent transmitter stuck condition
>   net/qede/base: add mask/shift defines for resource command
>   net/qede/base: add API for using MFW resource lock
>   net/qede/base: remove clock slowdown option
>   net/qede/base: add new image types
>   net/qede/base: use L2-handles for RSS configuration
>   net/qede/base: change valloc to vzalloc
>   net/qede/base: add support for previous driver unload
>   net/qede/base: add non-L2 dcbx tlv application support
>   net/qede/base: update bulletin board during VF init
>   net/qede/base: add coalescing support for VFs
>   net/qede/base: add macro got resource value message
>   net/qede/base: add mailbox for resource allocation
>   net/qede/base: add macro for unsupported command
>   net/qede/base: set max values for soft resoruces
>   net/qede/base: add return code check
>   net/qede/base: zero out MFW mailbox data
>   net/qede/base: move code bits
>   net/qede/base: add PF parameter
>   net/qede/base: allow PMD to control vport and RSS engine ids
>   net/qede/base: add udp ports in bulletin board message
>   net/qede/base: prevent DMAE transactions during recovery
>   net/qede/base: multi-Txq support on same queue-zone for VFs
>   net/qede/base: prevent race condition during unload
>   net/qede/base: semantic changes
> 

Hi Rasesh,

Getting following build errors, one with clang [1] and other with 32bit
[2], I have not investigated which patch cause the error, just
copy-pasting the build errors.

These looks like same build errors with previous version of the patchset.


[1]
.../drivers/net/qede/qede_rxtx.c:1202:21: error: variable 'pad' is
uninitialized when used here [-Werror,-Wuninitialized]
                rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
                                  ^~~
.../drivers/net/qede/qede_rxtx.c:997:14: note: initialize the variable
'pad' to silence this warning
        uint16_t pad;
                    ^
                     = 0
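
A minimal sketch of the fix clang itself suggests, initializing 'pad' at
its declaration so the data_off assignment is defined on every path
(pattern only, not the exact qede_rxtx.c patch):

	uint16_t pad = 0;	/* defined even on paths that never set it */

	/* ... CQE parsing may or may not assign pad ... */

	rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;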


[2]
.../drivers/net/qede/qede_fdir.c: In function ‘qede_config_cmn_fdir_filter’:
.../drivers/net/qede/qede_fdir.c:126:44: error: format ‘%lx’ expects
argument of type ‘long unsigned int’, but argument 4 has type ‘uint64_t
{aka long long unsigned int}’ [-Werror=format=]
  snprintf(mz_name, sizeof(mz_name) - 1, "%lx", rte_get_timer_cycles());
                                            ^
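
The portable fix for [2] is to format the uint64_t with PRIx64 from
<inttypes.h> instead of "%lx", which on 32-bit targets expects a 32-bit
unsigned long. A stand-alone sketch (the constant below is a stand-in for
rte_get_timer_cycles(), which returns uint64_t):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		char mz_name[40];
		uint64_t cycles = 0x123456789abcdefULL;	/* stand-in value */

		/* "%" PRIx64 expands to the right conversion specifier on
		 * both 32-bit and 64-bit targets, so -Werror=format passes.
		 */
		snprintf(mz_name, sizeof(mz_name) - 1, "%" PRIx64, cycles);
		puts(mz_name);
		return 0;
	}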

^ permalink raw reply	[flat|nested] 329+ messages in thread

* [PATCH v3 00/61] net/qede/base: qede PMD enhancements
  2017-03-20 16:59     ` Ferruh Yigit
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24 11:08         ` Ferruh Yigit
  2017-03-24  7:27       ` [PATCH v3 01/61] net/qede/base: return an initialized return value Rasesh Mody
                         ` (61 subsequent siblings)
  62 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Hi Ferruh,

This patch set adds support for new firmware 8.18.9.0, new features and
bug fixes.

Please apply to dpdk-net-next for 17.05 release.

v1..v3:
 - Addressed all the review comments received so far, including fixes for
   the clang and 32-bit compilation errors.

Thanks!
Rasesh

Harish Patil (3):
  net/qede/base: add support for arfs mode
  net/qede: add ntuple and flow director filter support
  net/qede: add LRO/TSO offloads support

Rasesh Mody (58):
  net/qede/base: return an initialized return value
  net/qede/base: send FW version driver state to MFW
  net/qede/base: mask Rx buffer attention bits
  net/qede/base: print various indications on Tx-timeouts
  net/qede/base: utilize FW 8.18.9.0
  net/qede: upgrade the FW to 8.18.9.0
  net/qede/base: decrease maximum HW func per device
  net/qede/base: move mask constants defining NIC type
  net/qede/base: remove attribute from update current config
  net/qede/base: add nvram options
  net/qede/base: add comment
  net/qede/base: use default MTU from shared memory
  net/qede/base: change queue/sb-id from 8 bit to 16 bit
  net/qede/base: update MFW when default MTU is changed
  net/qede/base: prevent device init failure
  net/qede/base: read card personality via MFW commands
  net/qede/base: allow probe to succeed with minor HW-issues
  net/qede/base: remove unneeded step in HW init
  net/qede/base: allow only trusted VFs to be promisc
  net/qede/base: qm initialization revamp
  net/qede/base: print firmware MFW and MBI versions
  net/qede/base: check active VF queues before stopping
  net/qede/base: set driver type before sending load request
  net/qede/base: prevent driver load with invalid resources
  net/qede/base: add interfaces for MFW TLV request processing
  net/qede/base: code refactoring of SP queues
  net/qede/base: make L2 queues handle based
  net/qede/base: add support for handling TLV request from MFW
  net/qede/base: optimize cache-line access
  net/qede/base: infrastructure changes for VF tunnelling
  net/qede/base: revise tunnel APIs/structs
  net/qede/base: add tunnelling support for VFs
  net/qede/base: formatting changes
  net/qede/base: prevent transmitter stuck condition
  net/qede/base: add mask/shift defines for resource command
  net/qede/base: add API for using MFW resource lock
  net/qede/base: remove clock slowdown option
  net/qede/base: add new image types
  net/qede/base: use L2-handles for RSS configuration
  net/qede/base: change valloc to vzalloc
  net/qede/base: add support for previous driver unload
  net/qede/base: add non-L2 dcbx tlv application support
  net/qede/base: update bulletin board during VF init
  net/qede/base: add coalescing support for VFs
  net/qede/base: add macro for resource value message
  net/qede/base: add mailbox for resource allocation
  net/qede/base: add macro for unsupported command
  net/qede/base: set max values for soft resources
  net/qede/base: add return code check
  net/qede/base: zero out MFW mailbox data
  net/qede/base: move code bits
  net/qede/base: add PF parameter
  net/qede/base: allow PMD to control vport and RSS engine ids
  net/qede/base: add UDP ports in bulletin board message
  net/qede/base: prevent DMAE transactions during recovery
  net/qede/base: multi-Txq support on same queue-zone for VFs
  net/qede/base: prevent race condition during unload
  net/qede/base: semantic changes

 doc/guides/nics/features/qede.ini             |    4 +
 doc/guides/nics/features/qede_vf.ini          |    2 +
 doc/guides/nics/qede.rst                      |   11 +-
 drivers/net/qede/Makefile                     |    1 +
 drivers/net/qede/base/bcm_osal.h              |   13 +-
 drivers/net/qede/base/common_hsi.h            |  191 ++-
 drivers/net/qede/base/ecore.h                 |  169 +-
 drivers/net/qede/base/ecore_chain.h           |  143 +-
 drivers/net/qede/base/ecore_cxt.c             |  297 +++-
 drivers/net/qede/base/ecore_cxt.h             |   64 +-
 drivers/net/qede/base/ecore_cxt_api.h         |   13 -
 drivers/net/qede/base/ecore_dcbx.c            |   42 +-
 drivers/net/qede/base/ecore_dcbx.h            |    4 +-
 drivers/net/qede/base/ecore_dcbx_api.h        |    4 +-
 drivers/net/qede/base/ecore_dev.c             | 2137 +++++++++++++++----------
 drivers/net/qede/base/ecore_dev_api.h         |  122 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |  816 +++++-----
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++-
 drivers/net/qede/base/ecore_hsi_eth.h         | 2069 ++++++++++++------------
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_hw.c              |   50 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1409 ++++++++++------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  172 +-
 drivers/net/qede/base/ecore_int.c             |   51 +-
 drivers/net/qede/base/ecore_int.h             |   10 -
 drivers/net/qede/base/ecore_int_api.h         |   21 +
 drivers/net/qede/base/ecore_iov_api.h         |   45 +-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_l2.c              |  853 +++++++---
 drivers/net/qede/base/ecore_l2.h              |  149 +-
 drivers/net/qede/base/ecore_l2_api.h          |  134 +-
 drivers/net/qede/base/ecore_mcp.c             | 1018 ++++++++++--
 drivers/net/qede/base/ecore_mcp.h             |  181 ++-
 drivers/net/qede/base/ecore_mcp_api.h         |  316 +++-
 drivers/net/qede/base/ecore_mng_tlv.c         | 1535 ++++++++++++++++++
 drivers/net/qede/base/ecore_proto_if.h        |   16 +
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++---
 drivers/net/qede/base/ecore_sp_api.h          |   19 +
 drivers/net/qede/base/ecore_sp_commands.c     |  372 +++--
 drivers/net/qede/base/ecore_sp_commands.h     |   23 +-
 drivers/net/qede/base/ecore_spq.c             |   86 +-
 drivers/net/qede/base/ecore_spq.h             |   36 +-
 drivers/net/qede/base/ecore_sriov.c           |  953 ++++++++---
 drivers/net/qede/base/ecore_sriov.h           |   23 +-
 drivers/net/qede/base/ecore_vf.c              |  348 +++-
 drivers/net/qede/base/ecore_vf.h              |   85 +-
 drivers/net/qede/base/ecore_vf_api.h          |   11 +
 drivers/net/qede/base/ecore_vfpf_if.h         |   55 +-
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/mcp_public.h            |  271 ++--
 drivers/net/qede/base/nvm_cfg.h               |  475 +++++-
 drivers/net/qede/base/reg_addr.h              |   59 +
 drivers/net/qede/qede_eth_if.c                |   56 +-
 drivers/net/qede/qede_eth_if.h                |   25 +-
 drivers/net/qede/qede_ethdev.c                |  115 +-
 drivers/net/qede/qede_ethdev.h                |   42 +-
 drivers/net/qede/qede_fdir.c                  |  487 ++++++
 drivers/net/qede/qede_if.h                    |   58 +-
 drivers/net/qede/qede_main.c                  |  126 +-
 drivers/net/qede/qede_rxtx.c                  |  775 ++++++---
 drivers/net/qede/qede_rxtx.h                  |   32 +
 63 files changed, 12370 insertions(+), 5186 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
 create mode 100644 drivers/net/qede/qede_fdir.c

-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 329+ messages in thread

* [PATCH v3 01/61] net/qede/base: return an initialized return value
  2017-03-20 16:59     ` Ferruh Yigit
  2017-03-24  7:27       ` [PATCH v3 " Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 02/61] net/qede/base: send FW version driver state to MFW Rasesh Mody
                         ` (60 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure ecore_iov_mark_vf_flr() always returns an initialized return
value.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6912cf8..d1c809c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3164,7 +3164,7 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
-	bool found;
+	bool found = false;
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 02/61] net/qede/base: send FW version driver state to MFW
  2017-03-20 16:59     ` Ferruh Yigit
  2017-03-24  7:27       ` [PATCH v3 " Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 01/61] net/qede/base: return an initialized return value Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 03/61] net/qede/base: mask Rx buffer attention bits Rasesh Mody
                         ` (59 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to send FW version and driver state to Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   31 ++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.c     |    7 +++++--
 drivers/net/qede/base/ecore_mcp_api.h |    3 ++-
 drivers/net/qede/qede_if.h            |    3 +++
 drivers/net/qede/qede_main.c          |   20 ++++++++++++++++++++
 5 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index da9cdc9..2d1e031 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1609,8 +1609,9 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc, mfw_rc;
-	u32 load_code, param;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	u32 load_code, param, drv_mb_param;
+	struct ecore_hwfn *p_hwfn;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1743,7 +1744,26 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		p_hwfn->hw_init_done = true;
 	}
 
-	return ECORE_SUCCESS;
+	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		drv_mb_param = (FW_MAJOR_VERSION << 24) |
+			       (FW_MINOR_VERSION << 16) |
+			       (FW_REVISION_VERSION << 8) |
+			       (FW_ENGINEERING_VERSION);
+		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+				   drv_mb_param, &load_code, &param);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_hwfn, "Failed to send firmware version\n");
+			return rc;
+		}
+
+		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
+						      p_hwfn->p_main_ptt,
+						ECORE_OV_DRIVER_STATE_DISABLED);
+	}
+
+	return rc;
 }
 
 #define ECORE_HW_STOP_RETRY_LIMIT	(10)
@@ -3130,8 +3150,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 void ecore_hw_remove(struct ecore_dev *p_dev)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	int i;
 
+	if (IS_PF(p_dev))
+		ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt,
+					ECORE_OV_DRIVER_STATE_NOT_LOADED);
+
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cb3e0bd..e236f39 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1723,6 +1723,9 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 	case ECORE_OV_CLIENT_USER:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
 		break;
+	case ECORE_OV_CLIENT_VENDOR_SPEC:
+		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
 		return ECORE_INVAL;
@@ -1761,9 +1764,9 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE,
-			   drv_state, &resp, &param);
+			   drv_mb_param, &resp, &param);
 	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+		DP_ERR(p_hwfn, "Failed to send driver state\n");
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 4e954bd..614cf67 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -181,7 +181,8 @@ enum ecore_ov_config_method {
 
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
-	ECORE_OV_CLIENT_USER
+	ECORE_OV_CLIENT_USER,
+	ECORE_OV_CLIENT_VENDOR_SPEC
 };
 
 enum ecore_ov_driver_state {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4289d0b..4b23bb9 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -150,8 +150,11 @@ struct qed_common_ops {
 			    uint16_t sb_id, enum qed_sb_type type);
 
 	bool (*can_link_change)(struct ecore_dev *edev);
+
 	void (*update_msglvl)(struct ecore_dev *edev,
 			      uint32_t dp_module, uint8_t dp_level);
+
+	int (*send_drv_state)(struct ecore_dev *edev, bool active);
 };
 
 #endif /* _QEDE_IF_H */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a4d68a..f0033a1 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -668,6 +668,25 @@ static void qed_remove(struct ecore_dev *edev)
 	ecore_hw_remove(edev);
 }
 
+static int qed_send_drv_state(struct ecore_dev *edev, bool active)
+{
+	struct ecore_hwfn *hwfn = ECORE_LEADING_HWFN(edev);
+	struct ecore_ptt *ptt;
+	int status = 0;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt)
+		return -EAGAIN;
+
+	status = ecore_mcp_ov_update_driver_state(hwfn, ptt, active ?
+						  ECORE_OV_DRIVER_STATE_ACTIVE :
+						ECORE_OV_DRIVER_STATE_DISABLED);
+
+	ecore_ptt_release(hwfn, ptt);
+
+	return status;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
@@ -681,4 +700,5 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(drain, &qed_drain),
 	INIT_STRUCT_FIELD(slowpath_stop, &qed_slowpath_stop),
 	INIT_STRUCT_FIELD(remove, &qed_remove),
+	INIT_STRUCT_FIELD(send_drv_state, &qed_send_drv_state),
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 03/61] net/qede/base: mask Rx buffer attention bits
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (2 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 02/61] net/qede/base: send FW version driver state to MFW Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 04/61] net/qede/base: print various indications on Tx-timeouts Rasesh Mody
                         ` (58 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    6 ++++++
 drivers/net/qede/base/reg_addr.h  |    3 +++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2d1e031..eef24cd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1051,6 +1051,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
+	/* @@@TMP:
+	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
+	 */
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
+
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3c369aa..21cbdbd 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1141,3 +1141,6 @@
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
 #define BAR0_MAP_REG_XSDM_RAM 0x1e00000UL
+
+/* 8.18.7.0 FW */
+#define BRB_REG_INT_MASK_10 0x3401b8UL
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 04/61] net/qede/base: print various indications on Tx-timeouts
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (3 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 03/61] net/qede/base: mask Rx buffer attention bits Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 05/61] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
                         ` (57 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Print various indications on Tx-timeouts.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_int.c     |   27 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_int_api.h |   21 +++++++++++++++++++++
 drivers/net/qede/base/reg_addr.h      |    3 +++
 drivers/net/qede/qede_main.c          |   23 +++++++++++++++++++++++
 4 files changed, 74 insertions(+)

diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index b6b8e2d..e5a4359 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2255,3 +2255,30 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
 
 	return rc;
 }
+
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info)
+{
+	u16 sbid = p_sb->igu_sb_id;
+	int i;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_PRODUCER_MEMORY + sbid * 4);
+	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_CONSUMER_MEM + sbid * 4);
+
+	for (i = 0; i < PIS_PER_SB; i++)
+		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					      CAU_REG_PI_MEMORY +
+					      sbid * 4 * PIS_PER_SB +  i * 4);
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index a0d6a43..fdfcba8 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -41,6 +41,12 @@ struct ecore_sb_info {
 	struct ecore_dev *p_dev;
 };
 
+struct ecore_sb_info_dbg {
+	u32 igu_prod;
+	u32 igu_cons;
+	u16 pi[PIS_PER_SB];
+};
+
 struct ecore_sb_cnt_info {
 	int sb_cnt;
 	int sb_iov_cnt;
@@ -303,4 +309,19 @@ void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
  */
 void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable);
 
+/**
+ * @brief Read debug information regarding a given SB.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_sb - point to Status block for which we want to get info.
+ * @param p_info - pointer to struct to fill with information regarding SB.
+ *
+ * @return ECORE_SUCCESS if pointer is filled; failure otherwise.
+ */
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info);
+
 #endif
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 21cbdbd..3cc7fd4 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1144,3 +1144,6 @@
 
 /* 8.18.7.0 FW */
 #define BRB_REG_INT_MASK_10 0x3401b8UL
+
+#define IGU_REG_PRODUCER_MEMORY 0x182000UL
+#define IGU_REG_CONSUMER_MEM 0x183000UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index f0033a1..a604a5b 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -687,6 +687,29 @@ static int qed_send_drv_state(struct ecore_dev *edev, bool active)
 	return status;
 }
 
+static int qed_get_sb_info(struct ecore_dev *edev, struct ecore_sb_info *sb,
+			   u16 qid, struct ecore_sb_info_dbg *sb_dbg)
+{
+	struct ecore_hwfn *hwfn = &edev->hwfns[qid % edev->num_hwfns];
+	struct ecore_ptt *ptt;
+	int rc;
+
+	if (IS_VF(edev))
+		return -EINVAL;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "Can't acquire PTT\n");
+		return -EAGAIN;
+	}
+
+	memset(sb_dbg, 0, sizeof(*sb_dbg));
+	rc = ecore_int_get_sb_dbg(hwfn, ptt, sb, sb_dbg);
+
+	ecore_ptt_release(hwfn, ptt);
+	return rc;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 05/61] net/qede/base: utilize FW 8.18.9.0
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (4 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 04/61] net/qede/base: print various indications on Tx-timeouts Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 06/61] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
                         ` (56 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This change is in preparation for working with the new FW 8.18.9.0.
Rename the defines to use an E4_ prefix and the structs to use an e4_
prefix. This renaming makes room for supporting future chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/common_hsi.h       |   15 +-
 drivers/net/qede/base/ecore_hsi_common.h |  770 +++++------
 drivers/net/qede/base/ecore_hsi_eth.h    | 2052 +++++++++++++++---------------
 drivers/net/qede/base/ecore_iov_api.h    |    4 +-
 drivers/net/qede/base/ecore_spq.c        |   20 +-
 drivers/net/qede/base/ecore_sriov.c      |    2 +-
 drivers/net/qede/base/ecore_sriov.h      |    4 +-
 7 files changed, 1447 insertions(+), 1420 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 2f84148..59e751f 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -107,20 +107,20 @@
 #define MAX_NUM_PFS	(MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
 
-#define MAX_NUM_VFS_K2	(192)
 #define MAX_NUM_VFS_BB	(120)
-#define MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2	(192)
+#define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 /* in both BB and K2, the VF number starts from 16. so for arrays containing all
  * possible PFs and VFs - we need a constant for this size
  */
 #define MAX_FUNCTION_NUMBER_BB	(MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2	(MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 #define MAX_NUM_VPORTS_K2	(208)
 #define MAX_NUM_VPORTS_BB	(160)
@@ -149,9 +149,10 @@
 #define MAX_PHYS_VOQS		(NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES	(8)
-#define NUM_OF_LCIDS		(320)
-#define NUM_OF_LTIDS		(320)
+#define E4_NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_TASK_TYPES		(8)
+#define NUM_OF_LCIDS			(320)
+#define NUM_OF_LTIDS			(320)
 
 /* Clock values */
 #define MASTER_CLK_FREQ_E4		(375e6)
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index d978bb0..f934e68 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -75,306 +75,306 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct xstorm_core_conn_ag_ctx {
+struct e4_xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
 /* exist_in_qm1 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
 /* exist_in_qm2 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
 /* exist_in_qm3 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
 /* bit4 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
 /* bit6 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
 /* bit7 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
 /* bit9 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
 /* bit10 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
 /* bit11 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
 /* bit12 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
 /* bit13 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
 /* bit14 */
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
 /* bit15 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
 /* timer1cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
 /* timer2cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
 /* cf10 */
-#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
 /* cf11 */
-#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
 /* cf13 */
-#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
 /* cf14 */
-#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
 /* cf15 */
-#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
 /* cf_array_cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
 /* cf18 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
 /* cf19 */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
 /* cf21 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
 /* cf22 */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
 /* cf0en */
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
 /* cf1en */
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
 /* cf3en */
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
 /* cf4en */
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
 /* cf5en */
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
 /* cf6en */
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
 /* cf7en */
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
 /* cf8en */
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
 /* cf9en */
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
 /* cf11en */
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
 /* cf12en */
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
 /* cf13en */
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
 /* cf14en */
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
 /* cf15en */
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
 /* cf16en */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
 /* cf19en */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
 /* cf20en */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
 /* cf21en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
 /* cf22en */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
 /* cf23en */
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
 /* rule0en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
 /* rule1en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
 /* rule3en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
 /* rule4en */
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
 /* rule5en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
 /* rule6en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
 /* rule7en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
 /* rule8en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
 /* rule9en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
 /* rule11en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
 /* rule12en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
 /* rule13en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
 /* rule14en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
 /* rule15en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
 /* rule16en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
 /* rule17en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
 /* rule19en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
 /* rule20en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
 /* rule21en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
 /* rule22en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
 /* rule23en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
 /* rule24en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
 /* rule25en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
 /* bit17 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
 /* bit18 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
 /* bit19 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
 /* bit20 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
 /* bit21 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
 /* cf23 */
-#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -410,7 +410,7 @@ struct xstorm_core_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -428,89 +428,89 @@ struct xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct tstorm_core_conn_ag_ctx {
+struct e4_tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -532,63 +532,63 @@ struct tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_core_conn_ag_ctx {
+struct e4_ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -628,11 +628,11 @@ struct core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -1934,6 +1934,92 @@ enum dmae_cmd_src_enum {
 };
 
 
+struct e4_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
 /*
  * IGU cleanup command
  */
@@ -2017,44 +2103,6 @@ struct igu_msix_vector {
 };
 
 
-struct mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
 /*
  * per encapsulation type enabling flags
  */
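
For readers tracking the rename: the new E4_-prefixed MASK/SHIFT pairs are consumed
through the same generic field accessors as the old names; only the prefix changes at
the call sites. Below is a minimal sketch, assuming the GET_FIELD()/SET_FIELD()
helpers already defined in the base driver's ecore.h (the function name is
illustrative and not part of this patch):

	/* Illustrative only: manipulate two fields of the renamed
	 * e4_tstorm_core_conn_ag_ctx using the driver's generic
	 * MASK/SHIFT accessors, assumed from ecore.h:
	 *   GET_FIELD(value, name)
	 *   SET_FIELD(value, name, flag)
	 */
	static void e4_ag_ctx_sketch(struct e4_tstorm_core_conn_ag_ctx *ctx)
	{
		u8 cf0;

		/* Enable cf0: CF0EN is a 1-bit field at shift 4 of flags3. */
		SET_FIELD(ctx->flags3, E4_TSTORM_CORE_CONN_AG_CTX_CF0EN, 1);

		/* Read timer0cf: CF0 is a 2-bit field at shift 6 of flags0. */
		cf0 = GET_FIELD(ctx->flags0, E4_TSTORM_CORE_CONN_AG_CTX_CF0);
		(void)cf0;
	}
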
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index e8373d7..9d2a118 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,315 +34,315 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct xstorm_eth_conn_ag_ctx {
+struct e4_xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
 /* bit4 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
 /* bit6 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
 /* bit7 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
 /* bit9 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
 /* bit10 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
 /* bit11 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
 /* bit12 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
 /* bit13 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
 /* bit14 */
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
 /* bit15 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
 /* cf4 */
-#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
 /* cf5 */
-#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
 /* cf6 */
-#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
 /* cf7 */
-#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
 /* cf8 */
-#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
 /* cf9 */
-#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
 /* cf10 */
-#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
 /* cf11 */
-#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
 /* cf13 */
-#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
 /* cf14 */
-#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
 /* cf15 */
-#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
 /* cf18 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
 /* cf19 */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
 /* cf21 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
 /* cf22 */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
 /* cf0en */
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
 /* cf1en */
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
 /* cf3en */
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
 /* cf4en */
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
 /* cf5en */
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
 /* cf6en */
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
 /* cf7en */
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
 /* cf8en */
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
 /* cf9en */
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
 /* cf11en */
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
 /* cf12en */
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
 /* cf13en */
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
 /* cf14en */
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
 /* cf15en */
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
 /* cf16en */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
 /* cf19en */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
 /* cf20en */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
 /* cf21en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
 /* cf22en */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
 /* cf23en */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
 /* rule0en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
 /* rule1en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
 /* rule3en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
 /* rule4en */
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
 /* rule5en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
 /* rule6en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
 /* rule7en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
 /* rule8en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
 /* rule9en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
 /* bit17 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
 /* bit18 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
 /* bit19 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
 /* bit20 */
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
 /* bit21 */
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
 /* cf23 */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
+	__le16 e5_reserved1 /* physical_q1 */;
 	__le16 edpm_num_bds /* physical_q2 */;
 	__le16 tx_bd_cons /* word3 */;
 	__le16 tx_bd_prod /* word4 */;
@@ -375,7 +375,7 @@ struct xstorm_eth_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -400,47 +400,47 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct ystorm_eth_conn_ag_ctx {
+struct e4_ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -454,89 +454,89 @@ struct ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct tstorm_eth_conn_ag_ctx {
+struct e4_tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -558,88 +558,88 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_eth_conn_ag_ctx {
+struct e4_ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
 /* timer0cf */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
 /* timer1cf */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
 /* timer2cf */
-#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
 /* cf4 */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
 /* cf5 */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
 /* cf6 */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
 /* cf0en */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
 /* cf1en */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
 /* cf2en */
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
 /* cf3en */
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
 /* cf4en */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
 /* cf5en */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
 /* cf6en */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
 /* rule0en */
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
 /* rule1en */
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
 /* rule2en */
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
 /* rule3en */
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
 /* rule4en */
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
 /* rule5en */
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
 /* rule6en */
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
 /* rule7en */
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
 /* rule8en */
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -678,15 +678,15 @@ struct eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1480,6 +1480,668 @@ struct vport_update_ramrod_data {
 
 
 
+struct E4XstormEthConnAgCtxDqExtLdPart {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
+/* bit6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
+/* bit7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
+	u8 flags1;
+/* bit8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
+/* bit9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
+/* bit12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
+/* bit13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
+/* bit14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
+/* timer1cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
+/* timer2cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
+	u8 flags3;
+/* cf4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
+/* cf5 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
+/* cf6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
+/* cf7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
+	u8 flags4;
+/* cf8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
+/* cf9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
+/* cf10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
+/* cf11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
+	u8 flags5;
+/* cf12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
+/* cf13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
+/* cf14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
+/* cf15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
+	u8 flags6;
+/* cf16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
+/* cf18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
+/* cf19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+/* cf20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
+/* cf21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
+/* cf22 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
+/* cf23 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+};
+
+
+struct e4_mstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_xstorm_eth_hw_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+	u8 flags1;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+/* bit12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
+/* bit13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
+/* bit14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+/* timer1cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+/* timer2cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+	u8 flags3;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+	u8 flags4;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+	u8 flags5;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+	u8 flags6;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+};
+
+
+
 /*
  * GFT CAM line struct
  */
@@ -1730,690 +2392,4 @@ enum gft_vlan_select {
 };
 
 
-struct mstorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-/* exist_in_qm1 */
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
-	u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-struct xstormEthConnAgCtxDqExtLdPart {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-	u8 byte3 /* byte3 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	u8 byte6 /* byte6 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-};
-
-
-
-struct xstorm_eth_hw_conn_ag_ctx {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-};
-
-
 #endif /* __ECORE_HSI_ETH__ */
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 24a43d3..9775360 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -701,7 +701,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ * @return E4_MAX_NUM_VFS in case no further active VFs, otherwise index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -709,7 +709,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS;						\
+	     _i < E4_MAX_NUM_VFS;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
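
For reference, the contract above (iteration stops once ecore_iov_get_next_active_vf() returns the E4_MAX_NUM_VFS sentinel) can be exercised with a minimal sketch; the helper below is hypothetical and assumes the ecore headers are in scope.

/* Hypothetical helper: count active VFs with ecore_for_each_vf().
 * Every index the macro yields is a valid relative VF id strictly
 * below E4_MAX_NUM_VFS; the sentinel itself is never visited.
 */
static u16 count_active_vfs(struct ecore_hwfn *p_hwfn)
{
	u16 i, count = 0;

	ecore_for_each_vf(p_hwfn, i)
		count++;

	return count;
}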
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 1f35d6c..9035d3b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -191,15 +191,17 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
-	SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-	SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-	/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-	 *           XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-	 */
-	SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-		  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
+		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
+		 */
+		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
+			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	}
 
 	/* CDU validation - FIXME currently disabled */
 
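
The E4_XSTORM_CORE_CONN_AG_CTX_* names used above follow the ecore mask/shift convention: every field NAME has companion NAME##_MASK and NAME##_SHIFT defines, and SET_FIELD composes them. A minimal sketch of that convention, with made-up EX_* names (the real SET_FIELD/GET_FIELD macros live in the ecore headers and may differ in detail):

/* Illustration of the mask/shift convention; EX_* names are invented. */
#define EX_FIELD_MASK	0x3
#define EX_FIELD_SHIFT	4

#define EX_SET_FIELD(value, name, flag) \
	((value) = ((value) & ~((name##_MASK) << (name##_SHIFT))) | \
		   (((flag) & (name##_MASK)) << (name##_SHIFT)))

#define EX_GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))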
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d1c809c..b051678 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3487,7 +3487,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS;
+	return E4_MAX_NUM_VFS;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 884a90c..e9ccc79 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -15,7 +15,7 @@
 #include "ecore_hsi_common.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -152,7 +152,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS];
+	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 	u16			base_vport_id;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 06/61] net/qede: upgrade the FW to 8.18.9.0
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (5 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 05/61] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 07/61] net/qede/base: decrease maximum HW func per device Rasesh Mody
                         ` (55 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch upgrades the qede PMD to the 8.18.9.0 FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 doc/guides/nics/qede.rst                      |    8 +-
 drivers/net/qede/base/bcm_osal.h              |    1 +
 drivers/net/qede/base/common_hsi.h            |  176 +++-
 drivers/net/qede/base/ecore_dcbx.c            |    4 +-
 drivers/net/qede/base/ecore_dev.c             |  204 ++--
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   46 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++--
 drivers/net/qede/base/ecore_hsi_eth.h         |   17 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1378 ++++++++++++++++---------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  161 ++-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++++-----
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/reg_addr.h              |   53 +
 drivers/net/qede/qede_main.c                  |    2 +-
 18 files changed, 1886 insertions(+), 1126 deletions(-)

diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 4694ec0..36b26b3 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -77,10 +77,10 @@ Supported QLogic Adapters
 Prerequisites
 -------------
 
-- Requires firmware version **8.14.x.** and management firmware
-  version **8.14.x or higher**. Firmware may be available
+- Requires firmware version **8.18.x** and management firmware
+  version **8.18.x or higher**. Firmware may be available
   inbox in certain newer Linux distros under the standard directory
-  ``E.g. /lib/firmware/qed/qed_init_values-8.14.6.0.bin``
+  ``E.g. /lib/firmware/qed/qed_init_values-8.18.9.0.bin``
 
 - If the required firmware files are not available then visit
   `QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
@@ -119,7 +119,7 @@ enabling debugging options may affect system performance.
 - ``CONFIG_RTE_LIBRTE_QEDE_FW`` (default **""**)
 
   Gives absolute path of firmware file.
-  ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.14.6.0.bin"``
+  ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.18.9.0.bin"``
   Empty string indicates driver will pick up the firmware file
   from the default location.
 
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 88246b7..0d239c9 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -398,6 +398,7 @@ u32 qede_osal_log2(u32);
 #define OSAL_STRCPY(dst, string) strcpy(dst, string)
 #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
 #define OSAL_STRCMP(str1, str2) strcmp(str1, str2)
+#define OSAL_STRTOUL(str, base, res) 0
 
 #define OSAL_INLINE inline
 #define OSAL_REG_ADDR(_p_hwfn, _offset) \
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 59e751f..cbcde22 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -78,8 +78,16 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-#define MAX_NUM_LL2_RX_QUEUES					32
-#define MAX_NUM_LL2_TX_STATS_COUNTERS			32
+/*
+ * Usually LL2 queues are opened in TX-RX pairs.
+ * There is a hard restriction on the number of RX queues (limited by Tstorm
+ * RAM) and on the number of TX statistics counters (limited by Pstorm RAM).
+ * The number of TX queues is almost unlimited.
+ * The two constants differ so as to allow asymmetric LL2 connections.
+ */
+
+#define MAX_NUM_LL2_RX_QUEUES					48
+#define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
 /****************************************************************************/
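
Because the RX and TX limits above are independent, validating a proposed LL2 pairing only needs the two constants; an illustrative check, not part of the patch:

/* Illustrative: asymmetric pairings such as 48 RX / 16 TX-stats are
 * legal as long as each side respects its own limit.
 */
#define LL2_CONN_LIMITS_OK(num_rx, num_tx_stats)		\
	((num_rx) <= MAX_NUM_LL2_RX_QUEUES &&			\
	 (num_tx_stats) <= MAX_NUM_LL2_TX_STATS_COUNTERS)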
@@ -89,8 +97,8 @@
 
 
 #define FW_MAJOR_VERSION		8
-#define FW_MINOR_VERSION		14
-#define FW_REVISION_VERSION		6
+#define FW_MINOR_VERSION		18
+#define FW_REVISION_VERSION		9
 #define FW_ENGINEERING_VERSION	0
 
 /***********************/
@@ -110,6 +118,7 @@
 #define MAX_NUM_VFS_BB	(120)
 #define MAX_NUM_VFS_K2	(192)
 #define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define COMMON_MAX_NUM_VFS (240)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
@@ -177,6 +186,13 @@
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
 
+#define	CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT				(0)
+#define	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT	(1)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE				(2)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_REGION				(3)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
+
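
The new CDU_CONTEXT_VALIDATION_CFG_* values are bit positions within a configuration byte; a hedged sketch of composing one (which bits a real caller sets is an assumption based on the names alone):

/* Hypothetical: enable context validation keyed on type, region and
 * CID, using the bit positions defined above.
 */
u8 cdu_validation_cfg =
	(1 << CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT) |
	(1 << CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) |
	(1 << CDU_CONTEXT_VALIDATION_CFG_USE_REGION) |
	(1 << CDU_CONTEXT_VALIDATION_CFG_USE_CID);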
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -472,7 +488,6 @@
 #define PXP_BAR_DQ                                          1
 
 /* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS		12
 #define PXP_PER_PF_ENTRY_SIZE		8
 #define PXP_NUM_GLOBAL_WINDOWS		243
 #define PXP_GLOBAL_ENTRY_SIZE		4
@@ -497,6 +512,8 @@
 #define PXP_PF_ME_OPAQUE_ADDR		0x1f8
 #define PXP_PF_ME_CONCRETE_ADDR		0x1fc
 
+#define PXP_NUM_PF_WINDOWS		12
+
 #define PXP_EXTERNAL_BAR_PF_WINDOW_START	0x1000
 #define PXP_EXTERNAL_BAR_PF_WINDOW_NUM		PXP_NUM_PF_WINDOWS
 #define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE	0x1000
@@ -519,8 +536,6 @@
 	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
 
 /* PF BAR */
-/*#define PXP_BAR0_START_GRC 0x1000 */
-/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */
 #define PXP_BAR0_START_GRC                      0x0000
 #define PXP_BAR0_GRC_LENGTH                     0x1C00000
 #define PXP_BAR0_END_GRC                        \
@@ -589,7 +604,7 @@
 #define SDM_OP_GEN_TRIG_AGG_INT			2
 #define SDM_OP_GEN_TRIG_LOADER			4
 #define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
-#define SDM_OP_GEN_TRIG_RELEASE_THREAD	7
+#define SDM_OP_GEN_TRIG_INC_ORDER_CNT	9
 
 /***********************************************************/
 /* Completion types                                        */
@@ -612,6 +627,7 @@
 #define SDM_COMP_TYPE_RELEASE_THREAD	7
 /* Write to local RAM as a completion */
 #define SDM_COMP_TYPE_RAM		8
+#define SDM_COMP_TYPE_INC_ORDER_CNT	9 /* Applicable only for E4 */
 
 
 /******************/
@@ -881,7 +897,7 @@ enum db_dest {
  */
 enum db_dpm_type {
 	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
-	DPM_ROCE /* RoCE DPM- to NIG */,
+	DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */,
 /* L2 DPM inline- to PBF, with packet data on doorbell */
 	DPM_L2_INLINE,
 	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
@@ -968,42 +984,42 @@ struct db_pwm_addr {
 };
 
 /*
- * Parameters to RoCE firmware, passed in EDPM doorbell
+ * Parameters to RDMA firmware, passed in EDPM doorbell
  */
-struct db_roce_dpm_params {
+struct db_rdma_dpm_params {
 	__le32 params;
 /* Size in QWORD-s of the DPM burst */
-#define DB_ROCE_DPM_PARAMS_SIZE_MASK            0x3F
-#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT           0
-/* Type of DPM transacation (DPM_ROCE) (use enum db_dpm_type) */
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK        0x3
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT       6
-/* opcode for ROCE operation */
-#define DB_ROCE_DPM_PARAMS_OPCODE_MASK          0xFF
-#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT         8
+#define DB_RDMA_DPM_PARAMS_SIZE_MASK            0x3F
+#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT           0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK        0x3
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT       6
+/* opcode for RDMA operation */
+#define DB_RDMA_DPM_PARAMS_OPCODE_MASK          0xFF
+#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT         8
 /* the size of the WQE payload in bytes */
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT       16
-#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK       0x1
-#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT      27
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT       16
+#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK       0x1
+#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT      27
 /* RoCE completion flag */
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
-#define DB_ROCE_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
-#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT          29
-#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK       0x3
-#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT      30
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_RDMA_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
+#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT          29
+#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK       0x3
+#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT      30
 };
 
 /*
- * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a
+ * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
  */
-struct db_roce_dpm_data {
+struct db_rdma_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 prod_val /* aggregated value to update */;
-/* parameters passed to RoCE firmware */
-	struct db_roce_dpm_params params;
+/* parameters passed to RDMA firmware */
+	struct db_rdma_dpm_params params;
 };
 
 /* Igu interrupt command */
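
To see how the renamed DB_RDMA_* fields compose into a doorbell, here is a hypothetical packing routine; the size value is a placeholder and the OSAL_CPU_TO_LE16/32 endianness helpers are assumptions:

/* Hypothetical EDPM doorbell packing using the renamed fields. */
static void db_rdma_dpm_fill(struct db_rdma_dpm_data *p_db,
			     u16 icid, u16 prod_val)
{
	u32 params = 0;

	SET_FIELD(params, DB_RDMA_DPM_PARAMS_SIZE, 2 /* QWORDs, example */);
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_DPM_TYPE, DPM_RDMA);

	p_db->icid = OSAL_CPU_TO_LE16(icid);	/* helper name assumed */
	p_db->prod_val = OSAL_CPU_TO_LE16(prod_val);
	p_db->params.params = OSAL_CPU_TO_LE32(params);
}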
@@ -1136,6 +1152,68 @@ struct parsing_and_err_flags {
 
 
 /*
+ * Parsing error flags bitmap.
+ */
+struct parsing_err_flags {
+	__le16 flags;
+/* MAC error indication */
+#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1
+#define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT                         0
+/* truncation error indication */
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK                        0x1
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_SHIFT                       1
+/* packet too small indication */
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK                      0x1
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT                     2
+/* Header Missing Tag */
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT               3
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK             0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT            4
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK    0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT   5
+/* Set this error if: 1. total-len is smaller than hdr-len; 2. total-ip-len
+ * indicates a number bigger than the real packet length; 3. tunneling: the
+ * total-ip-length of the outer header points to an offset smaller than the
+ * one pointed to by the total-ip-len of the inner header.
+ */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK           0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT          6
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK                  0x1
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT                 7
+/* from frame cracker output. for either TCP or UDP */
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK          0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT         8
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK               0x1
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT              9
+/* Checksum was calculated and its value isn't 0xffff, or the L4 checksum
+ * wasn't calculated for some reason (e.g. the UDP/IPv4 checksum is 0).
+ */
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK               0x1
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT              10
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK        0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT       11
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK  0x1
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12
+/* set if geneve option size was over 32 byte */
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK            0x1
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT           13
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK           0x1
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT          14
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK              0x1
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT             15
+};
+
+
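
A receive path would typically test these flags with GET_FIELD; a minimal sketch, under the assumption that the flags word is first byte-swapped to CPU order (the OSAL_LE16_TO_CPU helper name is likewise an assumption):

/* Hypothetical: flag IPv4/inner-L4 checksum trouble on an Rx CQE. */
static bool rx_has_csum_error(__le16 le_flags)
{
	u16 flags = OSAL_LE16_TO_CPU(le_flags);	/* helper name assumed */

	return GET_FIELD(flags, PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR) ||
	       GET_FIELD(flags, PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR);
}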
+/*
  * Pb context
  */
 struct pb_context {
@@ -1492,49 +1570,57 @@ struct tdif_task_context {
 struct timers_context {
 	__le32 logical_client_0;
 /* Expiration time of logical client 0 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED0_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED0_SHIFT            27
 /* Valid bit of logical client 0 */
 #define TIMERS_CONTEXT_VALIDLC0_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
 /* Active bit of logical client 0 */
 #define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED0_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED0_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
 	__le32 logical_client_1;
 /* Expiration time of logical client 1 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED2_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED2_SHIFT            27
 /* Valid bit of logical client 1 */
 #define TIMERS_CONTEXT_VALIDLC1_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
 /* Active bit of logical client 1 */
 #define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED3_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED3_SHIFT            30
 	__le32 logical_client_2;
 /* Expiration time of logical client 2 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED4_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED4_SHIFT            27
 /* Valid bit of logical client 2 */
 #define TIMERS_CONTEXT_VALIDLC2_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
 /* Active bit of logical client 2 */
 #define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED2_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED2_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED5_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED5_SHIFT            30
 	__le32 host_expiration_fields;
 /* Expiration time on host (closest one) */
-#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0xFFFFFFF
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0x7FFFFFF
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_RESERVED6_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED6_SHIFT            27
 /* Valid bit of host expiration */
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
-#define TIMERS_CONTEXT_RESERVED3_MASK             0x7
-#define TIMERS_CONTEXT_RESERVED3_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED7_MASK             0x7
+#define TIMERS_CONTEXT_RESERVED7_SHIFT            29
 };
 
 
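
Each expiration field above is now 27 bits wide (mask 0x7FFFFFF), with the freed bit explicitly reserved. Writing one under that layout might look like the following sketch; the endianness helper name is an assumption:

/* Hypothetical: arm logical client 0 with a 27-bit expiration time. */
static void timers_ctx_arm_lc0(struct timers_context *p_ctx, u32 exp_time)
{
	u32 lc0 = 0;

	SET_FIELD(lc0, TIMERS_CONTEXT_EXPIRATIONTIMELC0,
		  exp_time & TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK);
	SET_FIELD(lc0, TIMERS_CONTEXT_VALIDLC0, 1);
	p_ctx->logical_client_0 = OSAL_CPU_TO_LE32(lc0); /* helper assumed */
}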
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 7380fd8..102774d 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -126,7 +126,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 	else if (enable)
 		p_data->arr[type].update = UPDATE_DCB;
 	else
-		p_data->arr[type].update = DONT_UPDATE_DCB_DHCP;
+		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
 	if (p_hwfn->hw_info.personality == personality) {
@@ -938,7 +938,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 	p_dest->pf_id = p_src->pf_id;
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
-	p_dest->update_eth_dcb_data_flag = update_flag;
+	p_dest->update_eth_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index eef24cd..f82f5e6 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -814,7 +814,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 	int hw_mode = 0;
 
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
-		hw_mode |= 1 << MODE_BB_B0;
+		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
 	} else {
@@ -886,29 +886,36 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u32 pl_hv = 1;
 	int i;
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		pl_hv |= 0x600;
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev))
+			pl_hv |= 0x600;
+	}
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
+	if (CHIP_REV_IS_EMUL(p_dev) &&
+	    (ECORE_IS_AH(p_dev)))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
+			 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB_B0, 4);
+	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-			 (p_hwfn->p_dev->num_ports_in_engines >> 1));
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev)) {
+			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+				 (p_dev->num_ports_in_engines >> 1));
 
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-			 p_hwfn->p_dev->num_ports_in_engines == 4 ? 0 : 3);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+				 p_dev->num_ports_in_engines == 4 ? 0 : 3);
+		}
 	}
 
 	/* Poll on RBC */
@@ -1051,12 +1058,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
-	/* @@@TMP:
-	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
-	 */
-	if (ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
-
 	return rc;
 }
 
@@ -1072,20 +1073,19 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n",
-		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) |
+		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) |
 		   (8 << PMEG_IF_BYTE_COUNT),
 		   (reg_type << 25) | (addr << 8) | port,
 		   (u32)((data >> 32) & 0xffffffff),
 		   (u32)(data & 0xffffffff));
 
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0,
-		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) &
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB,
+		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) &
 		  0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT));
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB,
 		 (reg_type << 25) | (addr << 8) | port);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
-		 data & 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff);
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB,
 		 (data >> 32) & 0xffffffff);
 }
 
@@ -1101,48 +1101,13 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 #define XLMAC_PAUSE_CTRL (0x60d)
 #define XLMAC_PFC_CTRL (0x60e)
 
-static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
-	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0 + (port << 2) * NWM_REG_MAC0_SIZE;
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_SHIFT) |
-		 (port << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_SHIFT)
-		 | (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS,
-		 (0xA << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_SHIFT) |
-		 (8 << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG, 0xa853);
-}
-
-static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt)
-{
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
 	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
 
-	if (ECORE_IS_AH(p_hwfn->p_dev)) {
-		ecore_emul_link_init_ah(p_hwfn, p_ptt);
-		return;
-	}
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -1171,8 +1136,53 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_link_init(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt, u8 port)
+static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	u8 port = p_hwfn->port_id;
+	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
+
+	DP_INFO(p_hwfn->p_dev, "Configuring Emulation Link %02x\n", port);
+
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+		 (port <<
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+		 (0xA <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		 (8 <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
+		 0xa853);
+}
+
+static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt)
+{
+	if (ECORE_IS_AH(p_hwfn->p_dev))
+		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
+	else /* BB */
+		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+}
+
+static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,  u8 port)
 {
 	int port_offset = port ? 0x800 : 0;
 	u32 xmac_rxctrl = 0;
@@ -1185,10 +1195,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
 		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Set */
 
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1);
 
 	/* Set the number of ports on the Warp Core to 10G */
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE, 3);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE_BB, 3);
 
 	/* Soft reset of XMAC */
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
@@ -1199,20 +1209,21 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 
 	/* FIXME: move to common end */
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE + port_offset, 0x20);
+		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE_BB + port_offset, 0x20);
 
 	/* Set Max packet size: initialize XMAC block register for port 0 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE + port_offset, 0x2710);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE_BB + port_offset, 0x2710);
 
 	/* CRC append for Tx packets: init XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO + port_offset, 0xC800);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO_BB + port_offset, 0xC800);
 
 	/* Enable TX and RX: initialize XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL + port_offset,
-		 XMAC_REG_CTRL_TX_EN | XMAC_REG_CTRL_RX_EN);
-	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset);
-	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE;
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset, xmac_rxctrl);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL_BB + port_offset,
+		 XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB);
+	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt,
+			       XMAC_REG_RX_CTRL_BB + port_offset);
+	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB;
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL_BB + port_offset, xmac_rxctrl);
 }
 #endif
 
@@ -1233,7 +1244,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
 		if (ECORE_IS_AH(p_hwfn->p_dev))
 			return ECORE_SUCCESS;
-		ecore_link_init(p_hwfn, p_ptt, p_hwfn->port_id);
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		if (p_hwfn->p_dev->num_hwfns > 1) {
 			/* Activate OPTE in CMT */
@@ -1667,7 +1679,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 * out that these registers get initialized during the call to
 		 * ecore_mcp_load_req request. So we need to reread them here
 		 * to get the proper shadow register value.
-		 * Note: This is a workaround for the missinginig MFW
+		 * Note: This is a workaround for the missing MFW
 		 * initialization. It may be removed once the implementation
 		 * is done.
 		 */
@@ -2033,22 +2045,22 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_88_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_8C_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_90_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_90_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_94_F0_BB, 0);
 	}
 
 	/* Clean Previous errors if such exist */
@@ -2643,7 +2655,12 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 	 * In case of CMT in BB, only the "even" functions are enabled, and thus
 	 * the number of functions for both hwfns is learnt from the same bits.
 	 */
-	reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
+	if (ECORE_IS_BB(p_dev) || ECORE_IS_AH(p_dev)) {
+		reg_function_hide = ecore_rd(p_hwfn, p_ptt,
+					     MISCS_REG_FUNCTION_HIDE_BB_K2);
+	} else { /* E5 */
+		reg_function_hide = 0;
+	}
 
 	if (reg_function_hide & 0x1) {
 		if (ECORE_IS_BB(p_dev)) {
@@ -2709,8 +2726,7 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 		port_mode = 1;
 	else
 #endif
-		port_mode = ecore_rd(p_hwfn, p_ptt,
-				     CNIG_REG_NW_PORT_MODE_BB_B0);
+	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
 
 	if (port_mode < 3) {
 		p_hwfn->p_dev->num_ports_in_engines = 1;
@@ -2725,8 +2741,8 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
+static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt)
 {
 	u32 port;
 	int i;
@@ -2755,7 +2771,8 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
+					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					(i * 4));
 			if (port & 1)
 				p_hwfn->p_dev->num_ports_in_engines++;
 		}
@@ -2767,7 +2784,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
 	else
-		ecore_hw_info_port_num_ah(p_hwfn, p_ptt);
+		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
 }
 
 static enum _ecore_status_t
@@ -3076,12 +3093,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7);
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
 	}
 #endif
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 070588d..2acd864 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,43 +10,43 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
 
 #endif
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index f934e68..3042ed5 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -836,7 +836,12 @@ struct core_rx_fast_path_cqe {
 	__le16 packet_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-	__le32 reserved[4];
+/* Bitmap: each bit represents a specific error. Error indications are
+ * provided by the cracker (frame parser); see the spec for a detailed
+ * description.
+ */
+	struct parsing_err_flags err_flags;
+	__le16 reserved0;
+	__le32 reserved1[3];
 };
 
 /*
@@ -1042,13 +1047,13 @@ struct core_tx_stop_ramrod_data {
 /*
  * Enum flag for what type of dcb data to update
  */
-enum dcb_dhcp_update_flag {
+enum dcb_dscp_update_mode {
 /* use when no change should be done to dcb data */
-	DONT_UPDATE_DCB_DHCP,
+	DONT_UPDATE_DCB_DSCP,
 	UPDATE_DCB /* use to update only l2 (vlan) priority */,
-	UPDATE_DSCP /* use to update only l3 dhcp */,
-	UPDATE_DCB_DSCP /* update vlan pri and dhcp */,
-	MAX_DCB_DHCP_UPDATE_FLAG
+	UPDATE_DSCP /* use to update only l3 dscp */,
+	UPDATE_DCB_DSCP /* update vlan pri and dscp */,
+	MAX_DCB_DSCP_UPDATE_FLAG
 };
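
A caller choosing between these modes could map its intent as in the illustrative helper below (not driver code; assumes the osal bool type):

/* Hypothetical: pick a dcb_dscp_update_mode from two intents. */
static enum dcb_dscp_update_mode
pick_dcb_update_mode(bool update_pri, bool update_dscp)
{
	if (update_pri && update_dscp)
		return UPDATE_DCB_DSCP;
	if (update_pri)
		return UPDATE_DCB;
	if (update_dscp)
		return UPDATE_DSCP;
	return DONT_UPDATE_DCB_DSCP;
}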
 
 
@@ -1232,6 +1237,10 @@ enum iwarp_ll2_tx_queues {
 	IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
 /* LL2 queue for unaligned packets sent aligned by the driver */
 	IWARP_LL2_ALIGNED_TX_QUEUE,
+/* LL2 queue for unaligned packets that are sent aligned and right-trimmed by
+ * the driver
+ */
+	IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE,
 	IWARP_LL2_ERROR /* Error indication */,
 	MAX_IWARP_LL2_TX_QUEUES
 };
@@ -1446,13 +1455,13 @@ struct pf_update_tunnel_config {
  */
 struct pf_update_ramrod_data {
 	u8 pf_id;
-	u8 update_eth_dcb_data_flag /* Update Eth DCB  data indication */;
-	u8 update_fcoe_dcb_data_flag /* Update FCOE DCB  data indication */;
-	u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB  data indication */;
-	u8 update_roce_dcb_data_flag /* Update ROCE DCB  data indication */;
+	u8 update_eth_dcb_data_mode /* Update Eth DCB data indication */;
+	u8 update_fcoe_dcb_data_mode /* Update FCOE DCB data indication */;
+	u8 update_iscsi_dcb_data_mode /* Update iSCSI DCB data indication */;
+	u8 update_roce_dcb_data_mode /* Update ROCE DCB data indication */;
 /* Update RROCE (RoceV2) DCB  data indication */
-	u8 update_rroce_dcb_data_flag;
-	u8 update_iwarp_dcb_data_flag /* Update IWARP DCB  data indication */;
+	u8 update_rroce_dcb_data_mode;
+	u8 update_iwarp_dcb_data_mode /* Update IWARP DCB  data indication */;
 	u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
 	struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
 	struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
@@ -1611,6 +1620,8 @@ struct tstorm_per_port_stat {
 	struct regpair fcoe_irregular_pkt;
 /* packet is an ROCE irregular packet */
 	struct regpair roce_irregular_pkt;
+/* packet is an IWARP irregular packet */
+	struct regpair iwarp_irregular_pkt;
 /* packet is an ETH irregular packet */
 	struct regpair eth_irregular_pkt;
 /* packet is an TOE irregular packet */
@@ -1861,8 +1872,11 @@ struct dmae_cmd {
 #define DMAE_CMD_SRC_VF_ID_SHIFT       0
 #define DMAE_CMD_DST_VF_ID_MASK        0xFF /* Destination VF id */
 #define DMAE_CMD_DST_VF_ID_SHIFT       8
-	__le32 comp_addr_lo /* PCIe completion address low or grc address */;
-/* PCIe completion address high or reserved (if completion address is in GRC) */
+/* PCIe completion address low in bytes or GRC completion address in DW */
+	__le32 comp_addr_lo;
+/* PCIe completion address high in bytes or reserved (if completion address is
+ * GRC)
+ */
 	__le32 comp_addr_hi;
 	__le32 comp_val /* Value to write to completion address */;
 	__le32 crc32 /* crc16 result */;
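
A minimal sketch of what the byte-vs-dword distinction above implies for
callers; the helper below is illustrative only (not part of this patch)
and assumes the usual OSAL endianness macro:

	static void dmae_set_comp_addr(struct dmae_cmd *p_cmd, u64 addr,
				       bool is_grc)
	{
		if (is_grc) {
			/* GRC completion: dword resolution, high dword unused */
			p_cmd->comp_addr_lo = OSAL_CPU_TO_LE32((u32)(addr >> 2));
			p_cmd->comp_addr_hi = 0;
		} else {
			/* PCIe completion: full 64-bit byte address */
			p_cmd->comp_addr_lo = OSAL_CPU_TO_LE32((u32)addr);
			p_cmd->comp_addr_hi = OSAL_CPU_TO_LE32((u32)(addr >> 32));
		}
	}
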
@@ -2250,10 +2264,6 @@ struct sdm_op_gen {
 #define SDM_OP_GEN_RESERVED_SHIFT   20
 };
 
-
-
-
-
 struct ystorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index effb6ed..917e8f4 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -93,10 +93,12 @@ enum block_addr {
 	GRCBASE_PHY_PCIE = 0x620000,
 	GRCBASE_LED = 0x6b8000,
 	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_RGFS = 0x19d0000,
-	GRCBASE_TGFS = 0x19e0000,
-	GRCBASE_PTLD = 0x19f0000,
-	GRCBASE_YPLD = 0x1a10000,
+	GRCBASE_RGFS = 0x1fa0000,
+	GRCBASE_RGSRC = 0x1fa8000,
+	GRCBASE_TGFS = 0x1fb0000,
+	GRCBASE_TGSRC = 0x1fb8000,
+	GRCBASE_PTLD = 0x1fc0000,
+	GRCBASE_YPLD = 0x1fe0000,
 	GRCBASE_MISC_AEU = 0x8000,
 	GRCBASE_BAR0_MAP = 0x1c00000,
 	MAX_BLOCK_ADDR
@@ -184,7 +186,9 @@ enum block_id {
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
 	BLOCK_RGFS,
+	BLOCK_RGSRC,
 	BLOCK_TGFS,
+	BLOCK_TGSRC,
 	BLOCK_PTLD,
 	BLOCK_YPLD,
 	BLOCK_MISC_AEU,
@@ -208,6 +212,10 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
+	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
+	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
+	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -219,8 +227,8 @@ enum bin_dbg_buffer_type {
 struct dbg_attn_bit_mapping {
 	__le16 data;
 /* The index of an attention in the blocks attentions list
- * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits
- * (if is_unused_idx_cnt=1)
+ * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits
+ * (if is_unused_bit_cnt=1)
  */
 #define DBG_ATTN_BIT_MAPPING_VAL_MASK                0x7FFF
 #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT               0
@@ -269,10 +277,10 @@ struct dbg_attn_reg_result {
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT  0
 /* Number of attention indexes in this register */
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24
+/* The offset of this register's attentions within the block's attentions
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le16 reserved;
@@ -289,7 +297,7 @@ struct dbg_attn_block_result {
 /* Value from dbg_attn_type enum */
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
-/* Number of registers in the blok in which at least one attention bit is set */
+/* Number of registers in the block with at least one attention bit set */
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK   0x3F
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT  2
 /* Offset of this registers block attention names in the attention name offsets
@@ -324,17 +332,17 @@ struct dbg_mode_hdr {
  */
 struct dbg_attn_reg {
 	struct dbg_mode_hdr mode /* Mode header */;
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+/* The offset of this register's attentions within the block's attentions
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le32 data;
 /* STS attention register GRC address (in dwords) */
 #define DBG_ATTN_REG_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_STS_ADDRESS_SHIFT  0
-/* Number of attention indexes in this register */
-#define DBG_ATTN_REG_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24
+/* Number of attention in this register */
+#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24
 /* STS_CLR attention register GRC address (in dwords) */
 	__le32 sts_clr_address;
 /* MASK attention register GRC address (in dwords) */
@@ -354,6 +362,53 @@ enum dbg_attn_type {
 
 
 /*
+ * Debug Bus block data
+ */
+struct dbg_bus_block {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this blocks lines in the Debug Bus lines array. */
+	__le16 lines_offset;
+};
+
+
+/*
+ * Debug Bus block user data
+ */
+struct dbg_bus_block_user_data {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this blocks lines in the debug bus line name offsets array. */
+	__le16 names_offset;
+};
+
+
+/*
+ * Block Debug line data
+ */
+struct dbg_bus_line {
+	u8 data;
+/* Number of groups in the line (0-3) */
+#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF
+#define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0
+/* Indicates if this is a 128b line (0) or a 256b line (1). */
+#define DBG_BUS_LINE_IS_256B_MASK        0x1
+#define DBG_BUS_LINE_IS_256B_SHIFT       4
+#define DBG_BUS_LINE_RESERVED_MASK       0x7
+#define DBG_BUS_LINE_RESERVED_SHIFT      5
+/* Four 2-bit values, indicating the size of each group minus 1 (i.e.
+ * value=0 means size=1, value=1 means size=2, etc), starting from lsb.
+ * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1).
+ */
+	u8 group_sizes;
+};
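
As a minimal sketch of the packing described above (helper name
hypothetical), the size of a given group can be recovered as:

	static u8 dbg_bus_line_group_size(const struct dbg_bus_line *p_line,
					  u8 group)
	{
		/* Two bits per group starting from the LSB; the stored value
		 * is size - 1, in dwords (128b line) or qwords (256b line).
		 */
		return ((p_line->group_sizes >> (group * 2)) & 0x3) + 1;
	}
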
+
+
+/*
  * condition header for registers dump
  */
 struct dbg_dump_cond_hdr {
@@ -377,8 +432,11 @@ struct dbg_dump_mem {
 /* register size (in dwords) */
 #define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF
 #define DBG_DUMP_MEM_LENGTH_SHIFT       0
-#define DBG_DUMP_MEM_RESERVED_MASK      0xFF
-#define DBG_DUMP_MEM_RESERVED_SHIFT     24
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1
+#define DBG_DUMP_MEM_WIDE_BUS_SHIFT     24
+#define DBG_DUMP_MEM_RESERVED_MASK      0x7F
+#define DBG_DUMP_MEM_RESERVED_SHIFT     25
 };
 
 
@@ -388,10 +446,13 @@ struct dbg_dump_mem {
 struct dbg_dump_reg {
 	__le32 data;
 /* register address (in dwords) */
-#define DBG_DUMP_REG_ADDRESS_MASK  0xFFFFFF
-#define DBG_DUMP_REG_ADDRESS_SHIFT 0
-#define DBG_DUMP_REG_LENGTH_MASK   0xFF /* register size (in dwords) */
-#define DBG_DUMP_REG_LENGTH_SHIFT  24
+#define DBG_DUMP_REG_ADDRESS_MASK   0x7FFFFF /* register address (in dwords) */
+#define DBG_DUMP_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1
+#define DBG_DUMP_REG_WIDE_BUS_SHIFT 23
+#define DBG_DUMP_REG_LENGTH_MASK    0xFF /* register size (in dwords) */
+#define DBG_DUMP_REG_LENGTH_SHIFT   24
 };
 
 
@@ -424,8 +485,11 @@ struct dbg_idle_chk_cond_hdr {
 struct dbg_idle_chk_cond_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
@@ -441,8 +505,11 @@ struct dbg_idle_chk_cond_reg {
 struct dbg_idle_chk_info_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
@@ -544,17 +611,21 @@ enum dbg_idle_chk_severity_types {
  * Debug Bus block data
  */
 struct dbg_bus_block_data {
-/* Indicates if the block is enabled for recording (0/1) */
-	u8 enabled;
-	u8 hw_id /* HW ID associated with the block */;
+	__le16 data;
+/* 4-bit value: bit i set -> dword/qword i is enabled. */
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
+/* Number of dwords/qwords to shift right the debug data (0-3) */
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
+/* 4-bit value: bit i set -> dword/qword i is forced valid. */
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
+/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
 	u8 line_num /* Debug line number to select */;
-	u8 right_shift /* Number of units to  right the debug data (0-3) */;
-	u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
-/* 4-bit value: bit i set -> unit i is forced valid. */
-	u8 force_valid;
-/* 4-bit value: bit i set -> unit i frame bit is forced. */
-	u8 force_frame;
-	u8 reserved;
+	u8 hw_id /* HW ID associated with the block */;
 };
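
Since the former byte-wide fields are now packed into one little-endian
word, access presumably goes through the generic field helpers; a sketch
assuming the usual GET_FIELD/OSAL_LE16_TO_CPU macros:

	u16 data = OSAL_LE16_TO_CPU(p_block->data);
	u8 enable_mask = GET_FIELD(data, DBG_BUS_BLOCK_DATA_ENABLE_MASK);
	u8 right_shift = GET_FIELD(data, DBG_BUS_BLOCK_DATA_RIGHT_SHIFT);
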
 
 
@@ -604,6 +675,21 @@ enum dbg_bus_constraint_ops {
 
 
 /*
+ * Debug Bus trigger state data
+ */
+struct dbg_bus_trigger_state_data {
+	u8 data;
+/* 4-bit value: bit i set -> dword i of the trigger state block
+ * (after right shift) is enabled.
+ */
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
+/* 4-bit value: bit i set -> dword i is compared by a constraint */
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+};
+
+/*
  * Debug Bus memory address
  */
 struct dbg_bus_mem_addr {
@@ -650,14 +736,8 @@ union dbg_bus_storm_eid_params {
  * Debug Bus Storm data
  */
 struct dbg_bus_storm_data {
-/* Indicates if the Storm is enabled for fast debug recording (0/1) */
-	u8 fast_enabled;
-/* Fast debug Storm mode, valid only if fast_enabled is set */
-	u8 fast_mode;
-/* Indicates if the Storm is enabled for slow debug recording (0/1) */
-	u8 slow_enabled;
-/* Slow debug Storm mode, valid only if slow_enabled is set */
-	u8 slow_mode;
+	u8 enabled /* indicates if the Storm is enabled for recording */;
+	u8 mode /* Storm debug mode, valid only if the Storm is enabled */;
 	u8 hw_id /* HW ID associated with the Storm */;
 	u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
 /* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
@@ -667,7 +747,6 @@ struct dbg_bus_storm_data {
 	u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
 /* EID filter params to filter on. Valid only if eid_filter_en is set. */
 	union dbg_bus_storm_eid_params eid_filter_params;
-	__le16 reserved;
 /* CID to filter on. Valid only if cid_filter_en is set. */
 	__le32 cid;
 };
@@ -679,20 +758,18 @@ struct dbg_bus_data {
 	__le32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
 	u8 hw_dwords /* HW dwords per cycle */;
-	u8 next_hw_id /* Next HW ID to be associated with an input */;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	__le16 hw_id_mask;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
-	u8 next_trigger_state /* ID of next trigger state to be added */;
-/* ID of next filter/trigger constraint to be added */
-	u8 next_constraint_id;
 	u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
 	u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
 /* Indicates if timestamp recording is enabled (0/1) */
 	u8 timestamp_input_en;
 	u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
-/* Indicates if the recording trigger is enabled (0/1) */
-	u8 trigger_en;
 /* If true, the next added constraint belong to the filter. Otherwise,
  * it belongs to the last added trigger state. Valid only if either filter or
  * triggers are enabled.
@@ -706,6 +783,14 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
+	__le16 reserved;
+/* Indicates if the recording trigger is enabled (0/1) */
+	u8 trigger_en;
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+	u8 next_trigger_state /* ID of next trigger state to be added */;
+/* ID of next filter/trigger constraint to be added */
+	u8 next_constraint_id;
 /* If true, all inputs are associated with HW ID 0. Otherwise, each input is
  * assigned a different HW ID (0/1)
  */
@@ -716,7 +801,6 @@ struct dbg_bus_data {
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
-	__le16 reserved;
 /* Debug Bus data for each block */
 	struct dbg_bus_block_data blocks[88];
 /* Debug Bus data for each block */
@@ -748,17 +832,6 @@ enum dbg_bus_frame_modes {
 
 
 /*
- * Debug bus input types
- */
-enum dbg_bus_input_types {
-	DBG_BUS_INPUT_TYPE_STORM,
-	DBG_BUS_INPUT_TYPE_BLOCK,
-	MAX_DBG_BUS_INPUT_TYPES
-};
-
-
-
-/*
  * Debug bus other engine mode
  */
 enum dbg_bus_other_engine_modes {
@@ -852,6 +925,7 @@ enum dbg_bus_targets {
 };
 
 
+
 /*
  * GRC Dump data
  */
@@ -987,7 +1061,10 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_MULTI_BLOCKS_WITH_FILTER,
+	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_NON_MATCHING_LINES,
+	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_DBG_BUS_IN_USE,
 	MAX_DBG_STATUS
 };
 
@@ -1028,7 +1105,7 @@ struct dbg_tools_data {
 /* Indicates if a block is in reset state (0/1) */
 	u8 block_in_reset[88];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID (from enum platform_ids) */;
+	u8 platform_id /* Platform ID */;
 	u8 initialized /* Indicates if the data was initialized */;
 	u8 reserved;
 };
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 9d2a118..397c408 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -739,6 +739,7 @@ enum eth_error_code {
 	ETH_FILTERS_VNI_ADD_FAIL_FULL,
 /* vni add filters command failed due to duplicate VNI filter */
 	ETH_FILTERS_VNI_ADD_FAIL_DUP,
+	ETH_FILTERS_GFT_UPDATE_FAIL /* Failed to update GFT filter */,
 	MAX_ETH_ERROR_CODE
 };
 
@@ -982,8 +983,10 @@ struct eth_vport_rss_config {
 	u8 rss_id;
 	u8 rss_mode /* The RSS mode for this function */;
 	u8 update_rss_key /* if set update the rss key */;
-	u8 update_rss_ind_table /* if set update the indirection table */;
-	u8 update_rss_capabilities /* if set update the capabilities */;
+/* if set update the indirection table values */
+	u8 update_rss_ind_table;
+/* if set update the capabilities and indirection table size. */
+	u8 update_rss_capabilities;
 	u8 tbl_size /* rss mask (Tbl size) */;
 	__le32 reserved2[2];
 /* RSS indirection table */
@@ -1267,7 +1270,10 @@ struct rx_update_gft_filter_data {
 /* Use enum to set type of flow using gft HW logic blocks */
 	u8 filter_type;
 	u8 filter_action /* Use to set type of action on filter */;
-	u8 reserved;
+/* 0 - don't assert in case of error, just return an error code. 1 - assert
+ * in case of error.
+ */
+	u8 assert_on_error;
 };
 
 
@@ -2290,8 +2296,7 @@ enum gft_profile_upper_protocol_type {
  * GFT RAM line struct
  */
 struct gft_ram_line {
-	__le32 low32bits;
-/*  (use enum gft_vlan_select) */
+	__le32 lo;
 #define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3
 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT             0
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK          0x1
@@ -2354,7 +2359,7 @@ struct gft_ram_line {
 #define GFT_RAM_LINE_DST_PORT_SHIFT                30
 #define GFT_RAM_LINE_SRC_PORT_MASK                 0x1
 #define GFT_RAM_LINE_SRC_PORT_SHIFT                31
-	__le32 high32bits;
+	__le32 hi;
 #define GFT_RAM_LINE_DSCP_MASK                     0x1
 #define GFT_RAM_LINE_DSCP_SHIFT                    0
 #define GFT_RAM_LINE_OVER_IP_PROTOCOL_MASK         0x1
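
For illustration, the renamed lo/hi dwords are programmed through the
generic SET_FIELD helper; a hypothetical profile matching on the L4 ports
might be built as:

	struct gft_ram_line ram_line;

	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
	SET_FIELD(ram_line.lo, GFT_RAM_LINE_SRC_PORT, 1);
	SET_FIELD(ram_line.lo, GFT_RAM_LINE_DST_PORT, 1);
	SET_FIELD(ram_line.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
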
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index d07549c..1f57e9b 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -22,43 +22,13 @@
 /* Max size in dwords of a zipped array */
 #define MAX_ZIPPED_SIZE			8192
 
-enum init_modes {
-	MODE_BB_A0_DEPRECATED,
-	MODE_BB_B0,
-	MODE_K2,
-	MODE_ASIC,
-	MODE_EMUL_REDUCED,
-	MODE_EMUL_FULL,
-	MODE_FPGA,
-	MODE_CHIPSIM,
-	MODE_SF,
-	MODE_MF_SD,
-	MODE_MF_SI,
-	MODE_PORTS_PER_ENG_1,
-	MODE_PORTS_PER_ENG_2,
-	MODE_PORTS_PER_ENG_4,
-	MODE_100G,
-	MODE_E5,
-	MAX_INIT_MODES
-};
-
-enum init_phases {
-	PHASE_ENGINE,
-	PHASE_PORT,
-	PHASE_PF,
-	PHASE_VF,
-	PHASE_QM_PF,
-	MAX_INIT_PHASES
+enum chip_ids {
+	CHIP_BB,
+	CHIP_K2,
+	CHIP_E5,
+	MAX_CHIP_IDS
 };
 
-enum init_split_types {
-	SPLIT_TYPE_NONE,
-	SPLIT_TYPE_PORT,
-	SPLIT_TYPE_PF,
-	SPLIT_TYPE_PORT_PF,
-	SPLIT_TYPE_VF,
-	MAX_INIT_SPLIT_TYPES
-};
 
 struct fw_asserts_ram_section {
 /* The offset of the section in the RAM in RAM lines (64-bit units) */
@@ -196,8 +166,46 @@ union init_array_hdr {
 };
 
 
+enum init_modes {
+	MODE_BB_A0_DEPRECATED,
+	MODE_BB,
+	MODE_K2,
+	MODE_ASIC,
+	MODE_EMUL_REDUCED,
+	MODE_EMUL_FULL,
+	MODE_FPGA,
+	MODE_CHIPSIM,
+	MODE_SF,
+	MODE_MF_SD,
+	MODE_MF_SI,
+	MODE_PORTS_PER_ENG_1,
+	MODE_PORTS_PER_ENG_2,
+	MODE_PORTS_PER_ENG_4,
+	MODE_100G,
+	MODE_E5,
+	MAX_INIT_MODES
+};
 
 
+enum init_phases {
+	PHASE_ENGINE,
+	PHASE_PORT,
+	PHASE_PF,
+	PHASE_VF,
+	PHASE_QM_PF,
+	MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+	SPLIT_TYPE_NONE,
+	SPLIT_TYPE_PORT,
+	SPLIT_TYPE_PF,
+	SPLIT_TYPE_PORT_PF,
+	SPLIT_TYPE_VF,
+	MAX_INIT_SPLIT_TYPES
+};
+
 
 /*
  * init array types
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 77f9152..af0deaa 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -17,112 +17,156 @@
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-enum CmInterfaceEnum {
-	MCM_SEC,
-	MCM_PRI,
-	UCM_SEC,
-	UCM_PRI,
-	TCM_SEC,
-	TCM_PRI,
-	YCM_SEC,
-	YCM_PRI,
-	XCM_SEC,
-	XCM_PRI,
-	NUM_OF_CM_INTERFACES
+
+#define CDU_VALIDATION_DEFAULT_CFG 61
+
+static u16 con_region_offsets[3][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+};
+static u16 task_region_offsets[1][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
-/* general constants */
-#define QM_PQ_MEM_4KB(pq_size) \
-(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) \
-(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
-#define QM_INVALID_PQ_ID			0xffff
-/* feature enable */
-#define QM_BYPASS_EN				1
-#define QM_BYTE_CRD_EN				1
-/* other PQ constants */
-#define QM_OTHER_PQS_PER_PF			4
-/* WFQ constants */
-#define QM_WFQ_UPPER_BOUND			62500000
+
+/* General constants */
+#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
+				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
+				  0)
+#define QM_INVALID_PQ_ID		0xffff
+
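
A worked example of the two size macros, assuming (for illustration only)
a 4-byte QM_PQ_ELEMENT_SIZE:

/* QM_PQ_MEM_4KB(255)   = DIV_ROUND_UP((255 + 1) * 4, 0x1000) = 1 page
 * QM_PQ_SIZE_256B(255) = DIV_ROUND_UP(255, 0x100) - 1        = 0
 * i.e. PQ memory is accounted in 4KB pages and the PQ size register in
 * 256B units minus one; both macros map an empty PQ to 0.
 */
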
+/* Feature enable */
+#define QM_BYPASS_EN			1
+#define QM_BYTE_CRD_EN			1
+
+/* Other PQ constants */
+#define QM_OTHER_PQS_PER_PF		4
+
+/* WFQ constants: */
+
+/* Upper bound in MB, 10 * burst size of 1ms in 50Gbps */
+#define QM_WFQ_UPPER_BOUND		62500000
+
+/* Bit of VOQ in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
+
+/* Bit of PF in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_PF_SHIFT		5
+
+/* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
-#define QM_WFQ_MAX_INC_VAL			43750000
-/* RL constants */
-#define QM_RL_UPPER_BOUND			62500000
-#define QM_RL_PERIOD				5
+
+/* 0.7 * upper bound (62500000) */
+#define QM_WFQ_MAX_INC_VAL		43750000
+
+/* RL constants: */
+
+/* Upper bound is set to 10 * burst size of 1ms in 50Gbps */
+#define QM_RL_UPPER_BOUND		62500000
+
+/* Period in us */
+#define QM_RL_PERIOD			5
+
+/* Period in 25MHz cycles */
 #define QM_RL_PERIOD_CLK_25M		(25 * QM_RL_PERIOD)
-#define QM_RL_MAX_INC_VAL			43750000
-/* RL increment value - the factor of 1.01 was added after seeing only
- * 99% factor reached in a 25Gbps port with DPDK RFC 2544 test.
- * In this scenario the PF RL was reducing the line rate to 99% although
- * the credit increment value was the correct one and FW calculated
- * correct packet sizes. The reason for the inaccuracy of the RL is
- * unknown at this point.
+
+/* 0.7 * upper bound (62500000) */
+#define QM_RL_MAX_INC_VAL		43750000
+
+/* RL increment value - rate is specified in mbps. the factor of 1.01 was
+ * added after seeing only 99% factor reached in a 25Gbps port with DPDK RFC
+ * 2544 test. In this scenario the PF RL was reducing the line rate to 99%
+ * although the credit increment value was the correct one and FW calculated
+ * correct packet sizes. The reason for the inaccuracy of the RL is unknown at
+ * this point.
  */
-/* rate in mbps */
 #define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? rate : 1000000) * \
-					QM_RL_PERIOD * 101) / (8 * 100)), 1)
+				       QM_RL_PERIOD * 101) / (8 * 100)), 1)
+
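
To make the increment arithmetic concrete (example rate only):

/* For a 25Gbps PF rate limit, rate = 25000 mbps:
 *   QM_RL_INC_VAL(25000) = (25000 * 5 * 101) / (8 * 100)
 *                        = 12625000 / 800 = 15781
 * i.e. the credit per 5us period carries the extra 1% compensation
 * described above.
 */
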
 /* AFullOprtnstcCrdMask constants */
 #define QM_OPPOR_LINE_VOQ_DEF		1
 #define QM_OPPOR_FW_STOP_DEF		0
 #define QM_OPPOR_PQ_EMPTY_DEF		1
-/* Command Queue constants */
-#define PBF_CMDQ_PURE_LB_LINES			150
+
+/* Command Queue constants: */
+
+/* Pure LB CmdQ lines (+spare) */
+#define PBF_CMDQ_PURE_LB_LINES		150
+
 #define PBF_CMDQ_LINES_RT_OFFSET(voq) \
-(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
-voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
-- PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \
+	  PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+
 #define PBF_BTB_GUARANTEED_RT_OFFSET(voq) \
-(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
-(PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \
+	  PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+
 #define QM_VOQ_LINE_CRD(pbf_cmd_lines) \
 ((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
+
 /* BTB: blocks constants (block size = 256B) */
-#define BTB_JUMBO_PKT_BLOCKS 38	/* 256B blocks in 9700B packet */
-/* headroom per-port */
-#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS
+
+/* 256B blocks in 9700B packet */
+#define BTB_JUMBO_PKT_BLOCKS		38
+
+/* Headroom per-port */
+#define BTB_HEADROOM_BLOCKS		BTB_JUMBO_PKT_BLOCKS
 #define BTB_PURE_LB_FACTOR		10
-#define BTB_PURE_LB_RATIO		7 /* factored (hence really 0.7) */
+
+/* Factored (hence really 0.7) */
+#define BTB_PURE_LB_RATIO		7
+
 /* QM stop command constants */
-#define QM_STOP_PQ_MASK_WIDTH			32
-#define QM_STOP_CMD_ADDR				0x2
-#define QM_STOP_CMD_STRUCT_SIZE			2
+#define QM_STOP_PQ_MASK_WIDTH		32
+#define QM_STOP_CMD_ADDR		2
+#define QM_STOP_CMD_STRUCT_SIZE		2
 #define QM_STOP_CMD_PAUSE_MASK_OFFSET	0
 #define QM_STOP_CMD_PAUSE_MASK_SHIFT	0
-#define QM_STOP_CMD_PAUSE_MASK_MASK		0xffffffff /* @DPDK */
-#define QM_STOP_CMD_GROUP_ID_OFFSET		1
-#define QM_STOP_CMD_GROUP_ID_SHIFT		16
-#define QM_STOP_CMD_GROUP_ID_MASK		15
-#define QM_STOP_CMD_PQ_TYPE_OFFSET		1
-#define QM_STOP_CMD_PQ_TYPE_SHIFT		24
-#define QM_STOP_CMD_PQ_TYPE_MASK		1
-#define QM_STOP_CMD_MAX_POLL_COUNT		100
-#define QM_STOP_CMD_POLL_PERIOD_US		500
+#define QM_STOP_CMD_PAUSE_MASK_MASK	0xffffffff /* @DPDK */
+#define QM_STOP_CMD_GROUP_ID_OFFSET	1
+#define QM_STOP_CMD_GROUP_ID_SHIFT	16
+#define QM_STOP_CMD_GROUP_ID_MASK	15
+#define QM_STOP_CMD_PQ_TYPE_OFFSET	1
+#define QM_STOP_CMD_PQ_TYPE_SHIFT	24
+#define QM_STOP_CMD_PQ_TYPE_MASK	1
+#define QM_STOP_CMD_MAX_POLL_COUNT	100
+#define QM_STOP_CMD_POLL_PERIOD_US	500
+
 /* QM command macros */
-#define QM_CMD_STRUCT_SIZE(cmd)	cmd##_STRUCT_SIZE
+#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
-SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+
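
For reference, the token-pasting above expands, e.g. for the stop
command's pause mask, to:

	QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PAUSE_MASK, pq_mask);
	/* becomes: */
	SET_FIELD(cmd_arr[QM_STOP_CMD_PAUSE_MASK_OFFSET],
		  QM_STOP_CMD_PAUSE_MASK, pq_mask);
	/* i.e. dword 0 of the command buffer, a 32-bit field at shift 0 */
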
 /* QM: VOQ macros */
 #define PHYS_VOQ(port, tc, max_phys_tcs_per_port) \
-((port) * (max_phys_tcs_per_port) + (tc))
-#define LB_VOQ(port)				(MAX_PHYS_VOQS + (port))
+	((port) * (max_phys_tcs_per_port) + (tc))
+#define LB_VOQ(port)				 (MAX_PHYS_VOQS + (port))
 #define VOQ(port, tc, max_phys_tcs_per_port) \
-((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : LB_VOQ(port))
+	((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : \
+				 LB_VOQ(port))
+
+
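
A concrete instance of the VOQ mapping (illustrative values, assuming
LB_TC is larger than any physical TC index):

/* With max_phys_tcs_per_port = 4:
 *   VOQ(1, 2, 4)     = PHYS_VOQ(1, 2, 4) = 1 * 4 + 2 = 6
 *   VOQ(1, LB_TC, 4) = LB_VOQ(1)         = MAX_PHYS_VOQS + 1
 * i.e. physical TCs get per-port/per-TC VOQs, while each port's pure-LB
 * TC gets a dedicated VOQ past MAX_PHYS_VOQS.
 */
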
 /******************** INTERNAL IMPLEMENTATION *********************/
+
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		/* enable RLs for all VOQs */
+		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
 			     (1 << MAX_NUM_VOQS) - 1);
-		/* write RL period */
+
+		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET,
 				     QM_RL_UPPER_BOUND);
@@ -133,7 +177,8 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (pf_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -145,12 +190,13 @@ static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
 		     vport_rl_en ? 1 : 0);
 	if (vport_rl_en) {
-		/* write RL period (use timer 0 only) */
+		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
@@ -163,7 +209,8 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET,
 		     vport_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (vport_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -176,7 +223,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
 					 u8 voq, u16 cmdq_lines)
 {
 	u32 qm_line_crd;
+
 	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
+
 	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
 	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
@@ -192,38 +241,43 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u8 tc, voq, port_id, num_tcs_in_port;
-	/* clear PBF lines for all VOQs */
+
+	/* Clear PBF lines for all VOQs */
 	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
 		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			u16 phys_lines, phys_lines_per_tc;
-			/* find #lines to divide between active physical TCs */
-			phys_lines =
-			    port_params[port_id].num_pbf_cmd_lines -
-			    PBF_CMDQ_PURE_LB_LINES;
-			/* find #lines per active physical TC */
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-						tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			}
-			phys_lines_per_tc = phys_lines / num_tcs_in_port;
-			/* init registers per active TC */
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-							max_phys_tcs_per_port);
-					ecore_cmdq_lines_voq_rt_init(p_hwfn,
-							voq, phys_lines_per_tc);
-				}
+		u16 phys_lines, phys_lines_per_tc;
+
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Find #lines to divide between the active physical TCs */
+		phys_lines = port_params[port_id].num_pbf_cmd_lines -
+			     PBF_CMDQ_PURE_LB_LINES;
+
+		/* Find #lines per active physical TC */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+		phys_lines_per_tc = phys_lines / num_tcs_in_port;
+
+		/* Init registers per active TC */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
+							     phys_lines_per_tc);
 			}
-			/* init registers for pure LB TC */
-			ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
-						     PBF_CMDQ_PURE_LB_LINES);
 		}
+
+		/* Init registers for pure LB TC */
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
+					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
 
@@ -253,50 +307,51 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, voq, port_id, num_tcs_in_port;
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
+	u8 tc, voq, port_id, num_tcs_in_port;
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			/* subtract headroom blocks */
-			usable_blocks =
-			    port_params[port_id].num_btb_blocks -
-			    BTB_HEADROOM_BLOCKS;
-/* find blocks per physical TC. use factor to avoid floating arithmethic */
-
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
-				if (((port_params[port_id].active_phys_tcs >>
-								tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			pure_lb_blocks =
-			    (usable_blocks * BTB_PURE_LB_FACTOR) /
-			    (num_tcs_in_port *
-			     BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
-			pure_lb_blocks =
-			    OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
-				       pure_lb_blocks / BTB_PURE_LB_FACTOR);
-			phys_blocks =
-			    (usable_blocks -
-			     pure_lb_blocks) /
-			     num_tcs_in_port;
-			/* init physical TCs */
-			for (tc = 0;
-			     tc < NUM_OF_PHYS_TCS;
-			     tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-						       max_phys_tcs_per_port);
-					STORE_RT_REG(p_hwfn,
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Subtract headroom blocks */
+		usable_blocks = port_params[port_id].num_btb_blocks -
+				BTB_HEADROOM_BLOCKS;
+
+		/* Find blocks per physical TC. Use a factor to avoid floating
+		 * point arithmetic.
+		 */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+
+		pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) /
+				  (num_tcs_in_port * BTB_PURE_LB_FACTOR +
+				   BTB_PURE_LB_RATIO);
+		pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
+					    pure_lb_blocks /
+					    BTB_PURE_LB_FACTOR);
+		phys_blocks = (usable_blocks - pure_lb_blocks) /
+			      num_tcs_in_port;
+
+		/* Init physical TCs */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				STORE_RT_REG(p_hwfn,
 					     PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					     phys_blocks);
-				}
 			}
-			/* init pure LB TC */
-			STORE_RT_REG(p_hwfn,
-				     PBF_BTB_GUARANTEED_RT_OFFSET(
-					LB_VOQ(port_id)), pure_lb_blocks);
 		}
+
+		/* Init pure LB TC */
+		STORE_RT_REG(p_hwfn,
+			     PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ(port_id)),
+			     pure_lb_blocks);
 	}
 }
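
A worked example of the factored split used above (illustrative numbers):

/* With usable_blocks = 1000 and 4 active physical TCs:
 *   pure_lb_blocks = (1000 * 10) / (4 * 10 + 7) = 212
 *   pure_lb_blocks = max(38, 212 / 10)          = 38 (jumbo-packet floor)
 *   phys_blocks    = (1000 - 38) / 4            = 240 per physical TC
 * The factor of 10 keeps the 0.7 pure-LB ratio in integer arithmetic.
 */
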
 
@@ -317,57 +372,69 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
 {
-	u16 i, pq_id, pq_group;
-	u16 num_pqs = num_pf_pqs + num_vf_pqs;
-	u16 first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
-	u16 last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
-	/* a bit per Tx PQ indicating if the PQ is associated with a VF */
+	/* A bit per Tx PQ indicating if the PQ is associated with a VF */
 	u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 };
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
-	u32 vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* set mapping from PQ group to PF */
+	u16 num_pqs, first_pq_group, last_pq_group, i, pq_id, pq_group;
+	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+
+	num_pqs = num_pf_pqs + num_vf_pqs;
+
+	first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
+	last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
+
+	pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
+	vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Set mapping from PQ group to PF */
 	for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++)
 		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group,
 			     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_pf_cids));
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_vf_cids));
-	/* go over all Tx PQs */
+
+	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		struct qm_rf_pq_map tx_pq_map;
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
-		bool is_vf_pq = (i >= num_pf_pqs);
-		/* added to avoid compilation warning */
 		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		bool rl_valid = pq_params[i].rl_valid &&
-				pq_params[i].vport_id < max_qm_global_rls;
-		/* update first Tx PQ of VPORT/TC */
-		u8 vport_id_in_pf = pq_params[i].vport_id - start_vport;
-		u16 first_tx_pq_id =
-		    vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].
-								tc_id];
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq, rl_valid;
+		u8 voq, vport_id_in_pf;
+		u16 first_tx_pq_id;
+
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		is_vf_pq = (i >= num_pf_pqs);
+		rl_valid = pq_params[i].rl_valid && pq_params[i].vport_id <
+			   max_qm_global_rls;
+
+		/* Update first Tx PQ of VPORT/TC */
+		vport_id_in_pf = pq_params[i].vport_id - start_vport;
+		first_tx_pq_id =
+		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			/* create new VP PQ */
+			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
 			    first_tx_pq_id[pq_params[i].tc_id] = pq_id;
 			first_tx_pq_id = pq_id;
-			/* map VP PQ to VOQ and PF */
+
+			/* Map VP PQ to VOQ and PF */
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id,
 				     (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id <<
 							QM_WFQ_VP_PQ_PF_SHIFT));
 		}
-		/* check RL ID */
+
+		/* Check RL ID */
 		if (pq_params[i].rl_valid && pq_params[i].vport_id >=
 							max_qm_global_rls)
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config");
-		/* fill PQ map entry */
+				  "Invalid VPORT ID for rate limiter config\n");
+
+		/* Fill PQ map entry */
 		OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map));
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID,
@@ -378,17 +445,17 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP,
 			  pq_params[i].wrr_group);
-		/* write PQ map entry to CAM */
+
+		/* Write PQ map entry to CAM */
 		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
 			     *((u32 *)&tx_pq_map));
-		/* set base address */
+
+		/* Set base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
 			     mem_addr_4kb);
-		/* check if VF PQ */
+
+		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
-			/* if PQ is associated with a VF, add indication to PQ
-			 * VF mask
-			 */
 			tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |=
 				(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
 			mem_addr_4kb += vport_pq_mem_4kb;
@@ -396,12 +463,12 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 			mem_addr_4kb += pq_mem_4kb;
 		}
 	}
-	/* store Tx PQ VF mask to size select register */
-	for (i = 0; i < num_tx_pq_vf_masks; i++) {
+
+	/* Store Tx PQ VF mask to size select register */
+	for (i = 0; i < num_tx_pq_vf_masks; i++)
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
-	}
 }
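
The VF-mask bookkeeping above keeps one 32-bit mask word per PQ group; a
sketch with a hypothetical group size of 16:

/* VF PQ id 37, assuming QM_PF_QUEUE_GROUP_SIZE == 16 for illustration:
 *   tx_pq_vf_mask[37 / 16] |= 1 << (37 % 16);    -> word 2, bit 5
 * Only non-zero mask words are then written to the size-select registers.
 */
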
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -411,20 +478,26 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				       u32 num_pf_cids,
 				       u32 num_tids, u32 base_mem_addr_4kb)
 {
-	u16 i, pq_id;
-/* a single other PQ grp is used in each PF, where PQ group i is used in PF i */
-
-	u16 pq_group = pf_id;
-	u32 pq_size = num_pf_cids + num_tids;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* map PQ group to PF */
+	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
+	u16 i, pq_id, pq_group;
+
+	/* A single other PQ group is used in each PF, where PQ group i is used
+	 * in PF i.
+	 */
+	pq_group = pf_id;
+	pq_size = num_pf_cids + num_tids;
+	pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Map PQ group to PF */
 	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group,
 		     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET,
 		     QM_PQ_SIZE_256B(pq_size));
-	/* set base address */
+
+	/* Set base address */
 	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE;
 	     i < QM_OTHER_PQS_PER_PF; i++, pq_id++) {
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id,
@@ -432,7 +505,10 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		mem_addr_4kb += pq_mem_4kb;
 	}
 }
-/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF WFQ runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 port_id,
 				u8 pf_id,
@@ -441,76 +517,89 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u16 num_tx_pqs,
 				struct init_qm_pq_params *pq_params)
 {
+	u32 inc_val, crd_reg_offset;
+	u8 voq;
 	u16 i;
-	u32 inc_val;
-	u32 crd_reg_offset =
-	    (pf_id <
-	     MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
-	     QM_REG_WFQPFCRD_MSB_RT_OFFSET) + (pf_id % MAX_NUM_PFS_BB);
+
+	crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
+			  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			 (pf_id % MAX_NUM_PFS_BB);
+
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (i = 0; i < num_tx_pqs; i++) {
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
 	return 0;
 }
-/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF RL runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
 		     (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
+
 	return 0;
 }
-/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on
- * error.
+
+/* Prepare VPORT WFQ runtime init values for the specified VPORTs.
+ * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u8 tc, i;
+	u16 vport_pq_id;
 	u32 inc_val;
-	/* go over all PF VPORTs */
+	u8 tc, i;
+
+	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (vport_params[i].vport_wfq) {
-			inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
-			if (inc_val > QM_WFQ_MAX_INC_VAL) {
-				DP_NOTICE(p_hwfn, true,
-					  "Invalid VPORT WFQ weight config");
-				return -1;
-			}
-			/* each VPORT can have several VPORT PQ IDs for
-			 * different TCs
-			 */
-			for (tc = 0; tc < NUM_OF_TCS; tc++) {
-				u16 vport_pq_id =
-				    vport_params[i].first_tx_pq_id[tc];
-				if (vport_pq_id != QM_INVALID_PQ_ID) {
-					STORE_RT_REG(p_hwfn,
-						  QM_REG_WFQVPCRD_RT_OFFSET +
-						  vport_pq_id,
-						  (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-					STORE_RT_REG(p_hwfn,
-						QM_REG_WFQVPWEIGHT_RT_OFFSET
-						     + vport_pq_id, inc_val);
-				}
+		if (!vport_params[i].vport_wfq)
+			continue;
+
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		if (inc_val > QM_WFQ_MAX_INC_VAL) {
+			DP_NOTICE(p_hwfn, true,
+				  "Invalid VPORT WFQ weight configuration\n");
+			return -1;
+		}
+
+		/* Each VPORT can have several VPORT PQ IDs for various TCs */
+		for (tc = 0; tc < NUM_OF_TCS; tc++) {
+			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
+			if (vport_pq_id != QM_INVALID_PQ_ID) {
+				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+					     vport_pq_id,
+					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+				STORE_RT_REG(p_hwfn,
+					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
+					     vport_pq_id, inc_val);
 			}
 		}
 	}
@@ -526,19 +615,23 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 				  struct init_qm_vport_params *vport_params)
 {
 	u8 i, vport_id;
+	u32 inc_val;
+
 	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
-	/* go over all PF VPORTs */
+
+	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
-		u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
+		inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
 		if (inc_val > QM_RL_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration");
+				  "Invalid VPORT rate-limit configuration\n");
 			return -1;
 		}
+
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
 			     (u32)QM_RL_CRD_REG_SIGN_BIT);
 		STORE_RT_REG(p_hwfn,
@@ -547,6 +640,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
 			     inc_val);
 	}
+
 	return 0;
 }
 
@@ -554,17 +648,20 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
 	u32 reg_val, i;
-	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && reg_val == 0;
+
+	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val;
 	     i++) {
 		OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US);
 		reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY);
 	}
-	/* check if timeout while waiting for SDM command ready */
+
+	/* Check if timeout while waiting for SDM command ready */
 	if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
 			   "Timeout waiting for QM SDM cmd ready signal\n");
 		return false;
 	}
+
 	return true;
 }
 
@@ -574,15 +671,19 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 {
 	if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt))
 		return false;
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDADDR, cmd_addr);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATALSB, cmd_data_lsb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATAMSB, cmd_data_msb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 1);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 0);
+
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
+
 /******************** INTERFACE IMPLEMENTATION *********************/
+
 u32 ecore_qm_pf_mem_size(u8 pf_id,
 			 u32 num_pf_cids,
 			 u32 num_vf_cids,
@@ -603,32 +704,42 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    struct init_qm_port_params
 			    port_params[MAX_NUM_PORTS])
 {
-	/* init AFullOprtnstcCrdMask */
-	u32 mask =
-	    (QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-	    (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-	    (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-	    (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-	    (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-	    (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-	    (QM_OPPOR_FW_STOP_DEF << QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-	    (QM_OPPOR_PQ_EMPTY_DEF <<
-	     QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	u32 mask;
+
+	/* Init AFullOprtnstcCrdMask */
+	mask = (QM_OPPOR_LINE_VOQ_DEF <<
+		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
+		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
+		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
+		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
+		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
+		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
+		(QM_OPPOR_FW_STOP_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
+		(QM_OPPOR_PQ_EMPTY_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
-	/* enable/disable PF RL */
+
+	/* Enable/disable PF RL */
 	ecore_enable_pf_rl(p_hwfn, pf_rl_en);
-	/* enable/disable PF WFQ */
+
+	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
-	/* enable/disable VPORT RL */
+
+	/* Enable/disable VPORT RL */
 	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
-	/* enable/disable VPORT WFQ */
+
+	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
-	/* init PBF CMDQ line credit */
+
+	/* Init PBF CMDQ line credit */
 	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
-	/* init BTB blocks in PBF */
+
+	/* Init BTB blocks in PBF */
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
+
 	return 0;
 }
 
@@ -651,66 +762,86 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
+	u32 other_mem_size_4kb;
 	u8 tc, i;
-	u32 other_mem_size_4kb =
-	    QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF;
-	/* clear first Tx PQ ID array for each VPORT */
+
+	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
+			     QM_OTHER_PQS_PER_PF;
+
+	/* Clear first Tx PQ ID array for each VPORT */
 	for (i = 0; i < num_vports; i++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
-	/* map Other PQs (if any) */
+
+	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
 	ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids,
 				   num_tids, 0);
 #endif
-	/* map Tx PQs */
+
+	/* Map Tx PQs */
 	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id,
 				max_phys_tcs_per_port, is_first_pf, num_pf_cids,
 				num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
 				start_vport, other_mem_size_4kb, pq_params,
 				vport_params);
-	/* init PF WFQ */
+
+	/* Init PF WFQ */
 	if (pf_wfq)
 		if (ecore_pf_wfq_rt_init
 		    (p_hwfn, port_id, pf_id, pf_wfq, max_phys_tcs_per_port,
-		     num_pf_pqs + num_vf_pqs, pq_params) != 0)
+		     num_pf_pqs + num_vf_pqs, pq_params))
 			return -1;
-	/* init PF RL */
-	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl) != 0)
+
+	/* Init PF RL */
+	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
-	/* set VPORT WFQ */
-	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params) != 0)
+
+	/* Set VPORT WFQ */
+	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
-	/* set VPORT RL */
+
+	/* Set VPORT RL */
 	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, vport_params) != 0)
+	    (p_hwfn, start_vport, num_vports, vport_params))
 		return -1;
+
 	return 0;
 }
 
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq)
 {
-	u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	u32 inc_val;
+
+	inc_val = QM_WFQ_INC_VAL(pf_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+
 	return 0;
 }
 
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -718,20 +849,25 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
 {
+	u16 vport_pq_id;
+	u32 inc_val;
 	u8 tc;
-	u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
+
+	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT WFQ weight configuration");
+			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		u16 vport_pq_id = first_tx_pq_id[tc];
+		vport_pq_id = first_tx_pq_id[tc];
 		if (vport_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
 				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
 		}
 	}
+
 	return 0;
 }
 
@@ -739,20 +875,24 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl)
 {
 	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+
 	if (vport_id >= max_qm_global_rls) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
+
 	inc_val = QM_RL_INC_VAL(vport_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration");
+			  "Invalid VPORT rate-limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -762,15 +902,20 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
 {
 	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
-	u32 pq_mask = 0, last_pq = start_pq + num_pqs - 1, pq_id;
-	/* set command's PQ type */
+	u32 pq_mask = 0, last_pq, pq_id;
+
+	last_pq = start_pq + num_pqs - 1;
+
+	/* Set command's PQ type */
 	QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PQ_TYPE, is_tx_pq ? 0 : 1);
-	/* go over requested PQs */
+
+	/* Go over requested PQs */
 	for (pq_id = start_pq; pq_id <= last_pq; pq_id++) {
-		/* set PQ bit in mask (stop command only) */
+		/* Set PQ bit in mask (stop command only) */
 		if (!is_release_cmd)
 			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));
-		/* if last PQ or end of PQ mask, write command */
+
+		/* If last PQ or end of PQ mask, write command */
 		if ((pq_id == last_pq) ||
 		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
 		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
@@ -785,68 +930,92 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			pq_mask = 0;
 		}
 	}
+
 	return true;
 }
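
To illustrate the 32-bit mask windowing in ecore_send_qm_stop_cmd()
(example PQ range only):

/* Stopping PQs 30..34 issues two commands:
 *   window 0: bits 30-31 set, flushed at pq_id 31 (last bit of the
 *             32-wide mask window);
 *   window 1: bits 0-2 set for PQs 32..34, flushed at last_pq (34).
 */
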
 
+
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
 #define NIG_LB_ETS_CLIENT_OFFSET	1
 #define NIG_ETS_MIN_WFQ_BYTES		1600
+
 /* NIG: ETS constants */
 #define NIG_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
 /* NIG: RL constants */
-#define NIG_RL_BASE_TYPE			1	/* byte base type */
-#define NIG_RL_PERIOD				1	/* in us */
+
+/* Byte base type value */
+#define NIG_RL_BASE_TYPE		1
+
+/* Period in us */
+#define NIG_RL_PERIOD			1
+
+/* Period in 25MHz cycles */
 #define NIG_RL_PERIOD_CLK_25M		(25 * NIG_RL_PERIOD)
+
+/* Rate in mbps */
 #define NIG_RL_INC_VAL(rate)		(((rate) * NIG_RL_PERIOD) / 8)
+
 #define NIG_RL_MAX_VAL(inc_val, mtu) \
-(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+
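[Note: since NIG_RL_PERIOD is 1 us and rates are given in Mbps,
NIG_RL_INC_VAL(rate) is simply the number of bytes credited per period
(rate Mbps = rate/8 bytes per microsecond). A quick standalone check,
re-declaring the two macros above for illustration only:]

#include <assert.h>

#define NIG_RL_PERIOD		1	/* us */
#define NIG_RL_INC_VAL(rate)	(((rate) * NIG_RL_PERIOD) / 8)

int main(void)
{
	/* A 10000 Mbps rate yields 1250 bytes of credit per 1 us period */
	assert(NIG_RL_INC_VAL(10000) == 1250);
	return 0;
}
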
 /* NIG: packet priority configuration constants */
-#define NIG_PRIORITY_MAP_TC_BITS 4
+#define NIG_PRIORITY_MAP_TC_BITS	4
+
+
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct init_ets_req *req, bool is_lb)
 {
-	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u8 num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	u8 tc_client_offset =
-	    is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_weight_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-	u32 tc_bound_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
+	u32 tc_bound_base_addr, tc_bound_addr_diff;
+	u8 sp_tc_map = 0, wfq_tc_map = 0;
+	u8 tc, num_tc, tc_client_offset;
+
+	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
+				   NIG_TX_ETS_CLIENT_OFFSET;
+	min_weight = 0xffffffff;
+	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < num_tc; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
-	/* write SP map */
+
+	/* Write SP map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
 		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
 		 (sp_tc_map << tc_client_offset));
-	/* write WFQ map */
+
+	/* Write WFQ map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
 		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
@@ -854,22 +1023,23 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	/* write WFQ weights */
 	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_weight_base_addr +
-				 tc_weight_addr_diff * tc_client_offset,
-				 byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_bound_base_addr +
-				 tc_bound_addr_diff * tc_client_offset,
-				 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
+			 tc_weight_addr_diff * tc_client_offset, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
+			 tc_bound_addr_diff * tc_client_offset,
+			 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
 	}
 }
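
[Note: the weight translation above normalizes the smallest configured WFQ
weight to NIG_ETS_MIN_WFQ_BYTES (1600 bytes) and scales the rest
proportionally; the upper bound is twice the larger of the byte weight and
the MTU. A small sketch with made-up weights:]

#include <stdio.h>

#define NIG_ETS_MIN_WFQ_BYTES		1600
#define NIG_ETS_UP_BOUND(weight, mtu) \
	(2 * ((weight) > (mtu) ? (weight) : (mtu)))

int main(void)
{
	unsigned int weights[2] = { 1, 3 };	/* illustrative TC weights */
	unsigned int min_weight = 1, mtu = 1500, i;

	for (i = 0; i < 2; i++) {
		unsigned int byte_weight =
		    (NIG_ETS_MIN_WFQ_BYTES * weights[i]) / min_weight;

		/* Prints 1600/3200 for weight 1 and 4800/9600 for weight 3 */
		printf("weight %u -> %u bytes, upper bound %u\n",
		       weights[i], byte_weight,
		       NIG_ETS_UP_BOUND(byte_weight, mtu));
	}
	return 0;
}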
 
@@ -877,16 +1047,18 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  struct init_nig_lb_rl_req *req)
 {
-	u8 tc;
 	u32 ctrl, inc_val, reg_offset;
-	/* disable global MAC+LB RL */
+	u8 tc;
+
+	/* Disable global MAC+LB RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global MAC+LB RL */
+
+	/* Configure and enable global MAC+LB RL */
 	if (req->lb_mac_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
@@ -894,20 +1066,23 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 <<
 		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 	}
-	/* disable global LB-only RL */
+
+	/* Disable global LB-only RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global LB-only RL */
+
+	/* Configure and enable global LB-only RL */
 	if (req->lb_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_rate);
@@ -915,41 +1090,41 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 	}
-	/* per-TC RLs */
+
+	/* Per-TC RLs */
 	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
 	     tc++, reg_offset += 4) {
-		/* disable TC RL */
+		/* Disable TC RL */
 		ctrl =
 		    NIG_RL_BASE_TYPE <<
 		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-		/* configure and enable TC RL */
-		if (req->tc_rate[tc]) {
-			/* configure */
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
-				 reg_offset, NIG_RL_PERIOD_CLK_25M);
-			inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-				 reg_offset, inc_val);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
-				 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-			/* enable */
-			ctrl |=
-			    1 <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset,
-				 ctrl);
-		}
+
+		/* Configure and enable TC RL */
+		if (!req->tc_rate[tc])
+			continue;
+
+		/* Configure */
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
+			 reg_offset, NIG_RL_PERIOD_CLK_25M);
+		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
+			 reg_offset, inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
+			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
+
+		/* Enable */
+		ctrl |= 1 <<
+			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
+			 reg_offset, ctrl);
 	}
 }
 
@@ -957,20 +1132,23 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct init_nig_pri_tc_map_req *req)
 {
-	u8 pri, tc;
-	u32 pri_tc_mask = 0;
 	u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
+	u32 pri_tc_mask = 0;
+	u8 pri, tc;
+
 	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
-		if (req->pri[pri].valid) {
-			pri_tc_mask |=
-			    (req->pri[pri].
-			     tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS));
-			tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
-		}
+		if (!req->pri[pri].valid)
+			continue;
+
+		pri_tc_mask |= (req->pri[pri].tc_id <<
+				(pri * NIG_PRIORITY_MAP_TC_BITS));
+		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
 	}
-	/* write priority -> TC mask */
+
+	/* Write priority -> TC mask */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-	/* write TC -> priority mask */
+
+	/* Write TC -> priority mask */
 	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
 			 tc_pri_mask[tc]);
@@ -979,110 +1157,133 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
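
[Note: the register value built above packs one 4-bit TC id per VLAN
priority (NIG_PRIORITY_MAP_TC_BITS = 4), so all eight priorities fit in a
single 32-bit write; tc_pri_mask is the reverse map. A sketch of the
packing, with an illustrative priority-to-TC table:]

#include <stdint.h>
#include <stdio.h>

#define NIG_PRIORITY_MAP_TC_BITS	4
#define NUM_OF_VLAN_PRIORITIES		8

int main(void)
{
	uint8_t pri_to_tc[NUM_OF_VLAN_PRIORITIES] = { 0, 0, 1, 1, 2, 2, 3, 3 };
	uint32_t pri_tc_mask = 0;
	unsigned int pri;

	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++)
		pri_tc_mask |= (uint32_t)pri_to_tc[pri] <<
			       (pri * NIG_PRIORITY_MAP_TC_BITS);

	/* One nibble per priority, lowest priority first: 0x33221100 */
	printf("pri_tc_mask = 0x%08x\n", (unsigned int)pri_tc_mask);
	return 0;
}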
 
+
 /* PRS: ETS configuration constants */
-#define PRS_ETS_MIN_WFQ_BYTES			1600
+#define PRS_ETS_MIN_WFQ_BYTES		1600
 #define PRS_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
+
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_ets_req *req)
 {
+	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
 	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
+	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
+			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
+	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
+			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
+
 	/* write SP map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
+
 	/* write WFQ map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
 		 wfq_tc_map);
+
 	/* write WFQ weights */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 +
-				 tc * tc_weight_addr_diff, byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-				 tc * tc_bound_addr_diff,
-				 PRS_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
+			 tc_weight_addr_diff, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
+			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
+								   req->mtu));
 	}
 }
 
+
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
 #define BRB_TOTAL_RAM_BLOCKS_K2	5632
-#define BRB_BLOCK_SIZE			128	/* in bytes */
+#define BRB_BLOCK_SIZE		128
 #define BRB_MIN_BLOCKS_PER_TC	9
-#define BRB_HYST_BYTES			10240
-#define BRB_HYST_BLOCKS			(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-/*
- * temporary big RAM allocation - should be updated
- */
+#define BRB_HYST_BYTES		10240
+#define BRB_HYST_BLOCKS		(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
+
+/* Temporary big RAM allocation - should be updated */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
 {
-	u8 port, active_ports = 0;
+	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
 	u32 active_port_blocks, reg_offset = 0;
-	u32 tc_headroom_blocks =
-	    (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE);
-	u32 min_pkt_size_blocks =
-	    (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE);
-	u32 total_blocks =
-	    ECORE_IS_K2(p_hwfn->
-			p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
-	    BRB_TOTAL_RAM_BLOCKS_BB;
-	/* find number of active ports */
+	u8 port, active_ports = 0;
+
+	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
+					       BRB_BLOCK_SIZE);
+	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
+						BRB_BLOCK_SIZE);
+	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
+						    BRB_TOTAL_RAM_BLOCKS_BB;
+
+	/* Find number of active ports */
 	for (port = 0; port < MAX_NUM_PORTS; port++)
 		if (req->num_active_tcs[port])
 			active_ports++;
+
 	active_port_blocks = (u32)(total_blocks / active_ports);
+
 	for (port = 0; port < req->max_ports_per_engine; port++) {
-		/* calculate per-port sizes */
-		u32 tc_guaranteed_blocks =
-		    (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE);
-		u32 port_blocks =
-		    req->num_active_tcs[port] ? active_port_blocks : 0;
-		u32 port_guaranteed_blocks =
-		    req->num_active_tcs[port] * tc_guaranteed_blocks;
-		u32 port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		u32 full_xoff_th =
-		    req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC;
-		u32 full_xon_th = full_xoff_th + min_pkt_size_blocks;
-		u32 pause_xoff_th = tc_headroom_blocks;
-		u32 pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+		u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
+		u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
+		u32 tc_guaranteed_blocks;
 		u8 tc;
-		/* init total size per port */
+
+		/* Calculate per-port sizes */
+		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
+							 BRB_BLOCK_SIZE);
+		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
+							  0;
+		port_guaranteed_blocks = req->num_active_tcs[port] *
+					 tc_guaranteed_blocks;
+		port_shared_blocks = port_blocks - port_guaranteed_blocks;
+		full_xoff_th = req->num_active_tcs[port] *
+			       BRB_MIN_BLOCKS_PER_TC;
+		full_xon_th = full_xoff_th + min_pkt_size_blocks;
+		pause_xoff_th = tc_headroom_blocks;
+		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+
+		/* Init total size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
 			 port_blocks);
-		/* init shared size per port */
+
+		/* Init shared size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
 			 port_shared_blocks);
+
 		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
-			/* clear init values for non-active TCs */
+			/* Clear init values for non-active TCs */
 			if (tc == req->num_active_tcs[port]) {
 				tc_guaranteed_blocks = 0;
 				full_xoff_th = 0;
@@ -1090,15 +1291,18 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 				pause_xoff_th = 0;
 				pause_xon_th = 0;
 			}
-			/* init guaranteed size per TC */
+
+			/* Init guaranteed size per TC */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
 				 tc_guaranteed_blocks);
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
 				 BRB_HYST_BLOCKS);
-/* init pause/full thresholds per physical TC - for loopback traffic */
 
+			/* Init pause/full thresholds per physical TC - for
+			 * loopback traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1111,7 +1315,10 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
-/* init pause/full thresholds per physical TC - for main traffic */
+
+			/* Init pause/full thresholds per physical TC - for
+			 * main traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1128,23 +1335,25 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
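
[Note: the BRB sizing above works in 128-byte blocks: byte quantities are
rounded up with DIV_ROUND_UP and the total RAM is split evenly across
active ports, with per-TC guaranteed space carved out of each port's
share. A worked sketch, re-declaring DIV_ROUND_UP with illustrative
values:]

#include <stdio.h>

#define BRB_BLOCK_SIZE		128	/* bytes */
#define DIV_ROUND_UP(x, y)	(((x) + (y) - 1) / (y))

int main(void)
{
	unsigned int headroom_per_tc = 10000;	/* bytes, illustrative */
	unsigned int total_blocks = 4800;	/* BB total from above */
	unsigned int active_ports = 2;

	/* 10000 bytes round up to 79 blocks of 128 bytes */
	printf("headroom blocks: %u\n",
	       DIV_ROUND_UP(headroom_per_tc, BRB_BLOCK_SIZE));
	printf("blocks per port: %u\n", total_blocks / active_ports);
	return 0;
}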
 
-/*In MF should be called once per engine to set EtherType of OuterTag*/
+/* In MF should be called once per engine to set EtherType of OuterTag */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update NIG register */
+
+	/* Update NIG register */
 	STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update PBF register */
+
+	/* Update PBF register */
 	STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
 }
 
-/*In MF should be called once per port to set EtherType of OuterTag*/
+/* In MF should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update DORQ register */
+	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
@@ -1154,11 +1363,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
 }
 
@@ -1166,23 +1377,26 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt, bool vxlan_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 			   PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT,
 			   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 				   NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT,
 				   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ register */
+
+	/* Update DORQ register */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN,
 		 vxlan_enable ? 1 : 0);
 }
@@ -1192,7 +1406,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  bool eth_gre_enable, bool ip_gre_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1202,10 +1417,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1214,7 +1430,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ registers */
+
+	/* Update DORQ registers */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN,
 		 eth_gre_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN,
@@ -1224,11 +1441,13 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port);
 }
 
@@ -1237,7 +1456,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable, bool ip_geneve_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT,
@@ -1247,37 +1467,44 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		   ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
 		 eth_geneve_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE,
 		 ip_geneve_enable ? 1 : 0);
-	/* EDPM with geneve tunnel not supported in BB_B0 */
+
+	/* EDPM with geneve tunnel not supported in BB */
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev))
 		return;
-	/* update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN,
+
+	/* Update DORQ registers */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
 		 ip_geneve_enable ? 1 : 0);
 }
 
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
-#define PARSER_ETH_CONN_CM_HDR (0x0)
+#define PARSER_ETH_CONN_CM_HDR 0
 #define CAM_LINE_SIZE sizeof(u32)
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
-	/* set RFS event ID to be awakened i Tstorm By Prs */
-	u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+	u32 rfs_cm_hdr_event_id;
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
+	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
 	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
@@ -1298,39 +1525,48 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct gft_ram_line ramLine;
 	u32 *ramLinePointer = (u32 *)&ramLine;
 	int i;
+
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - ipv4 or ipv6");
+
 	if (!tcp && !udp)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - udp or tcp");
-	/* set RFS event ID to be awakened i Tstorm By Prs */
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
 	rfs_cm_hdr_event_id |=  T_ETH_PACKET_MATCH_RFS_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |=  PARSER_ETH_CONN_CM_HDR <<
 	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+
 	/* Configure Registers for RFS mode */
-/* enable gft search */
+
+	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 	/* Do not load context, load only cid in PRS on match */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0);
 	camLine.cam_line_mapped.camline = 0;
-	/* cam line is now valid!! */
+
+	/* Cam line is now valid!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_VALID, 1);
-	/* filters are per PF!! */
+
+	/* Filters are per PF!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1);
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
+
 	if (!(tcp && udp)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1);
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
 		if (tcp)
 			SET_FIELD(camLine.cam_line_mapped.camline,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
@@ -1340,6 +1576,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
 				  GFT_PROFILE_UDP_PROTOCOL);
 	}
+
 	if (!(ipv4 && ipv6)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
 			  GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1);
@@ -1352,44 +1589,53 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_IP_VERSION,
 				  GFT_PROFILE_IPV6);
 	}
-	/* write characteristics to cam */
+
+	/* Write characteristics to cam */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
 	    camLine.cam_line_mapped.camline);
 	camLine.cam_line_mapped.camline =
 	    ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
-	/* write line to RAM - compare to filter 4 tuple */
-	ramLine.low32bits = 0;
-	ramLine.high32bits = 0;
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1);
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1);
-	/* each iteration write to reg */
+
+	/* Write line to RAM - compare to filter 4 tuple */
+	ramLine.lo = 0;
+	ramLine.hi = 0;
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_DST_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_SRC_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_SRC_PORT, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_DST_PORT, 1);
+
+	/* Each iteration write to reg */
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * pf_id +
 			 i * REG_SIZE, *(ramLinePointer + i));
-	/* set default profile so that no filter match will happen */
-	ramLine.low32bits = 0xffff;
-	ramLine.high32bits = 0xffff;
+
+	/* Set default profile so that no filter match will happen */
+	ramLine.lo = 0xffff;
+	ramLine.hi = 0xffff;
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH +
 			 i * REG_SIZE, *(ramLinePointer + i));
 }
 
-/* Configure VF zone size mode*/
+/* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
 				    bool runtime_init)
 {
 	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
 	u32 msdm_vf_offset_mask;
+
 	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
 		msdm_vf_size_log += 1;
 	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
 		msdm_vf_size_log += 2;
+
 	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
+
 	if (runtime_init) {
 		STORE_RT_REG(p_hwfn,
 			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
@@ -1405,12 +1651,13 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* get mstorm statistics for offset by VF zone size mode*/
+/* Get mstorm statistics for offset by VF zone size mode */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id,
 				       u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
+
 	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
 	    (stat_cnt_id > MAX_NUM_PFS)) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
@@ -1420,16 +1667,18 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 			    (stat_cnt_id - MAX_NUM_PFS);
 	}
+
 	return offset;
 }
 
-/* get mstorm VF producer offset by VF zone size mode*/
+/* Get mstorm VF producer offset by VF zone size mode */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 					 u8 vf_id,
 					 u8 vf_queue_id,
 					 u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
+
 	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
@@ -1438,5 +1687,166 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 				  vf_id;
 	}
+
 	return offset;
 }
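
[Note: in the non-default zone size modes each VF zone grows by one
(double) or three (quad) extra default-size zones, so offsets for a given
vf_id shift by that many default zones per preceding VF. A sketch of the
adjustment; the size log below is a placeholder, the real value comes from
MSTORM_VF_ZONE_DEFAULT_SIZE_LOG in the FW headers:]

#include <stdio.h>

#define ZONE_DEFAULT_SIZE_LOG	7	/* placeholder, illustration only */

/* mode_mult is 1 for double mode, 3 for quad mode */
static unsigned int extra_offset(unsigned int mode_mult, unsigned int vf_id)
{
	return mode_mult * (1u << ZONE_DEFAULT_SIZE_LOG) * vf_id;
}

int main(void)
{
	printf("double mode, vf 5: +%u bytes\n", extra_offset(1, 5));
	printf("quad mode,   vf 5: +%u bytes\n", extra_offset(3, 5));
	return 0;
}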
+
+/* Calculate CRC8 of first 4 bytes in buf */
+static u8 ecore_calc_crc8(const u8 *buf)
+{
+	u32 i, j, crc = 0xff << 8;
+
+	/* CRC-8 polynomial */
+	#define POLY 0x1070
+
+	for (j = 0; j < 4; j++, buf++) {
+		crc ^= (*buf << 8);
+		for (i = 0; i < 8; i++) {
+			if (crc & 0x8000)
+				crc ^= (POLY << 3);
+
+			crc <<= 1;
+		}
+	}
+
+	return (u8)(crc >> 8);
+}
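
[Note: for reference, the routine above is a plain bitwise CRC-8 over four
bytes with polynomial 0x1070 kept left-aligned in a 16-bit window; the
result is read from bits [15:8]. A standalone copy that can be compiled
and run offline to sanity-check values; the sample input is arbitrary:]

#include <stdint.h>
#include <stdio.h>

/* Standalone copy of ecore_calc_crc8() above, for offline verification */
static uint8_t calc_crc8(const uint8_t *buf)
{
	uint32_t i, j, crc = 0xff << 8;

	for (j = 0; j < 4; j++, buf++) {
		crc ^= (uint32_t)(*buf) << 8;
		for (i = 0; i < 8; i++) {
			if (crc & 0x8000)
				crc ^= (0x1070 << 3);
			crc <<= 1;
		}
	}
	return (uint8_t)(crc >> 8);
}

int main(void)
{
	const uint8_t sample[4] = { 0x12, 0x34, 0x56, 0x78 };

	printf("crc8 = 0x%02x\n", (unsigned int)calc_crc8(sample));
	return 0;
}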
+
+/* Calculate and return CDU validation byte per connection type / region /
+ * cid
+ */
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region,
+					 u32 cid)
+{
+	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
+	u8 crc, validation_byte = 0;
+	u32 validation_string = 0;
+	const u8 *data_to_crc_rev;
+	u8 data_to_crc[4];
+
+	data_to_crc_rev = (const u8 *)&validation_string;
+
+	/*
+	 * The CRC is calculated on the String-to-compress:
+	 * [31:8]  = {CID[31:20],CID[11:0]}
+	 * [7:4]   = Region
+	 * [3:0]   = Type
+	 */
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+		validation_string |= ((region & 0xF) << 4);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+		validation_string |= (conn_type & 0xF);
+
+	/* Convert to big-endian (ntoh()) */
+	data_to_crc[0] = data_to_crc_rev[3];
+	data_to_crc[1] = data_to_crc_rev[2];
+	data_to_crc[2] = data_to_crc_rev[1];
+	data_to_crc[3] = data_to_crc_rev[0];
+
+	crc = ecore_calc_crc8(data_to_crc);
+
+	validation_byte |= ((validation_cfg >>
+			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
+
+	if ((validation_cfg >>
+	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+	else
+		validation_byte |= crc & 0x7F;
+
+	return validation_byte;
+}
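
[Note: the string-to-compress layout documented in the function is easy to
verify in isolation: bits [31:8] carry {CID[31:20], CID[11:0]}, [7:4] the
region and [3:0] the type. A sketch of just the packing step, with names
local to this sketch:]

#include <stdint.h>
#include <stdio.h>

static uint32_t pack_validation_string(uint32_t cid, uint8_t region,
				       uint8_t conn_type)
{
	uint32_t s = 0;

	s |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);	/* [31:8] */
	s |= (uint32_t)(region & 0xF) << 4;		/* [7:4] */
	s |= conn_type & 0xF;				/* [3:0] */
	return s;
}

int main(void)
{
	/* CID 0x12345678 keeps 0x123 (top 12 bits) and 0x678 (bottom 12) */
	printf("0x%08x\n",
	       (unsigned int)pack_validation_string(0x12345678, 3, 1));
	return 0;
}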
+
+/* Calculate and set validation bytes for session context */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+}
+
+/* Calculate and set validation bytes for task context */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
+{
+	u8 *p_ctx, *region1_val_ptr;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+}
+
+/* Memset session context to 0 while preserving validation bytes */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+	u8 x_val, t_val, u_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	x_val = *x_val_ptr;
+	t_val = *t_val_ptr;
+	u_val = *u_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = x_val;
+	*t_val_ptr = t_val;
+	*u_val_ptr = u_val;
+}
+
+/* Memset task context to 0 while preserving validation bytes */
+void ecore_memset_task_ctx(void *p_ctx_mem, const u32 ctx_size,
+			   const u8 ctx_type)
+{
+	u8 *p_ctx, *region1_val_ptr;
+	u8 region1_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	region1_val = *region1_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = region1_val;
+}
+
+/* Enable and configure context validation */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	u32 ctx_validation;
+
+	/* Enable validation for connection region 3 - bits [31:24] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
+
+	/* Enable validation for connection region 5 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
+
+	/* Enable validation for connection region 1 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
+}
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 9df0e7d..2d1ab7c 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -8,20 +8,22 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* forward declarations */
+/* Forward declarations */
+
 struct init_qm_pq_params;
+
 /**
- * @brief ecore_qm_pf_mem_size - prepare QM ILT sizes
+ * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes
  *
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
- * @param pf_id			- physical function ID
- * @param num_pf_cids	- number of connections used by this PF
- * @param num_vf_cids	- number of connections used by VFs of this PF
- * @param num_tids		- number of tasks used by this PF
- * @param num_pf_pqs	- number of PQs used by this PF
- * @param num_vf_pqs	- number of PQs used by VFs of this PF
+ * @param pf_id -	physical function ID
+ * @param num_pf_cids - number of connections used by this PF
+ * @param num_vf_cids -	number of connections used by VFs of this PF
+ * @param num_tids -	number of tasks used by this PF
+ * @param num_pf_pqs -	number of PQs used by this PF
+ * @param num_vf_pqs -	number of PQs used by VFs of this PF
  *
  * @return The required host memory size in 4KB units.
  */
@@ -31,6 +33,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
 						 u16 num_vf_pqs);
+
 /**
  * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
  *                                  phase
@@ -38,10 +41,10 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
  * @param p_hwfn
  * @param max_ports_per_engine	- max number of ports per engine in HW
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param pf_rl_en				- enable per-PF rate limiters
- * @param pf_wfq_en				- enable per-PF WFQ
- * @param vport_rl_en			- enable per-VPORT rate limiters
- * @param vport_wfq_en			- enable per-VPORT WFQ
+ * @param pf_rl_en		- enable per-PF rate limiters
+ * @param pf_wfq_en		- enable per-PF WFQ
+ * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param vport_wfq_en		- enable per-VPORT WFQ
  * @param port_params - array of size MAX_NUM_PORTS with params for each port
  *
  * @return 0 on success, -1 on error.
@@ -54,22 +57,24 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			 bool vport_rl_en,
 			 bool vport_wfq_en,
 			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
  *
  * @param p_hwfn
  * @param p_ptt			- ptt window used for writing the registers
- * @param port_id				- port ID
- * @param pf_id					- PF ID
+ * @param port_id		- port ID
+ * @param pf_id			- PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param is_first_pf			- 1 = first PF in engine, 0 = othwerwise
- * @param num_pf_cids			- number of connections used by this PF
+ * @param is_first_pf		- 1 = first PF in engine, 0 = otherwise
+ * @param num_pf_cids		- number of connections used by this PF
  * @param num_vf_cids		- number of connections used by VFs of this PF
- * @param num_tids			- number of tasks used by this PF
- * @param start_pq			- first Tx PQ ID associated with this PF
- * @param num_pf_pqs	- number of Tx PQs associated with this PF (non-VF)
- * @param num_vf_pqs			- number of Tx PQs associated with a VF
- * @param start_vport			- first VPORT ID associated with this PF
+ * @param num_tids		- number of tasks used by this PF
+ * @param start_pq		- first Tx PQ ID associated with this PF
+ * @param num_pf_pqs		- number of Tx PQs associated with this PF
+ *                                (non-VF)
+ * @param num_vf_pqs		- number of Tx PQs associated with a VF
+ * @param start_vport		- first VPORT ID associated with this PF
  * @param num_vports - number of VPORTs associated with this PF
  * @param pf_wfq - WFQ weight. If PF WFQ is globally disabled, the weight must
  *		   be 0. Otherwise, the weight must be non-zero.
@@ -100,6 +105,7 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 				u32 pf_rl,
 				struct init_qm_pq_params *pq_params,
 				struct init_qm_vport_params *vport_params);
+
 /**
  * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
  *
@@ -114,11 +120,12 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u8 pf_id,
 					  u16 pf_wfq);
+
 /**
- * @brief ecore_init_pf_rl  Initializes the rate limit of the specified PF
+ * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF
  *
  * @param p_hwfn
- * @param p_ptt	- ptt window used for writing the registers
+ * @param p_ptt - ptt window used for writing the registers
  * @param pf_id	- PF ID
  * @param pf_rl	- rate limit in Mb/sec units
  *
@@ -128,6 +135,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 u8 pf_id,
 					 u32 pf_rl);
+
 /**
  * @brief ecore_init_vport_wfq  Initializes the WFQ weight of specified VPORT
  *
@@ -144,10 +152,12 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
 						 u16 vport_wfq);
+
 /**
- * @brief ecore_init_vport_rl  Initializes the rate limit of the specified VPORT
+ * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
+ * VPORT.
  *
- * @param p_hwfn
+ * @param p_hwfn	- HW device data
  * @param p_ptt		- ptt window used for writing the registers
  * @param vport_id	- VPORT ID
  * @param vport_rl	- rate limit in Mb/sec units
@@ -158,6 +168,7 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u8 vport_id,
 						u32 vport_rl);
+
 /**
  * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
  *
@@ -178,6 +189,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 							u16 start_pq,
 							u16 num_pqs);
 #ifndef UNUSED_HSI_FUNC
+
 /**
  * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
  *
@@ -193,6 +205,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req,
 						bool is_lb);
+
 /**
  * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
  *
@@ -205,6 +218,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
 				  struct init_nig_lb_rl_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
  *
@@ -216,6 +230,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt,
 					   struct init_nig_pri_tc_map_req *req);
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
@@ -229,6 +244,7 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
@@ -242,6 +258,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_brb_ram_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
@@ -250,22 +267,24 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
  *                                             if engine is in BD mode.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
+
 /**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
  *                                           input ethType. Should be called
  *                                           once per port.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
  *                                    port
@@ -276,15 +295,17 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       u16 dest_port);
+
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param vxlan_enable - vxlan enable flag.
+ * @param p_ptt		- ptt window used for writing the registers.
+ * @param vxlan_enable	- vxlan enable flag.
  */
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool vxlan_enable);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -296,6 +317,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  bool eth_gre_enable,
 			  bool ip_gre_enable);
+
 /**
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
@@ -306,6 +328,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
 				u16 dest_port);
+
 /**
  * @brief ecore_set_geneve_enable - enable or disable GENEVE tunnel in HW
  *
@@ -318,6 +341,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable,
 			     bool ip_geneve_enable);
 #ifndef UNUSED_HSI_FUNC
+
 /**
 * @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
 *
@@ -325,16 +349,16 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
+
 /**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
-*
-* @param p_ptt             - ptt window used for writing the registers.
-* @param pf_id - pf on which to enable RFS.
-* @param tcp -  set profile tcp packets.
-* @param udp -  set profile udp  packet.
-* @param ipv4 - set profile ipv4 packet.
-* @param ipv6 - set profile ipv6 packet.
+* @param p_ptt	- ptt window used for writing the registers.
+* @param pf_id	- pf on which to enable RFS.
+* @param tcp	- set profile tcp packets.
+* @param udp	- set profile udp packet.
+* @param ipv4	- set profile ipv4 packet.
+* @param ipv6	- set profile ipv6 packet.
 */
 void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt,
@@ -344,6 +368,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	bool ipv4,
 	bool ipv6);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
@@ -357,18 +382,20 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
+
 /**
-* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF
-*                                             zone size mode.
+ * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
+ * VF zone size mode.
 *
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id, u16 vf_zone_size_mode);
+
 /**
-* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
-*                                               size mode.
+ * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
+ * size mode.
 *
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
@@ -376,4 +403,58 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
 					 vf_queue_id, u16 vf_zone_size_mode);
+/**
+ * @brief ecore_enable_context_validation - Enable and configure context
+ *                                          validation.
+ *
+ * @param p_ptt - ptt window used for writing the registers.
+ */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt);
+/**
+ * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
+ *                                            session context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param cid                 -  context cid.
+ */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid);
+/**
+ * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
+ *                                         context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param tid                 -  context tid.
+ */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid);
+/**
+ * @brief ecore_memset_session_ctx - Memset session context to 0 while
+ *                                   preserving validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size,
+			      u8 ctx_type);
+/**
+ * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
+ *                                validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size,
+			   u8 ctx_type);
 #endif
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index aad9012..b4bfe89 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -185,5 +185,13 @@
 #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
 	((rdma_stat_counter_id) * IRO[46].m1))
 #define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+/* Xstorm iWARP rxmit stats */
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \
+	((pf_id) * IRO[47].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size)
+/* Tstorm RoCE Event Statistics */
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \
+	((roce_pf_id) * IRO[48].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size)
 
 #endif /* __IRO_H__ */
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 4ff7e95..6764bfa 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -9,13 +9,13 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[47] = {
+static const struct iro iro_arr[49] = {
 /* YSTORM_FLOW_CONTROL_MODE_OFFSET */
 	{      0x0,      0x0,      0x0,      0x0,      0x8},
 /* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb0,     0x78,      0x0,      0x0,     0x78},
+	{   0x4cb0,     0x80,      0x0,      0x0,     0x80},
 /* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6318,     0x20,      0x0,      0x0,     0x20},
+	{   0x6518,     0x20,      0x0,      0x0,     0x20},
 /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
 	{    0xb00,      0x8,      0x0,      0x0,      0x4},
 /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
@@ -41,7 +41,7 @@ static const struct iro iro_arr[47] = {
 /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
 	{    0xa28,      0x8,      0x0,      0x0,      0x8},
 /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x60f8,     0x10,      0x0,      0x0,     0x10},
+	{   0x61f8,     0x10,      0x0,      0x0,     0x10},
 /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
 	{   0xb820,     0x30,      0x0,      0x0,     0x30},
 /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
@@ -53,7 +53,7 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
 	{   0x53a0,     0x80,      0x4,      0x0,      0x4},
 /* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc8f0,      0x0,      0x0,      0x0,      0x4},
+	{   0xc7c8,      0x0,      0x0,      0x0,      0x4},
 /* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
 	{   0x4ba0,     0x80,      0x0,      0x0,     0x20},
 /* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
@@ -63,13 +63,13 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
 	{   0x2b48,     0x80,      0x0,      0x0,     0x38},
 /* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xf188,     0x78,      0x0,      0x0,     0x78},
+	{   0xf1b0,     0x78,      0x0,      0x0,     0x78},
 /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
 	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
 /* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xacf0,      0x0,      0x0,      0x0,     0xf0},
+	{   0xaef8,      0x0,      0x0,      0x0,     0xf0},
 /* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xade0,      0x8,      0x0,      0x0,      0x8},
+	{   0xafe8,      0x8,      0x0,      0x0,      0x8},
 /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
 	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
 /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
@@ -85,9 +85,9 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
 	{    0xb78,     0x10,      0x8,      0x0,      0x2},
 /* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd888,     0x38,      0x0,      0x0,     0x24},
+	{   0xd9a8,     0x38,      0x0,      0x0,     0x24},
 /* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12c38,     0x10,      0x0,      0x0,      0x8},
+	{  0x12988,     0x10,      0x0,      0x0,      0x8},
 /* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
 	{  0x11aa0,     0x38,      0x0,      0x0,     0x18},
 /* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
@@ -97,13 +97,17 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
 	{  0x101f8,     0x10,      0x0,      0x0,     0x10},
 /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xdd08,     0x48,      0x0,      0x0,     0x38},
+	{   0xde28,     0x48,      0x0,      0x0,     0x38},
 /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
 	{  0x10660,     0x20,      0x0,      0x0,     0x20},
 /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
 	{   0x2b80,     0x80,      0x0,      0x0,     0x10},
 /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5000,     0x10,      0x0,      0x0,     0x10},
+	{   0x5020,     0x10,      0x0,      0x0,     0x10},
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
+	{   0xc9b0,     0x30,      0x0,      0x0,     0x10},
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
+	{   0xeec0,     0x10,      0x0,      0x0,     0x10},
 };
 
 #endif /* __IRO_VALUES_H__ */
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 01a29e3..846dc6d 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -115,339 +115,338 @@
 #define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            28716
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
 #define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            29132
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29644
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29645
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29646
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29647
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29648
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29649
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29650
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29651
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29652
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29653
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29654
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29655
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29656
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29657
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29658
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29659
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29660
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29661
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29662
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29663
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29664
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29665
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29666
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29667
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29668
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29669
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29670
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29671
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29672
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29673
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29674
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29675
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29676
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29677
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29678
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29679
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29680
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29681
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29682
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29683
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29684
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29685
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29686
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29687
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29688
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29689
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29690
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29691
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29692
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29693
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29694
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29695
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29696
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29697
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29698
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29699
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29700
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29701
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29702
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29703
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29704
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29705
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29706
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29707
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29708
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29709
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29710
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29711
-#define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29839
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29859
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29879
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29880
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29881
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29882
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29883
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29884
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29885
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29886
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29887
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29888
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29889
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29890
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29891
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29892
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29893
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29894
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29895
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29896
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29897
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29898
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29899
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29900
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29901
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29902
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29903
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29904
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29905
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29906
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29907
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29908
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29909
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29910
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29911
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29912
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29913
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29914
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29915
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29916
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29917
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29918
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29919
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29920
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29921
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29922
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29923
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29924
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29925
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29926
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29927
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29928
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29929
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29930
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29931
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29932
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29933
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29934
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29935
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29936
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29937
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29938
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29939
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29940
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29941
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29942
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29943
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29944
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29945
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29946
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29947
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29948
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29949
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29950
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29951
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29952
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29953
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29954
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29955
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29956
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29957
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29958
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29959
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29960
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29961
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29962
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29963
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29964
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29965
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29966
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29967
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29968
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29969
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29970
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29971
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29972
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29973
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29974
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29975
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29976
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29977
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29978
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29979
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29980
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29981
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29982
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29983
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29984
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29985
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29986
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29987
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29988
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29989
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29990
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29991
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29992
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29993
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29994
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29995
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29996
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29997
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29998
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29740
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29741
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29742
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29743
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29744
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29745
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29746
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29747
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29748
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29749
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29750
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29751
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29752
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29753
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29754
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29755
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29756
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29757
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29758
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29759
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29760
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29761
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29762
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29763
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29764
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29765
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29766
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29767
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29768
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29769
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29770
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29771
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29772
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29773
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29774
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29775
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29776
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29777
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29778
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29779
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29780
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29781
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29782
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29783
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29784
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29785
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29786
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29787
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29788
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29789
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29790
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29791
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29792
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29793
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29794
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29795
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29796
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29797
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29798
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29799
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29800
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29801
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29802
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29803
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29804
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29805
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29806
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29807
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29935
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29936
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29937
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29938
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29939
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29940
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29941
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29942
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29943
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29944
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29945
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29946
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29947
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29948
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29949
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29950
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29951
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29952
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29953
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29954
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29955
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29956
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29957
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29958
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29959
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29960
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29961
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29962
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29963
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29964
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29965
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29966
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29967
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29968
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29969
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29970
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29971
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29972
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29973
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29974
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29975
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29976
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29977
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29978
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29979
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29980
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29981
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29982
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29983
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29984
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29985
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29986
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29987
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29988
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29989
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29990
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29991
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29992
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29993
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29994
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29995
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29996
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29997
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29998
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29999
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 30000
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 30001
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 30002
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 30003
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 30004
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 30005
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 30006
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 30007
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 30008
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 30009
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 30010
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 30011
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 30012
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 30013
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 30014
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 30015
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 30016
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 30017
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 30018
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 30019
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 30020
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 30021
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 30022
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 30023
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 30024
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 30025
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               30026
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               30027
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               30028
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               30029
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               30030
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               30031
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               30032
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               30033
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               30034
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               30035
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              30036
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              30037
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              30038
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              30039
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              30040
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              30041
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             30042
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             30043
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        30044
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        30045
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          30046
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          30047
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          30048
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          30049
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          30050
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          30051
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          30052
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          30053
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               30054
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30254
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30310
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30510
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30566
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30766
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30767
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30768
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30769
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30822
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30823
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30824
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30825
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30785
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30841
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    30801
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30857
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 30817
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30818
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30819
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30873
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30874
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30875
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30835
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30891
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   30851
-#define QM_REG_WFQPFCRD_RT_SIZE                                     160
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                31011
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                31012
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31013
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30907
+#define QM_REG_WFQPFCRD_RT_SIZE                                     256
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                31163
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                31164
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31165
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    31525
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31677
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32037
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32189
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   32549
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32701
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   33061
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   33213
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33573
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33733
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     33734
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     33735
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     33736
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     33737
-#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      33738
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  33739
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           33740
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33725
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 34045
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             34081
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34117
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34118
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34119
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34120
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34121
+#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      34122
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34123
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34124
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      33744
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      34128
 #define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE                        4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        33748
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34132
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           33752
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     33753
+#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           34136
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34137
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        33785
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34169
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      33801
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34185
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             33817
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34201
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   33833
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34217
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              33849
-#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    33850
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           33851
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           33852
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           33853
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       33854
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       33855
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       33856
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       33857
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    33858
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    33859
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    33860
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    33861
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        33862
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     33863
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33864
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      33865
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    33866
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       33867
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                33868
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    33869
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       33870
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                33871
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    33872
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       33873
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                33874
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    33875
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       33876
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                33877
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    33878
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       33879
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                33880
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    33881
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       33882
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                33883
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    33884
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       33885
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                33886
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    33887
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       33888
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                33889
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    33890
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       33891
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                33892
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    33893
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       33894
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                33895
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   33896
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      33897
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               33898
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   33899
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      33900
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               33901
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   33902
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      33903
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               33904
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   33905
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      33906
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               33907
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   33908
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      33909
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               33910
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   33911
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      33912
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               33913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   33914
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      33915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               33916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   33917
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      33918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               33919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   33920
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      33921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               33922
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   33923
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      33924
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               33925
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                33926
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34233
+#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    34234
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34235
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34236
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34237
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34238
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34239
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34240
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34241
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34242
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34243
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34244
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34245
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34246
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34247
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34248
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34249
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34250
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34251
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34252
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34253
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34254
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34255
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34256
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34257
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34258
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34259
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34260
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34261
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34262
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34263
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34264
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34265
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34266
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34267
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34268
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34269
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34270
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34271
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34272
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34273
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34274
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34275
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34276
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34277
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34278
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34279
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34280
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34281
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34282
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34283
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34284
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34285
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34286
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34287
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34288
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34289
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34290
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34291
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34292
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34293
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34294
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34295
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34296
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34297
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34298
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34299
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34300
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34301
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34302
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34303
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34304
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34305
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34306
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34307
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34308
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34309
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34310
 
-#define RUNTIME_ARRAY_SIZE 33927
+#define RUNTIME_ARRAY_SIZE 34311
 
 #endif /* __RT_DEFS_H__ */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index d2ebce8..6dc969b 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -182,7 +182,7 @@ struct eth_tx_1st_bd_flags {
 struct eth_tx_data_1st_bd {
 /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */
 	__le16 vlan;
-/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least
+/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least
  * 3 in LSO (or Tunnel with IPv6+ext) packet.
  */
 	u8 nbds;
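
As an aside for readers of this hunk (an illustration only, not part of
the patch): the rule in the updated comment can be expressed as a tiny
helper. The function below is hypothetical, not driver code.

	#include <stdbool.h>
	#include <stdint.h>

	/* Hypothetical helper: one BD per buffer segment, with the
	 * firmware minimums from the comment above -- at least 1 BD for
	 * a plain packet, at least 3 for LSO (or tunnel with IPv6+ext).
	 */
	static uint8_t pkt_nbds(unsigned int nr_frags, bool is_lso)
	{
		uint8_t nbds = 1 + nr_frags;

		if (is_lso && nbds < 3)
			nbds = 3;
		return nbds;
	}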
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3cc7fd4..f9920f3 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1147,3 +1147,56 @@
 
 #define IGU_REG_PRODUCER_MEMORY 0x182000UL
 #define IGU_REG_CONSUMER_MEM 0x183000UL
+
+#define CDU_REG_CCFC_CTX_VALID0 0x580400UL
+#define CDU_REG_CCFC_CTX_VALID1 0x580404UL
+#define CDU_REG_TCFC_CTX_VALID0 0x580408UL
+
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
+#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
+#define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
+#define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
+#define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
+#define NWM_REG_MAC0_K2_E5 0x800400UL
+#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
+#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
+#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
+#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
+#define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
+#define XMAC_REG_MODE_BB 0x210008UL
+#define XMAC_REG_RX_MAX_SIZE_BB  0x210040UL
+#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL
+#define XMAC_REG_CTRL_BB 0x210000UL
+#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0)
+#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1)
+#define XMAC_REG_RX_CTRL_BB 0x210030UL
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12)
+
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
+#define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
+#define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
+#define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
+#define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
+#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+
+#define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a604a5b..332b1f8 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -21,7 +21,7 @@ static uint8_t npar_tx_switching = 1;
 char fw_file[PATH_MAX];
 
 const char *QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.14.6.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.18.9.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 07/61] net/qede/base: decrease maximum HW func per device
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (6 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 06/61] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 08/61] net/qede/base: move mask constants defining NIC type Rasesh Mody
                         ` (54 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Decrease MAX_HWFNS_PER_DEVICE from 4 to 2
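
For context, a simplified sketch (illustrative structures only; the
real definitions live in ecore.h and carry many more fields) of what
this macro sizes -- the per-device array of HW-function contexts:

	#include <stdint.h>

	#define MAX_HWFNS_PER_DEVICE	2

	/* Sketch: the macro bounds the hwfns[] array embedded in each
	 * device structure, so lowering it from 4 to 2 shrinks every
	 * device instance and every loop that walks its functions.
	 */
	struct hwfn_sketch {
		uint8_t my_id;
	};

	struct dev_sketch {
		uint8_t num_hwfns;
		struct hwfn_sketch hwfns[MAX_HWFNS_PER_DEVICE];
	};

	static void init_hwfn_ids(struct dev_sketch *p_dev)
	{
		uint8_t i;

		for (i = 0; i < p_dev->num_hwfns; i++)
			p_dev->hwfns[i].my_id = i;
	}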

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b2f4910..d14f99c 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,7 +28,7 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
-#define MAX_HWFNS_PER_DEVICE	(4)
+#define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 08/61] net/qede/base: move mask constants defining NIC type
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (7 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 07/61] net/qede/base: decrease maximum HW func per device Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:27       ` [PATCH v3 09/61] net/qede/base: remove attribute from update current config Rasesh Mody
                         ` (53 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move mask constants defining NIC type to ecore.h
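
How these masks are used (a minimal sketch, simplified from the
classification done in ecore_get_dev_info() so it stands alone): the
high byte of the PCI device ID selects the NIC family.

	#include <stdint.h>

	#define ECORE_DEV_ID_MASK	0xff00
	#define ECORE_DEV_ID_MASK_BB	0x1600
	#define ECORE_DEV_ID_MASK_AH	0x8000

	enum nic_family { NIC_BB, NIC_AH, NIC_UNKNOWN };

	/* Mask off the low byte, then compare the family signature. */
	static enum nic_family classify(uint16_t device_id)
	{
		if ((device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_BB)
			return NIC_BB;
		if ((device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH)
			return NIC_AH;
		return NIC_UNKNOWN;
	}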

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    4 ++++
 drivers/net/qede/base/ecore_dev.c |    4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d14f99c..a6cf52e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -625,6 +625,10 @@ struct ecore_dev {
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
 
+#define ECORE_DEV_ID_MASK	0xff00
+#define ECORE_DEV_ID_MASK_BB	0x1600
+#define ECORE_DEV_ID_MASK_AH	0x8000
+
 	u16 vendor_id;
 	u16 device_id;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f82f5e6..ee50090 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2888,10 +2888,6 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
 }
 
-#define ECORE_DEV_ID_MASK	0xff00
-#define ECORE_DEV_ID_MASK_BB	0x1600
-#define ECORE_DEV_ID_MASK_AH	0x8000
-
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 09/61] net/qede/base: remove attribute from update current config
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (8 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 08/61] net/qede/base: move mask constants defining NIC type Rasesh Mody
@ 2017-03-24  7:27       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 10/61] net/qede/base: add nvram options Rasesh Mody
                         ` (52 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove the attribute field from the update_current_config() API; the
Management FW needs to know only the last entity that configured the
device.
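
A call-site sketch of the resulting API (it compiles only against the
driver headers; the wrapper name report_driver_config is made up for
illustration): callers now report just the client type.

	/* Hypothetical wrapper: after this patch the MFW is told only
	 * which client (driver, user, vendor) touched the device last;
	 * no enum ecore_ov_config_method argument remains.
	 */
	static enum _ecore_status_t
	report_driver_config(struct ecore_hwfn *p_hwfn,
			     struct ecore_ptt *p_ptt)
	{
		return ecore_mcp_ov_update_current_config(p_hwfn, p_ptt,
							  ECORE_OV_CLIENT_DRV);
	}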

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    5 ++---
 drivers/net/qede/base/ecore_mcp_api.h |    8 --------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index e236f39..245d478 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1709,14 +1709,13 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client)
 {
 	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
 
-	switch (config) {
+	switch (client) {
 	case ECORE_OV_CLIENT_DRV:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
 		break;
@@ -1727,7 +1726,7 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
+		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 614cf67..72a58e4 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -173,12 +173,6 @@ union ecore_mcp_protocol_stats {
 };
 #endif
 
-enum ecore_ov_config_method {
-	ECORE_OV_CONFIG_MTU,
-	ECORE_OV_CONFIG_MAC,
-	ECORE_OV_CONFIG_WOL
-};
-
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
 	ECORE_OV_CLIENT_USER,
@@ -453,7 +447,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param config - Configuation that has been updated
  *  @param client - ecore client type
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
@@ -461,7 +454,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client);
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 10/61] net/qede/base: add nvram options
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (9 preceding siblings ...)
  2017-03-24  7:27       ` [PATCH v3 09/61] net/qede/base: remove attribute from update current config Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 11/61] net/qede/base: add comment Rasesh Mody
                         ` (51 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several NVRAM options, such as MCOT, FEC selection, temperature
threshold and Reset On LAN.
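
The options land in the auto-generated header as MASK/OFFSET #define
pairs packed into 32-bit words. A standalone sketch of how such a field
is read (the raw word value is made up; only the ROL mask/offset come
from this patch):

#include <stdint.h>
#include <stdio.h>

#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK    0x80000000
#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET  31
#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1

/* Generic accessor: isolate the option's bits, then shift them down. */
static uint32_t nvm_get_field(uint32_t word, uint32_t mask, uint32_t offset)
{
	return (word & mask) >> offset;
}

int main(void)
{
	uint32_t generic_cont0 = 0x80000000; /* made-up raw NVM word */

	if (nvm_get_field(generic_cont0,
			  NVM_CFG1_GLOB_RESET_ON_LAN_MASK,
			  NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET) ==
	    NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED)
		printf("Reset-on-LAN enabled\n");
	return 0;
}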

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |  465 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 461 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 68abc2d..4202337 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,13 +13,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     9/6/2016
+ * Created:     12/15/2016
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
+#define NVM_CFG_version 0x81805
+
+#define NVM_CFG_new_option_seq 15
+
+#define NVM_CFG_removed_option_seq 0
+
+#define NVM_CFG_updated_value_seq 1
+
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
 		#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
@@ -242,6 +250,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+	/*  ROL enable */
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1
 	u32 f_lane_cfg1; /* 0x38 */
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
@@ -470,6 +483,15 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
 		#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
 		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+	/*  Select package id method */
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1
 	u32 manufacture_time; /* 0x70 */
 		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
 		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
@@ -480,6 +502,11 @@ struct nvm_cfg1_glob {
 	/*  Max MSIX for Ethernet in default mode */
 		#define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000
 		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18
+	/*  PF Mapping */
+		#define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000
+		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
+		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
+		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -489,6 +516,47 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
 		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
 		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+	/*  Max. continues operating temperature */
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16
+	/*  GPIO which triggers run-time port swap according to the map
+	 *  specified in option 205
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20
 	u32 generic_cont1; /* 0x78 */
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
@@ -508,6 +576,17 @@ struct nvm_cfg1_glob {
 	/*  PCIe Preset value - applies only if option 194 is enabled */
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19
+	/*  Port mapping to be used when the run-time GPIO for port-swap is
+	 *  defined and set.
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -515,6 +594,44 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
+	/*  If set to other than NA, 0 - Normal operation, 1 - Thermal event
+	 *  occurred
+	 */
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20
 	u32 mbi_date; /* 0x80 */
 	u32 misc_sig; /* 0x84 */
 	/*  Define the GPIO mapping to switch i2c mux */
@@ -555,6 +672,81 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+	/*  Interrupt signal used for SMBus/I2C management interface
+	 *  0 = Interrupt event occurred
+	 *  1 = Normal
+	 */
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20
+	/*  Set aLOM FAN on GPIO */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20
 	u32 device_capabilities; /* 0x88 */
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
@@ -591,11 +783,262 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
 			0x80
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100
-	u32 reserved[41]; /* 0x9C */
+	/* @DPDK */
+	u32 reserved1[12]; /* 0x9C */
+	u32 oem1_number[8]; /* 0xCC */
+	u32 oem2_number[8]; /* 0xEC */
+	u32 mps25_active_txfir_pre; /* 0x10C */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
+	u32 mps25_active_txfir_main; /* 0x110 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
+	u32 mps25_active_txfir_post; /* 0x114 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
+	u32 features; /* 0x118 */
+	/*  Set the Aux Fan on temperature  */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+	/*  Set NC-SI package ID */
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+	/*  PMBUS Clock GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+	/*  PMBUS Data GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
+	u32 tx_rx_eq_25g_hlpc; /* 0x11C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
+	u32 tx_rx_eq_25g_llpc; /* 0x120 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
+	u32 tx_rx_eq_25g_ac; /* 0x124 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
+	u32 tx_rx_eq_10g_pc; /* 0x128 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
+	u32 tx_rx_eq_10g_ac; /* 0x12C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
+	u32 tx_rx_eq_1g; /* 0x130 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
+	u32 tx_rx_eq_25g_bt; /* 0x134 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
+	u32 tx_rx_eq_10g_bt; /* 0x138 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
+	u32 generic_cont4; /* 0x13C */
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	u32 reserved[58]; /* 0x140 */
 };
 
 struct nvm_cfg1_path {
-	u32 reserved[30]; /* 0x0 */
+	u32 reserved[1]; /* 0x0 */
 };
 
 struct nvm_cfg1_port {
@@ -749,6 +1192,15 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
+		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
+		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
+		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
+		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1451,12 +1903,17 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
 	u32 reserved[8]; /* 0x30 */
 };
 
 struct nvm_cfg1 {
 	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
 	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
 	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 11/61] net/qede/base: add comment
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (10 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 10/61] net/qede/base: add nvram options Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 12/61] net/qede/base: use default MTU from shared memory Rasesh Mody
                         ` (50 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a comment explaining the endianness manipulation in
ecore_mcp_send_drv_version().
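
The loop in question copies the version string into the mailbox one
32-bit word at a time, byte-swapping each word so the MFW stores the
name big-endian. A standalone sketch of the same idiom, with htonl()
standing in for OSAL_CPU_TO_BE32():

#include <arpa/inet.h>	/* htonl() models OSAL_CPU_TO_BE32() */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NAME_SIZE 16	/* a multiple of 4, like MCP_DRV_VER_STR_SIZE - 4 */

int main(void)
{
	const char src[NAME_SIZE] = "qede 2.0.0.1";
	uint8_t dst[NAME_SIZE];
	unsigned int i;

	for (i = 0; i < NAME_SIZE / sizeof(uint32_t); i++) {
		uint32_t val;

		/* memcpy sidesteps the alignment/aliasing questions a
		 * raw *(u32 *) cast raises.
		 */
		memcpy(&val, &src[i * sizeof(uint32_t)], sizeof(val));
		val = htonl(val);	/* big-endian, as the MFW expects */
		memcpy(&dst[i * sizeof(uint32_t)], &val, sizeof(val));
	}
	printf("first byte after swap: 0x%02x\n", dst[0]);
	return 0;
}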

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 245d478..df6ebd2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1662,6 +1662,7 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	p_drv_version->version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
+		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
 		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 12/61] net/qede/base: use default MTU from shared memory
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (11 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 11/61] net/qede/base: add comment Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
                         ` (49 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Read and use the default MTU value from shared memory.
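
In outline, the function-info parser latches shmem's mtu_size and
treats zero as "not provisioned". A standalone sketch of that fallback
(the struct below is a stand-in, not the real shmem layout):

#include <stdint.h>
#include <stdio.h>

#define DEFAULT_MTU 1500u

struct shmem_func_info {	/* stand-in for the real shmem layout */
	uint32_t mtu_size;
};

static uint16_t pick_mtu(const struct shmem_func_info *info)
{
	uint16_t mtu = (uint16_t)info->mtu_size;

	return mtu ? mtu : DEFAULT_MTU;	/* zero means "not provisioned" */
}

int main(void)
{
	struct shmem_func_info unset = { .mtu_size = 0 };
	struct shmem_func_info jumbo = { .mtu_size = 9000 };

	printf("%u %u\n", pick_mtu(&unset), pick_mtu(&jumbo));
	return 0;
}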

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    2 ++
 drivers/net/qede/base/ecore_dev.c     |    3 +++
 drivers/net/qede/base/ecore_mcp.c     |    5 +++++
 drivers/net/qede/base/ecore_mcp_api.h |    2 ++
 drivers/net/qede/qede_if.h            |    1 +
 drivers/net/qede/qede_main.c          |    2 ++
 6 files changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index a6cf52e..25c96f8 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -377,6 +377,8 @@ struct ecore_hw_info {
 
 	/* Default DCBX mode */
 	u8 dcbx_mode;
+
+	u16 mtu;
 };
 
 struct ecore_hw_cid_data {
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index ee50090..87c1c23 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2879,6 +2879,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_get_num_funcs(p_hwfn, p_ptt);
 
+	if (ecore_mcp_is_init(p_hwfn))
+		p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu;
+
 	/* In case of forcing the driver's default resource allocation, calling
 	 * ecore_hw_get_resc() should come after initializing the personality
 	 * and after getting the number of functions, since the calculation of
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index df6ebd2..8720ae7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1431,6 +1431,11 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
+	info->mtu = (u16)shmem_info.mtu_size;
+
+	if (info->mtu == 0)
+		info->mtu = 1500;
+
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 72a58e4..1be22dd 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -84,6 +84,8 @@ struct ecore_mcp_function_info {
 
 #define ECORE_MCP_VLAN_UNSET		(0xffff)
 	u16 ovlan;
+
+	u16 mtu;
 };
 
 struct ecore_mcp_nvm_common {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4b23bb9..18404fb 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -34,6 +34,7 @@ struct qed_dev_info {
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
+	u16 mtu;
 	/* To be added... */
 };
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 332b1f8..e76346e 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -365,6 +365,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 				      &dev_info->mfw_rev, NULL);
 	}
 
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	return 0;
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (12 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 12/61] net/qede/base: use default MTU from shared memory Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 14/61] net/qede/base: update MFW when default MTU is changed Rasesh Mody
                         ` (48 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the queue/sb-id values from 8-bit to 16-bit fields.
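
The underlying hazard is plain integer truncation: once firmware queue
ids can exceed 255, the old (u8) casts silently wrap. A minimal
standalone demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t fw_tx_qid = 300;	/* legal once ids can exceed 255 */
	uint8_t truncated = (uint8_t)fw_tx_qid;	/* the old 8-bit field */

	/* Prints "300 vs 44" - exactly the (u8) casts this patch removes */
	printf("%u vs %u\n", fw_tx_qid, truncated);
	return 0;
}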

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |    8 ++++----
 drivers/net/qede/base/ecore_dev_api.h |    4 ++--
 drivers/net/qede/base/ecore_l2.c      |    2 +-
 drivers/net/qede/base/ecore_l2_api.h  |    2 +-
 drivers/net/qede/base/ecore_sriov.c   |    4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 87c1c23..7a501bb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3876,7 +3876,7 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3897,7 +3897,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3919,7 +3919,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3941,7 +3941,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 0dee68a..e7332ac 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -535,7 +535,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
  */
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 /**
  * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
@@ -553,6 +553,6 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
  */
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 22bb43d..1379a1b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -212,7 +212,7 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
 		rc = ecore_fw_l2_queue(p_hwfn,
-				       (u8)p_rss->rss_ind_table[i],
+				       p_rss->rss_ind_table[i],
 				       &abs_l2_queue);
 		if (rc != ECORE_SUCCESS)
 			return rc;
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 247316b..8f7b614 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -37,7 +37,7 @@ struct ecore_queue_start_common_params {
 	/* q_zone_id is relative, may be different from queue id
 	 * currently used by Tx-only, upper-bounded by number of FW-queues
 	 */
-	u8 qzone_id;
+	u16 qzone_id;
 
 	/* stats_id is relative or absolute depends on function */
 	u8 stats_id;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b051678..6e86966 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2118,8 +2118,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 14/61] net/qede/base: update MFW when default MTU is changed
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (13 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 13/61] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 15/61] net/qede/base: prevent device init failure Rasesh Mody
                         ` (47 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Send a mailbox command to the Management FW when the default MTU changes.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   11 +++++++++++
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a501bb..13e13ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1629,6 +1629,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	u32 load_code, param, drv_mb_param;
+	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
 	int i;
 
@@ -1648,6 +1649,12 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		/* If management didn't provide a default, set one of our own */
+		if (!p_hwfn->hw_info.mtu) {
+			p_hwfn->hw_info.mtu = 1500;
+			b_default_mtu = false;
+		}
+
 		if (IS_VF(p_dev)) {
 			p_hwfn->b_int_enabled = 1;
 			continue;
@@ -1776,6 +1783,10 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			return rc;
 		}
 
+		if (!b_default_mtu)
+			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						p_hwfn->hw_info.mtu);
+
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 8720ae7..0338576 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1433,9 +1433,6 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->mtu = (u16)shmem_info.mtu_size;
 
-	if (info->mtu == 0)
-		info->mtu = 1500;
-
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 15/61] net/qede/base: prevent device init failure
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (14 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 14/61] net/qede/base: update MFW when default MTU is changed Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 16/61] net/qede/base: read card personality via MFW commands Rasesh Mody
                         ` (46 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

The device initialization flow should not fail because a FW interface
command is not available.
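
The pattern, reduced to a standalone sketch (command names and log text
are illustrative): demote post-load mailbox failures to informational
logs and let initialization complete.

#include <stdio.h>

enum status { OK, FAIL };

static enum status send_mbox_cmd(const char *cmd, int supported)
{
	printf("mbox: %s\n", cmd);
	return supported ? OK : FAIL;
}

static enum status hw_init(void)
{
	/* Old behavior: return on the first failure. New behavior: log
	 * at INFO level and keep going.
	 */
	if (send_mbox_cmd("OV_UPDATE_STORM_FW_VER", 0) != OK)
		printf("info: failed to update firmware version\n");
	if (send_mbox_cmd("OV_UPDATE_MTU", 1) != OK)
		printf("info: failed to update default mtu\n");
	return OK;	/* init succeeds regardless */
}

int main(void)
{
	return hw_init() == OK ? 0 : 1;
}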

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 13e13ba..7494f93 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1778,18 +1778,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(p_hwfn, "Failed to send firmware version\n");
-			return rc;
-		}
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update firmware version\n");
 
 		if (!b_default_mtu)
-			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
-						p_hwfn->hw_info.mtu);
+			rc = ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						      p_hwfn->hw_info.mtu);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update default mtu\n");
 
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update driver state\n");
 	}
 
 	return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 16/61] net/qede/base: read card personality via MFW commands
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (15 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 15/61] net/qede/base: prevent device init failure Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 17/61] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
                         ` (45 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to read the NIC personality via management FW commands for
non-L2 protocols.
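
The probe control flow, as a standalone sketch (the mailbox query is
stubbed; names only loosely mirror ecore_mcp_get_shmem_proto_mfw() and
its legacy fallback): ask the MFW first, and fall back to the old
global default when the command is unsupported.

#include <stdio.h>

enum status { SUCCESS, NOSUPP };
enum personality { PCI_ETH, PCI_ETH_ROCE };

/* Stub: a new MFW answers the query; an old one lacks the command. */
static enum status query_mfw_personality(int mfw_is_new,
					 enum personality *p_proto)
{
	if (!mfw_is_new)
		return NOSUPP;
	*p_proto = PCI_ETH_ROCE;
	return SUCCESS;
}

static enum personality get_personality(int mfw_is_new)
{
	enum personality p_proto;

	if (query_mfw_personality(mfw_is_new, &p_proto) != SUCCESS)
		p_proto = PCI_ETH;	/* legacy global default */
	return p_proto;
}

int main(void)
{
	printf("old MFW -> %d, new MFW -> %d\n",
	       get_personality(0), get_personality(1));
	return 0;
}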

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h       |   16 +++++++++++++-
 drivers/net/qede/base/ecore_dev.c   |   17 +++++----------
 drivers/net/qede/base/ecore_mcp.c   |   41 +++++++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_sriov.c |    1 +
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25c96f8..842a3b5 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -243,7 +243,8 @@ enum ecore_pci_personality {
 	ECORE_PCI_FCOE,
 	ECORE_PCI_ISCSI,
 	ECORE_PCI_ETH_ROCE,
-	ECORE_PCI_IWARP,
+	ECORE_PCI_ETH_IWARP,
+	ECORE_PCI_ETH_RDMA,
 	ECORE_PCI_DEFAULT /* default in shmem */
 };
 
@@ -328,6 +329,19 @@ enum ecore_hw_err_type {
 struct ecore_hw_info {
 	/* PCI personality */
 	enum ecore_pci_personality personality;
+#define ECORE_IS_RDMA_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE ||  \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_ROCE_PERSONALITY(dev)			   \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_IWARP_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_L2_PERSONALITY(dev)		      \
+	((dev)->hw_info.personality == ECORE_PCI_ETH || \
+	 ECORE_IS_RDMA_PERSONALITY(dev))
 
 	/* Resource Allocation scheme results */
 	u32 resc_start[ECORE_MAX_RESC];
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7494f93..1b033b7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -219,9 +219,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	 * don't have a good recycle flow. Non ethernet PFs require only a
 	 * single physical queue.
 	 */
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_ETH)
+	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
 		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
 	else
 		protocol_pqs = 1;
@@ -229,7 +227,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
 	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 		num_pqs++;	/* for RoCE queue */
 		init_rdma_offload_pq = true;
 		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
@@ -259,7 +257,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		qm_info->num_pf_rls = (u8)num_pf_rls;
 	}
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_IWARP) {
+	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
 		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
 		init_rdma_offload_pq = true;
 		init_pure_ack_pq = true;
@@ -335,9 +333,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		struct init_qm_pq_params *params =
 		    &qm_info->qm_pq_params[curr_queue++];
 
-		if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
+		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
 			params->vport_id = vport_id;
 			params->tc_id = i;
 			/* Note: this assumes that if we had a configuration
@@ -612,8 +608,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 		/* EQ */
 		n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
-		if ((p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) ||
-		    (p_hwfn->hw_info.personality == ECORE_PCI_IWARP)) {
+		if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
 			/* Calculate the EQ size
 			 * ---------------------
 			 * Each ICID may generate up to one event at a time i.e.
@@ -636,7 +631,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 *          smaller than RoCE's so we avoid exact
 			 *          calculation.
 			 */
-			if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
 				    ecore_cxt_get_proto_cid_count(
 						p_hwfn,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0338576..9f897b5 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1373,16 +1373,47 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+/* @DPDK */
+/* Old MFW has a global configuration for all PFs regarding RDMA support */
+static void
+ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
+				 enum ecore_pci_personality *p_proto)
+{
+	*p_proto = ECORE_PCI_ETH;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to Legacy capabilities, L2 personality is %08x\n",
+		   (u32)*p_proto);
+}
+
+/* @DPDK */
+static enum _ecore_status_t
+ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      enum ecore_pci_personality *p_proto)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
+		   (u32)*p_proto, resp, param);
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 			  struct public_func *p_info,
+			  struct ecore_ptt *p_ptt,
 			  enum ecore_pci_personality *p_proto)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	switch (p_info->config & FUNC_MF_CFG_PROTOCOL_MASK) {
 	case FUNC_MF_CFG_PROTOCOL_ETHERNET:
-		*p_proto = ECORE_PCI_ETH;
+		if (ecore_mcp_get_shmem_proto_mfw(p_hwfn, p_ptt, p_proto) !=
+		    ECORE_SUCCESS)
+			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
 	default:
 		rc = ECORE_INVAL;
@@ -1403,7 +1434,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	info->pause_on_host = (shmem_info.config &
 			       FUNC_MF_CFG_PAUSE_ON_HOST_RING) ? 1 : 0;
 
-	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, &info->protocol)) {
+	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+				      &info->protocol)) {
 		DP_ERR(p_hwfn, "Unknown personality %08x\n",
 		       (u32)(shmem_info.config & FUNC_MF_CFG_PROTOCOL_MASK));
 		return ECORE_INVAL;
@@ -1559,8 +1591,9 @@ int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 		if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
 			continue;
 
-		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info,
-					      &protocol) != ECORE_SUCCESS)
+		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+					      &protocol) !=
+		    ECORE_SUCCESS)
 			continue;
 
 		if ((1 << ((u32)protocol)) & personalities)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6e86966..578899c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -86,6 +86,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 17/61] net/qede/base: allow probe to succeed with minor HW-issues
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (16 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 16/61] net/qede/base: read card personality via MFW commands Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 18/61] net/qede/base: remove unneeded step in HW init Rasesh Mody
                         ` (44 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow probe to succeed despite various 'minor' HW issues, if requested.
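
The mechanism, sketched standalone (the enum values are illustrative
stand-ins for ECORE_HW_PREPARE_*): when the caller opts into a relaxed
probe, non-fatal issues are recorded instead of aborting.

#include <stdbool.h>
#include <stdio.h>

enum prepare_result {	/* stand-in for the ECORE_HW_PREPARE_* codes */
	PREPARE_SUCCESS,
	PREPARE_BAD_IGU,
	PREPARE_FAILED_MEM,
};

struct prepare_params {
	bool b_relaxed_probe;		 /* caller opts into best effort */
	enum prepare_result relaxed_res; /* first recorded issue */
};

static int read_igu_cam(void)
{
	return -1;	/* pretend the IGU CAM read failed */
}

static int hw_prepare(struct prepare_params *p)
{
	p->relaxed_res = PREPARE_SUCCESS;

	if (read_igu_cam() != 0) {
		if (p->b_relaxed_probe)
			p->relaxed_res = PREPARE_BAD_IGU; /* note, go on */
		else
			return -1;	/* strict probe: abort */
	}
	return 0;
}

int main(void)
{
	struct prepare_params p = { .b_relaxed_probe = true };

	printf("rc=%d relaxed_res=%d\n", hw_prepare(&p), p.relaxed_res);
	return 0;
}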

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   71 +++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_dev_api.h |   40 ++++++++++++++++---
 2 files changed, 94 insertions(+), 17 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1b033b7..907566c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2445,12 +2445,15 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
+static enum _ecore_status_t
+ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      struct ecore_hw_prepare_params *p_params)
 {
 	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
 	struct ecore_mcp_link_params *link;
+	enum _ecore_status_t rc;
 
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
@@ -2458,6 +2461,8 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	/* Verify MCP has initialized it */
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_NVM;
 		return ECORE_INVAL;
 	}
 
@@ -2643,7 +2648,13 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
 			     &p_hwfn->hw_info.device_capabilities);
 
-	return ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
@@ -2797,15 +2808,22 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		  enum ecore_pci_personality personality, bool drv_resc_alloc)
+		  enum ecore_pci_personality personality,
+		  struct ecore_hw_prepare_params *p_params)
 {
+	bool drv_resc_alloc = p_params->drv_resc_alloc;
 	enum _ecore_status_t rc;
 
 	/* Since all information is common, only first hwfns should do this */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_iov_hw_info(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_BAD_IOV;
+			else
+				return rc;
+		}
 	}
 
 	/* TODO In get_hw_info, amoungst others:
@@ -2820,7 +2838,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
 #endif
-	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt);
+	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 #ifndef ASIC_ONLY
@@ -2828,8 +2846,12 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	if (rc != ECORE_SUCCESS) {
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_IGU;
+		else
+			return rc;
+	}
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) {
@@ -2896,7 +2918,13 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	 * the resources/features depends on them.
 	 * This order is not harmful if not forcing.
 	 */
-	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	rc = ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
@@ -3028,6 +3056,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
 		DP_ERR(p_hwfn,
 		       "Reading the ME register returns all Fs; Preventing further chip access\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME;
 		return ECORE_INVAL;
 	}
 
@@ -3037,6 +3067,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_ptt_pool_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to prepare hwfn's hw\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err0;
 	}
 
@@ -3046,8 +3078,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	/* First hwfn learns basic information, e.g., number of hwfns */
 	if (!p_hwfn->my_id) {
 		rc = ecore_get_dev_info(p_dev);
-		if (rc != ECORE_SUCCESS)
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+					ECORE_HW_PREPARE_FAILED_DEV;
 			goto err1;
+		}
 	}
 
 	ecore_hw_hwfn_prepare(p_hwfn);
@@ -3056,12 +3092,14 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_mcp_cmd_init(p_hwfn, p_hwfn->p_main_ptt);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed initializing mcp command\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err1;
 	}
 
 	/* Read the device configuration information from the HW and SHMEM */
 	rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
-			       p_params->personality, p_params->drv_resc_alloc);
+			       p_params->personality, p_params);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to get HW information\n");
 		goto err2;
@@ -3094,6 +3132,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_init_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate the init array\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err2;
 	}
 #ifndef ASIC_ONLY
@@ -3129,6 +3169,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
 
+	if (p_params->b_relaxed_probe)
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
+
 	/* Store the precompiled init data ptrs */
 	if (IS_PF(p_dev))
 		ecore_init_iro_array(p_dev);
@@ -3164,6 +3207,10 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 		 * initialized hwfn 0.
 		 */
 		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_FAILED_ENG2;
+
 			if (IS_PF(p_dev)) {
 				ecore_init_free(p_hwfn);
 				ecore_mcp_free(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index e7332ac..74a15ef 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -138,17 +138,47 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
  */
 enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
 
+enum ecore_hw_prepare_result {
+	ECORE_HW_PREPARE_SUCCESS,
+
+	/* FAILED results indicate probe has failed & cleaned up */
+	ECORE_HW_PREPARE_FAILED_ENG2,
+	ECORE_HW_PREPARE_FAILED_ME,
+	ECORE_HW_PREPARE_FAILED_MEM,
+	ECORE_HW_PREPARE_FAILED_DEV,
+	ECORE_HW_PREPARE_FAILED_NVM,
+
+	/* BAD results indicate the probe passed even though something went
+	 * wrong; actually using the device [i.e., calling hw_init()] might
+	 * have dire repercussions.
+	 */
+	ECORE_HW_PREPARE_BAD_IOV,
+	ECORE_HW_PREPARE_BAD_MCP,
+	ECORE_HW_PREPARE_BAD_IGU,
+};
+
 struct ecore_hw_prepare_params {
-	/* personality to initialize */
+	/* Personality to initialize */
 	int personality;
-	/* force the driver's default resource allocation */
+
+	/* Force the driver's default resource allocation */
 	bool drv_resc_alloc;
-	/* check the reg_fifo after any register access */
+
+	/* Check the reg_fifo after any register access */
 	bool chk_reg_fifo;
-	/* request the MFW to initiate PF FLR */
+
+	/* Request the MFW to initiate PF FLR */
 	bool initiate_pf_flr;
-	/* the OS Epoch time in seconds */
+
+	/* The OS Epoch time in seconds */
 	u32 epoch;
+
+	/* Allow prepare to pass even if some initializations are failing.
+	 * If set, the `p_relaxed_res' field will be set with the result,
+	 * which might allow the probe to pass even when certain issues occur.
+	 */
+	bool b_relaxed_probe;
+	enum ecore_hw_prepare_result p_relaxed_res;
 };
 
 /**
-- 
1.7.10.3
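
For illustration, a minimal caller sketch of the new relaxed-probe knob:
only ecore_hw_prepare(), struct ecore_hw_prepare_params and enum
ecore_hw_prepare_result come from the patch; the surrounding logic, the
chosen personality and the error handling are illustrative assumptions.

    /* Hypothetical caller sketch -- not part of the patch */
    static int probe_device(struct ecore_dev *p_dev)
    {
            struct ecore_hw_prepare_params params;

            OSAL_MEMSET(&params, 0, sizeof(params));
            params.personality = ECORE_PCI_ETH;   /* assumed personality */
            params.b_relaxed_probe = true; /* tolerate 'minor' HW issues */

            if (ecore_hw_prepare(p_dev, &params) != ECORE_SUCCESS)
                    return -1;      /* FAILED_* results land here */

            /* Probe passed; BAD_* results warn that hw_init() is risky */
            if (params.p_relaxed_res != ECORE_HW_PREPARE_SUCCESS)
                    DP_NOTICE(ECORE_LEADING_HWFN(p_dev), false,
                              "probe passed with relaxed result %d\n",
                              params.p_relaxed_res);

            return 0;
    }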

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 18/61] net/qede/base: remove unneeded step in HW init
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (17 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 17/61] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 19/61] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
                         ` (43 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

There is no need to close the NIG OUT_EN registers, so remove that step.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 907566c..e2d4132 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -999,18 +999,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
-	/* Close gate from NIG to BRB/Storm; By default they are open, but
-	 * we close them to prevent NIG from passing data to reset blocks.
-	 * Should have been done in the ENGINE phase, but init-tool lacks
-	 * proper port-pretend capabilities.
-	 */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_pretend(p_hwfn, p_ptt, p_hwfn->port_id ^ 1);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_unpretend(p_hwfn, p_ptt);
-
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_ENGINE, ANY_PHASE_ID, hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-- 
1.7.10.3
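
For readers unfamiliar with the removed step: it relied on the
port-pretend pattern, sketched below with the calls taken from the
removed hunk. 'Pretend' temporarily redirects the hwfn's register
accesses to the peer port, so the same write can be applied to both
ports from a single function.

    /* sketch of the (now removed) port-pretend bracketing */
    ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);      /* own port */
    ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
    ecore_port_pretend(p_hwfn, p_ptt, p_hwfn->port_id ^ 1); /* peer port */
    ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
    ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
    ecore_port_unpretend(p_hwfn, p_ptt);                    /* restore */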

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 19/61] net/qede/base: allow only trusted VFs to be promisc
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (18 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 18/61] net/qede/base: remove unneeded step in HW init Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 20/61] net/qede/base: qm initialization revamp Rasesh Mody
                         ` (42 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow only trusted VFs to become promiscuous/multi-promiscuous. The
reasonable approach is to gate this on the 'trusted' attribute instead
of simply allowing any VF to become promiscuous.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c    |    8 ++++----
 drivers/net/qede/base/ecore_sriov.c |    2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 1379a1b..d2e1719 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -274,8 +274,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->rx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->rx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 
 	/* Set Tx mode accept flags */
@@ -298,8 +298,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->tx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->tx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->tx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 578899c..a302e9e 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2626,7 +2626,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	 */
 	tlvs_accepted = tlvs_mask;
 
-#ifndef LINUX_REMOVE
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2634,7 +2633,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_NOT_SUPPORTED;
 		goto out;
 	}
-#endif
 
 	if (!tlvs_accepted) {
 		if (tlvs_mask)
-- 
1.7.10.3
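
With the #ifndef LINUX_REMOVE guard gone, OSAL_IOV_VF_VPORT_UPDATE is
always consulted, giving the OSAL layer a chance to strip TLVs requested
by untrusted VFs. A hypothetical hook implementation is sketched below;
the parameter types follow the call site in the diff, while
vf_is_trusted() and the ACCEPT_PARAM bit name are assumptions for
illustration only.

    /* Hypothetical OSAL hook sketch -- not the PMD's actual code */
    static enum _ecore_status_t
    osal_iov_vf_vport_update(struct ecore_hwfn *p_hwfn, u8 relative_vf_id,
                             struct ecore_sp_vport_update_params *p_params,
                             u16 *p_tlvs_accepted)
    {
            /* strip the promisc/multi-promisc request of untrusted VFs */
            if (!vf_is_trusted(p_hwfn, relative_vf_id))  /* assumed helper */
                    *p_tlvs_accepted &=
                            ~(1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM);

            return ECORE_SUCCESS;
    }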

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 20/61] net/qede/base: qm initialization revamp
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (19 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 19/61] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 21/61] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
                         ` (41 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch revamps the QM (queue manager) initialization: the monolithic
qm-info setup is split into small per-resource helpers, PQ indices are
widened from 8 to 16 bit, and PQs are looked up via PQ_FLAGS-based
getters instead of protocol-specific parameters.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h    |    2 +
 drivers/net/qede/base/ecore.h       |   34 +-
 drivers/net/qede/base/ecore_cxt.c   |   14 +-
 drivers/net/qede/base/ecore_dev.c   |  869 ++++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_hw.c    |   38 --
 drivers/net/qede/base/ecore_l2.c    |   12 +-
 drivers/net/qede/base/ecore_l2.h    |    2 +-
 drivers/net/qede/base/ecore_spq.c   |    9 +-
 drivers/net/qede/base/ecore_sriov.c |   13 +-
 9 files changed, 645 insertions(+), 348 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 0d239c9..63ee6d5 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -320,6 +320,8 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
 #define OSAL_BUILD_BUG_ON(cond)		nothing
 #define ETH_ALEN			ETHER_ADDR_LEN
 
+#define OSAL_BITMAP_WEIGHT(bitmap, count) 0
+
 #define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 842a3b5..58c97a3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -445,11 +445,13 @@ struct ecore_qm_info {
 	struct init_qm_port_params  *qm_port_params;
 	u16			start_pq;
 	u8			start_vport;
-	u8			pure_lb_pq;
-	u8			offload_pq;
-	u8			pure_ack_pq;
-	u8			ooo_pq;
-	u8			vf_queues_offset;
+	u16			pure_lb_pq;
+	u16			offload_pq;
+	u16			pure_ack_pq;
+	u16			ooo_pq;
+	u16			first_vf_pq;
+	u16			first_mcos_pq;
+	u16			first_rl_pq;
 	u16			num_pqs;
 	u16			num_vf_pqs;
 	u8			num_vports;
@@ -828,6 +830,28 @@ int ecore_device_num_ports(struct ecore_dev *p_dev);
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 			   u8 *mac);
 
+/* Flags for indication of required queues */
+#define PQ_FLAGS_RLS	(1 << 0)
+#define PQ_FLAGS_MCOS	(1 << 1)
+#define PQ_FLAGS_LB	(1 << 2)
+#define PQ_FLAGS_OOO	(1 << 3)
+#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_OFLD	(1 << 5)
+#define PQ_FLAGS_VFS	(1 << 6)
+
+/* physical queue index for cm context initialization */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags);
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc);
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 qpid);
+
+/* amount of resources used in qm init */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
+
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 2635030..aeeabf1 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1372,18 +1372,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
 }
 
 /* CM PF */
-static enum _ecore_status_t ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
 {
-	union ecore_qm_pq_params pq_params;
-	u16 pq;
-
-	/* XCM pure-LB queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, pq);
-
-	return ECORE_SUCCESS;
+	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET,
+		     ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB));
 }
 
 /* DQ PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e2d4132..380c5ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -178,282 +178,575 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	}
 }
 
-static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
-					       bool b_sleepable)
+/******************** QM initialization *******************/
+
+/* bitmaps for indicating active traffic classes.
+ * Special case for Arrowhead 4 port
+ */
+/* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
+#define ACTIVE_TCS_BMAP 0x9f
+/* 0..3 actually used, OOO and high priority stuff all use 3 */
+#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
+
+/* determines the physical queue flags for a given PF. */
+static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 {
-	u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue;
-	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct init_qm_port_params *p_qm_port;
-	bool init_rdma_offload_pq = false;
-	bool init_pure_ack_pq = false;
-	bool init_ooo_pq = false;
-	u16 num_pqs, protocol_pqs;
-	u16 num_pf_rls = 0;
-	u16 num_vfs = 0;
-	u32 pf_rl;
-	u8 pf_wfq;
-
-	/* @TMP - saving the existing min/max bw config before resetting the
-	 * qm_info to restore them.
-	 */
-	pf_rl = qm_info->pf_rl;
-	pf_wfq = qm_info->pf_wfq;
+	u32 flags;
 
-#ifdef CONFIG_ECORE_SRIOV
-	if (p_hwfn->p_dev->p_iov_info)
-		num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
-#endif
-	OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
+	/* common flags */
+	flags = PQ_FLAGS_LB;
 
-#ifndef ASIC_ONLY
-	/* @TMP - Don't allocate QM queues for VFs on emulation */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "Emulation - skip configuring QM queues for VFs\n");
-		num_vfs = 0;
+	/* feature flags */
+	if (IS_ECORE_SRIOV(p_hwfn->p_dev))
+		flags |= PQ_FLAGS_VFS;
+
+	/* protocol flags */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_ETH:
+		flags |= PQ_FLAGS_MCOS;
+		break;
+	case ECORE_PCI_FCOE:
+		flags |= PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ISCSI:
+		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_ROCE:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_IWARP:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_ACK | PQ_FLAGS_OOO |
+			 PQ_FLAGS_OFLD;
+		break;
+	default:
+		DP_ERR(p_hwfn, "unknown personality %d\n",
+		       p_hwfn->hw_info.personality);
+		return 0;
 	}
-#endif
+	return flags;
+}
 
-	/* ethernet PFs require a pq per tc. Even if only a subset of the TCs
-	 * active, we want physical queues allocated for all of them, since we
-	 * don't have a good recycle flow. Non ethernet PFs require only a
-	 * single physical queue.
-	 */
-	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
-		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
-	else
-		protocol_pqs = 1;
-
-	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
-	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
-
-	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
-		num_pqs++;	/* for RoCE queue */
-		init_rdma_offload_pq = true;
-		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
-			/* Due to FW assumption that rl==vport, we limit the
-			 * number of rate limiters by the minimum between its
-			 * allocated number and the allocated number of vports.
-			 * Another limitation is the number of supported qps
-			 * with rate limiters in FW.
-			 */
-			num_pf_rls =
-			    (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
-					     RESC_NUM(p_hwfn, ECORE_VPORT));
+/* Getters for resource amounts necessary for qm initialization */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->hw_info.num_hw_tc;
+}
 
-			/* we subtract num_vfs because each one requires a rate
-			 * limiter, and one default rate limiter.
-			 */
-			if (num_pf_rls < num_vfs + 1) {
-				DP_ERR(p_hwfn, "No RL for DCQCN");
-				DP_ERR(p_hwfn, "[num_pf_rls %d num_vfs %d]\n",
-				       num_pf_rls, num_vfs);
-				return ECORE_INVAL;
-			}
-			num_pf_rls -= num_vfs + 1;
-		}
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn)
+{
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
+			p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+}
 
-		num_pqs += num_pf_rls;
-		qm_info->num_pf_rls = (u8)num_pf_rls;
-	}
+#define NUM_DEFAULT_RLS 1
 
-	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
-		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
-		init_rdma_offload_pq = true;
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
-		num_pqs += 2;	/* for iSCSI pure-ACK / OOO queue */
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+	/* @DPDK */
+	/* num RLs can't exceed resource amount of rls or vports or the
+	 * dcqcn qps
+	 */
+	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
+				     (u16)RESC_NUM(p_hwfn, ECORE_VPORT));
 
-	/* Sanity checking that setup requires legal number of resources */
-	if (num_pqs > RESC_NUM(p_hwfn, ECORE_PQ)) {
-		DP_ERR(p_hwfn,
-		       "Need too many Physical queues - 0x%04x avail %04x",
-		       num_pqs, RESC_NUM(p_hwfn, ECORE_PQ));
-		return ECORE_INVAL;
+	/* make sure after we reserve the default and VF rls we'll have
+	 * something left
+	 */
+	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) {
+		DP_NOTICE(p_hwfn, false,
+			  "no rate limiters left for PF rate limiting"
+			  " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs);
+		return 0;
 	}
 
-	/* PQs will be arranged as follows: First per-TC PQ, then pure-LB queue,
-	 * then special queues (iSCSI pure-ACK / RoCE), then per-VF PQ.
+	/* subtract rls necessary for VFs and one default one for the PF */
+	num_pf_rls -= num_vfs + NUM_DEFAULT_RLS;
+
+	return num_pf_rls;
+}
+
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	/* all pqs share the same vport (hence the 1 below), except for vfs
+	 * and pf_rl pqs
 	 */
-	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					    b_sleepable ? GFP_KERNEL :
-					    GFP_ATOMIC,
-					    sizeof(struct init_qm_pq_params) *
-					    num_pqs);
-	if (!qm_info->qm_pq_params)
-		goto alloc_err;
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+}
 
-	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					       b_sleepable ? GFP_KERNEL :
-					       GFP_ATOMIC,
-					       sizeof(struct
-						      init_qm_vport_params) *
-					       num_vports);
-	if (!qm_info->qm_vport_params)
-		goto alloc_err;
+/* calc amount of PQs according to the requested flags */
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_MCOS & pq_flags)) *
+		ecore_init_qm_get_num_tcs(p_hwfn) +
+	       (!!(PQ_FLAGS_LB & pq_flags)) +
+	       (!!(PQ_FLAGS_OOO & pq_flags)) +
+	       (!!(PQ_FLAGS_ACK & pq_flags)) +
+	       (!!(PQ_FLAGS_OFLD & pq_flags)) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn);
+}
 
-	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					      b_sleepable ? GFP_KERNEL :
-					      GFP_ATOMIC,
-					      sizeof(struct init_qm_port_params)
-					      * MAX_NUM_PORTS);
-	if (!qm_info->qm_port_params)
-		goto alloc_err;
+/* initialize the top level QM params */
+static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev,
-					b_sleepable ? GFP_KERNEL :
-					GFP_ATOMIC,
-					sizeof(struct ecore_wfq_data) *
-					num_vports);
+	/* pq and vport bases for this PF */
+	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
 
-	if (!qm_info->wfq_data)
-		goto alloc_err;
+	/* rate limiting and weighted fair queueing are always enabled */
+	qm_info->vport_rl_en = 1;
+	qm_info->vport_wfq_en = 1;
 
-	vport_id = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	/* in AH 4 port we have fewer TCs per port */
+	qm_info->max_phys_tcs_per_port =
+		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
+			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+}
 
-	/* First init rate limited queues ( Due to RoCE assumption of
-	 * qpid=rlid )
-	 */
-	for (curr_queue = 0; curr_queue < num_pf_rls; curr_queue++) {
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id++;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-	};
-
-	/* Protocol PQs */
-	for (i = 0; i < protocol_pqs; i++) {
-		struct init_qm_pq_params *params =
-		    &qm_info->qm_pq_params[curr_queue++];
-
-		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
-			params->vport_id = vport_id;
-			params->tc_id = i;
-			/* Note: this assumes that if we had a configuration
-			 * with N tcs and subsequently another configuration
-			 * With Fewer TCs, the in flight traffic (in QM queues,
-			 * in FW, from driver to FW) will still trickle out and
-			 * not get "stuck" in the QM. This is determined by the
-			 * NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ. Unused TCs are
-			 * supposed to be cleared in this map, allowing traffic
-			 * to flush out. If this is not the case, we would need
-			 * to set the TC of unused queues to 0, and reconfigure
-			 * QM every time num of TCs changes. Unused queues in
-			 * this context would mean those intended for TCs where
-			 * tc_id > hw_info.num_active_tcs.
-			 */
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		} else {
-			params->vport_id = vport_id;
-			params->tc_id = p_hwfn->hw_info.offload_tc;
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		}
-	}
+/* initialize qm vport params */
+static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 i;
 
-	/* Then init pure-LB PQ */
-	qm_info->pure_lb_pq = curr_queue;
-	qm_info->qm_pq_params[curr_queue].vport_id =
-	    (u8)RESC_START(p_hwfn, ECORE_VPORT);
-	qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
-	qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-	curr_queue++;
-
-	qm_info->offload_pq = 0;	/* Already initialized for iSCSI/FCoE */
-	if (init_rdma_offload_pq) {
-		qm_info->offload_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_pure_ack_pq) {
-		qm_info->pure_ack_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_ooo_pq) {
-		qm_info->ooo_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id = DCBX_ISCSI_OOO_TC;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	/* Then init per-VF PQs */
-	vf_offset = curr_queue;
-	for (i = 0; i < num_vfs; i++) {
-		/* First vport is used by the PF */
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
-		/* @@@TBD VF Multi-cos */
-		qm_info->qm_pq_params[curr_queue].tc_id = 0;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-		curr_queue++;
-	};
-
-	qm_info->vf_queues_offset = vf_offset;
-	qm_info->num_pqs = num_pqs;
-	qm_info->num_vports = num_vports;
+	/* all vports participate in weighted fair queueing */
+	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
+		qm_info->qm_vport_params[i].vport_wfq = 1;
+}
 
+/* initialize qm port params */
+static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
+{
 	/* Initialize qm port parameters */
-	num_ports = p_hwfn->p_dev->num_ports_in_engines;
+	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engines;
+
+	/* indicate how ooo and high pri traffic is dealt with */
+	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
+		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP;
+
 	for (i = 0; i < num_ports; i++) {
-		p_qm_port = &qm_info->qm_port_params[i];
+		struct init_qm_port_params *p_qm_port =
+			&p_hwfn->qm_info.qm_port_params[i];
+
 		p_qm_port->active = 1;
-		/* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
-		 * be in place
-		 */
-		if (num_ports == 4)
-			p_qm_port->active_phys_tcs = 0xf;
-		else
-			p_qm_port->active_phys_tcs = 0x9f;
+		p_qm_port->active_phys_tcs = active_phys_tcs;
 		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
+}
 
-	if (ECORE_IS_AH(p_hwfn->p_dev) && (num_ports == 4))
-		qm_info->max_phys_tcs_per_port = NUM_PHYS_TCS_4PORT_K2;
-	else
-		qm_info->max_phys_tcs_per_port = NUM_OF_PHYS_TCS;
+/* Reset the params which must be reset for qm init. QM init may be called as
+ * a result of flows other than driver load (e.g. dcbx renegotiation). Other
+ * params may be affected by the init but would simply recalculate to the same
+ * values. The allocations made for QM init, ports, vports, pqs and vfqs are not
+ * affected as these amounts stay the same.
+ */
+static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->num_pqs = 0;
+	qm_info->num_vports = 0;
+	qm_info->num_pf_rls = 0;
+	qm_info->num_vf_pqs = 0;
+	qm_info->first_vf_pq = 0;
+	qm_info->first_mcos_pq = 0;
+	qm_info->first_rl_pq = 0;
+}
+
+static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	qm_info->num_vports++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+}
+
+/* initialize a single pq and manage qm_info resource accounting.
+ * The pq_init_flags param determines whether the PQ is rate limited
+ * (for a VF or the PF),
+ * and whether a new vport is allocated for the pq or not (i.e. whether
+ * the vport is shared).
+ */
+
+/* flags for pq init */
+#define PQ_INIT_SHARE_VPORT	(1 << 0)
+#define PQ_INIT_PF_RL		(1 << 1)
+#define PQ_INIT_VF_RL		(1 << 2)
+
+/* defines for pq init */
+#define PQ_INIT_DEFAULT_WRR_GROUP	1
+#define PQ_INIT_DEFAULT_TC		0
+#define PQ_INIT_OFLD_TC			(p_hwfn->hw_info.offload_tc)
+
+static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
+			     struct ecore_qm_info *qm_info,
+			     u8 tc, u32 pq_init_flags)
+{
+	u16 pq_idx = qm_info->num_pqs, max_pq =
+					ecore_init_qm_get_num_pqs(p_hwfn);
+
+	if (pq_idx > max_pq)
+		DP_ERR(p_hwfn,
+		       "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+
+	/* init pq params */
+	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport +
+						 qm_info->num_vports;
+	qm_info->qm_pq_params[pq_idx].tc_id = tc;
+	qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP;
+	qm_info->qm_pq_params[pq_idx].rl_valid =
+		(pq_init_flags & PQ_INIT_PF_RL ||
+		 pq_init_flags & PQ_INIT_VF_RL);
+
+	/* qm params accounting */
+	qm_info->num_pqs++;
+	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
+		qm_info->num_vports++;
+
+	if (pq_init_flags & PQ_INIT_PF_RL)
+		qm_info->num_pf_rls++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+
+	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn))
+		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d,"
+		       " qm_init_get_num_pf_rls() %d\n",
+		       qm_info->num_pf_rls,
+		       ecore_init_qm_get_num_pf_rls(p_hwfn));
+}
+
+/* get pq index according to PQ_FLAGS */
+static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
+					     u32 pq_flags)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	/* Can't have multiple flags set here */
+	if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags,
+				sizeof(pq_flags)) > 1)
+		goto err;
+
+	switch (pq_flags) {
+	case PQ_FLAGS_RLS:
+		return &qm_info->first_rl_pq;
+	case PQ_FLAGS_MCOS:
+		return &qm_info->first_mcos_pq;
+	case PQ_FLAGS_LB:
+		return &qm_info->pure_lb_pq;
+	case PQ_FLAGS_OOO:
+		return &qm_info->ooo_pq;
+	case PQ_FLAGS_ACK:
+		return &qm_info->pure_ack_pq;
+	case PQ_FLAGS_OFLD:
+		return &qm_info->offload_pq;
+	case PQ_FLAGS_VFS:
+		return &qm_info->first_vf_pq;
+	default:
+		goto err;
+	}
+
+err:
+	DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
+	return OSAL_NULL;
+}
+
+/* save pq index in qm info */
+static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn,
+				  u32 pq_flags, u16 pq_val)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	*base_pq_idx = p_hwfn->qm_info.start_pq + pq_val;
+}
+
+/* get tx pq index, with the PQ TX base already set (ready for context init) */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	return *base_pq_idx + CM_TX_PQ_BASE;
+}
+
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
+{
+	u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn);
+
+	if (tc > max_tc)
+		DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+}
+
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (vf > max_vf)
+		DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+}
+
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 rl)
+{
+	u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
+
+	if (rl > max_rl)
+		DP_ERR(p_hwfn, "rl %d must be smaller than %d\n", rl, max_rl);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + rl;
+}
+
+/* Functions for creating specific types of pqs */
+static void ecore_init_qm_lb_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LB))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LB, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PURE_LB_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OOO))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
+			 PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_ACK))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OFLD))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 tc_idx;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_MCOS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_MCOS, qm_info->num_pqs);
+	for (tc_idx = 0; tc_idx < ecore_init_qm_get_num_tcs(p_hwfn); tc_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs);
 
 	qm_info->num_vf_pqs = num_vfs;
-	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC,
+				 PQ_INIT_VF_RL);
+}
 
-	for (i = 0; i < qm_info->num_vports; i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn);
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->vport_rl_en = 1;
-	qm_info->vport_wfq_en = 1;
-	qm_info->pf_rl = pf_rl;
-	qm_info->pf_wfq = pf_wfq;
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs);
+	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC,
+				 PQ_INIT_PF_RL);
+}
+
+static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
+{
+	/* rate limited pqs, must come first (FW assumption) */
+	ecore_init_qm_rl_pqs(p_hwfn);
+
+	/* pqs for multi cos */
+	ecore_init_qm_mcos_pqs(p_hwfn);
+
+	/* pure loopback pq */
+	ecore_init_qm_lb_pq(p_hwfn);
+
+	/* out of order pq */
+	ecore_init_qm_ooo_pq(p_hwfn);
+
+	/* pure ack pq */
+	ecore_init_qm_pure_ack_pq(p_hwfn);
+
+	/* pq for offloaded protocol */
+	ecore_init_qm_offload_pq(p_hwfn);
+
+	/* done sharing vports */
+	ecore_init_qm_advance_vport(p_hwfn);
+
+	/* pqs for vfs */
+	ecore_init_qm_vf_pqs(p_hwfn);
+}
+
+/* compare values of getters against resources amounts */
+static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn)
+{
+	if (ecore_init_qm_get_num_vports(p_hwfn) >
+	    RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n");
+		return ECORE_INVAL;
+	}
+
+	if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) {
+		DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n");
+		return ECORE_INVAL;
+	}
 
 	return ECORE_SUCCESS;
+}
 
- alloc_err:
-	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
-	ecore_qm_info_free(p_hwfn);
-	return ECORE_NOMEM;
+/*
+ * Function for verbose printing of the qm initialization results
+ */
+static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	struct init_qm_vport_params *vport;
+	struct init_qm_port_params *port;
+	struct init_qm_pq_params *pq;
+	int i, tc;
+
+	/* top level params */
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "qm init top level params: start_pq %d, start_vport %d,"
+		   " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n",
+		   qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq,
+		   qm_info->offload_pq, qm_info->pure_ack_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d,"
+		   " num_vports %d, max_phys_tcs_per_port %d\n",
+		   qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs,
+		   qm_info->num_vf_pqs, qm_info->num_vports,
+		   qm_info->max_phys_tcs_per_port);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d,"
+		   " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n",
+		   qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en,
+		   qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl,
+		   qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn));
+
+	/* port table */
+	for (i = 0; i < p_hwfn->p_dev->num_ports_in_engines; i++) {
+		port = &qm_info->qm_port_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "port idx %d, active %d, active_phys_tcs %d,"
+			   " num_pbf_cmd_lines %d, num_btb_blocks %d,"
+			   " reserved %d\n",
+			   i, port->active, port->active_phys_tcs,
+			   port->num_pbf_cmd_lines, port->num_btb_blocks,
+			   port->reserved);
+	}
+
+	/* vport table */
+	for (i = 0; i < qm_info->num_vports; i++) {
+		vport = &qm_info->qm_vport_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "vport idx %d, vport_rl %d, wfq %d,"
+			   " first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->vport_rl,
+			   vport->vport_wfq);
+		for (tc = 0; tc < NUM_OF_TCS; tc++)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
+				   vport->first_tx_pq_id[tc]);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n");
+	}
+
+	/* pq table */
+	for (i = 0; i < qm_info->num_pqs; i++) {
+		pq = &qm_info->qm_pq_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "pq idx %d, vport_id %d, tc %d, wrr_grp %d,"
+			   " rl_valid %d\n",
+			   qm_info->start_pq + i, pq->vport_id, pq->tc_id,
+			   pq->wrr_group, pq->rl_valid);
+	}
+}
+
+static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn)
+{
+	/* reset params required for init run */
+	ecore_init_qm_reset_params(p_hwfn);
+
+	/* init QM top level params */
+	ecore_init_qm_params(p_hwfn);
+
+	/* init QM port params */
+	ecore_init_qm_port_params(p_hwfn);
+
+	/* init QM vport params */
+	ecore_init_qm_vport_params(p_hwfn);
+
+	/* init QM physical queue params */
+	ecore_init_qm_pq_params(p_hwfn);
+
+	/* display all that init */
+	ecore_dp_init_qm_params(p_hwfn);
 }
 
 /* This function reconfigures the QM pf on the fly.
  * For this purpose we:
  * 1. reconfigure the QM database
- * 2. set new values to runtime arrat
+ * 2. set new values to runtime array
  * 3. send an sdm_qm_cmd through the rbc interface to stop the QM
  * 4. activate init tool in QM_PF stage
  * 5. send an sdm_qm_cmd through rbc interface to release the QM
@@ -462,20 +755,11 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	bool b_rc;
 	enum _ecore_status_t rc;
-
-	/* qm_info is allocated in ecore_init_qm_info() which is already called
-	 * from ecore_resc_alloc() or previous call of ecore_qm_reconf().
-	 * The allocated size may change each init, so we free it before next
-	 * allocation.
-	 */
-	ecore_qm_info_free(p_hwfn);
+	bool b_rc;
 
 	/* initialize ecore's qm data structure */
-	rc = ecore_init_qm_info(p_hwfn, false);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
 	OSAL_SPIN_LOCK(&qm_lock);
@@ -508,6 +792,48 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	enum _ecore_status_t rc;
+
+	rc = ecore_init_qm_sanity(p_hwfn);
+	if (rc != ECORE_SUCCESS)
+		goto alloc_err;
+
+	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					    sizeof(struct init_qm_pq_params) *
+					    ecore_init_qm_get_num_pqs(p_hwfn));
+	if (!qm_info->qm_pq_params)
+		goto alloc_err;
+
+	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				       sizeof(struct init_qm_vport_params) *
+				       ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->qm_vport_params)
+		goto alloc_err;
+
+	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				      sizeof(struct init_qm_port_params) *
+				      p_hwfn->p_dev->num_ports_in_engines);
+	if (!qm_info->qm_port_params)
+		goto alloc_err;
+
+	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					sizeof(struct ecore_wfq_data) *
+					ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->wfq_data)
+		goto alloc_err;
+
+	return ECORE_SUCCESS;
+
+alloc_err:
+	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
+	ecore_qm_info_free(p_hwfn);
+	return ECORE_NOMEM;
+}
+/******************** End QM initialization ***************/
+
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
 	struct ecore_consq *p_consq;
@@ -572,11 +898,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		/* Prepare and process QM requirements */
-		rc = ecore_init_qm_info(p_hwfn, true);
+		rc = ecore_alloc_qm_data(p_hwfn);
 		if (rc)
 			goto alloc_err;
 
+		/* init qm info */
+		ecore_init_qm_info(p_hwfn);
+
 		/* Compute the ILT client partition */
 		rc = ecore_cxt_cfg_ilt_compute(p_hwfn);
 		if (rc)
@@ -618,18 +946,18 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 * worst case:
 			 * - Core - according to SPQ.
 			 * - RoCE - per QP there are a couple of ICIDs, one
-			 *          responder and one requester, each can
-			 *          generate an EQE => n_eqes_qp = 2 * n_qp.
-			 *          Each CQ can generate an EQE. There are 2 CQs
-			 *          per QP => n_eqes_cq = 2 * n_qp.
-			 *          Hence the RoCE total is 4 * n_qp or
-			 *          2 * num_cons.
+			 *	  responder and one requester, each can
+			 *	  generate an EQE => n_eqes_qp = 2 * n_qp.
+			 *	  Each CQ can generate an EQE. There are 2 CQs
+			 *	  per QP => n_eqes_cq = 2 * n_qp.
+			 *	  Hence the RoCE total is 4 * n_qp or
+			 *	  2 * num_cons.
 			 * - ENet - There can be up to two events per VF. One
-			 *          for VF-PF channel and another for VF FLR
-			 *          initial cleanup. The number of VFs is
-			 *          bounded by MAX_NUM_VFS_BB, and is much
-			 *          smaller than RoCE's so we avoid exact
-			 *          calculation.
+			 *	  for VF-PF channel and another for VF FLR
+			 *	  initial cleanup. The number of VFs is
+			 *	  bounded by MAX_NUM_VFS_BB, and is much
+			 *	  smaller than RoCE's so we avoid exact
+			 *	  calculation.
 			 */
 			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
@@ -683,7 +1011,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for dmae_info structure\n");
+				  "Failed to allocate memory for dmae_info"
+				  " structure\n");
 			goto alloc_err;
 		}
 
@@ -705,9 +1034,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 	return ECORE_SUCCESS;
 
- alloc_no_mem:
+alloc_no_mem:
 	rc = ECORE_NOMEM;
- alloc_err:
+alloc_err:
 	ecore_resc_free(p_dev);
 	return rc;
 }
@@ -2353,7 +2682,7 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 			*p_resc_start = dflt_resc_start;
 		}
 	}
- out:
+out:
 	return ECORE_SUCCESS;
 }
 
@@ -3139,13 +3468,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 #endif
 
 	return rc;
- err2:
+err2:
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_iov_free_hw_info(p_dev);
 	ecore_mcp_free(p_hwfn);
- err1:
+err1:
 	ecore_hw_hwfn_free(p_hwfn);
- err0:
+err0:
 	return rc;
 }
 
@@ -3309,7 +3638,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 	if (!p_chain->pbl.external)
 		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
 				       p_chain->pbl.p_phys_table, pbl_size);
- out:
+out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3521,7 +3850,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 
 	return ECORE_SUCCESS;
 
- nomem:
+nomem:
 	ecore_chain_free(p_dev, p_chain);
 	return rc;
 }
@@ -3956,7 +4285,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
@@ -4000,7 +4329,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 49d52c0..396edc2 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -905,44 +905,6 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
-		    enum protocol_type proto,
-		    union ecore_qm_pq_params *p_params)
-{
-	u16 pq_id = 0;
-
-	if ((proto == PROTOCOLID_CORE ||
-	     proto == PROTOCOLID_ETH) && !p_params) {
-		DP_NOTICE(p_hwfn, true,
-			  "Protocol %d received NULL PQ params\n", proto);
-		return 0;
-	}
-
-	switch (proto) {
-	case PROTOCOLID_CORE:
-		if (p_params->core.tc == LB_TC)
-			pq_id = p_hwfn->qm_info.pure_lb_pq;
-		else if (p_params->core.tc == PKT_LB_TC)
-			pq_id = p_hwfn->qm_info.ooo_pq;
-		else
-			pq_id = p_hwfn->qm_info.offload_pq;
-		break;
-	case PROTOCOLID_ETH:
-		pq_id = p_params->eth.tc;
-		/* TODO - multi-CoS for VFs? */
-		if (p_params->eth.is_vf)
-			pq_id += p_hwfn->qm_info.vf_queues_offset +
-			    p_params->eth.vf_id;
-		break;
-	default:
-		pq_id = 0;
-	}
-
-	pq_id = CM_TX_PQ_BASE + pq_id + RESC_START(p_hwfn, ECORE_PQ);
-
-	return pq_id;
-}
-
 void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
 			 enum ecore_hw_err_type err_type)
 {
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index d2e1719..0220d19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -834,13 +834,13 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params)
+			      u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
 	struct ecore_hw_cid_data *p_tx_cid;
-	u16 pq_id, abs_tx_qzone_id = 0;
+	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 abs_vport_id;
 
@@ -882,7 +882,6 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
 
-	pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
 	p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
@@ -898,7 +897,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
 	struct ecore_hw_cid_data *p_tx_cid;
-	union ecore_qm_pq_params pq_params;
 	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
@@ -918,9 +916,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 
 	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
 	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-
-	pq_params.eth.tc = tc;
 
 	/* Allocate a CID for the queue */
 	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
@@ -944,7 +939,8 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 					   p_params,
 					   pbl_addr,
 					   pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_mcos(p_hwfn,
+								    tc));
 
 	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
 	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 9c1bd38..b598eda 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -81,7 +81,7 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params);
+			      u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 9035d3b..ba26d45 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -173,11 +173,10 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	u16 pq;
 	struct ecore_cxt_info cxt_info;
 	struct core_conn_context *p_cxt;
-	union ecore_qm_pq_params pq_params;
 	enum _ecore_status_t rc;
+	u16 physical_q;
 
 	cxt_info.iid = p_spq->cid;
 
@@ -206,10 +205,8 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(pq);
+	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
+	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
 	p_cxt->xstorm_st_context.spq_base_lo =
 	    DMA_LO_LE(p_spq->chain.p_phys_addr);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index a302e9e..365be25 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -632,8 +632,8 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
-				bool b_fail_malicious)
+static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
+				       bool b_fail_malicious)
 {
 	/* Check PF supports sriov */
 	if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
@@ -2103,15 +2103,9 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	union ecore_qm_pq_params pq_params;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
 
-	/* Prepare the parameters which would choose the right PQ */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.eth.is_vf = 1;
-	pq_params.eth.vf_id = vf->relative_vf_id;
-
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
@@ -2132,7 +2126,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 					   &params,
 					   req->pbl_addr,
 					   req->pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_vf(p_hwfn,
+							vf->relative_vf_id));
 
 	if (rc)
 		status = PFVF_STATUS_FAILURE;
-- 
1.7.10.3
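
To make the new accounting concrete, here is a worked example for a
hypothetical configuration; the personality, TC and VF counts are
assumptions, while the formulas and the PQ ordering come from
ecore_get_pq_flags(), the ecore_init_qm_get_num_*() getters and
ecore_init_qm_pq_params() above.

    /* Hypothetical ETH PF, SR-IOV enabled, num_hw_tc = 4, total_vfs = 8:
     *
     *   pq_flags = PQ_FLAGS_LB | PQ_FLAGS_MCOS | PQ_FLAGS_VFS
     *
     *   ecore_init_qm_get_num_pqs()    = 4 (MCOS) + 1 (LB) + 8 (VFS) = 13
     *   ecore_init_qm_get_num_vports() = 8 (VF vports) + 1 (shared)  =  9
     *
     * PQ layout built by ecore_init_qm_pq_params(), in order:
     *   [ 0.. 3] first_mcos_pq - one PQ per TC, sharing the PF vport
     *   [ 4    ] pure_lb_pq    - PURE_LB_TC, sharing the PF vport
     *   [ 5..12] first_vf_pq   - one rate-limited PQ per VF, own vport
     *
     * The context-init getters then resolve to:
     *   ecore_get_cm_pq_idx_mcos(p_hwfn, tc) = CM_TX_PQ_BASE + start_pq + tc
     *   ecore_get_cm_pq_idx_vf(p_hwfn, vf)   = CM_TX_PQ_BASE + start_pq + 5 + vf
     */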

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 21/61] net/qede/base: print firmware MFW and MBI versions
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (20 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 20/61] net/qede/base: qm initialization revamp Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 22/61] net/qede/base: check active VF queues before stopping Rasesh Mody
                         ` (40 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a printout of the FW, Management FW and MBI versions.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_if.h   |    9 ++++++++-
 drivers/net/qede/qede_main.c |   14 ++++++--------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 18404fb..1e27428 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -30,12 +30,19 @@ struct qed_dev_info {
 
 	/* MFW version */
 	uint32_t mfw_rev;
+#define QED_MFW_VERSION_0_MASK		0x000000FF
+#define QED_MFW_VERSION_0_OFFSET	0
+#define QED_MFW_VERSION_1_MASK		0x0000FF00
+#define QED_MFW_VERSION_1_OFFSET	8
+#define QED_MFW_VERSION_2_MASK		0x00FF0000
+#define QED_MFW_VERSION_2_OFFSET	16
+#define QED_MFW_VERSION_3_MASK		0xFF000000
+#define QED_MFW_VERSION_3_OFFSET	24
 
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
-	/* To be added... */
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e76346e..1d4f336 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -327,6 +327,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
 
@@ -337,13 +339,7 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 		dev_info->fw_eng = FW_ENGINEERING_VERSION;
 		dev_info->mf_mode = edev->mf_mode;
 		dev_info->tx_switching = false;
-	} else {
-		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
-					&dev_info->fw_minor, &dev_info->fw_rev,
-					&dev_info->fw_eng);
-	}
 
-	if (IS_PF(edev)) {
 		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
 		if (ptt) {
 			ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
@@ -361,12 +357,14 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 			ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
 		}
 	} else {
+		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
+					&dev_info->fw_minor, &dev_info->fw_rev,
+					&dev_info->fw_eng);
+
 		ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
 				      &dev_info->mfw_rev, NULL);
 	}
 
-	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-
 	return 0;
 }
 
-- 
1.7.10.3
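
The mfw_rev word packs four byte-sized fields. Below is a small decode
sketch using the new masks/offsets; the major-to-engineering ordering
(highest byte first) is an assumption here, and the helper itself is
illustrative rather than part of the patch.

    #include <stdio.h>

    /* illustrative decode of the packed MFW version word */
    static void print_mfw_ver(const struct qed_dev_info *info)
    {
            printf("MFW %u.%u.%u.%u\n",
                   (info->mfw_rev & QED_MFW_VERSION_3_MASK) >>
                            QED_MFW_VERSION_3_OFFSET,
                   (info->mfw_rev & QED_MFW_VERSION_2_MASK) >>
                            QED_MFW_VERSION_2_OFFSET,
                   (info->mfw_rev & QED_MFW_VERSION_1_MASK) >>
                            QED_MFW_VERSION_1_OFFSET,
                   (info->mfw_rev & QED_MFW_VERSION_0_MASK) >>
                            QED_MFW_VERSION_0_OFFSET);
    }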

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 22/61] net/qede/base: check active VF queues before stopping
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (21 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 21/61] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 23/61] net/qede/base: set driver type before sending load request Rasesh Mody
                         ` (39 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure the VF queues are closed before stopping the vport.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   37 ++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 365be25..73c4015 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -232,6 +232,30 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].rxq_active)
+			return true;
+
+	return false;
+}
+
+static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (p_vf->vf_queues[i].txq_active)
+			return true;
+
+	return false;
+}
+
 /* TODO - this is linux crc32; Need a way to ifdef it out for linux */
 u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
 {
@@ -1365,8 +1389,10 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 
 	p_vf->num_active_rxqs = 0;
 
-	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++)
+	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
 		p_vf->vf_queues[i].rxq_active = 0;
+		p_vf->vf_queues[i].txq_active = 0;
+	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
 	OSAL_MEMSET(&p_vf->acquire, 0, sizeof(p_vf->acquire));
@@ -1943,6 +1969,15 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->vport_instance--;
 	vf->spoof_chk = false;
 
+	if ((ecore_iov_validate_active_rxq(p_hwfn, vf)) ||
+	    (ecore_iov_validate_active_txq(p_hwfn, vf))) {
+		vf->b_malicious = true;
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] - considered malicious;"
+			  " Unable to stop RX/TX queues\n",
+			  vf->abs_vf_id);
+	}
+
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 23/61] net/qede/base: set driver type before sending load request
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (22 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 22/61] net/qede/base: check active VF queues before stopping Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 24/61] net/qede/base: prevent driver load with invalid resources Rasesh Mody
                         ` (38 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Set the drv_type before sending LOAD_REQ, and remove the ver_str,
which is not used by the MFW.
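
With this change the driver type is latched once at probe time and
rides in the LOAD_REQ mailbox parameter, making the separate ver_str
payload unnecessary; roughly (a sketch mirroring the diff below):

	/* qed_probe(): set once, before any MFW interaction */
	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;

	/* ecore_mcp_load_req(): drv_type is folded into the request */
	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
			  p_dev->drv_type;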

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    3 +--
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 drivers/net/qede/qede_ethdev.c    |    2 +-
 drivers/net/qede/qede_if.h        |    3 +--
 drivers/net/qede/qede_main.c      |   10 ++++------
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 58c97a3..b8c8bfd 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -30,7 +30,6 @@
 
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
-#define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
 #include "../qede_logs.h" /* @DPDK */
 
@@ -706,7 +705,7 @@ struct ecore_dev {
 
 	int				pcie_width;
 	int				pcie_speed;
-	u8				ver_str[NAME_SIZE]; /* @DPDK */
+
 	/* Add MF related configuration */
 	u8				mcp_rev;
 	u8				boot_mode;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 9f897b5..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -524,7 +524,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
@@ -538,8 +537,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
 			  p_dev->drv_type;
-	OSAL_MEMCPY(&union_data.ver_str, p_dev->ver_str, MCP_DRV_VER_STR_SIZE);
-	mb_params.p_data_src = &union_data;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c372181..d52e1be 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2175,7 +2175,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
-	adapter->ops->common->set_id(edev, edev->name, QEDE_PMD_VERSION);
+	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
 		adapter->dev_info.num_mac_filters =
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1e27428..0a1f7db 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -116,8 +116,7 @@ struct qed_common_ops {
 		     struct rte_pci_device *pci_dev,
 		     enum qed_protocol protocol,
 		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
-	void (*set_id)(struct ecore_dev *edev,
-		char name[], const char ver_str[]);
+	void (*set_name)(struct ecore_dev *edev, char name[]);
 	enum _ecore_status_t
 		(*chain_alloc)(struct ecore_dev *edev,
 			       enum ecore_chain_use_mode
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 1d4f336..a932c5f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -50,7 +50,9 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
 	int rc;
 
 	ecore_init_struct(edev);
+	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 	qdev->protocol = protocol;
+
 	if (is_vf)
 		edev->b_is_vf = true;
 
@@ -420,9 +422,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 	return 0;
 }
 
-static void
-qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
-	   const char ver_str[NAME_SIZE])
+static void qed_set_name(struct ecore_dev *edev, char name[NAME_SIZE])
 {
 	int i;
 
@@ -430,8 +430,6 @@ qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
 	for_each_hwfn(edev, i) {
 		snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
 	}
-	memcpy(edev->ver_str, ver_str, NAME_SIZE);
-	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 }
 
 static uint32_t
@@ -714,7 +712,7 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
 	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
-	INIT_STRUCT_FIELD(set_id, &qed_set_id),
+	INIT_STRUCT_FIELD(set_name, &qed_set_name),
 	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
 	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
 	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 24/61] net/qede/base: prevent driver load with invalid resources
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (23 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 23/61] net/qede/base: set driver type before sending load request Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 25/61] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
                         ` (37 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent storage drivers from attempting to load with invalid resources.
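
The guard clamps each storage CQ feature count to the scarcer of the
two backing resources, so a personality can never be granted more CQs
than it has status blocks or command queues to serve them (a sketch of
the rule the diff applies for both FCoE and iSCSI):

	feat_num[ECORE_FCOE_CQ] =
		OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
			   RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));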

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 380c5ba..7fce4fd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2437,13 +2437,19 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 			   FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 			   sb_cnt_info.sb_iov_cnt);
 
+	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #SBS=%d num_features=%d\n",
+		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
 		   (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
-		   RESC_NUM(p_hwfn, ECORE_SB),
-		   num_features);
+		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
+		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
+		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
 static enum resource_id_enum
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 25/61] net/qede/base: add interfaces for MFW TLV request processing
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (24 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 24/61] net/qede/base: prevent driver load with invalid resources Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 26/61] net/qede/base: code refactoring of SP queues Rasesh Mody
                         ` (36 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new base driver interfaces for Management FW TLV request processing.
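
Each TLV field travels as a value paired with a *_set flag, so only the
fields the ecore client explicitly marks valid get composed into the
response for the MFW. A hypothetical client-side fill (the callback
name and counters are illustrative, not part of this patch):

	static void fill_generic_tlvs(union ecore_mfw_tlv_data *p_data,
				      u64 rx_frames, u64 tx_frames)
	{
		struct ecore_mfw_tlv_generic *p_gen = &p_data->generic;

		p_gen->rx_frames = rx_frames;
		p_gen->rx_frames_set = true;
		p_gen->tx_frames = tx_frames;
		p_gen->tx_frames_set = true;
		/* fields left with *_set == false are skipped */
	}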

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    6 +
 drivers/net/qede/base/ecore_mcp_api.h |  301 +++++++++++++++++++++++++++++++++
 2 files changed, 307 insertions(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..79a907b 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,9 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 1be22dd..8cad43d 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -232,6 +232,295 @@ struct ecore_mba_vers {
 	u32 mba_vers[ECORE_MAX_NUM_OF_ROMIMG];
 };
 
+enum ecore_mfw_tlv_type {
+	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x4,	/* iSCSI protocol TLVs */
+};
+
+struct ecore_mfw_tlv_generic {
+	u16 feat_flags;
+	bool feat_flags_set;
+	u64 local_mac;
+	bool local_mac_set;
+	u64 additional_mac1;
+	bool additional_mac1_set;
+	u64 additional_mac2;
+	bool additional_mac2_set;
+	u16 lso_maxoff_size;
+	bool lso_maxoff_size_set;
+	u16 lso_minseg_size;
+	bool lso_minseg_size_set;
+	u8 prom_mode;
+	bool prom_mode_set;
+	u16 tx_descr_size;
+	bool tx_descr_size_set;
+	u16 rx_descr_size;
+	bool rx_descr_size_set;
+	u16 netq_count;
+	bool netq_count_set;
+	u16 flex_vlan;
+	bool flex_vlan_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u32 tcp4_offloads;
+	bool tcp4_offloads_set;
+	u32 tcp6_offloads;
+	bool tcp6_offloads_set;
+	u16 tx_descr_qdepth;
+	bool tx_descr_qdepth_set;
+	u16 rx_descr_qdepth;
+	bool rx_descr_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u8 iov_offload;
+	bool iov_offload_set;
+	u8 txqs_empty;
+	bool txqs_empty_set;
+	u8 rxqs_empty;
+	bool rxqs_empty_set;
+	u8 num_txqs_full;
+	bool num_txqs_full_set;
+	u8 num_rxqs_full;
+	bool num_rxqs_full_set;
+};
+
+struct ecore_mfw_tlv_fcoe {
+	u8 scsi_timeout;
+	bool scsi_timeout_set;
+	u32 rt_tov;
+	bool rt_tov_set;
+	u32 ra_tov;
+	bool ra_tov_set;
+	u32 ed_tov;
+	bool ed_tov_set;
+	u32 cr_tov;
+	bool cr_tov_set;
+	u8 boot_type;
+	bool boot_type_set;
+	u8 npiv_state;
+	bool npiv_state_set;
+	u32 num_npiv_ids;
+	bool num_npiv_ids_set;
+	u8 switch_name[8];
+	bool switch_name_set;
+	u16 switch_portnum;
+	bool switch_portnum_set;
+	u8 switch_portid[3];
+	bool switch_portid_set;
+	u8 vendor_name[8];
+	bool vendor_name_set;
+	u8 switch_model[8];
+	bool switch_model_set;
+	u8 switch_fw_version[8];
+	bool switch_fw_version_set;
+	u8 qos_pri;
+	bool qos_pri_set;
+	u8 port_alias[3];
+	bool port_alias_set;
+	u8 port_state;
+	bool port_state_set;
+	u16 fip_tx_descr_size;
+	bool fip_tx_descr_size_set;
+	u16 fip_rx_descr_size;
+	bool fip_rx_descr_size_set;
+	u16 link_failures;
+	bool link_failures_set;
+	u8 fcoe_boot_progress;
+	bool fcoe_boot_progress_set;
+	u64 rx_bcast;
+	bool rx_bcast_set;
+	u64 tx_bcast;
+	bool tx_bcast_set;
+	u16 fcoe_txq_depth;
+	bool fcoe_txq_depth_set;
+	u16 fcoe_rxq_depth;
+	bool fcoe_rxq_depth_set;
+	u64 fcoe_rx_frames;
+	bool fcoe_rx_frames_set;
+	u64 fcoe_rx_bytes;
+	bool fcoe_rx_bytes_set;
+	u64 fcoe_tx_frames;
+	bool fcoe_tx_frames_set;
+	u64 fcoe_tx_bytes;
+	bool fcoe_tx_bytes_set;
+	u16 crc_count;
+	bool crc_count_set;
+	u32 crc_err_src_fcid[5];
+	bool crc_err_src_fcid_set[5];
+	u8 crc_err_tstamp[5][14];
+	bool crc_err_tstamp_set[5];
+	u16 losync_err;
+	bool losync_err_set;
+	u16 losig_err;
+	bool losig_err_set;
+	u16 primtive_err;
+	bool primtive_err_set;
+	u16 disparity_err;
+	bool disparity_err_set;
+	u16 code_violation_err;
+	bool code_violation_err_set;
+	u32 flogi_param[4];
+	bool flogi_param_set[4];
+	u8 flogi_tstamp[14];
+	bool flogi_tstamp_set;
+	u32 flogi_acc_param[4];
+	bool flogi_acc_param_set[4];
+	u8 flogi_acc_tstamp[14];
+	bool flogi_acc_tstamp_set;
+	u32 flogi_rjt;
+	bool flogi_rjt_set;
+	u8 flogi_rjt_tstamp[14];
+	bool flogi_rjt_tstamp_set;
+	u32 fdiscs;
+	bool fdiscs_set;
+	u8 fdisc_acc;
+	bool fdisc_acc_set;
+	u8 fdisc_rjt;
+	bool fdisc_rjt_set;
+	u8 plogi;
+	bool plogi_set;
+	u8 plogi_acc;
+	bool plogi_acc_set;
+	u8 plogi_rjt;
+	bool plogi_rjt_set;
+	u32 plogi_dst_fcid[5];
+	bool plogi_dst_fcid_set[5];
+	u8 plogi_tstamp[5][14];
+	bool plogi_tstamp_set[5];
+	u32 plogi_acc_src_fcid[5];
+	bool plogi_acc_src_fcid_set[5];
+	u8 plogi_acc_tstamp[5][14];
+	bool plogi_acc_tstamp_set[5];
+	u8 tx_plogos;
+	bool tx_plogos_set;
+	u8 plogo_acc;
+	bool plogo_acc_set;
+	u8 plogo_rjt;
+	bool plogo_rjt_set;
+	u32 plogo_src_fcid[5];
+	bool plogo_src_fcid_set[5];
+	u8 plogo_tstamp[5][14];
+	bool plogo_tstamp_set[5];
+	u8 rx_logos;
+	bool rx_logos_set;
+	u8 tx_accs;
+	bool tx_accs_set;
+	u8 tx_prlis;
+	bool tx_prlis_set;
+	u8 rx_accs;
+	bool rx_accs_set;
+	u8 tx_abts;
+	bool tx_abts_set;
+	u8 rx_abts_acc;
+	bool rx_abts_acc_set;
+	u8 rx_abts_rjt;
+	bool rx_abts_rjt_set;
+	u32 abts_dst_fcid[5];
+	bool abts_dst_fcid_set[5];
+	u8 abts_tstamp[5][14];
+	bool abts_tstamp_set[5];
+	u8 rx_rscn;
+	bool rx_rscn_set;
+	u32 rx_rscn_nport[4];
+	bool rx_rscn_nport_set[4];
+	u8 tx_lun_rst;
+	bool tx_lun_rst_set;
+	u8 abort_task_sets;
+	bool abort_task_sets_set;
+	u8 tx_tprlos;
+	bool tx_tprlos_set;
+	u8 tx_nos;
+	bool tx_nos_set;
+	u8 rx_nos;
+	bool rx_nos_set;
+	u8 ols;
+	bool ols_set;
+	u8 lr;
+	bool lr_set;
+	u8 llr;
+	bool llrt;
+	u8 tx_lip;
+	bool tx_lip_set;
+	u8 rx_lip;
+	bool rx_lip_set;
+	u8 eofa;
+	bool eofa_set;
+	u8 eofni;
+	bool eofni_set;
+	u8 scsi_chks;
+	bool scsi_chks_set;
+	u8 scsi_cond_met;
+	bool scsi_cond_met_set;
+	u8 scsi_busy;
+	bool scsi_busy_set;
+	u8 scsi_inter;
+	bool scsi_inter_set;
+	u8 scsi_inter_cond_met;
+	bool scsi_inter_cond_met_set;
+	u8 scsi_rsv_conflicts;
+	bool scsi_rsv_conflicts_set;
+	u8 scsi_tsk_full;
+	bool scsi_tsk_full_set;
+	u8 scsi_aca_active;
+	bool scsi_aca_active_set;
+	u8 scsi_tsk_abort;
+	bool scsi_tsk_abort_set;
+	u32 scsi_rx_chk[5];
+	bool scsi_rx_chk_set[5];
+	u8 scsi_chk_tstamp[5][14];
+	bool scsi_chk_tstamp_set[5];
+};
+
+struct ecore_mfw_tlv_iscsi {
+	u8 target_llmnr;
+	bool target_llmnr_set;
+	u8 header_digest;
+	bool header_digest_set;
+	u8 data_digest;
+	bool data_digest_set;
+	u8 auth_method;
+	bool auth_method_set;
+	u16 boot_taget_portal;
+	bool boot_taget_portal_set;
+	u16 frame_size;
+	bool frame_size_set;
+	u16 tx_desc_size;
+	bool tx_desc_size_set;
+	u16 rx_desc_size;
+	bool rx_desc_size_set;
+	u8 boot_progress;
+	bool boot_progress_set;
+	u16 tx_desc_qdepth;
+	bool tx_desc_qdepth_set;
+	u16 rx_desc_qdepth;
+	bool rx_desc_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u32 cpcp_spcp_map;
+	bool cpcp_spcp_map_set;
+};
+
+union ecore_mfw_tlv_data {
+	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_fcoe fcoe;
+	struct ecore_mfw_tlv_iscsi iscsi;
+};
+
 /**
  * @brief - returns the link params of the hw function
  *
@@ -820,4 +1109,16 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Processes the TLV request from the MFW, i.e., gets the required
+ *          TLV info from the ecore client and sends it to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt);
+
 #endif
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 26/61] net/qede/base: code refactoring of SP queues
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (25 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 25/61] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 27/61] net/qede/base: make L2 queues handle based Rasesh Mody
                         ` (35 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Maintain the slowpath event queue and consumer queue within the HW
function structure, and update the corresponding alloc and free APIs
accordingly. Clean up unused code under the CONFIG_ECORE_LL2 ifdef.
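
The alloc routines now store the object into the hwfn themselves and
return a status code, instead of handing back a pointer for the caller
to stash; a call site shrinks to (as the diff shows):

	rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);  /* sets p_hwfn->p_eq */
	if (rc)
		goto alloc_err;

	rc = ecore_consq_alloc(p_hwfn);        /* sets p_hwfn->p_consq */
	if (rc)
		goto alloc_err;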

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   43 +++++++----------------------
 drivers/net/qede/base/ecore_spq.c |   54 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_spq.h |   35 +++++++++---------------
 3 files changed, 52 insertions(+), 80 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7fce4fd..1ce7d8e 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -165,12 +165,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
-		ecore_eq_free(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_free(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_free(p_hwfn);
+		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
-#ifdef CONFIG_ECORE_LL2
-		ecore_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -836,11 +833,6 @@ alloc_err:
 
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
-	struct ecore_consq *p_consq;
-	struct ecore_eq *p_eq;
-#ifdef	CONFIG_ECORE_LL2
-	struct ecore_ll2_info *p_ll2_info;
-#endif
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
@@ -988,24 +980,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			goto alloc_no_mem;
 		}
 
-		p_eq = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
-		if (!p_eq)
-			goto alloc_no_mem;
-		p_hwfn->p_eq = p_eq;
+		rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
+		if (rc)
+			goto alloc_err;
 
-		p_consq = ecore_consq_alloc(p_hwfn);
-		if (!p_consq)
-			goto alloc_no_mem;
-		p_hwfn->p_consq = p_consq;
-
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2) {
-			p_ll2_info = ecore_ll2_alloc(p_hwfn);
-			if (!p_ll2_info)
-				goto alloc_no_mem;
-			p_hwfn->p_ll2_info = p_ll2_info;
-		}
-#endif
+		rc = ecore_consq_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
 
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
@@ -1053,8 +1034,8 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_cxt_mngr_setup(p_hwfn);
 		ecore_spq_setup(p_hwfn);
-		ecore_eq_setup(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_setup(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_setup(p_hwfn);
+		ecore_consq_setup(p_hwfn);
 
 		/* Read shadow of current MFW mailbox */
 		ecore_mcp_read_mb(p_hwfn, p_hwfn->p_main_ptt);
@@ -1065,10 +1046,6 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2)
-			ecore_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index ba26d45..016de74 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -355,7 +355,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 {
 	struct ecore_eq *p_eq;
 
@@ -364,7 +364,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	if (!p_eq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_eq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain*/
@@ -374,7 +374,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      num_elem,
 			      sizeof(union event_ring_element),
-			      &p_eq->chain, OSAL_NULL)) {
+			      &p_eq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain\n");
 		goto eq_allocate_fail;
 	}
@@ -383,24 +383,28 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	ecore_int_register_cb(p_hwfn, ecore_eq_completion,
 			      p_eq, &p_eq->eq_sb_index, &p_eq->p_fw_cons);
 
-	return p_eq;
+	p_hwfn->p_eq = p_eq;
+	return ECORE_SUCCESS;
 
 eq_allocate_fail:
-	ecore_eq_free(p_hwfn, p_eq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_eq);
+	return ECORE_NOMEM;
 }
 
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_eq->chain);
+	ecore_chain_reset(&p_hwfn->p_eq->chain);
 }
 
-void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_eq)
+	if (!p_hwfn->p_eq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_eq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_eq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_eq->chain);
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_eq);
+	p_hwfn->p_eq = OSAL_NULL;
 }
 
 /***************************************************************************
@@ -943,7 +947,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_consq *p_consq;
 
@@ -953,7 +957,7 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_consq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_consq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain */
@@ -963,27 +967,29 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      ECORE_CHAIN_PAGE_SIZE / 0x80,
 			      0x80,
-			      &p_consq->chain, OSAL_NULL)) {
+			      &p_consq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate consq chain");
 		goto consq_allocate_fail;
 	}
 
-	return p_consq;
+	p_hwfn->p_consq = p_consq;
+	return ECORE_SUCCESS;
 
 consq_allocate_fail:
-	ecore_consq_free(p_hwfn, p_consq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_consq);
+	return ECORE_NOMEM;
 }
 
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_consq->chain);
+	ecore_chain_reset(&p_hwfn->p_consq->chain);
 }
 
-void ecore_consq_free(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_consq)
+	if (!p_hwfn->p_consq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_consq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_consq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain);
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq);
 }
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 717ede3..e2468b7 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -194,28 +194,23 @@ void ecore_spq_return_entry(struct ecore_hwfn		*p_hwfn,
  * @param p_hwfn
  * @param num_elem number of elements in the eq
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn	*p_hwfn,
-				 u16			num_elem);
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn	*p_hwfn, u16 num_elem);
 
 /**
- * @brief ecore_eq_setup - Reset the SPQ to its start state.
+ * @brief ecore_eq_setup - Reset the EQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_eq   *p_eq);
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_eq_deallocate - deallocates the given EQ struct.
+ * @brief ecore_eq_free - deallocates the given EQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_eq   *p_eq);
+void ecore_eq_free(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_eq_prod_update - update the FW with default EQ producer
@@ -261,32 +256,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn	*p_hwfn,
 u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_alloc - Allocates & initializes an ConsQ
- *        struct
+ * @brief ecore_consq_alloc - Allocates & initializes a ConsQ struct
  *
  * @param p_hwfn
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn	*p_hwfn);
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_setup - Reset the ConsQ to its start
- *        state.
+ * @brief ecore_consq_setup - Reset the ConsQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_consq   *p_consq);
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_consq_free - deallocates the given ConsQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_consq   *p_consq);
+void ecore_consq_free(struct ecore_hwfn *p_hwfn);
 
 #endif /* __ECORE_SPQ_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 27/61] net/qede/base: make L2 queues handle based
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (26 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 26/61] net/qede/base: code refactoring of SP queues Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 28/61] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
                         ` (34 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

L2 handler changes:

This change removes the queue-id/qzone difference for Tx queues.

It does that mainly by:

a. No longer deriving VF queues from the SBs they use.
Instead, the ecore client needs to maintain those and choose the values
the VF will use when it is initialized.

b. Eliminating the HW-cid array in the hw-function.
To do that, all rx/tx functionality becomes handle-based - when a queue
is started, the caller gets back a (void *) handle, which it later
passes to ecore for the various queue-related operations [update,
stop]. A usage sketch of the resulting API follows below.
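
A hedged usage sketch of the new handle-based Rx path from the
ecore-client side (setup of the start parameters elided; names match
the API added in the diff below):

	struct ecore_rxq_start_ret_params ret_params;
	enum _ecore_status_t rc;

	rc = ecore_eth_rx_queue_start(p_hwfn, opaque_fid, &params,
				      bd_max_bytes, bd_chain_phys_addr,
				      cqe_pbl_addr, cqe_pbl_size,
				      &ret_params);
	if (rc == ECORE_SUCCESS) {
		void *rxq = ret_params.p_handle; /* opaque queue-cid */

		/* ... datapath runs; the Rx producer lives at
		 * ret_params.p_prod ...
		 */

		rc = ecore_eth_rx_queue_stop(p_hwfn, rxq,
					     false /* eq_completion_only */,
					     false /* cqe_completion */);
	}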

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 -
 drivers/net/qede/base/ecore_dev.c     |   37 ---
 drivers/net/qede/base/ecore_int.c     |   24 --
 drivers/net/qede/base/ecore_int.h     |   10 -
 drivers/net/qede/base/ecore_iov_api.h |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  526 ++++++++++++++++++---------------
 drivers/net/qede/base/ecore_l2.h      |   84 +++---
 drivers/net/qede/base/ecore_l2_api.h  |  108 ++++---
 drivers/net/qede/base/ecore_sriov.c   |  262 ++++++++++------
 drivers/net/qede/base/ecore_sriov.h   |    4 +-
 drivers/net/qede/base/ecore_vf.c      |  119 +++++---
 drivers/net/qede/base/ecore_vf.h      |   55 ++--
 drivers/net/qede/qede_eth_if.c        |   50 ++--
 drivers/net/qede/qede_eth_if.h        |   22 +-
 drivers/net/qede/qede_rxtx.c          |   42 +--
 drivers/net/qede/qede_rxtx.h          |    2 +
 16 files changed, 723 insertions(+), 659 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b8c8bfd..de0f49a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -394,16 +394,6 @@ struct ecore_hw_info {
 	u16 mtu;
 };
 
-struct ecore_hw_cid_data {
-	u32	cid;
-	bool	b_cid_allocated;
-	u8	vfid; /* 1-based; 0 signals this is for a PF */
-
-	/* Additional identifiers */
-	u16	opaque_fid;
-	u8	vport_id;
-};
-
 /* maximun size of read/write commands (HW limit) */
 #define DMAE_MAX_RW_SIZE	0x2000
 
@@ -566,9 +556,6 @@ struct ecore_hwfn {
 	struct ecore_mcp_info		*mcp_info;
 	struct ecore_dcbx_info		*p_dcbx_info;
 
-	struct ecore_hw_cid_data	*p_tx_cids;
-	struct ecore_hw_cid_data	*p_rx_cids;
-
 	struct ecore_dmae_info		dmae_info;
 
 	/* QM init */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1ce7d8e..c895656 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -155,13 +155,6 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
-		OSAL_FREE(p_dev, p_hwfn->p_tx_cids);
-		OSAL_FREE(p_dev, p_hwfn->p_rx_cids);
-	}
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
@@ -844,36 +837,6 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	if (!p_dev->fw_data)
 		return ECORE_NOMEM;
 
-	/* Allocate Memory for the Queue->CID mapping */
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		u32 num_tx_conns = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-		int tx_size, rx_size;
-
-		/* @@@TMP - resc management, change to actual required size */
-		if (p_hwfn->pf_params.eth_pf_params.num_cons > num_tx_conns)
-			num_tx_conns = p_hwfn->pf_params.eth_pf_params.num_cons;
-		tx_size = sizeof(struct ecore_hw_cid_data) * num_tx_conns;
-		rx_size = sizeof(struct ecore_hw_cid_data) *
-		    RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-
-		p_hwfn->p_tx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						tx_size);
-		if (!p_hwfn->p_tx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Tx Cids\n");
-			goto alloc_no_mem;
-		}
-
-		p_hwfn->p_rx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						rx_size);
-		if (!p_hwfn->p_rx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Rx Cids\n");
-			goto alloc_no_mem;
-		}
-	}
-
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		u32 n_eqes, num_cons;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e5a4359..8dc4d15 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2182,30 +2182,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 	p_sb_cnt_info->sb_free_blk = info->free_blks;
 }
 
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
-{
-	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
-
-	/* Determine origin of SB id */
-	if ((sb_id >= p_info->igu_base_sb) &&
-	    (sb_id < p_info->igu_base_sb + p_info->igu_sb_cnt)) {
-		return sb_id - p_info->igu_base_sb;
-	} else if ((sb_id >= p_info->igu_base_sb_iov) &&
-		   (sb_id < p_info->igu_base_sb_iov +
-			    p_info->igu_sb_cnt_iov)) {
-		/* We want the first VF queue to be adjacent to the
-		 * last PF queue. Since L2 queues can be partial to
-		 * SBs, we'll use the feature instead.
-		 */
-		return sb_id - p_info->igu_base_sb_iov +
-		       FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
-	} else {
-		DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
-			  sb_id);
-		return 0;
-	}
-}
-
 void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
 {
 	int i;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 45358b9..0c8929e 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -172,16 +172,6 @@ void ecore_int_free(struct ecore_hwfn *p_hwfn);
 void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
 /**
- * @brief - Returns an Rx queue index appropriate for usage with given SB.
- *
- * @param p_hwfn
- * @param sb_id - absolute index of SB
- *
- * @return index of Rx queue
- */
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
-
-/**
  * @brief - Enable Interrupt & Attention for hw function
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 9775360..b8dc47b 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -88,6 +88,23 @@ struct ecore_public_vf_info {
 	u16 forced_vlan;
 };
 
+struct ecore_iov_vf_init_params {
+	u16 rel_vf_id;
+
+	/* Number of requested Queues; Currently, don't support different
+	 * number of Rx/Tx queues.
+	 */
+	/* TODO - remove this limitation */
+	u16 num_queues;
+
+	/* Allow the client to choose which qzones to use for Rx/Tx,
+	 * and which queue_base to use for Tx queues on a per-queue basis.
+	 * Notice values should be relative to the PF resources.
+	 */
+	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+};
+
 #ifdef CONFIG_ECORE_SW_CHANNEL
 /* This is SW channel related only... */
 enum mbx_state {
@@ -175,15 +192,14 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
  *
  * @param p_hwfn
  * @param p_ptt
- * @param rel_vf_id
- * @param num_rx_queues
+ * @param p_params
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id,
-					      u16 num_rx_queues);
+					      struct ecore_iov_vf_init_params
+						     *p_params);
 
 /**
  * @brief ecore_iov_process_mbx_req - process a request received
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 0220d19..352620a 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,6 +29,120 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid)
+{
+	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
+	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+}
+
+/* The internal variant is only meant to be called directly by PFs
+ * initializing CIDs for their VFs.
+ */
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params)
+{
+	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
+	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	if (p_cid == OSAL_NULL)
+		return OSAL_NULL;
+	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
+
+	p_cid->opaque_fid = opaque_fid;
+	p_cid->cid = cid;
+	p_cid->vf_qid = vf_qid;
+	p_cid->rel = *p_params;
+
+	/* Don't try calculating the absolute indices for VFs */
+	if (IS_VF(p_hwfn->p_dev)) {
+		p_cid->abs = p_cid->rel;
+		goto out;
+	}
+
+	/* Calculate the engine-absolute indices of the resources.
+	 * This would guarantee they're valid later on.
+	 * In some cases [SBs] we already have the right values.
+	 */
+	rc = ecore_fw_vport(p_hwfn, p_cid->rel.vport_id, &p_cid->abs.vport_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	rc = ecore_fw_l2_queue(p_hwfn, p_cid->rel.queue_id,
+			       &p_cid->abs.queue_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	/* In case of a PF configuring its VF's queues, the stats-id is already
+	 * absolute [since there's a single index that's suitable per-VF].
+	 */
+	if (b_is_same) {
+		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
+				    &p_cid->abs.stats_id);
+		if (rc != ECORE_SUCCESS)
+			goto fail;
+	} else {
+		p_cid->abs.stats_id = p_cid->rel.stats_id;
+	}
+
+	/* SBs relevant information was already provided as absolute */
+	p_cid->abs.sb = p_cid->rel.sb;
+	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
+
+	/* This is tricky - we're actually interested in whether this is a PF
+	 * entry meant for the VF.
+	 */
+	if (!b_is_same)
+		p_cid->is_vf = true;
+out:
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   p_cid->opaque_fid, p_cid->cid,
+		   p_cid->rel.vport_id, p_cid->abs.vport_id,
+		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.stats_id, p_cid->abs.stats_id,
+		   p_cid->abs.sb, p_cid->abs.sb_idx);
+
+	return p_cid;
+
+fail:
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+	return OSAL_NULL;
+}
+
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+		       u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params)
+{
+	struct ecore_queue_cid *p_cid;
+	u32 cid = 0;
+
+	/* Get a unique firmware CID for this queue, in case it's a PF.
+	 * VF's don't need a CID as the queue configuration will be done
+	 * by PF.
+	 */
+	if (IS_PF(p_hwfn->p_dev)) {
+		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					  &cid) != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+			return OSAL_NULL;
+		}
+	}
+
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, cid);
+
+	return p_cid;
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -558,57 +672,28 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 	return 0;
 }
 
-static void ecore_sp_release_queue_cid(struct ecore_hwfn *p_hwfn,
-				       struct ecore_hw_cid_data *p_cid_data)
-{
-	if (!p_cid_data->b_cid_allocated)
-		return;
-
-	ecore_cxt_release_cid(p_hwfn, p_cid_data->cid);
-	p_cid_data->b_cid_allocated = false;
-}
-
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod)
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size)
 {
 	struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 abs_rx_q_id = 0;
-	u8 abs_vport_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
-	/* Store information for the stop */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	p_rx_cid->cid = cid;
-	p_rx_cid->opaque_fid = opaque_fid;
-	p_rx_cid->vport_id = p_params->vport_id;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_rx_q_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		   opaque_fid, cid, p_params->queue_id,
-		   p_params->vport_id, p_params->sb);
+		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
+		   p_cid->abs.vport_id, p_cid->abs.sb);
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -619,11 +704,11 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 	p_ramrod = &p_ent->ramrod.rx_queue_start;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->vport_id = abs_vport_id;
-	p_ramrod->stats_counter_id = p_params->stats_id;
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 	p_ramrod->complete_cqe_flg = 0;
 	p_ramrod->complete_event_flg = 1;
 
@@ -633,92 +718,88 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_params->vf_qid || b_use_zone_a_prod) {
-		p_ramrod->vf_rx_prod_index = (u8)p_params->vf_qid;
+	if (p_cid->is_vf) {
+		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
-			   b_use_zone_a_prod ? " [legacy]" : "",
-			   p_params->vf_qid);
-		p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
+			   !!p_cid->b_legacy_vf ? " [legacy]" : "",
+			   p_cid->vf_qid);
+		p_ramrod->vf_rx_prod_use_zone_a = !!p_cid->b_legacy_vf;
 	}
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u16 bd_max_bytes,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod)
+			    void OSAL_IOMEM * *pp_producer)
 {
-	struct ecore_hw_cid_data *p_rx_cid;
 	u32 init_prod_val = 0;
-	u16 abs_l2_queue = 0;
-	u8 abs_stats_id = 0;
-	enum _ecore_status_t rc;
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_rxq_start(p_hwfn,
-					     (u8)p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     bd_max_bytes,
-					     bd_chain_phys_addr,
-					     cqe_pbl_addr,
-					     cqe_pbl_size, pp_prod);
-	}
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_l2_queue);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	    GTT_BAR0_MAP_REG_MSDM_RAM +
-	    MSTORM_ETH_PF_PRODS_OFFSET(abs_l2_queue);
+	*pp_producer = (u8 OSAL_IOMEM *)
+		       p_hwfn->regview +
+		       GTT_BAR0_MAP_REG_MSDM_RAM +
+		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
+	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
+					  bd_max_bytes,
+					  bd_chain_phys_addr,
+					  cqe_pbl_addr, cqe_pbl_size);
+}
+
+enum _ecore_status_t
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
 	/* Allocate a CID for the queue */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-				   &p_rx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_rx_cid->b_cid_allocated = true;
-	p_params->stats_id = abs_stats_id;
-	p_params->vf_qid = 0;
-
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_rx_cid->cid,
-					   p_params,
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_start(p_hwfn, p_cid,
+						 bd_max_bytes,
+						 bd_chain_phys_addr,
+						 cqe_pbl_addr, cqe_pbl_size,
+						 &p_ret_params->p_prod);
+	else
+		rc = ecore_vf_pf_rxq_start(p_hwfn, p_cid,
 					   bd_max_bytes,
 					   bd_chain_phys_addr,
 					   cqe_pbl_addr,
 					   cqe_pbl_size,
-					   false);
+					   &p_ret_params->p_prod);
 
+	/* Provide the caller with a reference to the queue handle */
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handles,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
@@ -728,14 +809,14 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 qid, abs_rx_q_id = 0;
+	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_rxqs_update(p_hwfn,
-					       rx_queue_id,
+					       (struct ecore_queue_cid **)
+					       pp_rxq_handles,
 					       num_rxqs,
 					       complete_cqe_flg,
 					       complete_event_flg);
@@ -745,12 +826,11 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	init_data.p_comp_data = p_comp_data;
 
 	for (i = 0; i < num_rxqs; i++) {
-		qid = rx_queue_id + i;
-		p_rx_cid = &p_hwfn->p_rx_cids[qid];
+		p_cid = ((struct ecore_queue_cid **)pp_rxq_handles)[i];
 
 		/* Get SPQ entry */
-		init_data.cid = p_rx_cid->cid;
-		init_data.opaque_fid = p_rx_cid->opaque_fid;
+		init_data.cid = p_cid->cid;
+		init_data.opaque_fid = p_cid->opaque_fid;
 
 		rc = ecore_sp_init_request(p_hwfn, &p_ent,
 					   ETH_RAMROD_RX_QUEUE_UPDATE,
@@ -759,41 +839,34 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 			return rc;
 
 		p_ramrod = &p_ent->ramrod.rx_queue_update;
+		p_ramrod->vport_id = p_cid->abs.vport_id;
 
-		ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-		ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
-		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 		p_ramrod->complete_cqe_flg = complete_cqe_flg;
 		p_ramrod->complete_event_flg = complete_event_flg;
 
 		rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-		if (rc)
+		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
 
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only, bool cqe_completion)
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   bool b_eq_completion_only,
+			   bool b_cqe_completion)
 {
-	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
 	struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	u16 abs_rx_q_id = 0;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
-					    cqe_completion);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_rx_cid->cid;
-	init_data.opaque_fid = p_rx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -803,64 +876,54 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.rx_queue_stop;
-
-	ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-	ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) &&
-				      !eq_completion_only) || cqe_completion;
-	p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) ||
-	    eq_completion_only;
+	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+				     b_cqe_completion;
+	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
 
-	ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+enum _ecore_status_t ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_rxq,
+					     bool eq_completion_only,
+					     bool cqe_completion)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_rxq;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_stop(p_hwfn, p_cid,
+						eq_completion_only,
+						cqe_completion);
+	else
+		rc = ecore_vf_pf_rxq_stop(p_hwfn, p_cid, cqe_completion);
 
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id)
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_tx_cid;
-	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
-	u8 abs_vport_id;
-
-	/* Store information for the stop */
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	p_tx_cid->cid = cid;
-	p_tx_cid->opaque_fid = opaque_fid;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->qzone_id, &abs_tx_qzone_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -870,14 +933,14 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.tx_queue_start;
-	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->stats_counter_id = p_params->stats_id;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 
-	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
-	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
+	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
+	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
@@ -887,90 +950,72 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
+			    dma_addr_t pbl_addr, u16 pbl_size,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
-	struct ecore_hw_cid_data *p_tx_cid;
-	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_txq_start(p_hwfn,
-					     p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     pbl_addr,
-					     pbl_size,
-					     pp_doorbell);
-	}
-
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
+	/* TODO - set tc in the pq_params for multi-cos */
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
+					pbl_addr, pbl_size,
+					ecore_get_cm_pq_idx_mcos(p_hwfn, tc));
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
+	/* Provide the caller with the necessary return values */
+	*pp_doorbell = (u8 OSAL_IOMEM *)
+		       p_hwfn->doorbells +
+		       DB_ADDR(p_cid->cid, DQ_DEMS_LEGACY);
 
-	/* Allocate a CID for the queue */
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_tx_cid->b_cid_allocated = true;
+	return ECORE_SUCCESS;
+}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		    opaque_fid, p_tx_cid->cid, p_params->queue_id,
-		    p_params->vport_id, p_params->sb);
+enum _ecore_status_t
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr, u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
 
-	p_params->stats_id = abs_stats_id;
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_INVAL;
 
-	/* TODO - set tc in the pq_params for multi-cos */
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_tx_cid->cid,
-					   p_params,
-					   pbl_addr,
-					   pbl_size,
-					   ecore_get_cm_pq_idx_mcos(p_hwfn,
-								    tc));
-
-	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_start(p_hwfn, p_cid, tc,
+						 pbl_addr, pbl_size,
+						 &p_ret_params->p_doorbell);
+	else
+		rc = ecore_vf_pf_txq_start(p_hwfn, p_cid,
+					   pbl_addr, pbl_size,
+					   &p_ret_params->p_doorbell);
 
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn)
-{
-	return ECORE_NOTIMPL;
-}
-
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id)
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid)
 {
-	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_tx_cid->cid;
-	init_data.opaque_fid = p_tx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -979,11 +1024,22 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_stop(p_hwfn, p_cid);
+	else
+		rc = ecore_vf_pf_txq_stop(p_hwfn, p_cid);
 
-	ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
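
The Tx-queue start/stop pair above shows the new handle-based flow:
ecore_eth_tx_queue_start() builds a queue-cid, dispatches either to the
PF ramrod helper or to the VF->PF channel, and returns the cid as an
opaque handle. A minimal caller-side sketch -- sb_id, sb_idx, pbl_addr,
pbl_size and opaque_fid are assumed to be prepared by the caller and
are illustrative, not taken from this patch:

	struct ecore_txq_start_ret_params ret_params;
	struct ecore_queue_start_common_params qp;
	enum _ecore_status_t rc;

	OSAL_MEMSET(&ret_params, 0, sizeof(ret_params));
	OSAL_MEMSET(&qp, 0, sizeof(qp));
	qp.vport_id = 0;	/* relative to the sending entity */
	qp.queue_id = 0;
	qp.stats_id = 0;	/* only meaningful for PFs */
	qp.sb = sb_id;		/* absolute SB id */
	qp.sb_idx = sb_idx;

	rc = ecore_eth_tx_queue_start(p_hwfn, opaque_fid, &qp, 0 /* tc */,
				      pbl_addr, pbl_size, &ret_params);
	if (rc == ECORE_SUCCESS) {
		/* ret_params.p_doorbell is the Tx doorbell address;
		 * ret_params.p_handle is the opaque cid later handed
		 * to ecore_eth_tx_queue_stop().
		 */
	}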
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index b598eda..c136389 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,59 +15,66 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
-/**
- * @brief ecore_sp_eth_tx_queue_update -
- *
- * This ramrod updates a TX queue. It is used for setting the active
- * state of the queue.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn);
+struct ecore_queue_cid {
+	/* 'Relative' is a relative term ;-). Usually the indices [not counting
+	 * SBs] would be PF-relative, but there are some cases where that isn't
+	 * the case - specifically for a PF configuring its VF indices it's
+	 * possible some fields [E.g., stats-id] in 'rel' would already be abs.
+	 */
+	struct ecore_queue_start_common_params rel;
+	struct ecore_queue_start_common_params abs;
+	u32 cid;
+	u16 opaque_fid;
+
+	/* VFs queues are mapped differently, so we need to know the
+	 * relative queue associated with them [0-based].
+	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
+	 * and not on the VF itself.
+	 */
+	bool is_vf;
+	u8 vf_qid;
+
+	/* Legacy VFs might have Rx producer located elsewhere */
+	bool b_legacy_vf;
+};
+
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid);
+
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params);
 
 /**
- * @brief - Starts an Rx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts an Rx queue, when queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id, stats_id, sb, sb_idx, vf_qid]
-	  stats_id is absolute packed in p_params.
+ * @param p_cid
  * @param bd_max_bytes
  * @param bd_chain_phys_addr
  * @param cqe_pbl_addr
  * @param cqe_pbl_size
- * @param b_use_zone_a_prod - support legacy VF producers
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod);
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size);
 
 /**
- * @brief - Starts a Tx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts a Tx queue, where queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id,stats_id, sb, sb_idx, vf_qid]
+ * @param p_cid
  * @param pbl_addr
  * @param pbl_size
  * @param p_pq_params - parameters for choosing the PQ for this Tx queue
@@ -75,13 +82,10 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id);
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
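
The queue-cid above snapshots both coordinate systems: 'rel' keeps the
caller's view while 'abs' holds what the ramrods consume (see
ecore_eth_txq_start_ramrod() reading p_cid->abs earlier in this patch).
Conceptually the translation happens once, at cid-creation time -- a
sketch of the idea, not the patch's exact _ecore_eth_queue_to_cid()
body:

	p_cid->opaque_fid = opaque_fid;
	p_cid->cid = cid;
	p_cid->vf_qid = vf_qid;
	p_cid->rel = *p_params;

	/* SBs are already absolute per the API comment; other fields
	 * may need rel->abs translation (e.g., via ecore_fw_vport()
	 * for a PF's stats-id).
	 */
	p_cid->abs.sb = p_cid->rel.sb;
	p_cid->abs.sb_idx = p_cid->rel.sb_idx;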
 
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 8f7b614..af316d3 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -28,22 +28,26 @@ enum ecore_rss_caps {
 #endif
 
 struct ecore_queue_start_common_params {
-	/* Rx/Tx queue relative id to keep obtained cid in corresponding array
-	 * RX - upper-bounded by number of FW-queues
-	 */
-	u16 queue_id;
+	/* Should always be relative to entity sending this. */
 	u8 vport_id;
+	u16 queue_id;
 
-	/* q_zone_id is relative, may be different from queue id
-	 * currently used by Tx-only, upper-bounded by number of FW-queues
-	 */
-	u16 qzone_id;
-
-	/* stats_id is relative or absolute depends on function */
+	/* Relative, but relevant only for PFs */
 	u8 stats_id;
+
+	/* These are always absolute */
 	u16 sb;
-	u16 sb_idx;
-	u16 vf_qid;
+	u8 sb_idx;
+};
+
+struct ecore_rxq_start_ret_params {
+	void OSAL_IOMEM *p_prod;
+	void *p_handle;
+};
+
+struct ecore_txq_start_ret_params {
+	void OSAL_IOMEM *p_doorbell;
+	void *p_handle;
 };
 
 struct ecore_rss_params {
@@ -167,42 +171,37 @@ ecore_filter_accept_cmd(
 	struct ecore_spq_comp_cb	 *p_comp_data);
 
 /**
- * @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
+ * @brief ecore_eth_rx_queue_start - RX Queue Start Ramrod
  *
  * This ramrod initializes an RX Queue for a VPort. An Assert is generated if
  * the VPort ID is not currently initialized.
  *
  * @param p_hwfn
  * @param opaque_fid
- * @p_params			[stats_id is relative, packed in p_params]
+ * @param p_params		Inputs; relative for PF [SB being an exception]
  * @param bd_max_bytes		Maximum bytes that can be placed on a BD
  * @param bd_chain_phys_addr	Physical address of BDs for receive.
  * @param cqe_pbl_addr		Physical address of the CQE PBL Table.
  * @param cqe_pbl_size		Size of the CQE PBL Table
- * @param pp_prod		Pointer to place producer's
- *                              address for the Rx Q (May be
- *				NULL).
+ * @param p_ret_params		Pointed struct to be filled with outputs.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u16 bd_max_bytes,
-			    dma_addr_t bd_chain_phys_addr,
-			    dma_addr_t cqe_pbl_addr,
-			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod);
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_rx_queue_stop -
- *
- * This ramrod closes an RX queue. It sends RX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_rx_queue_stop - This ramrod closes an Rx queue
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
+ * @param p_rxq			Handle of the queue to close
  * @param eq_completion_only	If True completion will be on
  *				EQe, if False completion will be
  *				on EQe if p_hwfn opaque
@@ -213,13 +212,13 @@ ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only,
-			   bool cqe_completion);
+ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			void *p_rxq,
+			bool eq_completion_only,
+			bool cqe_completion);
 
 /**
- * @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
+ * @brief - TX Queue Start Ramrod
  *
  * This ramrod initializes a TX Queue for a VPort. An Assert is generated if
  * the VPort is not currently initialized.
@@ -230,34 +229,29 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
  * @param tc			traffic class to use with this L2 txq
  * @param pbl_addr		address of the pbl array
  * @param pbl_size		number of entries in pbl
- * @param pp_doorbell		Pointer to place doorbell pointer (May be NULL).
- *				This address should be used with the
- *				DIRECT_REG_WR macro.
+ * @param p_ret_params		Pointer to fill the return parameters in.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
-			    void OSAL_IOMEM * *pp_doorbell);
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr,
+			 u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_tx_queue_stop -
- *
- * This ramrod closes a TX queue. It sends TX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_tx_queue_stop - closes a Tx queue
  *
  * @param p_hwfn
- * @param tx_queue_id		TX Queue ID
+ * @param p_txq - handle to Tx queue needed to be closed
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id);
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_txq);
 
 enum ecore_tpa_mode	{
 	ECORE_TPA_MODE_NONE,
@@ -389,19 +383,19 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
  * @note Final phase API.
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
- * @param num_rxqs              Allow to update multiple rx
- *				queues, from rx_queue_id to
- *				(rx_queue_id + num_rxqs)
+ * @param pp_rxq_handlers	An array of queue handlers to be updated.
+ * @param num_rxqs              number of queues to update.
  * @param complete_cqe_flg	Post completion to the CQE Ring if set
  * @param complete_event_flg	Post completion to the Event Ring if set
+ * @param comp_mode
+ * @param p_comp_data
  *
  * @return enum _ecore_status_t
  */
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handlers,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 73c4015..7378420 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -238,7 +238,7 @@ static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].rxq_active)
+		if (p_vf->vf_queues[i].p_rx_cid)
 			return true;
 
 	return false;
@@ -250,7 +250,7 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].txq_active)
+		if (p_vf->vf_queues[i].p_tx_cid)
 			return true;
 
 	return false;
@@ -953,17 +953,19 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
-enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id, u16 num_rx_queues)
+enum _ecore_status_t
+ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 struct ecore_iov_vf_init_params *p_params)
 {
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
+	u16 qid, num_irqs;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cids;
 	u8 i;
 
-	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false);
 	if (!vf) {
 		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
 		return ECORE_UNKNOWN_ERROR;
@@ -971,22 +973,52 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	if (vf->b_init) {
 		DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
-			  rel_vf_id);
+			  p_params->rel_vf_id);
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested queue_id */
+	for (i = 0; i < p_params->num_queues; i++) {
+		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
+		u16 max_vf_qzone = min_vf_qzone +
+				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;
+
+		qid = p_params->req_rx_queue[i];
+		if (qid < min_vf_qzone || qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0x%04x,...,0x%04x] available\n",
+				  qid, p_params->rel_vf_id,
+				  min_vf_qzone, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		qid = p_params->req_tx_queue[i];
+		if (qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Tx qid [%04x] for VF[%d]: max qid 0x%04x\n",
+				  qid, p_params->rel_vf_id, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		/* If client *really* wants, Tx qid can be shared with PF */
+		if (qid < min_vf_qzone)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] is using PF qid [0x%04x] for Txq[0x%02x]\n",
+				   p_params->rel_vf_id, qid, i);
+	}
+
 	/* Limit number of queues according to number of CIDs */
 	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d] - requesting to initialize for 0x%04x queues"
 		   " [0x%04x CIDs available]\n",
-		   vf->relative_vf_id, num_rx_queues, (u16)cids);
-	num_rx_queues = OSAL_MIN_T(u16, num_rx_queues, ((u16)cids));
+		   vf->relative_vf_id, p_params->num_queues, (u16)cids);
+	num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids));
 
 	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
 							       p_ptt,
 							       vf,
-							       num_rx_queues);
+							       num_irqs);
 	if (num_of_vf_available_chains == 0) {
 		DP_ERR(p_hwfn, "no available igu sbs\n");
 		return ECORE_NOMEM;
@@ -997,26 +1029,19 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		u16 queue_id = ecore_int_queue_id_from_sb_id(p_hwfn,
-							     vf->igu_sbs[i]);
+		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
 
-		if (queue_id > RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF[%d] will require utilizing of"
-				  " out-of-bounds queues - %04x\n",
-				  vf->relative_vf_id, queue_id);
-			/* TODO - cleanup the already allocate SBs */
-			return ECORE_INVAL;
-		}
+		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
+		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
 		/* CIDs are per-VF, so no problem having them 0-based. */
-		vf->vf_queues[i].fw_rx_qid = queue_id;
-		vf->vf_queues[i].fw_tx_qid = queue_id;
-		vf->vf_queues[i].fw_cid = i;
+		p_queue->fw_cid = i;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
-			   vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   vf->relative_vf_id, i, vf->igu_sbs[i],
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
+			   p_queue->fw_cid);
 	}
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
@@ -1390,8 +1415,19 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		p_vf->vf_queues[i].rxq_active = 0;
-		p_vf->vf_queues[i].txq_active = 0;
+		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+
+		if (p_queue->p_rx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_rx_cid);
+			p_queue->p_rx_cid = OSAL_NULL;
+		}
+
+		if (p_queue->p_tx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_tx_cid);
+			p_queue->p_tx_cid = OSAL_NULL;
+		}
 	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
@@ -1829,14 +1865,14 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			u16 qid;
+			struct ecore_queue_cid *p_cid;
 
-			if (!p_vf->vf_queues[i].rxq_active)
+			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			if (p_cid == OSAL_NULL)
 				continue;
 
-			qid = p_vf->vf_queues[i].fw_rx_qid;
-
-			rc = ecore_sp_eth_rx_queues_update(p_hwfn, qid,
+			rc = ecore_sp_eth_rx_queues_update(p_hwfn,
+							   (void **)&p_cid,
 						   1, 0, 1,
 						   ECORE_SPQ_MODE_EBLOCK,
 						   OSAL_NULL);
@@ -1844,7 +1880,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 				DP_NOTICE(p_hwfn, true,
 					  "Failed to send Rx update"
 					  " fo queue[0x%04x]\n",
-					  qid);
+					  p_cid->rel.queue_id);
 				return rc;
 			}
 		}
@@ -2038,6 +2074,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	bool b_legacy_vf = false;
 	enum _ecore_status_t rc;
@@ -2048,14 +2085,24 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->rx_qid];
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.queue_id = (u8)vf->vf_queues[req->rx_qid].fw_rx_qid;
-	params.vf_qid = req->rx_qid;
+	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
+	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->rx_qid,
+						    &params);
+	if (p_queue->p_rx_cid == OSAL_NULL)
+		goto out;
+
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
@@ -2067,27 +2114,27 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
+	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
-					   vf->vf_queues[req->rx_qid].fw_cid,
-					   &params,
-					   req->bd_max_bytes,
-					   req->rxq_addr,
-					   req->cqe_pbl_addr,
-					   req->cqe_pbl_size,
-					   b_legacy_vf);
 
-	if (rc) {
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
+					p_queue->p_rx_cid,
+					req->bd_max_bytes,
+					req->rxq_addr,
+					req->cqe_pbl_addr,
+					req->cqe_pbl_size);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
+		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
+		p_queue->p_rx_cid = OSAL_NULL;
 	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->rx_qid].rxq_active = true;
 		vf->num_active_rxqs++;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf,
-					status, b_legacy_vf);
+	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status,
+					b_legacy_vf);
 }
 
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
@@ -2138,8 +2185,10 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
+	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
@@ -2148,27 +2197,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->tx_qid];
+
+	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   vf->opaque_fid,
-					   vf->vf_queues[req->tx_qid].fw_cid,
-					   &params,
-					   req->pbl_addr,
-					   req->pbl_size,
-					   ecore_get_cm_pq_idx_vf(p_hwfn,
-							vf->relative_vf_id));
+	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->tx_qid,
+						    &params);
+	if (p_queue->p_tx_cid == OSAL_NULL)
+		goto out;
 
-	if (rc)
+	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
+				    vf->relative_vf_id);
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+					req->pbl_addr, req->pbl_size, pq);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-	else {
+		ecore_eth_queue_cid_release(p_hwfn,
+					    p_queue->p_tx_cid);
+		p_queue->p_tx_cid = OSAL_NULL;
+	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->tx_qid].txq_active = true;
 	}
 
 out:
@@ -2181,6 +2237,7 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
+	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int qid;
 
@@ -2188,16 +2245,18 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		if (vf->vf_queues[qid].rxq_active) {
-			rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_rx_qid, false,
-							cqe_completion);
+		p_queue = &vf->vf_queues[qid];
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].rxq_active = false;
+		if (!p_queue->p_rx_cid)
+			continue;
+
+		rc = ecore_eth_rx_queue_stop(p_hwfn,
+					     p_queue->p_rx_cid,
+					     false, cqe_completion);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2209,21 +2268,23 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_q_info *p_queue;
 	int qid;
 
 	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
-		if (vf->vf_queues[qid].txq_active) {
-			rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_tx_qid);
+		p_queue = &vf->vf_queues[qid];
+		if (!p_queue->p_tx_cid)
+			continue;
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].txq_active = false;
+		rc = ecore_eth_tx_queue_stop(p_hwfn,
+					     p_queue->p_tx_cid);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_queue->p_tx_cid = OSAL_NULL;
 	}
 	return rc;
 }
@@ -2279,10 +2340,11 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
 {
+	struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct vfpf_update_rxq_tlv *req;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
 	u16 qid;
@@ -2293,30 +2355,38 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
+	/* Validate inputs */
+	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
+	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
+		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
+		goto out;
+	}
+
 	for (i = 0; i < req->num_rxqs; i++) {
 		qid = req->rx_qid + i;
 
-		if (!vf->vf_queues[qid].rxq_active) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF rx_qid = %d isn`t active!\n", qid);
-			status = PFVF_STATUS_FAILURE;
-			break;
+		if (!vf->vf_queues[qid].p_rx_cid) {
+			DP_INFO(p_hwfn,
+				"VF[%d] rx_qid = %d isn't active!\n",
+				vf->relative_vf_id, qid);
+			goto out;
 		}
 
-		rc = ecore_sp_eth_rx_queues_update(p_hwfn,
-						   vf->vf_queues[qid].fw_rx_qid,
-						   1,
-						   complete_cqe_flg,
-						   complete_event_flg,
-						   ECORE_SPQ_MODE_EBLOCK,
-						   OSAL_NULL);
-
-		if (rc) {
-			status = PFVF_STATUS_FAILURE;
-			break;
-		}
+		handlers[i] = vf->vf_queues[qid].p_rx_cid;
 	}
 
+	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
+					   req->num_rxqs,
+					   complete_cqe_flg,
+					   complete_event_flg,
+					   ECORE_SPQ_MODE_EBLOCK,
+					   OSAL_NULL);
+	if (rc)
+		goto out;
+
+	status = PFVF_STATUS_SUCCESS;
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
 			       length, status);
 }
@@ -2545,7 +2615,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				  "rss_ind_table[%d] = %d,"
 				  " rxq is out of range\n",
 				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].rxq_active)
+		else if (!vf->vf_queues[q_idx].p_rx_cid)
 			DP_NOTICE(p_hwfn, true,
 				  "rss_ind_table[%d] = %d, rxq is not active\n",
 				  i, q_idx);
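
The VF-init sanity check added above pins a VF's Rx queues to the VF L2
queue-zone, which starts right after the PF's own queues. With
illustrative feature counts -- say FEAT_NUM(ECORE_PF_L2_QUE) == 16 and
FEAT_NUM(ECORE_VF_L2_QUE) == 112; the real values are device-dependent
-- the bounds work out to:

	u16 min_vf_qzone = 16;			/* first VF queue-zone */
	u16 max_vf_qzone = 16 + 112 - 1;	/* == 127 */

	/* Rx qids must fall inside the VF zone */
	if (rx_qid < min_vf_qzone || rx_qid > max_vf_qzone)
		return ECORE_INVAL;

	/* Tx qids only need qid <= max; qid < min means the VF is
	 * knowingly sharing a PF queue-zone for its Txq.
	 */
	if (tx_qid > max_vf_qzone)
		return ECORE_INVAL;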
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e9ccc79..d32f931 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -64,10 +64,10 @@ struct ecore_iov_vf_mbx {
 
 struct ecore_vf_q_info {
 	u16 fw_rx_qid;
+	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
+	struct ecore_queue_cid *p_tx_cid;
 	u8 fw_cid;
-	u8 rxq_active;
-	u8 txq_active;
 };
 
 enum vf_state {
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 05ceefd..60ecd16 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,19 +451,19 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
-enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_qid,
-					   u16 sb,
-					   u8 sb_index,
-					   u16 bd_max_bytes,
-					   dma_addr_t bd_chain_phys_addr,
-					   dma_addr_t cqe_pbl_addr,
-					   u16 cqe_pbl_size,
-					   void OSAL_IOMEM **pp_prod)
+enum _ecore_status_t
+ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      u16 bd_max_bytes,
+		      dma_addr_t bd_chain_phys_addr,
+		      dma_addr_t cqe_pbl_addr,
+		      u16 cqe_pbl_size,
+		      void OSAL_IOMEM **pp_prod)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_rxq_tlv *req;
+	u16 rx_qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -473,19 +473,20 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	req->cqe_pbl_addr = cqe_pbl_addr;
 	req->cqe_pbl_size = cqe_pbl_size;
 	req->rxq_addr = bd_chain_phys_addr;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 	req->bd_max_bytes = bd_max_bytes;
 	req->stat_id = -1; /* Keep initialized, for future compatibility */
 
 	/* If PF is legacy, we'll need to calculate producers ourselves
 	 * as well as clean them.
 	 */
-	if (pp_prod && p_iov->b_pre_fp_hsi) {
+	if (p_iov->b_pre_fp_hsi) {
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)
+			   p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
@@ -510,7 +511,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Learn the address of the producer from the response */
-	if (pp_prod && !p_iov->b_pre_fp_hsi) {
+	if (!p_iov->b_pre_fp_hsi) {
 		u32 init_prod_val = 0;
 
 		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview + resp->offset;
@@ -534,7 +535,8 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
-					  u16 rx_qid, bool cqe_completion)
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_rxqs_tlv *req;
@@ -544,7 +546,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
 
-	req->rx_qid = rx_qid;
+	req->rx_qid = p_cid->rel.queue_id;
 	req->num_rxqs = 1;
 	req->cqe_completion = cqe_completion;
 
@@ -569,29 +571,28 @@ exit:
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell)
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_txq_tlv *req;
+	u16 qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
 
-	req->tx_qid = tx_queue_id;
+	req->tx_qid = qid;
 
 	/* Tx */
 	req->pbl_addr = pbl_addr;
 	req->pbl_size = pbl_size;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -608,32 +609,30 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
-	if (pp_doorbell) {
-		/* Modern PFs provide the actual offsets, while legacy
-		 * provided only the queue id.
-		 */
-		if (!p_iov->b_pre_fp_hsi) {
-			*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						       resp->offset;
-		} else {
-			u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
-
+	/* Modern PFs provide the actual offsets, while legacy
+	 * provided only the queue id.
+	 */
+	if (!p_iov->b_pre_fp_hsi) {
 		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-				DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
-		}
+						resp->offset;
+	} else {
+		u8 cid = p_iov->acquire_resp.resc.cid[qid];
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
-			   tx_queue_id, *pp_doorbell, resp->offset);
+		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
+						DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 	}
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
+		   qid, *pp_doorbell, resp->offset);
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_txqs_tlv *req;
@@ -643,7 +642,7 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
 
-	req->tx_qid = tx_qid;
+	req->tx_qid = p_cid->rel.queue_id;
 	req->num_txqs = 1;
 
 	/* add list termination tlv */
@@ -668,20 +667,36 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
-					     u16 rx_queue_id,
+					     struct ecore_queue_cid **pp_cid,
 					     u8 num_rxqs,
-					     u8 comp_cqe_flg, u8 comp_event_flg)
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
 	struct vfpf_update_rxq_tlv *req;
 	enum _ecore_status_t rc;
 
+	/* TODO - API is limited to assuming contiguous regions of queues,
+	 * but VF queues might not fulfill this requirement.
+	 * Need to consider whether we need new TLVs for this, or whether
+	 * simply doing it iteratively is good enough.
+	 */
+	if (!num_rxqs)
+		return ECORE_INVAL;
+
+again:
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
 
-	req->rx_qid = rx_queue_id;
-	req->num_rxqs = num_rxqs;
+	/* Find the length of the current contiguous range of queues beginning
+	 * at first queue's index.
+	 */
+	req->rx_qid = (*pp_cid)->rel.queue_id;
+	for (req->num_rxqs = 1; req->num_rxqs < num_rxqs; req->num_rxqs++)
+		if (pp_cid[req->num_rxqs]->rel.queue_id !=
+		    req->rx_qid + req->num_rxqs)
+			break;
 
 	if (comp_cqe_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
@@ -702,9 +717,17 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
+	/* Make sure we're done with all the queues */
+	if (req->num_rxqs < num_rxqs) {
+		num_rxqs -= req->num_rxqs;
+		pp_cid += req->num_rxqs;
+		/* TODO - should we give a non-locked variant instead? */
+		ecore_vf_pf_req_end(p_hwfn, rc);
+		goto again;
+	}
+
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
-
 	return rc;
 }
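
ecore_vf_pf_rxqs_update() above now takes an array of queue-cids, but
the update TLV still describes one contiguous [rx_qid, rx_qid +
num_rxqs) range, so the function loops and issues one request per
maximal contiguous run. The run-splitting logic in isolation -- a
sketch, with a hypothetical send_update() standing in for the mailbox
round-trip:

	u8 i = 0;

	while (i < num_rxqs) {
		u8 first = i;

		/* extend the run while queue-ids stay consecutive */
		while (i + 1 < num_rxqs &&
		       pp_cid[i + 1]->rel.queue_id ==
		       pp_cid[i]->rel.queue_id + 1)
			i++;

		/* hypothetical helper; really a TLV request/response */
		send_update(pp_cid[first]->rel.queue_id,
			    (u8)(i - first + 1));
		i++;
	}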
 
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 6077d60..1afd667 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -53,10 +53,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param cid			- zero based within the VF
- * @param rx_queue_id		- zero based within the VF
- * @param sb			- VF status block for this queue
- * @param sb_index		- Index within the status block
+ * @param p_cid			- Only relative fields are relevant
  * @param bd_max_bytes		- maximum number of bytes per bd
  * @param bd_chain_phys_addr	- physical address of bd chain
  * @param cqe_pbl_addr		- physical address of pbl
@@ -67,9 +64,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
+					   struct ecore_queue_cid *p_cid,
 					   u16 bd_max_bytes,
 					   dma_addr_t bd_chain_phys_addr,
 					   dma_addr_t cqe_pbl_addr,
@@ -81,46 +76,44 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
  *        PF.
  *
  * @param p_hwfn
- * @param tx_queue_id		- zero based within the VF
- * @param sb			- status block for this queue
- * @param sb_index		- index within the status block
+ * @param p_cid
  * @param bd_chain_phys_addr	- physical address of tx chain
  * @param pp_doorbell		- pointer to address to which to
 *				write the doorbell to.
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell);
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell);
 
 /**
  * @brief VF - stop the RX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param rx_qid
+ * @param p_cid
  * @param cqe_completion
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			rx_qid,
-					  bool			cqe_completion);
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion);
 
 /**
  * @brief VF - stop the TX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param tx_qid
+ * @param p_cid
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			tx_qid);
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid);
+
+/* TODO - fix all the !SRIOV prototypes */
 
 #ifndef LINUX_REMOVE
 /**
@@ -128,20 +121,18 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
  *        PF
  *
  * @param p_hwfn
- * @param rx_queue_id
+ * @param pp_cid - list of queue-cids which we want to update
  * @param num_rxqs
- * @param init_sge_ring
  * @param comp_cqe_flg
  * @param comp_event_flg
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxqs_update(
-			struct ecore_hwfn	*p_hwfn,
-			u16			rx_queue_id,
-			u8			num_rxqs,
-			u8			comp_cqe_flg,
-			u8			comp_event_flg);
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     struct ecore_queue_cid **pp_cid,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg);
 #endif
 
 /**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index d0f6e87..8e4290c 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -148,7 +148,8 @@ qed_start_rxq(struct ecore_dev *edev,
 	      uint16_t bd_max_bytes,
 	      dma_addr_t bd_chain_phys_addr,
 	      dma_addr_t cqe_pbl_addr,
-	      uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod)
+	      uint16_t cqe_pbl_size,
+	      struct ecore_rxq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -159,12 +160,14 @@ qed_start_rxq(struct ecore_dev *edev,
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_rx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 bd_max_bytes,
-					 bd_chain_phys_addr,
-					 cqe_pbl_addr, cqe_pbl_size, pp_prod);
+	rc = ecore_eth_rx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params,
+				      bd_max_bytes,
+				      bd_chain_phys_addr,
+				      cqe_pbl_addr,
+				      cqe_pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start RXQ#%d\n", p_params->queue_id);
@@ -180,19 +183,17 @@ qed_start_rxq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
+qed_stop_rxq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	int rc, hwfn_index;
 	struct ecore_hwfn *p_hwfn;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-					params->rx_queue_id / edev->num_hwfns,
-					params->eq_completion_only, false);
+	rc = ecore_eth_rx_queue_stop(p_hwfn, handle, true, false);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop RXQ#%d\n", params->rx_queue_id);
+		DP_ERR(edev, "Failed to stop RXQ#%02x\n", rss_id);
 		return rc;
 	}
 
@@ -204,7 +205,8 @@ qed_start_txq(struct ecore_dev *edev,
 	      uint8_t rss_num,
 	      struct ecore_queue_start_common_params *p_params,
 	      dma_addr_t pbl_addr,
-	      uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell)
+	      uint16_t pbl_size,
+	      struct ecore_txq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -213,14 +215,13 @@ qed_start_txq(struct ecore_dev *edev,
 	p_hwfn = &edev->hwfns[hwfn_index];
 
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
-	p_params->qzone_id = p_params->queue_id;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_tx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 0 /* tc */,
-					 pbl_addr, pbl_size, pp_doorbell);
+	rc = ecore_eth_tx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params, 0 /* tc */,
+				      pbl_addr, pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start TXQ#%d\n", p_params->queue_id);
@@ -236,18 +237,17 @@ qed_start_txq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_txq(struct ecore_dev *edev, struct qed_stop_txq_params *params)
+qed_stop_txq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-					params->tx_queue_id / edev->num_hwfns);
+	rc = ecore_eth_tx_queue_stop(p_hwfn, handle);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop TXQ#%d\n", params->tx_queue_id);
+		DP_ERR(edev, "Failed to stop TXQ#%02x\n", rss_id);
 		return rc;
 	}
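
The wrappers above spread queues across the device's hw-functions: the
fastpath index picks the engine and the global queue index is divided
down to an engine-local one. For example, on a CMT device with two
hwfns (the count is device-dependent):

	int hwfn_index = rss_id % edev->num_hwfns;	/* 5 % 2 == 1 */
	struct ecore_hwfn *p_hwfn = &edev->hwfns[hwfn_index];

	/* global queue 5 becomes local queue 2 on hwfn 1 */
	p_params->queue_id = p_params->queue_id / edev->num_hwfns;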
 
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 37b1b74..12dd828 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -47,13 +47,6 @@ struct qed_dev_eth_info {
 	bool is_legacy;
 };
 
-struct qed_stop_rxq_params {
-	uint8_t rss_id;
-	uint8_t rx_queue_id;
-	uint8_t vport_id;
-	bool eq_completion_only;
-};
-
 struct qed_update_vport_params {
 	uint8_t vport_id;
 	uint8_t update_vport_active_flg;
@@ -78,11 +71,6 @@ struct qed_start_vport_params {
 	bool clear_stats;
 };
 
-struct qed_stop_txq_params {
-	uint8_t rss_id;
-	uint8_t tx_queue_id;
-};
-
 struct qed_eth_ops {
 	const struct qed_common_ops *common;
 
@@ -103,19 +91,21 @@ struct qed_eth_ops {
 			  uint16_t bd_max_bytes,
 			  dma_addr_t bd_chain_phys_addr,
 			  dma_addr_t cqe_pbl_addr,
-			  uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod);
+			  uint16_t cqe_pbl_size,
+			  struct ecore_rxq_start_ret_params *ret_params);
 
 	int (*q_rx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_rxq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*q_tx_start)(struct ecore_dev *edev,
 			  uint8_t rss_num,
 			  struct ecore_queue_start_common_params *p_params,
 			  dma_addr_t pbl_addr,
-			  uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell);
+			  uint16_t pbl_size,
+			  struct ecore_txq_start_ret_params *ret_params);
 
 	int (*q_tx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_txq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*eth_cqe_completion)(struct ecore_dev *edev,
 				  uint8_t rss_id,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01ea9b4..85134fb 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -527,11 +527,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	for_each_queue(i) {
 		fp = &qdev->fp_array[i];
 		if (fp->type & QEDE_FASTPATH_RX) {
+			struct ecore_rxq_start_ret_params ret_params;
+
 			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
 								rx_comp_ring);
 			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
 								rx_comp_ring);
 
+			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
 			q_params.queue_id = i;
 			q_params.vport_id = 0;
@@ -545,13 +548,17 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 					   fp->rxq->rx_bd_ring.p_phys_addr,
 					   p_phys_table,
 					   page_cnt,
-					   &fp->rxq->hw_rxq_prod_addr);
+					   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start rxq #%d failed %d\n",
 				       fp->rxq->queue_id, rc);
 				return rc;
 			}
 
+			/* Use the return parameters */
+			fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
+			fp->rxq->handle = ret_params.p_handle;
+
 			fp->rxq->hw_cons_ptr =
 					&fp->sb_info->sb_virt->pi_array[RX_PI];
 
@@ -561,6 +568,8 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (!(fp->type & QEDE_FASTPATH_TX))
 			continue;
 		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct ecore_txq_start_ret_params ret_params;
+
 			txq = fp->txqs[tc];
 			txq_index = tc * QEDE_RSS_COUNT(qdev) + i;
 
@@ -568,6 +577,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
 
 			memset(&q_params, 0, sizeof(q_params));
+			memset(&ret_params, 0, sizeof(ret_params));
 			q_params.queue_id = txq->queue_id;
 			q_params.vport_id = 0;
 			q_params.sb = fp->sb_info->igu_sb_id;
@@ -576,13 +586,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			rc = qdev->ops->q_tx_start(edev, i, &q_params,
 						   p_phys_table,
 						   page_cnt, /* **pp_doorbell */
-						   &txq->doorbell_addr);
+						   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start txq %u failed %d\n",
 				       txq_index, rc);
 				return rc;
 			}
 
+			txq->doorbell_addr = ret_params.p_doorbell;
+			txq->handle = ret_params.p_handle;
+
 			txq->hw_cons_ptr =
 			    &fp->sb_info->sb_virt->pi_array[TX_PI(tc)];
 			SET_FIELD(txq->tx_db.data.params,
@@ -1399,6 +1412,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp;
 	int rc, tc, i;
 
 	/* Disable the vport */
@@ -1420,7 +1434,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Flush Tx queues. If needed, request drain from MCP */
 	for_each_queue(i) {
-		struct qede_fastpath *fp = &qdev->fp_array[i];
+		fp = &qdev->fp_array[i];
 
 		if (fp->type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
@@ -1435,23 +1449,17 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Stop all Queues in reverse order */
 	for (i = QEDE_QUEUE_CNT(qdev) - 1; i >= 0; i--) {
-		struct qed_stop_rxq_params rx_params;
+		fp = &qdev->fp_array[i];
 
 		/* Stop the Tx Queue(s) */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
-				struct qed_stop_txq_params tx_params;
-				u8 val;
-
-				tx_params.rss_id = i;
-				val = qdev->fp_array[i].txqs[tc]->queue_id;
-				tx_params.tx_queue_id = val;
-
+				struct qede_tx_queue *txq = fp->txqs[tc];
 				DP_INFO(edev, "Stopping tx queues\n");
-				rc = qdev->ops->q_tx_stop(edev, &tx_params);
+				rc = qdev->ops->q_tx_stop(edev, i, txq->handle);
 				if (rc) {
 					DP_ERR(edev, "Failed to stop TXQ #%d\n",
-					       tx_params.tx_queue_id);
+					       i);
 					return rc;
 				}
 			}
@@ -1459,14 +1467,8 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 		/* Stop the Rx Queue */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_RX) {
-			memset(&rx_params, 0, sizeof(rx_params));
-			rx_params.rss_id = i;
-			rx_params.rx_queue_id = qdev->fp_array[i].rxq->queue_id;
-			rx_params.eq_completion_only = 1;
-
 			DP_INFO(edev, "Stopping rx queues\n");
-
-			rc = qdev->ops->q_rx_stop(edev, &rx_params);
+			rc = qdev->ops->q_rx_stop(edev, i, fp->rxq->handle);
 			if (rc) {
 				DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
 				return rc;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 9a393e9..17a2f0c 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -156,6 +156,7 @@ struct qede_rx_queue {
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 /*
@@ -187,6 +188,7 @@ struct qede_tx_queue {
 	uint64_t xmit_pkts;
 	bool is_legacy;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 struct qede_fastpath {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 28/61] net/qede/base: add support for handling TLV request from MFW
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (27 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 27/61] net/qede/base: make L2 queues handle based Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 29/61] net/qede/base: optimize cache-line access Rasesh Mody
                         ` (33 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for handling the TLV request from Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    3 +
 drivers/net/qede/base/ecore_mcp.c     |    6 -
 drivers/net/qede/base/ecore_mcp.h     |    8 +
 drivers/net/qede/base/ecore_mcp_api.h |   44 +-
 drivers/net/qede/base/ecore_mng_tlv.c | 1536 +++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_if.h            |   21 +
 6 files changed, 1591 insertions(+), 27 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 63ee6d5..82e3ebd 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -419,5 +419,8 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 	qede_get_mcp_proto_stats(dev, type, stats)
 
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
+#define OSAL_MFW_TLV_REQ(p_hwfn) (0)
+#define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
+
 
 #endif /* __BCM_OSAL_H */
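
Both OSAL hooks are compiled out to (0) here since this environment
does not yet consume MFW TLV requests. An OS that does could map them
to real callbacks -- a hypothetical sketch, not part of this patch:

	/* Forward the MFW's TLV request to a slowpath task */
	#define OSAL_MFW_TLV_REQ(p_hwfn) \
		(qede_mfw_tlv_req(p_hwfn))		/* hypothetical */

	/* Fill driver-owned values for the given TLV group */
	#define OSAL_MFW_FILL_TLV_DATA(type, buf, data) \
		(qede_mfw_fill_tlv(type, buf, data))	/* hypothetical */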
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 79a907b..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,9 +2502,3 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
-
-enum _ecore_status_t
-ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	return ECORE_SUCCESS;
-}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index d77b5df..0708923 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -70,6 +70,14 @@ struct ecore_mcp_mb_params {
 	u32 mcp_param;
 };
 
+struct ecore_drv_tlv_hdr {
+	u8 tlv_type;	/* According to the enum below */
+	u8 tlv_length;	/* In dwords - not including this header */
+	u8 tlv_reserved;
+#define ECORE_DRV_TLV_FLAGS_CHANGED 0x01
+	u8 tlv_flags;
+};
+
 /**
  * @brief Initialize the interface with the MCP
  *
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 8cad43d..190c135 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -233,9 +233,11 @@ struct ecore_mba_vers {
 };
 
 enum ecore_mfw_tlv_type {
-	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
-	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
-	ECORE_MFW_TLV_ISCSI = 0x4,	/* SCSI protocol TLVs */
+	ECORE_MFW_TLV_GENERIC = 0x1, /* Core driver TLVs */
+	ECORE_MFW_TLV_ETH = 0x2, /* L2 driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x4, /* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x8, /* SCSI protocol TLVs */
+	ECORE_MFW_TLV_MAX = 0x16,
 };
 
 struct ecore_mfw_tlv_generic {
@@ -247,6 +249,21 @@ struct ecore_mfw_tlv_generic {
 	bool additional_mac1_set;
 	u64 additional_mac2;
 	bool additional_mac2_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+};
+
+struct ecore_mfw_tlv_eth {
 	u16 lso_maxoff_size;
 	bool lso_maxoff_size_set;
 	u16 lso_minseg_size;
@@ -259,12 +276,6 @@ struct ecore_mfw_tlv_generic {
 	bool rx_descr_size_set;
 	u16 netq_count;
 	bool netq_count_set;
-	u16 flex_vlan;
-	bool flex_vlan_set;
-	u8 drv_state;
-	bool drv_state_set;
-	u8 pxe_progress;
-	bool pxe_progress_set;
 	u32 tcp4_offloads;
 	bool tcp4_offloads_set;
 	u32 tcp6_offloads;
@@ -273,14 +284,6 @@ struct ecore_mfw_tlv_generic {
 	bool tx_descr_qdepth_set;
 	u16 rx_descr_qdepth;
 	bool rx_descr_qdepth_set;
-	u64 rx_frames;
-	bool rx_frames_set;
-	u64 rx_bytes;
-	bool rx_bytes_set;
-	u64 tx_frames;
-	bool tx_frames_set;
-	u64 tx_bytes;
-	bool tx_bytes_set;
 	u8 iov_offload;
 	bool iov_offload_set;
 	u8 txqs_empty;
@@ -446,8 +449,8 @@ struct ecore_mfw_tlv_fcoe {
 	bool ols_set;
 	u8 lr;
 	bool lr_set;
-	u8 llr;
-	bool llrt;
+	u8 lrr;
+	bool lrr_set;
 	u8 tx_lip;
 	bool tx_lip_set;
 	u8 rx_lip;
@@ -511,12 +514,11 @@ struct ecore_mfw_tlv_iscsi {
 	bool tx_frames_set;
 	u64 tx_bytes;
 	bool tx_bytes_set;
-	u32 cpcp_spcp_map;
-	bool cpcp_spcp_map_set;
 };
 
 union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_eth eth;
 	struct ecore_mfw_tlv_fcoe fcoe;
 	struct ecore_mfw_tlv_iscsi iscsi;
 };
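
struct ecore_drv_tlv_hdr (added to ecore_mcp.h above) mirrors the
4-byte on-wire header that the TLV_TYPE/TLV_LENGTH/TLV_FLAGS byte
accessors in the new ecore_mng_tlv.c decode: type at offset 0, length
at offset 1 (in dwords, excluding the header), flags at offset 3.
Walking a request buffer therefore looks roughly like this sketch (the
buffer handling is an assumption, not the patch's exact loop):

	u8 *p = buf;

	while (p < buf + size) {
		u8 type = TLV_TYPE(p);		/* p[0] */
		u8 len = TLV_LENGTH(p);		/* p[1], in dwords */
		u8 flags = TLV_FLAGS(p);	/* p[3] */

		if (flags & ECORE_DRV_TLV_FLAGS_CHANGED) {
			/* MFW asks the driver to refresh this TLV */
		}

		/* advance past the header plus the dword payload */
		p += sizeof(struct ecore_drv_tlv_hdr) + len * sizeof(u32);
	}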
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
new file mode 100644
index 0000000..0065d12
--- /dev/null
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -0,0 +1,1536 @@
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_status.h"
+#include "ecore_mcp.h"
+#include "ecore_hw.h"
+#include "reg_addr.h"
+
+#define TLV_TYPE(p)	(p[0])
+#define TLV_LENGTH(p)	(p[1])
+#define TLV_FLAGS(p)	(p[3])
+
+static enum _ecore_status_t
+ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
+{
+	switch (tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+	case DRV_TLV_OS_DRIVER_STATES:
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+	case DRV_TLV_RX_BYTES_RECEIVED:
+	case DRV_TLV_TX_FRAMES_SENT:
+	case DRV_TLV_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_GENERIC;
+		break;
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+	case DRV_TLV_PROMISCUOUS_MODE:
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_IOV_OFFLOAD:
+	case DRV_TLV_TX_QUEUES_EMPTY:
+	case DRV_TLV_RX_QUEUES_EMPTY:
+	case DRV_TLV_TX_QUEUES_FULL:
+	case DRV_TLV_RX_QUEUES_FULL:
+		*tlv_group |= ECORE_MFW_TLV_ETH;
+		break;
+	case DRV_TLV_SCSI_TO:
+	case DRV_TLV_R_T_TOV:
+	case DRV_TLV_R_A_TOV:
+	case DRV_TLV_E_D_TOV:
+	case DRV_TLV_CR_TOV:
+	case DRV_TLV_BOOT_TYPE:
+	case DRV_TLV_NPIV_STATE:
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+	case DRV_TLV_SWITCH_NAME:
+	case DRV_TLV_SWITCH_PORT_NUM:
+	case DRV_TLV_SWITCH_PORT_ID:
+	case DRV_TLV_VENDOR_NAME:
+	case DRV_TLV_SWITCH_MODEL:
+	case DRV_TLV_SWITCH_FW_VER:
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+	case DRV_TLV_PORT_ALIAS:
+	case DRV_TLV_PORT_STATE:
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_LINK_FAILURE_COUNT:
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+	case DRV_TLV_CRC_ERROR_COUNT:
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_RJT:
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+	case DRV_TLV_FDISCS_SENT_COUNT:
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_SENT_COUNT:
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+	case DRV_TLV_LOGOS_ISSUED:
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+	case DRV_TLV_LOGOS_RECEIVED:
+	case DRV_TLV_ACCS_ISSUED:
+	case DRV_TLV_PRLIS_ISSUED:
+	case DRV_TLV_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_SENT_COUNT:
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+	case DRV_TLV_RSCNS_RECEIVED:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+	case DRV_TLV_LUN_RESETS_ISSUED:
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+	case DRV_TLV_TPRLOS_SENT:
+	case DRV_TLV_NOS_SENT_COUNT:
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+	case DRV_TLV_OLS_COUNT:
+	case DRV_TLV_LR_COUNT:
+	case DRV_TLV_LRR_COUNT:
+	case DRV_TLV_LIP_SENT_COUNT:
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+	case DRV_TLV_EOFA_COUNT:
+	case DRV_TLV_EOFNI_COUNT:
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		*tlv_group = ECORE_MFW_TLV_FCOE;
+		break;
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_AUTHENTICATION_METHOD:
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+	case DRV_TLV_MAX_FRAME_SIZE:
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_ISCSI;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static int
+ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_generic *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+		if (p_drv_buf->feat_flags_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->feat_flags;
+			return sizeof(p_drv_buf->feat_flags);
+		}
+		break;
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+		if (p_drv_buf->local_mac_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->local_mac;
+			return sizeof(p_drv_buf->local_mac);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+		if (p_drv_buf->additional_mac1_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1;
+			return sizeof(p_drv_buf->additional_mac1);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+		if (p_drv_buf->additional_mac2_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2;
+			return sizeof(p_drv_buf->additional_mac2);
+		}
+		break;
+	case DRV_TLV_OS_DRIVER_STATES:
+		if (p_drv_buf->drv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->drv_state;
+			return sizeof(p_drv_buf->drv_state);
+		}
+		break;
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+		if (p_drv_buf->pxe_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress;
+			return sizeof(p_drv_buf->pxe_progress);
+		}
+		break;
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_eth *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+		if (p_drv_buf->lso_maxoff_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size;
+			return sizeof(p_drv_buf->lso_maxoff_size);
+		}
+		break;
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+		if (p_drv_buf->lso_minseg_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size;
+			return sizeof(p_drv_buf->lso_minseg_size);
+		}
+		break;
+	case DRV_TLV_PROMISCUOUS_MODE:
+		if (p_drv_buf->prom_mode_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->prom_mode;
+			return sizeof(p_drv_buf->prom_mode);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size;
+			return sizeof(p_drv_buf->tx_descr_size);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size;
+			return sizeof(p_drv_buf->rx_descr_size);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+		if (p_drv_buf->netq_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->netq_count;
+			return sizeof(p_drv_buf->netq_count);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+		if (p_drv_buf->tcp4_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads;
+			return sizeof(p_drv_buf->tcp4_offloads);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+		if (p_drv_buf->tcp6_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads;
+			return sizeof(p_drv_buf->tcp6_offloads);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth;
+			return sizeof(p_drv_buf->tx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth;
+			return sizeof(p_drv_buf->rx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_IOV_OFFLOAD:
+		if (p_drv_buf->iov_offload_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->iov_offload;
+			return sizeof(p_drv_buf->iov_offload);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_EMPTY:
+		if (p_drv_buf->txqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty;
+			return sizeof(p_drv_buf->txqs_empty);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_EMPTY:
+		if (p_drv_buf->rxqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty;
+			return sizeof(p_drv_buf->rxqs_empty);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_FULL:
+		if (p_drv_buf->num_txqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full;
+			return sizeof(p_drv_buf->num_txqs_full);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_FULL:
+		if (p_drv_buf->num_rxqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full;
+			return sizeof(p_drv_buf->num_rxqs_full);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			     struct ecore_mfw_tlv_fcoe *p_drv_buf,
+			     u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_SCSI_TO:
+		if (p_drv_buf->scsi_timeout_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout;
+			return sizeof(p_drv_buf->scsi_timeout);
+		}
+		break;
+	case DRV_TLV_R_T_TOV:
+		if (p_drv_buf->rt_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rt_tov;
+			return sizeof(p_drv_buf->rt_tov);
+		}
+		break;
+	case DRV_TLV_R_A_TOV:
+		if (p_drv_buf->ra_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ra_tov;
+			return sizeof(p_drv_buf->ra_tov);
+		}
+		break;
+	case DRV_TLV_E_D_TOV:
+		if (p_drv_buf->ed_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ed_tov;
+			return sizeof(p_drv_buf->ed_tov);
+		}
+		break;
+	case DRV_TLV_CR_TOV:
+		if (p_drv_buf->cr_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->cr_tov;
+			return sizeof(p_drv_buf->cr_tov);
+		}
+		break;
+	case DRV_TLV_BOOT_TYPE:
+		if (p_drv_buf->boot_type_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_type;
+			return sizeof(p_drv_buf->boot_type);
+		}
+		break;
+	case DRV_TLV_NPIV_STATE:
+		if (p_drv_buf->npiv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->npiv_state;
+			return sizeof(p_drv_buf->npiv_state);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+		if (p_drv_buf->num_npiv_ids_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids;
+			return sizeof(p_drv_buf->num_npiv_ids);
+		}
+		break;
+	case DRV_TLV_SWITCH_NAME:
+		if (p_drv_buf->switch_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_name;
+			return sizeof(p_drv_buf->switch_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_NUM:
+		if (p_drv_buf->switch_portnum_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum;
+			return sizeof(p_drv_buf->switch_portnum);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_ID:
+		if (p_drv_buf->switch_portid_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portid;
+			return sizeof(p_drv_buf->switch_portid);
+		}
+		break;
+	case DRV_TLV_VENDOR_NAME:
+		if (p_drv_buf->vendor_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->vendor_name;
+			return sizeof(p_drv_buf->vendor_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_MODEL:
+		if (p_drv_buf->switch_model_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_model;
+			return sizeof(p_drv_buf->switch_model);
+		}
+		break;
+	case DRV_TLV_SWITCH_FW_VER:
+		if (p_drv_buf->switch_fw_version_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version;
+			return sizeof(p_drv_buf->switch_fw_version);
+		}
+		break;
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+		if (p_drv_buf->qos_pri_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->qos_pri;
+			return sizeof(p_drv_buf->qos_pri);
+		}
+		break;
+	case DRV_TLV_PORT_ALIAS:
+		if (p_drv_buf->port_alias_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_alias;
+			return sizeof(p_drv_buf->port_alias);
+		}
+		break;
+	case DRV_TLV_PORT_STATE:
+		if (p_drv_buf->port_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_state;
+			return sizeof(p_drv_buf->port_state);
+		}
+		break;
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size;
+			return sizeof(p_drv_buf->fip_tx_descr_size);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size;
+			return sizeof(p_drv_buf->fip_rx_descr_size);
+		}
+		break;
+	case DRV_TLV_LINK_FAILURE_COUNT:
+		if (p_drv_buf->link_failures_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->link_failures;
+			return sizeof(p_drv_buf->link_failures);
+		}
+		break;
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+		if (p_drv_buf->fcoe_boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress;
+			return sizeof(p_drv_buf->fcoe_boot_progress);
+		}
+		break;
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+		if (p_drv_buf->rx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast;
+			return sizeof(p_drv_buf->rx_bcast);
+		}
+		break;
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+		if (p_drv_buf->tx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast;
+			return sizeof(p_drv_buf->tx_bcast);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_txq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth;
+			return sizeof(p_drv_buf->fcoe_txq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_rxq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth;
+			return sizeof(p_drv_buf->fcoe_rxq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames;
+			return sizeof(p_drv_buf->fcoe_rx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes;
+			return sizeof(p_drv_buf->fcoe_rx_bytes);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+		if (p_drv_buf->fcoe_tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames;
+			return sizeof(p_drv_buf->fcoe_tx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+		if (p_drv_buf->fcoe_tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes;
+			return sizeof(p_drv_buf->fcoe_tx_bytes);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_COUNT:
+		if (p_drv_buf->crc_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_count;
+			return sizeof(p_drv_buf->crc_count);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0];
+			return sizeof(p_drv_buf->crc_err_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1];
+			return sizeof(p_drv_buf->crc_err_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2];
+			return sizeof(p_drv_buf->crc_err_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3];
+			return sizeof(p_drv_buf->crc_err_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4];
+			return sizeof(p_drv_buf->crc_err_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0];
+			return sizeof(p_drv_buf->crc_err_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1];
+			return sizeof(p_drv_buf->crc_err_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2];
+			return sizeof(p_drv_buf->crc_err_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3];
+			return sizeof(p_drv_buf->crc_err_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4];
+			return sizeof(p_drv_buf->crc_err_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+		if (p_drv_buf->losync_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losync_err;
+			return sizeof(p_drv_buf->losync_err);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+		if (p_drv_buf->losig_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losig_err;
+			return sizeof(p_drv_buf->losig_err);
+		}
+		break;
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+		if (p_drv_buf->primtive_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->primtive_err;
+			return sizeof(p_drv_buf->primtive_err);
+		}
+		break;
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+		if (p_drv_buf->disparity_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->disparity_err;
+			return sizeof(p_drv_buf->disparity_err);
+		}
+		break;
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+		if (p_drv_buf->code_violation_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err;
+			return sizeof(p_drv_buf->code_violation_err);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0];
+			return sizeof(p_drv_buf->flogi_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1];
+			return sizeof(p_drv_buf->flogi_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2];
+			return sizeof(p_drv_buf->flogi_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3];
+			return sizeof(p_drv_buf->flogi_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+		if (p_drv_buf->flogi_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp;
+			return sizeof(p_drv_buf->flogi_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_acc_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0];
+			return sizeof(p_drv_buf->flogi_acc_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_acc_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1];
+			return sizeof(p_drv_buf->flogi_acc_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_acc_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2];
+			return sizeof(p_drv_buf->flogi_acc_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_acc_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3];
+			return sizeof(p_drv_buf->flogi_acc_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+		if (p_drv_buf->flogi_acc_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp;
+			return sizeof(p_drv_buf->flogi_acc_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT:
+		if (p_drv_buf->flogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt;
+			return sizeof(p_drv_buf->flogi_rjt);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+		if (p_drv_buf->flogi_rjt_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp;
+			return sizeof(p_drv_buf->flogi_rjt_tstamp);
+		}
+		break;
+	case DRV_TLV_FDISCS_SENT_COUNT:
+		if (p_drv_buf->fdiscs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdiscs;
+			return sizeof(p_drv_buf->fdiscs);
+		}
+		break;
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+		if (p_drv_buf->fdisc_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc;
+			return sizeof(p_drv_buf->fdisc_acc);
+		}
+		break;
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+		if (p_drv_buf->fdisc_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt;
+			return sizeof(p_drv_buf->fdisc_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_SENT_COUNT:
+		if (p_drv_buf->plogi_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi;
+			return sizeof(p_drv_buf->plogi);
+		}
+		break;
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+		if (p_drv_buf->plogi_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc;
+			return sizeof(p_drv_buf->plogi_acc);
+		}
+		break;
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+		if (p_drv_buf->plogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt;
+			return sizeof(p_drv_buf->plogi_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0];
+			return sizeof(p_drv_buf->plogi_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1];
+			return sizeof(p_drv_buf->plogi_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2];
+			return sizeof(p_drv_buf->plogi_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3];
+			return sizeof(p_drv_buf->plogi_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4];
+			return sizeof(p_drv_buf->plogi_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0];
+			return sizeof(p_drv_buf->plogi_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1];
+			return sizeof(p_drv_buf->plogi_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2];
+			return sizeof(p_drv_buf->plogi_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3];
+			return sizeof(p_drv_buf->plogi_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4];
+			return sizeof(p_drv_buf->plogi_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_ISSUED:
+		if (p_drv_buf->tx_plogos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos;
+			return sizeof(p_drv_buf->tx_plogos);
+		}
+		break;
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+		if (p_drv_buf->plogo_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc;
+			return sizeof(p_drv_buf->plogo_acc);
+		}
+		break;
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+		if (p_drv_buf->plogo_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt;
+			return sizeof(p_drv_buf->plogo_rjt);
+		}
+		break;
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0];
+			return sizeof(p_drv_buf->plogo_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1];
+			return sizeof(p_drv_buf->plogo_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2];
+			return sizeof(p_drv_buf->plogo_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3];
+			return sizeof(p_drv_buf->plogo_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4];
+			return sizeof(p_drv_buf->plogo_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0];
+			return sizeof(p_drv_buf->plogo_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1];
+			return sizeof(p_drv_buf->plogo_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2];
+			return sizeof(p_drv_buf->plogo_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3];
+			return sizeof(p_drv_buf->plogo_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4];
+			return sizeof(p_drv_buf->plogo_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_RECEIVED:
+		if (p_drv_buf->rx_logos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_logos;
+			return sizeof(p_drv_buf->rx_logos);
+		}
+		break;
+	case DRV_TLV_ACCS_ISSUED:
+		if (p_drv_buf->tx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_accs;
+			return sizeof(p_drv_buf->tx_accs);
+		}
+		break;
+	case DRV_TLV_PRLIS_ISSUED:
+		if (p_drv_buf->tx_prlis_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis;
+			return sizeof(p_drv_buf->tx_prlis);
+		}
+		break;
+	case DRV_TLV_ACCS_RECEIVED:
+		if (p_drv_buf->rx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_accs;
+			return sizeof(p_drv_buf->rx_accs);
+		}
+		break;
+	case DRV_TLV_ABTS_SENT_COUNT:
+		if (p_drv_buf->tx_abts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_abts;
+			return sizeof(p_drv_buf->tx_abts);
+		}
+		break;
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+		if (p_drv_buf->rx_abts_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc;
+			return sizeof(p_drv_buf->rx_abts_acc);
+		}
+		break;
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+		if (p_drv_buf->rx_abts_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt;
+			return sizeof(p_drv_buf->rx_abts_rjt);
+		}
+		break;
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0];
+			return sizeof(p_drv_buf->abts_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1];
+			return sizeof(p_drv_buf->abts_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2];
+			return sizeof(p_drv_buf->abts_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3];
+			return sizeof(p_drv_buf->abts_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4];
+			return sizeof(p_drv_buf->abts_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0];
+			return sizeof(p_drv_buf->abts_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1];
+			return sizeof(p_drv_buf->abts_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2];
+			return sizeof(p_drv_buf->abts_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3];
+			return sizeof(p_drv_buf->abts_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4];
+			return sizeof(p_drv_buf->abts_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_RSCNS_RECEIVED:
+		if (p_drv_buf->rx_rscn_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn;
+			return sizeof(p_drv_buf->rx_rscn);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+		if (p_drv_buf->rx_rscn_nport_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0];
+			return sizeof(p_drv_buf->rx_rscn_nport[0]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+		if (p_drv_buf->rx_rscn_nport_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1];
+			return sizeof(p_drv_buf->rx_rscn_nport[1]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+		if (p_drv_buf->rx_rscn_nport_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2];
+			return sizeof(p_drv_buf->rx_rscn_nport[2]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+		if (p_drv_buf->rx_rscn_nport_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3];
+			return sizeof(p_drv_buf->rx_rscn_nport[3]);
+		}
+		break;
+	case DRV_TLV_LUN_RESETS_ISSUED:
+		if (p_drv_buf->tx_lun_rst_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst;
+			return sizeof(p_drv_buf->tx_lun_rst);
+		}
+		break;
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+		if (p_drv_buf->abort_task_sets_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets;
+			return sizeof(p_drv_buf->abort_task_sets);
+		}
+		break;
+	case DRV_TLV_TPRLOS_SENT:
+		if (p_drv_buf->tx_tprlos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos;
+			return sizeof(p_drv_buf->tx_tprlos);
+		}
+		break;
+	case DRV_TLV_NOS_SENT_COUNT:
+		if (p_drv_buf->tx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_nos;
+			return sizeof(p_drv_buf->tx_nos);
+		}
+		break;
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+		if (p_drv_buf->rx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_nos;
+			return sizeof(p_drv_buf->rx_nos);
+		}
+		break;
+	case DRV_TLV_OLS_COUNT:
+		if (p_drv_buf->ols_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ols;
+			return sizeof(p_drv_buf->ols);
+		}
+		break;
+	case DRV_TLV_LR_COUNT:
+		if (p_drv_buf->lr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lr;
+			return sizeof(p_drv_buf->lr);
+		}
+		break;
+	case DRV_TLV_LRR_COUNT:
+		if (p_drv_buf->lrr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lrr;
+			return sizeof(p_drv_buf->lrr);
+		}
+		break;
+	case DRV_TLV_LIP_SENT_COUNT:
+		if (p_drv_buf->tx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lip;
+			return sizeof(p_drv_buf->tx_lip);
+		}
+		break;
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+		if (p_drv_buf->rx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_lip;
+			return sizeof(p_drv_buf->rx_lip);
+		}
+		break;
+	case DRV_TLV_EOFA_COUNT:
+		if (p_drv_buf->eofa_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofa;
+			return sizeof(p_drv_buf->eofa);
+		}
+		break;
+	case DRV_TLV_EOFNI_COUNT:
+		if (p_drv_buf->eofni_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofni;
+			return sizeof(p_drv_buf->eofni);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+		if (p_drv_buf->scsi_chks_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks;
+			return sizeof(p_drv_buf->scsi_chks);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met;
+			return sizeof(p_drv_buf->scsi_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+		if (p_drv_buf->scsi_busy_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy;
+			return sizeof(p_drv_buf->scsi_busy);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+		if (p_drv_buf->scsi_inter_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter;
+			return sizeof(p_drv_buf->scsi_inter);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_inter_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met;
+			return sizeof(p_drv_buf->scsi_inter_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+		if (p_drv_buf->scsi_rsv_conflicts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
+			return sizeof(p_drv_buf->scsi_rsv_conflicts);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+		if (p_drv_buf->scsi_tsk_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full;
+			return sizeof(p_drv_buf->scsi_tsk_full);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+		if (p_drv_buf->scsi_aca_active_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active;
+			return sizeof(p_drv_buf->scsi_aca_active);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+		if (p_drv_buf->scsi_tsk_abort_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort;
+			return sizeof(p_drv_buf->scsi_tsk_abort);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0];
+			return sizeof(p_drv_buf->scsi_rx_chk[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1];
+			return sizeof(p_drv_buf->scsi_rx_chk[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2];
+			return sizeof(p_drv_buf->scsi_rx_chk[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3];
+			return sizeof(p_drv_buf->scsi_rx_chk[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4];
+			return sizeof(p_drv_buf->scsi_rx_chk[4]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[4]);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			      struct ecore_mfw_tlv_iscsi *p_drv_buf,
+			      u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+		if (p_drv_buf->target_llmnr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr;
+			return sizeof(p_drv_buf->target_llmnr);
+		}
+		break;
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->header_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->header_digest;
+			return sizeof(p_drv_buf->header_digest);
+		}
+		break;
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->data_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->data_digest;
+			return sizeof(p_drv_buf->data_digest);
+		}
+		break;
+	case DRV_TLV_AUTHENTICATION_METHOD:
+		if (p_drv_buf->auth_method_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->auth_method;
+			return sizeof(p_drv_buf->auth_method);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+		if (p_drv_buf->boot_taget_portal_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal;
+			return sizeof(p_drv_buf->boot_taget_portal);
+		}
+		break;
+	case DRV_TLV_MAX_FRAME_SIZE:
+		if (p_drv_buf->frame_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->frame_size;
+			return sizeof(p_drv_buf->frame_size);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size;
+			return sizeof(p_drv_buf->tx_desc_size);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size;
+			return sizeof(p_drv_buf->rx_desc_size);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+		if (p_drv_buf->boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_progress;
+			return sizeof(p_drv_buf->boot_progress);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth;
+			return sizeof(p_drv_buf->tx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth;
+			return sizeof(p_drv_buf->rx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static enum _ecore_status_t
+ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt, u8 *p_mfw_buf, u32 size)
+{
+	union ecore_mfw_tlv_data *p_tlv_data;
+	struct ecore_drv_tlv_hdr tlv;
+	u8 *p_tlv_ptr = OSAL_NULL, *p_temp;
+	u32 offset;
+	int len;
+
+	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	if (!p_tlv_data)
+		return ECORE_NOMEM;
+
+	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
+	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
+		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+		return ECORE_INVAL;
+	}
+
+	offset = 0;
+	OSAL_MEMSET(&tlv, 0, sizeof(tlv));
+	while (offset < size) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		tlv.tlv_flags = TLV_FLAGS(p_temp);
+		DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n",
+			tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags);
+
+		offset += sizeof(tlv);
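+		/* Header consumed; offset now points at the TLV value,
+		 * so dispatch on the group to find the matching field.
+		 */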
+		if (tlv_group == ECORE_MFW_TLV_GENERIC)
+			len = ecore_mfw_get_gen_tlv_value(&tlv,
+					&p_tlv_data->generic, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_ETH)
+			len = ecore_mfw_get_eth_tlv_value(&tlv,
+					&p_tlv_data->eth, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_FCOE)
+			len = ecore_mfw_get_fcoe_tlv_value(&tlv,
+					&p_tlv_data->fcoe, &p_tlv_ptr);
+		else
+			len = ecore_mfw_get_iscsi_tlv_value(&tlv,
+					&p_tlv_data->iscsi, &p_tlv_ptr);
+
+		if (len > 0) {
+			OSAL_WARN(len > 4 * tlv.tlv_length,
+				  "Incorrect MFW TLV length");
+			len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length);
+			tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
+			/* TODO: Endianness handling? */
+			/* Write the updated header back at this TLV's offset */
+			OSAL_MEMCPY(p_mfw_buf + offset - sizeof(tlv), &tlv,
+				    sizeof(tlv));
+			OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len);
+		}
+
+		offset += sizeof(u32) * tlv.tlv_length;
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	u32 addr, size, offset, resp, param, val;
+	u8 tlv_group = 0, id, *p_mfw_buf = OSAL_NULL, *p_temp;
+	u32 global_offsize, global_addr;
+	enum _ecore_status_t rc;
+	struct ecore_drv_tlv_hdr tlv;
+
+	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	addr = global_addr + OFFSETOF(struct public_global, data_ptr);
+	size = ecore_rd(p_hwfn, p_ptt, global_addr +
+			OFFSETOF(struct public_global, data_size));
+
+	if (!size) {
+		DP_NOTICE(p_hwfn, false, "Invalid TLV req size = %d\n", size);
+		goto drv_done;
+	}
+
+	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	if (!p_mfw_buf) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed allocate memory for p_mfw_buf\n");
+		goto drv_done;
+	}
+
+	/* Read the TLV request to local buffer */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		val = ecore_rd(p_hwfn, p_ptt, addr + offset);
+		OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32));
+	}
+
+	/* Parse the headers to enumerate the requested TLV groups */
+	for (offset = 0; offset < size;
+	     offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group))
+			goto drv_done;
+	}
+
+	/* Update the TLV values in the local buffer */
+	for (id = ECORE_MFW_TLV_GENERIC; id < ECORE_MFW_TLV_MAX; id <<= 1) {
+		if (tlv_group & id) {
+			if (ecore_mfw_update_tlvs(id, p_hwfn, p_ptt, p_mfw_buf,
+						  size))
+				goto drv_done;
+		}
+	}
+
+	/* Write the TLV data to shared memory */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32));
+		ecore_wr(p_hwfn, p_ptt, addr + offset, val);
+	}
+
+drv_done:
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_TLV_DONE, 0, &resp,
+			   &param);
+
+	OSAL_VFREE(p_hwfn->p_dev, p_mfw_buf);
+
+	return rc;
+}
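
For illustration, the request buffer walked above is a packed sequence
of TLVs: a 4-byte header (type in byte 0, length in dwords in byte 1,
flags in byte 3, per the TLV_TYPE/TLV_LENGTH/TLV_FLAGS macros at the top
of the file) followed by tlv_length dwords of value. A minimal
standalone sketch of that walk - the names are hypothetical and none of
the MFW plumbing is shown - could look like:

	#include <stdint.h>
	#include <stdio.h>

	/* Sketch only, not part of the patch: iterate a TLV request
	 * buffer using the header layout implied by the macros above.
	 */
	static void walk_tlv_request(const uint8_t *buf, uint32_t size)
	{
		uint32_t offset = 0;

		while (offset + 4 <= size) {
			uint8_t type = buf[offset];       /* TLV_TYPE   */
			uint8_t len_dw = buf[offset + 1]; /* TLV_LENGTH */
			uint8_t flags = buf[offset + 3];  /* TLV_FLAGS  */

			printf("type %u, %u dwords, flags 0x%x\n",
			       type, len_dw, flags);

			/* 4-byte header plus len_dw dwords of value */
			offset += 4 + 4u * (uint32_t)len_dw;
		}
	}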
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 0a1f7db..bfd96d6 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -96,8 +96,29 @@ struct qed_slowpath_params {
 
 #define ILT_PAGE_SIZE_TCFC 0x8000	/* 32KB */
 
+struct qed_eth_tlvs {
+	u16 feat_flags;
+	u8 mac[3][ETH_ALEN];
+	u16 lso_maxoff;
+	u16 lso_minseg;
+	bool prom_mode;
+	u16 num_txqs;
+	u16 num_rxqs;
+	u16 num_netqs;
+	u16 flex_vlan;
+	u32 tcp4_offloads;
+	u32 tcp6_offloads;
+	u16 tx_avg_qdepth;
+	u16 rx_avg_qdepth;
+	u8 txqs_empty;
+	u8 rxqs_empty;
+	u8 num_txqs_full;
+	u8 num_rxqs_full;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
+	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
 };
 
 struct qed_selftest_ops {
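
The get_tlv_data() callback added above is how the driver asks the upper
layer to fill these TLV values on demand. A hypothetical consumer-side
implementation - the adapter type, its fields and the way the ops struct
gets registered are assumptions for the example, not part of this
patch - might look like:

	#include <string.h>

	/* Hypothetical per-device state, invented for the example. */
	struct my_adapter {
		u16 num_tx_queues;
		u16 num_rx_queues;
		bool promisc_enabled;
	};

	static void my_link_update(void *dev, struct qed_link_output *link)
	{
		/* stub for the example */
	}

	static void my_get_tlv_data(void *dev, struct qed_eth_tlvs *data)
	{
		struct my_adapter *adapter = dev;

		memset(data, 0, sizeof(*data));
		data->num_txqs = adapter->num_tx_queues;
		data->num_rxqs = adapter->num_rx_queues;
		data->prom_mode = adapter->promisc_enabled;
	}

	static const struct qed_common_cb_ops my_cb_ops = {
		.link_update = my_link_update,
		.get_tlv_data = my_get_tlv_data,
	};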
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 29/61] net/qede/base: optimize cache-line access
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (28 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 28/61] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 30/61] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
                         ` (32 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Optimize cache-line access in ecore_chain -
rearrange the fields so that those needed on the fastpath
[mostly the produce/consume indices and their derivatives] sit in the
first cache line, and the rest fall into the second.

This holds for both the PBL and NEXT_PTR kinds of chains.
Advancing a page in a SINGLE_PAGE chain still touches the second
cache line as well, but as far as we know only the SPQ uses that
flavour, so it isn't considered fastpath.
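
The idea is the classic hot/cold split. A generic sketch of the layout
this applies to struct ecore_chain - the names and field choices here
are invented for the example, and 64-byte cache lines are assumed:

	#include <stdint.h>

	#define CACHE_LINE 64

	/* Sketch only: fields touched on every produce/consume are
	 * grouped at the front so they share one cache line; state
	 * used only at init/teardown is pushed behind them.
	 */
	struct ring {
		/* hot: per-element fastpath state */
		void *prod_elem;
		void *cons_elem;
		uint16_t prod_idx;
		uint16_t cons_idx;
		uint16_t elem_per_page_mask;
		uint16_t elem_size;

		/* cold: init/teardown only */
		void *first_page_virt;
		uint64_t first_page_phys;
		uint32_t total_size;
	} __attribute__((aligned(CACHE_LINE)));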

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_chain.h       |  143 ++++++++++++++++-------------
 drivers/net/qede/base/ecore_dev.c         |   14 +--
 drivers/net/qede/base/ecore_sp_commands.c |    4 +-
 3 files changed, 89 insertions(+), 72 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 61e39b5..ba272a9 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -59,25 +59,6 @@ struct ecore_chain_ext_pbl {
 	void *p_pbl_virt;
 };
 
-struct ecore_chain_pbl {
-	/* Base address of a pre-allocated buffer for pbl */
-	dma_addr_t p_phys_table;
-	void *p_virt_table;
-
-	/* Table for keeping the virtual addresses of the chain pages,
-	 * respectively to the physical addresses in the pbl table.
-	 */
-	void **pp_virt_addr_tbl;
-
-	/* Index to current used page by producer/consumer */
-	union {
-		struct ecore_chain_pbl_u16 pbl16;
-		struct ecore_chain_pbl_u32 pbl32;
-	} u;
-
-	bool external;
-};
-
 struct ecore_chain_u16 {
 	/* Cyclic index of next element to produce/consume */
 	u16 prod_idx;
@@ -91,40 +72,75 @@ struct ecore_chain_u32 {
 };
 
 struct ecore_chain {
-	/* Address of first page of the chain */
-	void *p_virt_addr;
-	dma_addr_t p_phys_addr;
-
+	/* fastpath portion of the chain - required for commands such
+	 * as produce / consume.
+	 */
 	/* Point to next element to produce/consume */
 	void *p_prod_elem;
 	void *p_cons_elem;
 
-	enum ecore_chain_mode mode;
-	enum ecore_chain_use_mode intended_use;
+	/* Fastpath portions of the PBL [if it exists] */
+
+	struct {
+		/* Table for keeping the virtual addresses of the chain pages,
+		 * respectively to the physical addresses in the pbl table.
+		 */
+		void		**pp_virt_addr_tbl;
+
+		union {
+			struct ecore_chain_pbl_u16	u16;
+			struct ecore_chain_pbl_u32	u32;
+		} c;
+	} pbl;
 
-	enum ecore_chain_cnt_type cnt_type;
 	union {
 		struct ecore_chain_u16 chain16;
 		struct ecore_chain_u32 chain32;
 	} u;
 
-	u32 page_cnt;
+	/* Capacity counts only usable elements */
+	u32				capacity;
+	u32				page_cnt;
 
-	/* Number of elements - capacity is for usable elements only,
-	 * while size will contain total number of elements [for entire chain].
+	/* A u8 would suffice for mode, but it would save us a lot of headaches
+	 * on castings & defaults.
 	 */
-	u32 capacity;
-	u32 size;
+	enum ecore_chain_mode		mode;
 
 	/* Elements information for fast calculations */
 	u16 elem_per_page;
 	u16 elem_per_page_mask;
-	u16 elem_unusable;
-	u16 usable_per_page;
 	u16 elem_size;
 	u16 next_page_mask;
+	u16 usable_per_page;
+	u8 elem_unusable;
 
-	struct ecore_chain_pbl pbl;
+	u8				cnt_type;
+
+	/* Slowpath of the chain - required for initialization and destruction,
+	 * but isn't involved in regular functionality.
+	 */
+
+	/* Base address of a pre-allocated buffer for pbl */
+	struct {
+		dma_addr_t		p_phys_table;
+		void			*p_virt_table;
+	} pbl_sp;
+
+	/* Address of first page of the chain - the address is required
+	 * for fastpath operation [consume/produce] but only for the SINGLE
+	 * flavour which isn't considered fastpath [== SPQ].
+	 */
+	void				*p_virt_addr;
+	dma_addr_t			p_phys_addr;
+
+	/* Total number of elements [for entire chain] */
+	u32				size;
+
+	u8				intended_use;
+
+	/* TBD - do we really need this? Couldn't find usage for it */
+	bool				b_external_pbl;
 
 	void *dp_ctx;
 };
@@ -135,8 +151,8 @@ struct ecore_chain {
 
 #define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	  ((mode == ECORE_CHAIN_MODE_NEXT_PTR) ?		\
-	   (1 + ((sizeof(struct ecore_chain_next) - 1) /		\
-	   (elem_size))) : 0)
+	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /	\
+		     (elem_size))) : 0)
 
 #define USABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	((u32)(ELEMS_PER_PAGE(elem_size) -			\
@@ -245,7 +261,7 @@ u16 ecore_chain_get_usable_per_page(struct ecore_chain *p_chain)
 }
 
 static OSAL_INLINE
-u16 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
+u8 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
 {
 	return p_chain->elem_unusable;
 }
@@ -263,7 +279,7 @@ static OSAL_INLINE u32 ecore_chain_get_page_cnt(struct ecore_chain *p_chain)
 static OSAL_INLINE
 dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
 {
-	return p_chain->pbl.p_phys_table;
+	return p_chain->pbl_sp.p_phys_table;
 }
 
 /**
@@ -288,9 +304,9 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
 		p_next = (struct ecore_chain_next *)(*p_next_elem);
 		*p_next_elem = p_next->next_virt;
 		if (is_chain_u16(p_chain))
-			*(u16 *)idx_to_inc += p_chain->elem_unusable;
+			*(u16 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		else
-			*(u32 *)idx_to_inc += p_chain->elem_unusable;
+			*(u32 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		break;
 	case ECORE_CHAIN_MODE_SINGLE:
 		*p_next_elem = p_chain->p_virt_addr;
@@ -391,7 +407,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -400,7 +416,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -465,7 +481,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -474,7 +490,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -518,25 +534,26 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.u.pbl16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.u.pbl16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.u.pbl32.prod_page_idx = reset_val;
-			p_chain->pbl.u.pbl32.cons_page_idx = reset_val;
+			p_chain->pbl.c.u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.u32.cons_page_idx = reset_val;
 		}
 	}
 
 	switch (p_chain->intended_use) {
-	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
-	case ECORE_CHAIN_USE_TO_PRODUCE:
-			/* Do nothing */
-			break;
-
 	case ECORE_CHAIN_USE_TO_CONSUME:
-			/* produce empty elements */
-			for (i = 0; i < p_chain->capacity; i++)
+		/* produce empty elements */
+		for (i = 0; i < p_chain->capacity; i++)
 			ecore_chain_recycle_consumed(p_chain);
-			break;
+		break;
+
+	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
+	case ECORE_CHAIN_USE_TO_PRODUCE:
+	default:
+		/* Do nothing */
+		break;
 	}
 }
 
@@ -563,9 +580,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->p_virt_addr = OSAL_NULL;
 	p_chain->p_phys_addr = 0;
 	p_chain->elem_size = elem_size;
-	p_chain->intended_use = intended_use;
+	p_chain->intended_use = (u8)intended_use;
 	p_chain->mode = mode;
-	p_chain->cnt_type = cnt_type;
+	p_chain->cnt_type = (u8)cnt_type;
 
 	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
 	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
@@ -577,9 +594,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->page_cnt = page_cnt;
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;
-	p_chain->pbl.external = false;
-	p_chain->pbl.p_phys_table = 0;
-	p_chain->pbl.p_virt_table = OSAL_NULL;
+	p_chain->b_external_pbl = false;
+	p_chain->pbl_sp.p_phys_table = 0;
+	p_chain->pbl_sp.p_virt_table = OSAL_NULL;
 	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 
 	p_chain->dp_ctx = dp_ctx;
@@ -623,8 +640,8 @@ static OSAL_INLINE void ecore_chain_init_pbl_mem(struct ecore_chain *p_chain,
 						 dma_addr_t p_phys_pbl,
 						 void **pp_virt_addr_tbl)
 {
-	p_chain->pbl.p_phys_table = p_phys_pbl;
-	p_chain->pbl.p_virt_table = p_virt_pbl;
+	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
+	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
 	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c895656..1c08d4a 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3559,13 +3559,13 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				 struct ecore_chain *p_chain)
 {
 	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
-	u8 *p_pbl_virt = (u8 *)p_chain->pbl.p_virt_table;
+	u8 *p_pbl_virt = (u8 *)p_chain->pbl_sp.p_virt_table;
 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
 
 	if (!pp_virt_addr_tbl)
 		return;
 
-	if (!p_chain->pbl.p_virt_table)
+	if (!p_pbl_virt)
 		goto out;
 
 	for (i = 0; i < page_cnt; i++) {
@@ -3581,10 +3581,10 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 
 	pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
 
-	if (!p_chain->pbl.external)
-		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
-				       p_chain->pbl.p_phys_table, pbl_size);
-out:
+	if (!p_chain->b_external_pbl)
+		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl_sp.p_virt_table,
+				       p_chain->pbl_sp.p_phys_table, pbl_size);
+ out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3716,7 +3716,7 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	} else {
 		p_pbl_virt = ext_pbl->p_pbl_virt;
 		p_pbl_phys = ext_pbl->p_pbl_phys;
-		p_chain->pbl.external = true;
+		p_chain->b_external_pbl = true;
 	}
 
 	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 23ebab7..b831970 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -379,11 +379,11 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	/* Place EQ address in RAMROD */
 	DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
-		       p_hwfn->p_eq->chain.pbl.p_phys_table);
+		       p_hwfn->p_eq->chain.pbl_sp.p_phys_table);
 	page_cnt = (u8)ecore_chain_get_page_cnt(&p_hwfn->p_eq->chain);
 	p_ramrod->event_ring_num_pages = page_cnt;
 	DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
-		       p_hwfn->p_consq->chain.pbl.p_phys_table);
+		       p_hwfn->p_consq->chain.pbl_sp.p_phys_table);
 
 	ecore_tunn_set_pf_start_params(p_hwfn, p_tunn,
 				       &p_ramrod->tunnel_config);
-- 
1.7.10.3
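
A minimal, self-contained sketch of the page-advance test exercised by the
produce/consume hunks above (the page geometry below is an assumption chosen
for illustration; the driver computes these masks per chain at init time):

#include <stdint.h>
#include <stdio.h>

/* Assume 8 elements per page, with the last slot reserved (e.g. for a
 * next-page pointer), leaving 7 usable elements per page.
 */
#define ELEM_PER_PAGE       8u
#define ELEM_PER_PAGE_MASK  (ELEM_PER_PAGE - 1)
#define USABLE_PER_PAGE     7u
#define NEXT_PAGE_MASK      (USABLE_PER_PAGE & ELEM_PER_PAGE_MASK)

int main(void)
{
	uint16_t idx;

	for (idx = 0; idx < 24; idx++) {
		/* Same test as ecore_chain_produce()/_consume(): once the
		 * index sits on the last usable slot of a page, advance to
		 * the next page (and, for PBL chains, bump the per-page
		 * producer/consumer index as in the hunks above).
		 */
		if ((idx & ELEM_PER_PAGE_MASK) == NEXT_PAGE_MASK)
			printf("idx %u: advance to next page\n",
			       (unsigned)idx);
	}
	return 0;
}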


* [PATCH v3 30/61] net/qede/base: infrastructure changes for VF tunnelling
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (29 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 29/61] net/qede/base: optimize cache-line access Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 31/61] net/qede/base: revise tunnel APIs/structs Rasesh Mody
                         ` (31 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Infrastructure changes for VF tunnelling: cache the tunnel configuration in
a new per-device struct ecore_tunnel_info (replacing the bare tunn_mode
bitmask on ecore_dev), add an OSAL_BIT() helper, and report VXLAN/GRE/GENEVE
enablement to the qede PMD through new qed_dev_info out-params.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore.h             |   14 ++++-
 drivers/net/qede/base/ecore_sp_commands.c |   87 +++++++++++++++++++----------
 drivers/net/qede/qede_if.h                |    5 ++
 drivers/net/qede/qede_main.c              |   18 ++++++
 5 files changed, 93 insertions(+), 34 deletions(-)
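
As a rough illustration of the new out-params, a self-contained sketch of
the tunn_mode bit tests qed_fill_dev_info() now performs on the cached
tunnel state (OSAL_BIT mirrors the patch; the enum stand-ins and the sample
tunn_mode value are assumptions, and the ECORE_TUNN_CLSS_MAC_VLAN class
check is omitted for brevity):

#include <stdbool.h>
#include <stdio.h>

#define OSAL_BIT(nr)	(1UL << (nr))

/* Illustrative stand-ins for the ECORE_MODE_*_TUNN enum values */
enum { MODE_VXLAN, MODE_L2GRE, MODE_IPGRE };

int main(void)
{
	/* Pretend VXLAN and L2 GRE are on, IP GRE is off */
	unsigned long tunn_mode = OSAL_BIT(MODE_VXLAN) | OSAL_BIT(MODE_L2GRE);

	bool vxlan_enable = tunn_mode & OSAL_BIT(MODE_VXLAN);
	/* GRE is reported only when both L2 and IP GRE are enabled */
	bool gre_enable = (tunn_mode & OSAL_BIT(MODE_L2GRE)) &&
			  (tunn_mode & OSAL_BIT(MODE_IPGRE));

	printf("vxlan=%d gre=%d\n", vxlan_enable, gre_enable); /* vxlan=1 gre=0 */
	return 0;
}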

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 82e3ebd..513d542 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -292,7 +292,8 @@ typedef struct osal_list_t {
 #define OSAL_WMB(dev)			rte_wmb()
 #define OSAL_DMA_SYNC(dev, addr, length, is_post) nothing
 
-#define OSAL_BITS_PER_BYTE		(8)
+#define OSAL_BIT(nr)            (1UL << (nr))
+#define OSAL_BITS_PER_BYTE	(8)
 #define OSAL_BITS_PER_UL	(sizeof(unsigned long) * OSAL_BITS_PER_BYTE)
 #define OSAL_BITS_PER_UL_MASK		(OSAL_BITS_PER_UL - 1)
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index de0f49a..5c12c1e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -470,6 +470,17 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
+struct ecore_tunnel_info {
+	u8		tunn_clss_vxlan;
+	u8		tunn_clss_l2geneve;
+	u8		tunn_clss_ipgeneve;
+	u8		tunn_clss_l2gre;
+	u8		tunn_clss_ipgre;
+	unsigned long	tunn_mode;
+	u16		port_vxlan_udp_port;
+	u16		port_geneve_udp_port;
+};
+
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
@@ -724,8 +735,7 @@ struct ecore_dev {
 	/* SRIOV */
 	struct ecore_hw_sriov_info	*p_iov_info;
 #define IS_ECORE_SRIOV(p_dev)		(!!(p_dev)->p_iov_info)
-	unsigned long			tunn_mode;
-
+	struct ecore_tunnel_info	tunnel;
 	bool				b_is_vf;
 
 	u32				drv_type;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index b831970..f5860a0 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -111,8 +111,9 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long cached_tunn_mode = p_hwfn->p_dev->tunn_mode;
 	unsigned long update_mask = p_src->tunn_mode_update_mask;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	unsigned long cached_tunn_mode = p_tun->tunn_mode;
 	unsigned long tunn_mode = p_src->tunn_mode;
 	unsigned long new_tunn_mode = 0;
 
@@ -149,9 +150,10 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
 	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
@@ -178,33 +180,39 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode = p_src->tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
+	p_tun->tunn_mode = p_src->tunn_mode;
+
 	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
 	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -215,21 +223,24 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
@@ -269,33 +280,37 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 			       struct ecore_tunn_start_params *p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	if (!p_src)
 		return;
 
-	tunn_mode = p_src->tunn_mode;
+	p_tun->tunn_mode = p_src->tunn_mode;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -306,21 +321,24 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
@@ -420,9 +438,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
 	if (p_tunn) {
+		if (p_tunn->update_vxlan_udp_port)
+			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						  p_tunn->vxlan_udp_port);
+
+		if (p_tunn->update_geneve_udp_port)
+			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						   p_tunn->geneve_udp_port);
+
 		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
 				       p_tunn->tunn_mode);
-		p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 	}
 
 	return rc;
@@ -529,12 +554,12 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (p_tunn->update_vxlan_udp_port)
 		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					  p_tunn->vxlan_udp_port);
+
 	if (p_tunn->update_geneve_udp_port)
 		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					   p_tunn->geneve_udp_port);
 
 	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
-	p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 
 	return rc;
 }
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index bfd96d6..baa8476 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -43,6 +43,11 @@ struct qed_dev_info {
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
+
+	/* Out param for qede */
+	bool vxlan_enable;
+	bool gre_enable;
+	bool geneve_enable;
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a932c5f..e7195b4 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -325,8 +325,26 @@ static int
 qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 {
 	struct ecore_ptt *ptt = NULL;
+	struct ecore_tunnel_info *tun = &edev->tunnel;
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
+	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->vxlan_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
+	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->gre_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
+	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->geneve_enable = true;
+
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
 	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-- 
1.7.10.3


* [PATCH v3 31/61] net/qede/base: revise tunnel APIs/structs
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (30 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 30/61] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 32/61] net/qede/base: add tunnelling support for VFs Rasesh Mody
                         ` (30 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Revise the tunnel APIs/structs:
 - Unite the tunnel start and update params in a single struct,
   "ecore_tunnel_info".
 - Remove A0 chip tunnelling support.
 - Add per-tunnel info and remove the tunnel-mode bitmasks.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h             |   57 ++---
 drivers/net/qede/base/ecore_dev.c         |    2 +-
 drivers/net/qede/base/ecore_dev_api.h     |    2 +-
 drivers/net/qede/base/ecore_sp_api.h      |   19 ++
 drivers/net/qede/base/ecore_sp_commands.c |  384 +++++++++++++----------------
 drivers/net/qede/base/ecore_sp_commands.h |   23 +-
 drivers/net/qede/qede_ethdev.c            |   20 +-
 drivers/net/qede/qede_if.h                |   16 ++
 drivers/net/qede/qede_main.c              |   18 +-
 9 files changed, 248 insertions(+), 293 deletions(-)
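
A hypothetical caller sketch built on the revised structs (only the struct,
field and API names come from this patch; the port number, completion mode
and surrounding context such as p_hwfn are assumptions for illustration):

	struct ecore_tunnel_info tunn;
	enum _ecore_status_t rc;

	OSAL_MEM_ZERO(&tunn, sizeof(tunn));

	/* Enable VXLAN classification and (re)program its UDP port */
	tunn.vxlan.b_update_mode = true;
	tunn.vxlan.b_mode_enabled = true;
	tunn.vxlan.tun_cls = ECORE_TUNN_CLSS_MAC_VLAN;
	tunn.vxlan_port.b_update_port = true;
	tunn.vxlan_port.port = 4789;	/* IANA VXLAN port, for example */
	tunn.b_update_rx_cls = true;
	tunn.b_update_tx_cls = true;

	rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
					 ECORE_SPQ_MODE_CB, OSAL_NULL);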

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 5c12c1e..f86f7ca 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -204,33 +204,29 @@ enum ecore_tunn_clss {
 	MAX_ECORE_TUNN_CLSS,
 };
 
-struct ecore_tunn_start_params {
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_type {
+	bool b_update_mode;
+	bool b_mode_enabled;
+	enum ecore_tunn_clss tun_cls;
 };
 
-struct ecore_tunn_update_params {
-	unsigned long tunn_mode_update_mask;
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_rx_pf_clss;
-	u8	update_tx_pf_clss;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_udp_port {
+	bool b_update_port;
+	u16 port;
+};
+
+struct ecore_tunnel_info {
+	struct ecore_tunn_update_type vxlan;
+	struct ecore_tunn_update_type l2_geneve;
+	struct ecore_tunn_update_type ip_geneve;
+	struct ecore_tunn_update_type l2_gre;
+	struct ecore_tunn_update_type ip_gre;
+
+	struct ecore_tunn_update_udp_port vxlan_port;
+	struct ecore_tunn_update_udp_port geneve_port;
+
+	bool b_update_rx_cls;
+	bool b_update_tx_cls;
 };
 
 /* The PCI personality is not quite synonymous to protocol ID:
@@ -470,17 +466,6 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
-struct ecore_tunnel_info {
-	u8		tunn_clss_vxlan;
-	u8		tunn_clss_l2geneve;
-	u8		tunn_clss_ipgeneve;
-	u8		tunn_clss_l2gre;
-	u8		tunn_clss_ipgre;
-	unsigned long	tunn_mode;
-	u16		port_vxlan_udp_port;
-	u16		port_geneve_udp_port;
-};
-
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1c08d4a..0d3971c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1696,7 +1696,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
-		 struct ecore_tunn_start_params *p_tunn,
+		 struct ecore_tunnel_info *p_tunn,
 		 int hw_mode,
 		 bool b_hw_start,
 		 enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 74a15ef..356c5e4 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -59,7 +59,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
 	/* tunnelling parameters */
-	struct ecore_tunn_start_params *p_tunn;
+	struct ecore_tunnel_info *p_tunn;
 	bool b_hw_start;
 	/* interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index a4cb507..c8e564f 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -41,5 +41,24 @@ struct ecore_spq_comp_cb {
  */
 enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
 					      struct eth_slow_path_rx_cqe *cqe);
+/**
+ * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
+ *					update Ramrod
+ *
+ * This ramrod is sent to update a tunneling configuration
+ * for a physical function (PF).
+ *
+ * @param p_hwfn
+ * @param p_tunn - pf update tunneling parameters
+ * @param comp_mode - completion mode
+ * @param p_comp_data - callback function
+ *
+ * @return enum _ecore_status_t
+ */
 
+enum _ecore_status_t
+ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_tunnel_info *p_tunn,
+			    enum spq_mode comp_mode,
+			    struct ecore_spq_comp_cb *p_comp_data);
 #endif
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index f5860a0..4cacce8 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -88,7 +88,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
+static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type)
 {
 	switch (type) {
 	case ECORE_TUNN_CLSS_MAC_VLAN:
@@ -107,242 +107,207 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 }
 
 static void
-ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun,
+			      struct ecore_tunnel_info *p_src,
+			      bool b_pf_start)
 {
-	unsigned long update_mask = p_src->tunn_mode_update_mask;
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	unsigned long cached_tunn_mode = p_tun->tunn_mode;
-	unsigned long tunn_mode = p_src->tunn_mode;
-	unsigned long new_tunn_mode = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	}
-
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		p_src->tunn_mode = new_tunn_mode;
-		return;
-	}
+	if (p_src->vxlan.b_update_mode || b_pf_start)
+		p_tun->vxlan.b_mode_enabled = p_src->vxlan.b_mode_enabled;
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
+	if (p_src->l2_gre.b_update_mode || b_pf_start)
+		p_tun->l2_gre.b_mode_enabled = p_src->l2_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->ip_gre.b_update_mode || b_pf_start)
+		p_tun->ip_gre.b_mode_enabled = p_src->ip_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->l2_geneve.b_update_mode || b_pf_start)
+		p_tun->l2_geneve.b_mode_enabled =
+				p_src->l2_geneve.b_mode_enabled;
 
-	p_src->tunn_mode = new_tunn_mode;
+	if (p_src->ip_geneve.b_update_mode || b_pf_start)
+		p_tun->ip_geneve.b_mode_enabled =
+				p_src->ip_geneve.b_mode_enabled;
 }
 
-static void
-ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
+				    struct ecore_tunnel_info *p_src)
 {
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
-	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
-	p_tun->tunn_mode = p_src->tunn_mode;
-
-	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
-	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
-
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
+	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+
+	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
+	p_tun->vxlan.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
+	p_tun->l2_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
+	p_tun->ip_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
+	p_tun->l2_geneve.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
+	p_tun->ip_geneve.tun_cls = type;
+}
+
+static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
+				 struct ecore_tunnel_info *p_src)
+{
+	p_tun->geneve_port.b_update_port = p_src->geneve_port.b_update_port;
+	p_tun->vxlan_port.b_update_port = p_src->vxlan_port.b_update_port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
+	if (p_src->geneve_port.b_update_port)
+		p_tun->geneve_port.port = p_src->geneve_port.port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
+	if (p_src->vxlan_port.b_update_port)
+		p_tun->vxlan_port.port = p_src->vxlan_port.port;
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
+static void
+__ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+				struct ecore_tunn_update_type *tun_type)
+{
+	*p_tunn_cls = tun_type->tun_cls;
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		return;
-	}
+	if (tun_type->b_mode_enabled)
+		*p_enable_tx_clas = 1;
+}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
+static void
+ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+			      struct ecore_tunn_update_type *tun_type,
+			      u8 *p_update_port, __le16 *p_port,
+			      struct ecore_tunn_update_udp_port *p_udp_port)
+{
+	__ecore_set_ramrod_tunnel_param(p_tunn_cls, p_enable_tx_clas,
+					tun_type);
+	if (p_udp_port->b_update_port) {
+		*p_update_port = 1;
+		*p_port = OSAL_CPU_TO_LE16(p_udp_port->port);
 	}
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+static void
+ecore_tunn_set_pf_update_params(struct ecore_hwfn		*p_hwfn,
+				struct ecore_tunnel_info *p_src,
+				struct pf_update_tunnel_config	*p_tunn_cfg)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, false);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
+
+	p_tunn_cfg->update_rx_pf_clss = p_tun->b_update_rx_cls;
+	p_tunn_cfg->update_tx_pf_clss = p_tun->b_update_tx_cls;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   unsigned long tunn_mode)
+				   struct ecore_tunnel_info *p_tun)
 {
-	u8 l2gre_enable = 0, ipgre_enable = 0, vxlan_enable = 0;
-	u8 l2geneve_enable = 0, ipgeneve_enable = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-		l2gre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-		ipgre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-		vxlan_enable = 1;
+	ecore_set_gre_enable(p_hwfn, p_ptt, p_tun->l2_gre.b_mode_enabled,
+			     p_tun->ip_gre.b_mode_enabled);
+	ecore_set_vxlan_enable(p_hwfn, p_ptt, p_tun->vxlan.b_mode_enabled);
 
-	ecore_set_gre_enable(p_hwfn, p_ptt, l2gre_enable, ipgre_enable);
-	ecore_set_vxlan_enable(p_hwfn, p_ptt, vxlan_enable);
+	ecore_set_geneve_enable(p_hwfn, p_ptt, p_tun->l2_geneve.b_mode_enabled,
+				p_tun->ip_geneve.b_mode_enabled);
+}
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev))
+static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
+					struct ecore_tunnel_info *p_tunn)
+{
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel hw config is not supported\n");
 		return;
+	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-		l2geneve_enable = 1;
+	if (p_tunn->vxlan_port.b_update_port)
+		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					  p_tunn->vxlan_port.port);
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-		ipgeneve_enable = 1;
+	if (p_tunn->geneve_port.b_update_port)
+		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					   p_tunn->geneve_port.port);
 
-	ecore_set_geneve_enable(p_hwfn, p_ptt, l2geneve_enable,
-				ipgeneve_enable);
+	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn);
 }
 
 static void
 ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
-			       struct ecore_tunn_start_params *p_src,
+			       struct ecore_tunnel_info		*p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
 	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	enum tunnel_clss type;
-
-	if (!p_src)
-		return;
-
-	p_tun->tunn_mode = p_src->tunn_mode;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf start config is not supported\n");
 		return;
 	}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+	if (!p_src)
+		return;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, true);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch)
 {
@@ -437,18 +402,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
-	if (p_tunn) {
-		if (p_tunn->update_vxlan_udp_port)
-			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						  p_tunn->vxlan_udp_port);
-
-		if (p_tunn->update_geneve_udp_port)
-			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						   p_tunn->geneve_udp_port);
-
-		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
-				       p_tunn->tunn_mode);
-	}
+	if (p_tunn)
+		ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
@@ -523,7 +478,7 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
+			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data)
 {
@@ -531,6 +486,15 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf update config is not supported\n");
+		return rc;
+	}
+
+	if (!p_tunn)
+		return ECORE_INVAL;
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = ecore_spq_get_cid(p_hwfn);
@@ -551,15 +515,7 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_tunn->update_vxlan_udp_port)
-		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					  p_tunn->vxlan_udp_port);
-
-	if (p_tunn->update_geneve_udp_port)
-		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					   p_tunn->geneve_udp_port);
-
-	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+	ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 66c9a69..33e31e4 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -68,32 +68,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  */
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch);
 
 /**
- * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
- *					update  Ramrod
- *
- * This ramrod is sent to update a tunneling configuration
- * for a physical function (PF).
- *
- * @param p_hwfn
- * @param p_tunn - pf update tunneling parameters
- * @param comp_mode - completion mode
- * @param p_comp_data - callback function
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t
-ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
-			    enum spq_mode comp_mode,
-			    struct ecore_spq_comp_cb *p_comp_data);
-
-/**
  * @brief ecore_sp_pf_update - PF Function Update Ramrod
  *
  * This ramrod updates function-related parameters. Every parameter can be
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index d52e1be..4ef93d4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,10 +335,10 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct ecore_tunn_update_params *params,
-				     uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
+				    uint8_t clss, uint64_t mode, uint64_t mask)
 {
-	memset(params, 0, sizeof(struct ecore_tunn_update_params));
+	memset(params, 0, sizeof(struct qed_tunn_update_params));
 	params->tunn_mode = mode;
 	params->tunn_mode_update_mask = mask;
 	params->update_tx_pf_clss = 1;
@@ -1707,7 +1707,8 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
@@ -1720,7 +1721,7 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 					QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &params,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
@@ -1817,7 +1818,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1872,7 +1874,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				&params, ECORE_SPQ_MODE_CB, NULL);
+				p_tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
 					params.tunn_clss_vxlan);
@@ -1906,8 +1908,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 						(1 << ECORE_MODE_VXLAN_TUNN));
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-					&params, ECORE_SPQ_MODE_CB, NULL);
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index baa8476..09b6912 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -121,6 +121,22 @@ struct qed_eth_tlvs {
 	u8 num_rxqs_full;
 };
 
+struct qed_tunn_update_params {
+	unsigned long   tunn_mode_update_mask;
+	unsigned long   tunn_mode;
+	u16             vxlan_udp_port;
+	u16             geneve_udp_port;
+	u8              update_rx_pf_clss;
+	u8              update_tx_pf_clss;
+	u8              update_vxlan_udp_port;
+	u8              update_geneve_udp_port;
+	u8              tunn_clss_vxlan;
+	u8              tunn_clss_l2geneve;
+	u8              tunn_clss_ipgeneve;
+	u8              tunn_clss_l2gre;
+	u8              tunn_clss_ipgre;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
 	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e7195b4..5c79055 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -329,20 +329,18 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
-	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->vxlan.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->vxlan.b_mode_enabled)
 		dev_info->vxlan_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
-	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_gre.b_mode_enabled && tun->ip_gre.b_mode_enabled &&
+	    tun->l2_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->gre_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
-	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_geneve.b_mode_enabled && tun->ip_geneve.b_mode_enabled &&
+	    tun->l2_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->geneve_enable = true;
 
 	dev_info->num_hwfns = edev->num_hwfns;
-- 
1.7.10.3


* [PATCH v3 32/61] net/qede/base: add tunnelling support for VFs
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (31 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 31/61] net/qede/base: revise tunnel APIs/structs Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 33/61] net/qede/base: formatting changes Rasesh Mody
                         ` (29 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add tunnelling support for VFs. A VF now requests its tunnel configuration
from the PF over a new CHANNEL_TLV_UPDATE_TUNN_PARAM mailbox message; the PF
validates the request, applies it via the PF-update ramrod and returns the
resulting configuration in the reply.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore_dev.c         |   15 ++-
 drivers/net/qede/base/ecore_sp_commands.c |   15 ++-
 drivers/net/qede/base/ecore_sriov.c       |  144 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c          |  154 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.h          |    5 +
 drivers/net/qede/base/ecore_vfpf_if.h     |   40 ++++++++
 drivers/net/qede/qede_ethdev.c            |   49 +++++----
 8 files changed, 390 insertions(+), 35 deletions(-)
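
For reference, a self-contained sketch of the two-mask encoding the new TLV
uses: one mask says which tunnel modes the VF wants to touch, the other
carries the requested on/off value (the bit positions, names and types below
are illustrative stand-ins, not the actual wire format):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { BIT_VXLAN, BIT_L2GENEVE };	/* stand-ins for ECORE_MODE_* */

struct tunn_req {
	uint16_t tun_mode_update_mask;	/* which modes to change */
	uint16_t tunn_mode;		/* requested value for those modes */
};

static void encode(struct tunn_req *req, int bit, bool update, bool enable)
{
	if (!update)
		return;
	req->tun_mode_update_mask |= (uint16_t)(1u << bit);
	if (enable)
		req->tunn_mode |= (uint16_t)(1u << bit);
}

int main(void)
{
	struct tunn_req req = { 0, 0 };

	encode(&req, BIT_VXLAN, true, true);	  /* turn VXLAN on */
	encode(&req, BIT_L2GENEVE, true, false);  /* turn L2 GENEVE off */

	/* Prints: update=0x3 mode=0x1 */
	printf("update=0x%x mode=0x%x\n",
	       (unsigned)req.tun_mode_update_mask, (unsigned)req.tunn_mode);
	return 0;
}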

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 513d542..4c91dc0 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -422,6 +422,5 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
 #define OSAL_MFW_TLV_REQ(p_hwfn) (0)
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
-
-
+#define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0d3971c..21fec58 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1876,6 +1876,19 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 		    p_hwfn->mcp_info->mfw_mb_length);
 }
 
+enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
+				    struct ecore_hw_init_params *p_params)
+{
+	if (p_params->p_tunn) {
+		ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn);
+		ecore_vf_pf_tunnel_param_update(p_hwfn, p_params->p_tunn);
+	}
+
+	p_hwfn->b_int_enabled = 1;
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
@@ -1908,7 +1921,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		}
 
 		if (IS_VF(p_dev)) {
-			p_hwfn->b_int_enabled = 1;
+			ecore_vf_start(p_hwfn, p_params);
 			continue;
 		}
 
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 4cacce8..8fd64d7 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -22,6 +22,7 @@
 #include "ecore_hw.h"
 #include "ecore_dcbx.h"
 #include "ecore_sriov.h"
+#include "ecore_vf.h"
 
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
@@ -137,16 +138,17 @@ static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
 
+	/* @DPDK - typecast tunnel class */
 	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
-	p_tun->vxlan.tun_cls = type;
+	p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
-	p_tun->l2_gre.tun_cls = type;
+	p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
-	p_tun->ip_gre.tun_cls = type;
+	p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
-	p_tun->l2_geneve.tun_cls = type;
+	p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
-	p_tun->ip_geneve.tun_cls = type;
+	p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type;
 }
 
 static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
@@ -486,6 +488,9 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_tunnel_param_update(p_hwfn, p_tunn);
+
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
 		DP_NOTICE(p_hwfn, true,
 			  "A0 chip: tunnel pf update config is not supported\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7378420..6cec7b2 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -51,6 +51,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_RSS",
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
+	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -2137,6 +2138,146 @@ out:
 					b_legacy_vf);
 }
 
+static void
+ecore_iov_pf_update_tun_response(struct pfvf_update_tunn_param_tlv *p_resp,
+				 struct ecore_tunnel_info *p_tun,
+				 u16 tunn_feature_mask)
+{
+	p_resp->tunn_feature_mask = tunn_feature_mask;
+	p_resp->vxlan_mode = p_tun->vxlan.b_mode_enabled;
+	p_resp->l2geneve_mode = p_tun->l2_geneve.b_mode_enabled;
+	p_resp->ipgeneve_mode = p_tun->ip_geneve.b_mode_enabled;
+	p_resp->l2gre_mode = p_tun->l2_gre.b_mode_enabled;
+	p_resp->ipgre_mode = p_tun->ip_gre.b_mode_enabled;
+	p_resp->vxlan_clss = p_tun->vxlan.tun_cls;
+	p_resp->l2gre_clss = p_tun->l2_gre.tun_cls;
+	p_resp->ipgre_clss = p_tun->ip_gre.tun_cls;
+	p_resp->l2geneve_clss = p_tun->l2_geneve.tun_cls;
+	p_resp->ipgeneve_clss = p_tun->ip_geneve.tun_cls;
+	p_resp->geneve_udp_port = p_tun->geneve_port.port;
+	p_resp->vxlan_udp_port = p_tun->vxlan_port.port;
+}
+
+static void
+__ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+				struct ecore_tunn_update_type *p_tun,
+				enum ecore_tunn_mode mask, u8 tun_cls)
+{
+	if (p_req->tun_mode_update_mask & (1 << mask)) {
+		p_tun->b_update_mode = true;
+
+		if (p_req->tunn_mode & (1 << mask))
+			p_tun->b_mode_enabled = true;
+	}
+
+	p_tun->tun_cls = tun_cls;
+}
+
+static void
+ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+			      struct ecore_tunn_update_type *p_tun,
+			      struct ecore_tunn_update_udp_port *p_port,
+			      enum ecore_tunn_mode mask,
+			      u8 tun_cls, u8 update_port, u16 port)
+{
+	if (update_port) {
+		p_port->b_update_port = true;
+		p_port->port = port;
+	}
+
+	__ecore_iov_pf_update_tun_param(p_req, p_tun, mask, tun_cls);
+}
+
+static bool
+ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req)
+{
+	bool b_update_requested = false;
+
+	if (p_req->tun_mode_update_mask || p_req->update_tun_cls ||
+	    p_req->update_geneve_port || p_req->update_vxlan_port)
+		b_update_requested = true;
+
+	return b_update_requested;
+}
+
+static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       struct ecore_vf_info *p_vf)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u8 status = PFVF_STATUS_SUCCESS;
+	bool b_update_required = false;
+	struct ecore_tunnel_info tunn;
+	u16 tunn_feature_mask = 0;
+
+	mbx->offset = (u8 *)mbx->reply_virt;
+
+	OSAL_MEM_ZERO(&tunn, sizeof(tunn));
+	p_req = &mbx->req_virt->tunn_param_update;
+
+	if (!ecore_iov_pf_validate_tunn_param(p_req)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No tunnel update requested by VF\n");
+		status = PFVF_STATUS_FAILURE;
+		goto send_resp;
+	}
+
+	tunn.b_update_rx_cls = p_req->update_tun_cls;
+	tunn.b_update_tx_cls = p_req->update_tun_cls;
+
+	ecore_iov_pf_update_tun_param(p_req, &tunn.vxlan, &tunn.vxlan_port,
+				      ECORE_MODE_VXLAN_TUNN, p_req->vxlan_clss,
+				      p_req->update_vxlan_port,
+				      p_req->vxlan_port);
+	ecore_iov_pf_update_tun_param(p_req, &tunn.l2_geneve, &tunn.geneve_port,
+				      ECORE_MODE_L2GENEVE_TUNN,
+				      p_req->l2geneve_clss,
+				      p_req->update_geneve_port,
+				      p_req->geneve_port);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_geneve,
+					ECORE_MODE_IPGENEVE_TUNN,
+					p_req->ipgeneve_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.l2_gre,
+					ECORE_MODE_L2GRE_TUNN,
+					p_req->l2gre_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_gre,
+					ECORE_MODE_IPGRE_TUNN,
+					p_req->ipgre_clss);
+
+	/* If PF modifies VF's req then it should
+	 * still return an error in case of partial configuration
+	 * or modified configuration as opposed to requested one.
+	 */
+	rc = OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, &tunn_feature_mask,
+						 &b_update_required, &tunn);
+
+	if (rc != ECORE_SUCCESS)
+		status = PFVF_STATUS_FAILURE;
+
+	/* Does the ECORE client want to update anything? */
+	if (b_update_required) {
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+						 ECORE_SPQ_MODE_EBLOCK,
+						 OSAL_NULL);
+		if (rc != ECORE_SUCCESS)
+			status = PFVF_STATUS_FAILURE;
+	}
+
+send_resp:
+	p_resp = ecore_add_tlv(p_hwfn, &mbx->offset,
+			       CHANNEL_TLV_UPDATE_TUNN_PARAM, sizeof(*p_resp));
+
+	ecore_iov_pf_update_tun_response(p_resp, p_tun, tunn_feature_mask);
+	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
+}
+
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
@@ -3405,6 +3546,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_RELEASE:
 			ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
+			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 60ecd16..3182621 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,6 +451,160 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+__ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			     struct ecore_tunn_update_type *p_src,
+			     enum ecore_tunn_mode mask, u8 *p_cls)
+{
+	if (p_src->b_update_mode) {
+		p_req->tun_mode_update_mask |= (1 << mask);
+
+		if (p_src->b_mode_enabled)
+			p_req->tunn_mode |= (1 << mask);
+	}
+
+	*p_cls = p_src->tun_cls;
+}
+
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			   struct ecore_tunn_update_type *p_src,
+			   enum ecore_tunn_mode mask, u8 *p_cls,
+			   struct ecore_tunn_update_udp_port *p_port,
+			   u8 *p_update_port, u16 *p_udp_port)
+{
+	if (p_port->b_update_port) {
+		*p_update_port = 1;
+		*p_udp_port = p_port->port;
+	}
+
+	__ecore_vf_prep_tunn_req_tlv(p_req, p_src, mask, p_cls);
+}
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun)
+{
+	if (p_tun->vxlan.b_mode_enabled)
+		p_tun->vxlan.b_update_mode = true;
+	if (p_tun->l2_geneve.b_mode_enabled)
+		p_tun->l2_geneve.b_update_mode = true;
+	if (p_tun->ip_geneve.b_mode_enabled)
+		p_tun->ip_geneve.b_update_mode = true;
+	if (p_tun->l2_gre.b_mode_enabled)
+		p_tun->l2_gre.b_update_mode = true;
+	if (p_tun->ip_gre.b_mode_enabled)
+		p_tun->ip_gre.b_update_mode = true;
+
+	p_tun->b_update_rx_cls = true;
+	p_tun->b_update_tx_cls = true;
+}
+
+static void
+__ecore_vf_update_tunn_param(struct ecore_tunn_update_type *p_tun,
+			     u16 feature_mask, u8 tunn_mode, u8 tunn_cls,
+			     enum ecore_tunn_mode val)
+{
+	if (feature_mask & (1 << val)) {
+		p_tun->b_mode_enabled = tunn_mode;
+		p_tun->tun_cls = tunn_cls;
+	} else {
+		p_tun->b_mode_enabled = false;
+	}
+}
+
+static void
+ecore_vf_update_tunn_param(struct ecore_hwfn *p_hwfn,
+			   struct ecore_tunnel_info *p_tun,
+			   struct pfvf_update_tunn_param_tlv *p_resp)
+{
+	/* Update mode and classes provided by PF */
+	u16 feat_mask = p_resp->tunn_feature_mask;
+
+	__ecore_vf_update_tunn_param(&p_tun->vxlan, feat_mask,
+				     p_resp->vxlan_mode, p_resp->vxlan_clss,
+				     ECORE_MODE_VXLAN_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_geneve, feat_mask,
+				     p_resp->l2geneve_mode,
+				     p_resp->l2geneve_clss,
+				     ECORE_MODE_L2GENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_geneve, feat_mask,
+				     p_resp->ipgeneve_mode,
+				     p_resp->ipgeneve_clss,
+				     ECORE_MODE_IPGENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_gre, feat_mask,
+				     p_resp->l2gre_mode, p_resp->l2gre_clss,
+				     ECORE_MODE_L2GRE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_gre, feat_mask,
+				     p_resp->ipgre_mode, p_resp->ipgre_clss,
+				     ECORE_MODE_IPGRE_TUNN);
+	p_tun->geneve_port.port = p_resp->geneve_udp_port;
+	p_tun->vxlan_port.port = p_resp->vxlan_udp_port;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "tunn mode: vxlan=0x%x, l2geneve=0x%x, ipgeneve=0x%x, l2gre=0x%x, ipgre=0x%x",
+		   p_tun->vxlan.b_mode_enabled, p_tun->l2_geneve.b_mode_enabled,
+		   p_tun->ip_geneve.b_mode_enabled,
+		   p_tun->l2_gre.b_mode_enabled,
+		   p_tun->ip_gre.b_mode_enabled);
+}
+
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_src)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc;
+
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_TUNN_PARAM,
+				 sizeof(*p_req));
+
+	if (p_src->b_update_rx_cls && p_src->b_update_tx_cls)
+		p_req->update_tun_cls = 1;
+
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->vxlan, ECORE_MODE_VXLAN_TUNN,
+				   &p_req->vxlan_clss, &p_src->vxlan_port,
+				   &p_req->update_vxlan_port,
+				   &p_req->vxlan_port);
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve,
+				   ECORE_MODE_L2GENEVE_TUNN,
+				   &p_req->l2geneve_clss, &p_src->geneve_port,
+				   &p_req->update_geneve_port,
+				   &p_req->geneve_port);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_geneve,
+				     ECORE_MODE_IPGENEVE_TUNN,
+				     &p_req->ipgeneve_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_gre,
+				     ECORE_MODE_L2GRE_TUNN, &p_req->l2gre_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_gre,
+				     ECORE_MODE_IPGRE_TUNN, &p_req->ipgre_clss);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->tunn_param_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+
+	if (rc)
+		goto exit;
+
+	if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to update tunnel parameters\n");
+		rc = ECORE_INVAL;
+	}
+
+	ecore_vf_update_tunn_param(p_hwfn, p_tun, p_resp);
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 enum _ecore_status_t
 ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		      struct ecore_queue_cid *p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 1afd667..0d67054 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -258,5 +258,10 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			      struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
 
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_tunn);
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 149d092..82ed4f5 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -416,6 +416,43 @@ struct vfpf_ucast_filter_tlv {
 	u16			padding[3];
 };
 
+/* tunnel update param tlv */
+struct vfpf_update_tunn_param_tlv {
+	struct vfpf_first_tlv   first_tlv;
+
+	u8			tun_mode_update_mask;
+	u8			tunn_mode;
+	u8			update_tun_cls;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u8			update_geneve_port;
+	u8			update_vxlan_port;
+	u16			geneve_port;
+	u16			vxlan_port;
+	u8			padding[2];
+};
+
+struct pfvf_update_tunn_param_tlv {
+	struct pfvf_tlv hdr;
+
+	u16			tunn_feature_mask;
+	u8			vxlan_mode;
+	u8			l2geneve_mode;
+	u8			ipgeneve_mode;
+	u8			l2gre_mode;
+	u8			ipgre_mode;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u16			vxlan_udp_port;
+	u16			geneve_udp_port;
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
@@ -431,6 +468,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_start_tlv		start_vport;
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
+	struct vfpf_update_tunn_param_tlv	tunn_param_update;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -439,6 +477,7 @@ union pfvf_tlvs {
 	struct pfvf_acquire_resp_tlv		acquire_resp;
 	struct tlv_buffer_size			tlv_buf_size;
 	struct pfvf_start_queue_resp_tlv	queue_start;
+	struct pfvf_update_tunn_param_tlv	tunn_param_resp;
 };
 
 /* This is a structure which is allocated in the VF, which the PF may update
@@ -552,6 +591,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_RSS,
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+	CHANNEL_TLV_UPDATE_TUNN_PARAM,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4ef93d4..257e5b2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,15 +335,15 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
-				    uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct ecore_tunnel_info *p_tunn,
+				    uint8_t clss, bool mode, bool mask)
 {
-	memset(params, 0, sizeof(struct qed_tunn_update_params));
-	params->tunn_mode = mode;
-	params->tunn_mode_update_mask = mask;
-	params->update_tx_pf_clss = 1;
-	params->update_rx_pf_clss = 1;
-	params->tunn_clss_vxlan = clss;
+	memset(p_tunn, 0, sizeof(struct ecore_tunnel_info));
+	p_tunn->vxlan.b_update_mode = mode;
+	p_tunn->vxlan.b_mode_enabled = mask;
+	p_tunn->b_update_rx_cls = true;
+	p_tunn->b_update_tx_cls = true;
+	p_tunn->vxlan.tun_cls = clss;
 }
 
 static int
@@ -1707,25 +1707,24 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn; /* @DPDK */
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	memset(&params, 0, sizeof(params));
+	memset(&tunn, 0, sizeof(tunn));
 	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
-		params.update_vxlan_udp_port = 1;
-		params.vxlan_udp_port = (add) ? tunnel_udp->udp_port :
-					QEDE_VXLAN_DEF_PORT;
+		tunn.vxlan_port.b_update_port = true;
+		tunn.vxlan_port.port = (add) ? tunnel_udp->udp_port :
+						  QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
-					params.vxlan_udp_port);
+				       tunn.vxlan_port.port);
 				return rc;
 			}
 		}
@@ -1818,8 +1817,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1868,16 +1866,14 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		qdev->vxlan_filter_type = filter_type;
 
 		DP_INFO(edev, "Enabling VXLAN tunneling\n");
-		qede_set_cmn_tunn_param(&params, clss,
-					(1 << ECORE_MODE_VXLAN_TUNN),
-					(1 << ECORE_MODE_VXLAN_TUNN));
+		qede_set_cmn_tunn_param(&tunn, clss, true, true);
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				p_tunn, ECORE_SPQ_MODE_CB, NULL);
+				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
-					params.tunn_clss_vxlan);
+				       tunn.vxlan.tun_cls);
 			}
 		}
 		qdev->num_tunn_filters++; /* Filter added successfully */
@@ -1904,16 +1900,15 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 			DP_INFO(edev, "Disabling VXLAN tunneling\n");
 
 			/* Use 0 as tunnel mode */
-			qede_set_cmn_tunn_param(&params, clss, 0,
-						(1 << ECORE_MODE_VXLAN_TUNN));
+			qede_set_cmn_tunn_param(&tunn, clss, false, true);
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
-						params.tunn_clss_vxlan);
+						tunn.vxlan.tun_cls);
 					break;
 				}
 			}
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
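
A minimal sketch of the VF-side call flow for the tunnel API above (a
hypothetical caller is assumed; the PMD would drive this from its VF start
path):

	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
	enum _ecore_status_t rc;

	/* Mark every currently enabled tunnel mode for (re-)negotiation */
	ecore_vf_set_vf_start_tunn_update_param(p_tun);

	/* Send CHANNEL_TLV_UPDATE_TUNN_PARAM to the PF and absorb the
	 * modes/classes the PF actually granted into the device state.
	 */
	rc = ecore_vf_pf_tunnel_param_update(p_hwfn, p_tun);
	if (rc != ECORE_SUCCESS)
		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
			   "Tunnel configuration was not accepted by the PF\n");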

* [PATCH v3 33/61] net/qede/base: formatting changes
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (32 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 32/61] net/qede/base: add tunnelling support for VFs Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 34/61] net/qede/base: prevent transmitter stuck condition Rasesh Mody
                         ` (28 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |   14 +--
 drivers/net/qede/base/mcp_public.h |  176 ++++++++++++++++++------------------
 2 files changed, 96 insertions(+), 94 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index f86f7ca..479a991 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -157,8 +157,8 @@ enum DP_MODULE {
 	ECORE_MSG_CXT		= 0x800000,
 	ECORE_MSG_LL2		= 0x1000000,
 	ECORE_MSG_ILT		= 0x2000000,
-	ECORE_MSG_RDMA          = 0x4000000,
-	ECORE_MSG_DEBUG         = 0x8000000,
+	ECORE_MSG_RDMA		= 0x4000000,
+	ECORE_MSG_DEBUG		= 0x8000000,
 	/* to be added...up to 0x8000000 */
 };
 #endif
@@ -480,7 +480,7 @@ struct ecore_hwfn {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	bool				first_on_engine;
 	bool				hw_init_done;
@@ -535,8 +535,8 @@ struct ecore_hwfn {
 	u32				rdma_prs_search_reg;
 
 	/* Array of sb_info of all status blocks */
-	struct ecore_sb_info            *sbs_info[MAX_SB_PER_PF_MIMD];
-	u16                             num_sbs;
+	struct ecore_sb_info		*sbs_info[MAX_SB_PER_PF_MIMD];
+	u16				num_sbs;
 
 	struct ecore_cxt_mngr		*p_cxt_mngr;
 
@@ -608,7 +608,7 @@ struct ecore_dev {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	u8				type;
 #define ECORE_DEV_TYPE_BB	(0 << 0)
@@ -816,7 +816,7 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 #define PQ_FLAGS_MCOS	(1 << 1)
 #define PQ_FLAGS_LB	(1 << 2)
 #define PQ_FLAGS_OOO	(1 << 3)
-#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_ACK	(1 << 4)
 #define PQ_FLAGS_OFLD	(1 << 5)
 #define PQ_FLAGS_VFS	(1 << 6)
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 969dd5a..28909fb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -586,14 +586,14 @@ struct public_port {
 	u32 link_status;
 #define LINK_STATUS_LINK_UP				0x00000001
 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001e
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD			(1 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD			(2 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_10G			(3 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_20G			(4 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_40G			(5 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_50G			(6 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_100G			(7 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_25G			(8 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(1 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(2 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_10G		(3 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_20G		(4 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_40G		(5 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_50G		(6 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_100G		(7 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_25G		(8 << 1)
 #define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
 #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
 #define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
@@ -607,10 +607,10 @@ struct public_port {
 #define LINK_STATUS_LINK_PARTNER_100G_CAPABLE		0x00008000
 #define LINK_STATUS_LINK_PARTNER_25G_CAPABLE		0x00010000
 #define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
-#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE		(0 << 18)
-#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE		(1 << 18)
-#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE		(2 << 18)
-#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE			(3 << 18)
+#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0 << 18)
+#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1 << 18)
+#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2 << 18)
+#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3 << 18)
 #define LINK_STATUS_SFP_TX_FAULT			0x00100000
 #define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00200000
 #define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00400000
@@ -619,9 +619,9 @@ struct public_port {
 #define LINK_STATUS_MAC_REMOTE_FAULT			0x02000000
 #define LINK_STATUS_UNSUPPORTED_SPD_REQ			0x04000000
 #define LINK_STATUS_FEC_MODE_MASK			0x38000000
-#define LINK_STATUS_FEC_MODE_NONE				(0 << 27)
-#define LINK_STATUS_FEC_MODE_FIRECODE_CL74			(1 << 27)
-#define LINK_STATUS_FEC_MODE_RS_CL91				(2 << 27)
+#define LINK_STATUS_FEC_MODE_NONE			(0 << 27)
+#define LINK_STATUS_FEC_MODE_FIRECODE_CL74		(1 << 27)
+#define LINK_STATUS_FEC_MODE_RS_CL91			(2 << 27)
 #define LINK_STATUS_EXT_PHY_LINK_UP			0x40000000
 
 	u32 link_status1;
@@ -762,23 +762,23 @@ struct public_port {
 	 *          When 1'b1 those bits contains a value times 16 microseconds.
 	 */
 	u32 eee_status;
-	#define EEE_TIMER_MASK		0x000fffff
-	#define EEE_ADV_STATUS_MASK	0x00f00000
-		#define EEE_1G_ADV	(1 << 1)
-		#define EEE_10G_ADV	(1 << 2)
-	#define EEE_ADV_STATUS_SHIFT	20
-	#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
-	#define EEE_LP_ADV_STATUS_SHIFT	24
-	#define EEE_REQUESTED_BIT	0x10000000
-	#define EEE_LPI_REQUESTED_BIT	0x20000000
-	#define EEE_ACTIVE_BIT		0x40000000
-	#define EEE_TIME_OUTPUT_BIT	0x80000000
+#define EEE_TIMER_MASK		0x000fffff
+#define EEE_ADV_STATUS_MASK	0x00f00000
+#define EEE_1G_ADV	(1 << 1)
+#define EEE_10G_ADV	(1 << 2)
+#define EEE_ADV_STATUS_SHIFT	20
+#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
+#define EEE_LP_ADV_STATUS_SHIFT	24
+#define EEE_REQUESTED_BIT	0x10000000
+#define EEE_LPI_REQUESTED_BIT	0x20000000
+#define EEE_ACTIVE_BIT		0x40000000
+#define EEE_TIME_OUTPUT_BIT	0x80000000
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
-	#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
-	#define EEE_REMOTE_TW_TX_SHIFT	0
-	#define EEE_REMOTE_TW_RX_MASK	0xffff0000
-	#define EEE_REMOTE_TW_RX_SHIFT	16
+#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
+#define EEE_REMOTE_TW_TX_SHIFT	0
+#define EEE_REMOTE_TW_RX_MASK	0xffff0000
+#define EEE_REMOTE_TW_RX_SHIFT	16
 };
 
 /**************************************/
@@ -1157,15 +1157,17 @@ struct public_drv_mb {
  * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
  */
 #define DRV_MSG_CODE_GET_VMAC                   0x00120000
-	#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
-	#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
-	#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
+#define DRV_MSG_CODE_VMAC_TYPE_SHIFT            4
+#define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
+#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
+#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
+#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
 /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
 #define DRV_MSG_CODE_GET_STATS                  0x00130000
-	#define DRV_MSG_CODE_STATS_TYPE_LAN             1
-	#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
-	#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
-	#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
+#define DRV_MSG_CODE_STATS_TYPE_LAN             1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
 /* Host shall provide buffer and size for MFW  */
 #define DRV_MSG_CODE_PMD_DIAG_DUMP		0x00140000
 /* Host shall provide buffer and size for MFW  */
@@ -1193,8 +1195,8 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_MASK_PARITIES		0x001a0000
 /* param[0] - Simulate fan failure,  param[1] - simulate over temp. */
 #define DRV_MSG_CODE_INDUCE_FAILURE		0x001b0000
-	#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
-	#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
+#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
+#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_READ			0x001c0000
 /* Param: [0:15] - gpio number, [16:31] - gpio value */
@@ -1215,50 +1217,50 @@ struct public_drv_mb {
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
-	/* request resource ownership with default aging */
-	#define RESOURCE_OPCODE_REQ			1
-	/* request resource ownership without aging */
-	#define RESOURCE_OPCODE_REQ_WO_AGING		2
-	/* request resource ownership with specific aging timer (in seconds) */
-	#define RESOURCE_OPCODE_REQ_W_AGING		3
-	#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
-	/* force resource release */
-	#define RESOURCE_OPCODE_FORCE_RELEASE		5
-	/* resource is free and granted to requester */
-	#define RESOURCE_OPCODE_GNT			1
-	/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
-	 * 16 = MFW, 17 = diag over serial
-	 */
-	#define RESOURCE_OPCODE_BUSY			2
-	/* indicate release request was acknowledged */
-	#define RESOURCE_OPCODE_RELEASED		3
-	/* indicate release request was previously received by other owner */
-	#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
-	/* indicate wrong owner during release */
-	#define RESOURCE_OPCODE_WRONG_OWNER		5
-	#define RESOURCE_OPCODE_UNKNOWN_CMD		255
-	/* dedicate resource 0 for dump */
-	#define RESOURCE_DUMP				0
+/* request resource ownership with default aging */
+#define RESOURCE_OPCODE_REQ			1
+/* request resource ownership without aging */
+#define RESOURCE_OPCODE_REQ_WO_AGING		2
+/* request resource ownership with specific aging timer (in seconds) */
+#define RESOURCE_OPCODE_REQ_W_AGING		3
+#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
+/* force resource release */
+#define RESOURCE_OPCODE_FORCE_RELEASE		5
+/* resource is free and granted to requester */
+#define RESOURCE_OPCODE_GNT			1
+/* resource is busy, param[7:0] indicates owner as follows: 0-15 = PF0-15,
+ * 16 = MFW, 17 = diag over serial
+ */
+#define RESOURCE_OPCODE_BUSY			2
+/* indicate release request was acknowledged */
+#define RESOURCE_OPCODE_RELEASED		3
+/* indicate release request was previously received by other owner */
+#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
+/* indicate wrong owner during release */
+#define RESOURCE_OPCODE_WRONG_OWNER		5
+#define RESOURCE_OPCODE_UNKNOWN_CMD		255
+/* dedicate resource 0 for dump */
+#define RESOURCE_DUMP				0
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-	#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
-	/* acknowledge reception of error indication */
-	#define DRV_MSG_CODE_MDUMP_ACK			0x01
-	/* set epoc and personality as follow: drv_data[3:0] - epoch,
-	 * drv_data[7:4] - personality
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
-	/* trigger crash dump procedure */
-	#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
-	/* Request valid logs and config words */
-	#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
-	/* Set triggers mask. drv_mb_param should indicate (bitwise) which
-	 * trigger enabled
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
-	/* Clear all logs */
-	#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
+/* acknowledge reception of error indication */
+#define DRV_MSG_CODE_MDUMP_ACK			0x01
+/* set epoch and personality as follows: drv_data[3:0] - epoch,
+ * drv_data[7:4] - personality
+ */
+#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
+/* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
+/* Request valid logs and config words */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
+/* Set triggers mask. drv_mb_param should indicate (bitwise) which
+ * trigger enabled
+ */
+#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
+/* Clear all logs */
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
 #define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_INFO			0x00270000
@@ -1266,12 +1268,12 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
 /* Value should be placed in union */
 #define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-	#define DRV_MB_PARAM_ADDR_SHIFT			0
-	#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
-	#define DRV_MB_PARAM_DEVAD_SHIFT		16
-	#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-	#define DRV_MB_PARAM_PORT_SHIFT			21
-	#define DRV_MB_PARAM_PORT_MASK			0x00600000
+#define DRV_MB_PARAM_ADDR_SHIFT			0
+#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
+#define DRV_MB_PARAM_DEVAD_SHIFT		16
+#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
+#define DRV_MB_PARAM_PORT_SHIFT			21
+#define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
@@ -1510,7 +1512,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
 
-/* mdump related response codes */
+	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
 #define FW_MSG_CODE_MDUMP_ALLOC_FAILED		0x00020000
 #define FW_MSG_CODE_MDUMP_INVALID_CMD		0x00030000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 34/61] net/qede/base: prevent transmitter stuck condition
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (33 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 33/61] net/qede/base: formatting changes Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 35/61] net/qede/base: add mask/shift defines for resource command Rasesh Mody
                         ` (27 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Configure the out-of-order (OOO) TC properly to prevent a transmitter
stuck condition due to credit underruns.
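
Condensed, the resulting OOO TC selection (with p_ets, qm_info and
four_port as defined in the hunks below) becomes:

	/* DCBX/MFW may publish an explicit OOO TC... */
	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);

	/* ...otherwise fall back to the per-chip default: TC3 on AH 4-port,
	 * TC4 on all other configurations.
	 */
	if (!qm_info->ooo_tc)
		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
					      DCBX_TCP_OOO_TC;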

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    4 +---
 drivers/net/qede/base/ecore_dcbx.c |    6 ++----
 drivers/net/qede/base/ecore_dev.c  |   19 ++++++++++++++-----
 drivers/net/qede/base/mcp_public.h |   12 ++++++++----
 4 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 479a991..c9b1b5a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -358,9 +358,6 @@ struct ecore_hw_info {
 
 	u8 num_active_tc;
 
-	/* Traffic class used for tcp out of order traffic */
-	u8 ooo_tc;
-
 	/* The traffic class used by PF for it's offloaded protocol */
 	u8 offload_tc;
 
@@ -441,6 +438,7 @@ struct ecore_qm_info {
 	u16			num_vf_pqs;
 	u8			num_vports;
 	u8			max_phys_tcs_per_port;
+	u8			ooo_tc;
 	bool			pf_rl_en;
 	bool			pf_wfq_en;
 	bool			vport_rl_en;
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 102774d..0e11927 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -129,11 +129,8 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality) {
+	if (p_hwfn->hw_info.personality == personality)
 		p_hwfn->hw_info.offload_tc = tc;
-		if (personality == ECORE_PCI_ISCSI)
-			p_hwfn->hw_info.ooo_tc = DCBX_ISCSI_OOO_TC;
-	}
 }
 
 /* Update app protocol data and hw_info fields with the TLV info */
@@ -317,6 +314,7 @@ ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
 
 	p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
 						    DCBX_ETS_MAX_TCS);
+	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
 	data.pf_id = p_hwfn->rel_pf_id;
 	data.dcbx_enabled = !!dcbx_version;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 21fec58..0840d49 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -291,6 +291,7 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
 static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	bool four_port;
 
 	/* pq and vport bases for this PF */
 	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
@@ -300,10 +301,19 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 	qm_info->vport_rl_en = 1;
 	qm_info->vport_wfq_en = 1;
 
+	/* TC config is different for AH 4 port */
+	four_port = p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2;
+
 	/* in AH 4 port we have fewer TCs per port */
-	qm_info->max_phys_tcs_per_port =
-		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
-			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 :
+						     NUM_OF_PHYS_TCS;
+
+	/* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and
+	 * 4 otherwise
+	 */
+	if (!qm_info->ooo_tc)
+		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
+					      DCBX_TCP_OOO_TC;
 }
 
 /* initialize qm vport params */
@@ -532,8 +542,7 @@ static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
-			 PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_pq(p_hwfn, qm_info, qm_info->ooo_tc, PQ_INIT_SHARE_VPORT);
 }
 
 static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 28909fb..bd34557 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -294,16 +294,20 @@ struct dcbx_ets_feature {
 #define DCBX_ETS_CBS_SHIFT                      3
 #define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
 #define DCBX_ETS_MAX_TCS_SHIFT                  4
-#define DCBX_ISCSI_OOO_TC_MASK			0x00000f00
-#define DCBX_ISCSI_OOO_TC_SHIFT                 8
+#define DCBX_OOO_TC_MASK                        0x00000f00
+#define DCBX_OOO_TC_SHIFT                       8
 /* Entries in tc table are organized such that the left most is prio 0,
  * right most is prio 7
  */
 
 	u32  pri_tc_tbl[1];
-#define DCBX_ISCSI_OOO_TC			(4)
+/* Fixed TCP OOO TC usage is deprecated and used only for driver backward
+ * compatibility
+ */
+#define DCBX_TCP_OOO_TC				(4)
+#define DCBX_TCP_OOO_K2_4PORT_TC		(3)
 
-#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_ISCSI_OOO_TC + 1)
+#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_TCP_OOO_TC + 1)
 #define DCBX_CEE_STRICT_PRIORITY		0xf
 /* Entries in tc table are organized such that the left most is prio 0,
  * right most is prio 7
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 35/61] net/qede/base: add mask/shift defines for resource command
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (34 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 34/61] net/qede/base: prevent transmitter stuck condition Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 36/61] net/qede/base: add API for using MFW resource lock Rasesh Mody
                         ` (26 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several mask/shift defines used to build and parse the resource
command mailbox parameters.
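
A sketch of how these defines compose the 32-bit request parameter (manual
shift-and-mask here; a follow-up patch adds the ECORE_MFW_SET_FIELD helper
for this):

	u32 param = 0;

	/* resource number in [4:0], opcode in [7:5], age (seconds) in [15:8] */
	param |= (RESOURCE_DUMP << RESOURCE_CMD_REQ_RESC_SHIFT) &
		 RESOURCE_CMD_REQ_RESC_MASK;
	param |= (RESOURCE_OPCODE_REQ_W_AGING << RESOURCE_CMD_REQ_OPCODE_SHIFT) &
		 RESOURCE_CMD_REQ_OPCODE_MASK;
	param |= (10 << RESOURCE_CMD_REQ_AGE_SHIFT) & RESOURCE_CMD_REQ_AGE_MASK;

	/* param is then sent with the DRV_MSG_CODE_RESOURCE_CMD mailbox cmd */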

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index bd34557..1b1ecd2 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1217,10 +1217,16 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_TIMESTAMP                  0x00210000
 /* This is an empty mailbox just return OK*/
 #define DRV_MSG_CODE_EMPTY_MB			0x00220000
+
 /* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
+
+#define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
+#define RESOURCE_CMD_REQ_RESC_SHIFT		0
+#define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
+#define RESOURCE_CMD_REQ_OPCODE_SHIFT		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
 /* request resource ownership without aging */
@@ -1230,6 +1236,13 @@ struct public_drv_mb {
 #define RESOURCE_OPCODE_RELEASE			4 /* release resource */
 /* force resource release */
 #define RESOURCE_OPCODE_FORCE_RELEASE		5
+#define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
+#define RESOURCE_CMD_REQ_AGE_SHIFT		8
+
+#define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
+#define RESOURCE_CMD_RSP_OWNER_SHIFT		0
+#define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
+#define RESOURCE_CMD_RSP_OPCODE_SHIFT		8
 /* resource is free and granted to requester */
 #define RESOURCE_OPCODE_GNT			1
 /* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
@@ -1243,8 +1256,10 @@ struct public_drv_mb {
 /* indicate wrong owner during release */
 #define RESOURCE_OPCODE_WRONG_OWNER		5
 #define RESOURCE_OPCODE_UNKNOWN_CMD		255
+
 /* dedicate resource 0 for dump */
 #define RESOURCE_DUMP				0
+
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 36/61] net/qede/base: add API for using MFW resource lock
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (35 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 35/61] net/qede/base: add mask/shift defines for resource command Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 37/61] net/qede/base: remove clock slowdown option Rasesh Mody
                         ` (25 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a base driver API for using the Management FW (MFW) resource lock.
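
A minimal usage sketch of the new lock/unlock pair (RESOURCE_DUMP is the
dedicated dump resource; error handling trimmed for brevity):

	bool granted = false, released = false;
	u8 owner;
	enum _ecore_status_t rc;

	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, RESOURCE_DUMP,
				 ECORE_MCP_RESC_LOCK_TO_DEFAULT,
				 &granted, &owner);
	if (rc == ECORE_SUCCESS && granted) {
		/* ...critical section, serialized across PFs and MFW... */

		rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, RESOURCE_DUMP,
					   false /* don't force */, &released);
	}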

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    9 +++
 drivers/net/qede/base/ecore_dcbx.h |    3 -
 drivers/net/qede/base/ecore_mcp.c  |  143 ++++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_mcp.h  |   41 +++++++++++
 4 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index c9b1b5a..acf2244 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -86,6 +86,15 @@ do {									\
 	(((value) >> (name##_SHIFT)) & name##_MASK)
 #endif
 
+#define ECORE_MFW_GET_FIELD(name, field)				\
+	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
+#define ECORE_MFW_SET_FIELD(name, field, value)				\
+do {									\
+	(name) &= ~((field ## _MASK) << (field ## _SHIFT));		\
+	(name) |= (((value) << (field ## _SHIFT)) & (field ## _MASK));	\
+} while (0)
+
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 {
 	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 2ce4465..0830014 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -17,9 +17,6 @@
 #include "ecore_hsi_common.h"
 #include "ecore_dcbx_api.h"
 
-#define ECORE_MFW_GET_FIELD(name, field) \
-	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
-
 struct ecore_dcbx_info {
 	struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..30cb76e 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,146 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+			   p_mcp_resp, p_mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* A zero response implies that the resource command is not supported */
+	if (!*p_mcp_resp)
+		return ECORE_NOTIMPL;
+
+	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
+		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
+
+		DP_NOTICE(p_hwfn, false,
+			  "The resource command is unknown to the MFW [param 0x%08x, opcode %d]\n",
+			  param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	switch (timeout) {
+	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
+		opcode = RESOURCE_OPCODE_REQ;
+		timeout = 0;
+		break;
+	case ECORE_MCP_RESC_LOCK_TO_NONE:
+		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
+		timeout = 0;
+		break;
+	default:
+		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		break;
+	}
+
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
+		   param, timeout, opcode, resource_num);
+
+	/* Attempt to acquire the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
+		   mcp_param, opcode, *p_owner);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_GNT:
+		*p_granted = true;
+		break;
+	case RESOURCE_OPCODE_BUSY:
+		*p_granted = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource lock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
+		       : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
+		   param, opcode, resource_num);
+
+	/* Attempt to release the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock response: mcp_param 0x%08x [opcode %d]\n",
+		   mcp_param, opcode);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
+		DP_INFO(p_hwfn,
+			"Resource unlock request for an already released resource [resc_num %d]\n",
+			resource_num);
+		/* Fallthrough */
+	case RESOURCE_OPCODE_RELEASED:
+		*p_released = true;
+		break;
+	case RESOURCE_OPCODE_WRONG_OWNER:
+		*p_released = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource unlock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 0708923..7a81516 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -361,4 +361,45 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
+#define ECORE_MCP_RESC_LOCK_TO_NONE	255
+
+/**
+ * @brief Acquires MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num - valid values are 0..31
+ *  @param timeout - lock timeout value in seconds
+ *                   (1..254, '0' - default value, '255' - no timeout).
+ *  @param p_granted - will be filled as true if the resource is free and
+ *                     granted, or false if it is busy.
+ *  @param p_owner - A pointer to a variable to be filled with the resource
+ *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner);
+
+/**
+ * @brief Releases MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num
+ *  @param force - allows releasing a resource even if it belongs to another PF
+ *  @param p_released - will be filled as true if the resource is released (or
+ *			has been already released), and false if the resource is
+ *			acquired by another PF and the `force' flag was not set.
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released);
+
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 37/61] net/qede/base: remove clock slowdown option
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (36 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 36/61] net/qede/base: add API for using MFW resource lock Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 38/61] net/qede/base: add new image types Rasesh Mody
                         ` (24 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove the clock slowdown NVM config option, as it is not supported
on current chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 4202337..4e58835 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -72,10 +72,12 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
 		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
 		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \
+								0x80000000
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \
+								0x0
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1
 	u32 engineering_change[3]; /* 0x4 */
 	u32 manufacturing_id; /* 0x10 */
 	u32 serial_number[4]; /* 0x14 */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 38/61] net/qede/base: add new image types
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (37 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 37/61] net/qede/base: remove clock slowdown option Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 39/61] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
                         ` (23 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new image types, RECOVERY and PK (Public Key), towards the second
phase of NVRAM security support.
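
A sketch of how a caller could react to the new response codes (assuming
the pre-existing FW_MSG_CODE_MASK definition; illustrative only):

	switch (fw_resp & FW_MSG_CODE_MASK) {
	case FW_MSG_CODE_NVM_FAILED_CALC_HASH:
	case FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING:
	case FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY:
		/* NVM image failed security validation */
		return ECORE_INVAL;
	case FW_MSG_CODE_RECOVERY_MODE:
		/* MFW is running in recovery mode */
		break;
	default:
		break;
	}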

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 1b1ecd2..d3cbc96 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1502,6 +1502,10 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK	0x00400000
 /* MFW reject "mcp reset" command if one of the drivers is up */
 #define FW_MSG_CODE_MCP_RESET_REJECT		0x00600000
+#define FW_MSG_CODE_NVM_FAILED_CALC_HASH	0x00310000
+#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING	0x00320000
+#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY	0x00330000
+
 #define FW_MSG_CODE_PHY_OK			0x00110000
 #define FW_MSG_CODE_PHY_ERROR			0x00120000
 #define FW_MSG_CODE_SET_SECURE_MODE_ERROR	0x00130000
@@ -1530,6 +1534,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE	0x00710000
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
+#define FW_MSG_CODE_RECOVERY_MODE		0x00740000
 
 	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 39/61] net/qede/base: use L2-handles for RSS configuration
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (38 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 38/61] net/qede/base: add new image types Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 40/61] net/qede/base: change valloc to vzalloc Rasesh Mody
                         ` (22 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the RSS configuration to use L2 queue handles instead of queue IDs.
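
In practice the caller now fills the indirection table with rx queue
handles rather than absolute queue IDs, as the qede_ethdev.c hunk below
does:

	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
		idx = qdev->rss_ind_table[i];
		/* pass the rxq L2 handle instead of the queue id */
		rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
	}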

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c     |   48 ++++++++++++++++++-------
 drivers/net/qede/base/ecore_l2.h     |    2 ++
 drivers/net/qede/base/ecore_l2_api.h |    4 ++-
 drivers/net/qede/base/ecore_sriov.c  |   66 +++++++++++++++++++++-------------
 drivers/net/qede/base/ecore_vf.c     |   13 +++++--
 drivers/net/qede/qede_ethdev.c       |   19 ++++++----
 6 files changed, 105 insertions(+), 47 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 352620a..2635213 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -59,6 +59,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->cid = cid;
 	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
+	p_cid->p_owner = p_hwfn;
 
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
@@ -267,10 +268,9 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 			  struct vport_update_ramrod_data *p_ramrod,
 			  struct ecore_rss_params *p_rss)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct eth_vport_rss_config *p_config;
-	u16 abs_l2_queue = 0;
-	int i;
+	int i, table_size;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (!p_rss) {
 		p_ramrod->common.update_rss_flg = 0;
@@ -324,16 +324,40 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 		   p_config->capabilities,
 		   p_config->update_rss_ind_table, p_config->update_rss_key);
 
-	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		rc = ecore_fw_l2_queue(p_hwfn,
-				       p_rss->rss_ind_table[i],
-				       &abs_l2_queue);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+	table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE,
+				1 << p_config->tbl_size);
+	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_queue = p_rss->rss_ind_table[i];
 
-		p_config->indirection_table[i] = OSAL_CPU_TO_LE16(abs_l2_queue);
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "i= %d, queue = %d\n",
-			   i, p_config->indirection_table[i]);
+		if (!p_queue)
+			return ECORE_INVAL;
+
+		p_config->indirection_table[i] =
+				OSAL_CPU_TO_LE16(p_queue->abs.queue_id);
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "Configured RSS indirection table [%d entries]:\n",
+		   table_size);
+	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i += 0x10) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "%04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x\n",
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 1]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 2]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 3]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 4]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 5]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 6]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
+			 OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
 	}
 
 	for (i = 0; i < 10; i++)
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index c136389..4b0ccb4 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -36,6 +36,8 @@ struct ecore_queue_cid {
 
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
+
+	struct ecore_hwfn *p_owner;
 };
 
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index af316d3..5a7db76 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -59,7 +59,9 @@ struct ecore_rss_params {
 	u8 update_rss_key;
 	u8 rss_caps;
 	u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
-	u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
+
+	/* Indirection table consist of rx queue handles */
+	void *rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	u32 rss_key[ECORE_RSS_KEY_SIZE];
 };
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6cec7b2..280c992 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2704,12 +2704,14 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			      struct ecore_vf_info *vf,
 			      struct ecore_sp_vport_update_params *p_data,
 			      struct ecore_rss_params *p_rss,
-			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+			      struct ecore_iov_vf_mbx *p_mbx,
+			      u16 *tlvs_mask, u16 *tlvs_accepted)
 {
 	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
-	u16 i, q_idx, max_q_idx;
+	bool b_reject = false;
 	u16 table_size;
+	u16 i, q_idx;
 
 	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
 	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
@@ -2737,36 +2739,38 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 	p_rss->rss_eng_id = vf->relative_vf_id + 1;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
-	OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
-		    sizeof(p_rss->rss_ind_table));
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
 		    sizeof(p_rss->rss_key));
 
 	table_size = OSAL_MIN_T(u16, OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
 				(1 << p_rss_tlv->rss_table_size_log));
 
-	max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
-
 	for (i = 0; i < table_size; i++) {
-		u16 index = vf->vf_queues[0].fw_rx_qid;
+		q_idx = p_rss_tlv->rss_ind_table[i];
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
 
-		q_idx = p_rss->rss_ind_table[i];
-		if (q_idx >= max_q_idx)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d,"
-				  " rxq is out of range\n",
-				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].p_rx_cid)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d, rxq is not active\n",
-				  i, q_idx);
-		else
-			index = vf->vf_queues[q_idx].fw_rx_qid;
-		p_rss->rss_ind_table[i] = index;
+		if (!vf->vf_queues[q_idx].p_rx_cid) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
+
+		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
 	}
 
 	p_data->rss_params = p_rss;
+out:
 	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+	if (!b_reject)
+		*tlvs_accepted |= 1 << ECORE_IOV_VP_UPDATE_RSS;
 }
 
 static void
@@ -2822,11 +2826,11 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  struct ecore_vf_info *vf)
 {
+	struct ecore_rss_params *p_rss_params = OSAL_NULL;
 	struct ecore_sp_vport_update_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct ecore_sge_tpa_params sge_tpa_params;
 	u16 tlvs_mask = 0, tlvs_accepted = 0;
-	struct ecore_rss_params rss_params;
 	u8 status = PFVF_STATUS_SUCCESS;
 	u16 length;
 	enum _ecore_status_t rc;
@@ -2841,6 +2845,12 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
+	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	if (p_rss_params == OSAL_NULL) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.opaque_fid = vf->opaque_fid;
 	params.vport_id = vf->vport_id;
@@ -2854,19 +2864,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
-	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
-				      mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
 					  &sge_tpa_params, mbx, &tlvs_mask);
 
+	tlvs_accepted = tlvs_mask;
+
+	/* Some of the extended TLVs need to be validated first; In that case,
+	 * they can update the mask without updating the accepted [so that
+	 * PF could communicate to VF it has rejected request].
+	 */
+	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, p_rss_params,
+				      mbx, &tlvs_mask, &tlvs_accepted);
+
 	/* Just log a message if there is no single extended tlv in buffer.
 	 * When all features of vport update ramrod would be requested by VF
 	 * as extended TLVs in buffer then an error can be returned in response
 	 * if there is no extended TLV present in buffer.
 	 */
-	tlvs_accepted = tlvs_mask;
-
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2894,6 +2909,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_FAILURE;
 
 out:
+	OSAL_VFREE(p_hwfn->p_dev, p_rss_params);
 	length = ecore_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
 						    tlvs_mask, tlvs_accepted);
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 3182621..a072a81 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1132,6 +1132,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 	if (p_params->rss_params) {
 		struct ecore_rss_params *rss_params = p_params->rss_params;
 		struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+		int i, table_size;
 
 		size = sizeof(struct vfpf_vport_update_rss_tlv);
 		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -1153,8 +1154,16 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		p_rss_tlv->rss_enable = rss_params->rss_enable;
 		p_rss_tlv->rss_caps = rss_params->rss_caps;
 		p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
-		OSAL_MEMCPY(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
-			    sizeof(rss_params->rss_ind_table));
+
+		table_size = OSAL_MIN_T(int, T_ETH_INDIRECTION_TABLE_SIZE,
+					1 << p_rss_tlv->rss_table_size_log);
+		for (i = 0; i < table_size; i++) {
+			struct ecore_queue_cid *p_queue;
+
+			p_queue = rss_params->rss_ind_table[i];
+			p_rss_tlv->rss_ind_table[i] = p_queue->rel.queue_id;
+		}
+
 		OSAL_MEMCPY(p_rss_tlv->rss_key, rss_params->rss_key,
 			    sizeof(rss_params->rss_key));
 	}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 257e5b2..bd190d0 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1487,11 +1487,11 @@ static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct ecore_sp_vport_update_params vport_update_params;
 	struct ecore_rss_params rss_params;
-	struct ecore_rss_params params;
 	struct ecore_hwfn *p_hwfn;
 	uint32_t *key = (uint32_t *)rss_conf->rss_key;
 	uint64_t hf = rss_conf->rss_hf;
 	uint8_t len = rss_conf->rss_key_len;
+	uint8_t idx;
 	uint8_t i;
 	int rc;
 
@@ -1526,6 +1526,11 @@ static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
 	/* tbl_size has to be set with capabilities */
 	rss_params.rss_table_size_log = 7;
 	vport_update_params.vport_id = 0;
+	/* pass the L2 handles instead of qids */
+	for (i = 0 ; i < ECORE_RSS_IND_TABLE_SIZE ; i++) {
+		idx = qdev->rss_ind_table[i];
+		rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
+	}
 	vport_update_params.rss_params = &rss_params;
 
 	for_each_hwfn(edev, i) {
@@ -1607,14 +1612,18 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = reta_conf[idx].reta[shift];
-			params.rss_ind_table[i] = entry;
+			/* Pass rxq handles to ecore */
+			params.rss_ind_table[i] =
+					qdev->fp_array[entry].rxq->handle;
+			/* Update the local copy for RETA query command */
+			qdev->rss_ind_table[i] = entry;
 		}
 	}
 
 	/* Fix up RETA for CMT mode device */
 	if (edev->num_hwfns > 1)
 		qdev->rss_enable = qed_update_rss_parm_cmt(edev,
-					&params.rss_ind_table[0]);
+					params.rss_ind_table[0]);
 	params.update_rss_ind_table = 1;
 	params.rss_table_size_log = 7;
 	params.update_rss_config = 1;
@@ -1634,10 +1643,6 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		}
 	}
 
-	/* Update the local copy for RETA query command */
-	memcpy(qdev->rss_ind_table, params.rss_ind_table,
-	       sizeof(params.rss_ind_table));
-
 	return 0;
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 40/61] net/qede/base: change valloc to vzalloc
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (39 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 39/61] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
                         ` (21 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change OSAL_VALLOC() into OSAL_VZALLOC(), which also zeroes the allocated
memory. This lets callers drop their explicit OSAL_MEM_ZERO()/OSAL_MEMSET()
calls after allocation.
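
As an illustration, a minimal sketch of the resulting allocation pattern,
assuming the DPDK rte_zmalloc() allocator that the qede OSAL wrapper forwards
to; the structure and function names here are hypothetical:

#include <rte_malloc.h>

struct example_ctx {
	int id;
	void *buf;
};

static struct example_ctx *alloc_example_ctx(void)
{
	/* Before: OSAL_VALLOC() followed by an explicit OSAL_MEM_ZERO().
	 * After: rte_zmalloc() returns memory that is already zeroed, so
	 * the separate zeroing step is simply dropped.
	 */
	return rte_zmalloc("qede", sizeof(struct example_ctx), 0);
}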

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    2 +-
 drivers/net/qede/base/ecore_dev.c     |    3 +--
 drivers/net/qede/base/ecore_l2.c      |    3 +--
 drivers/net/qede/base/ecore_mng_tlv.c |    5 ++---
 drivers/net/qede/base/ecore_sriov.c   |    2 +-
 5 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 4c91dc0..052a0cf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -89,7 +89,7 @@ typedef int bool;
 #define OSAL_ALLOC(dev, GFP, size) rte_malloc("qede", size, 0)
 #define OSAL_ZALLOC(dev, GFP, size) rte_zmalloc("qede", size, 0)
 #define OSAL_CALLOC(dev, GFP, num, size) rte_calloc("qede", num, size, 0)
-#define OSAL_VALLOC(dev, size) rte_malloc("qede", size, 0)
+#define OSAL_VZALLOC(dev, size) rte_zmalloc("qede", size, 0)
 #define OSAL_FREE(dev, memory)		  \
 	do {				  \
 		rte_free((void *)memory); \
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0840d49..6d75e60 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3717,13 +3717,12 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	u32 page_cnt = p_chain->page_cnt, size, i;
 
 	size = page_cnt * sizeof(*pp_virt_addr_tbl);
-	pp_virt_addr_tbl = (void **)OSAL_VALLOC(p_dev, size);
+	pp_virt_addr_tbl = (void **)OSAL_VZALLOC(p_dev, size);
 	if (!pp_virt_addr_tbl) {
 		DP_NOTICE(p_dev, true,
 			  "Failed to allocate memory for the chain virtual addresses table\n");
 		return ECORE_NOMEM;
 	}
-	OSAL_MEM_ZERO(pp_virt_addr_tbl, size);
 
 	/* The allocation of the PBL table is done with its full size, since it
 	 * is expected to be successive.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 2635213..4d26e19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -50,10 +50,9 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
 	if (p_cid == OSAL_NULL)
 		return OSAL_NULL;
-	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index 0065d12..0bf1be8 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1413,11 +1413,10 @@ ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
 	u32 offset;
 	int len;
 
-	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
 	if (!p_tlv_data)
 		return ECORE_NOMEM;
 
-	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
 	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
 		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
 		return ECORE_INVAL;
@@ -1487,7 +1486,7 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		goto drv_done;
 	}
 
-	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size);
 	if (!p_mfw_buf) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed allocate memory for p_mfw_buf\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 280c992..aab9925 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2845,7 +2845,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	p_rss_params = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
 	if (p_rss_params == OSAL_NULL) {
 		status = PFVF_STATUS_FAILURE;
 		goto out;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 41/61] net/qede/base: add support for previous driver unload
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (40 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 40/61] net/qede/base: change valloc to vzalloc Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24 11:00         ` Ferruh Yigit
  2017-03-24  7:28       ` [PATCH v3 42/61] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
                         ` (20 subsequent siblings)
  62 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a new driver/management FW load request sequence for handling a
previous driver instance that was not unloaded gracefully. The MFW now
reports the existing driver's role and versions, and the new driver
either force-loads over it or cancels its own request, depending on the
roles involved.
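
The heart of the new sequence is a role-based force-load policy. Below is a
standalone sketch of that decision, modelled on ecore_mcp_can_force_load()
from this patch; the enum values mirror the DRV_ROLE_* constants, while the
rest is illustrative scaffolding, not driver code:

#include <stdbool.h>
#include <stdio.h>

enum drv_role { ROLE_NONE, ROLE_PREBOOT, ROLE_OS, ROLE_KDUMP };

/* An OS driver may force out a leftover preboot driver, and a kdump
 * kernel may force out a regular OS driver; any other combination is
 * refused and the load request is cancelled instead.
 */
static bool can_force_load(enum drv_role req, enum drv_role exist)
{
	return (req == ROLE_OS && exist == ROLE_PREBOOT) ||
	       (req == ROLE_KDUMP && exist == ROLE_OS);
}

int main(void)
{
	printf("OS over PREBOOT: %d\n", can_force_load(ROLE_OS, ROLE_PREBOOT));
	printf("OS over OS:      %d\n", can_force_load(ROLE_OS, ROLE_OS));
	return 0;
}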

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 ++
 drivers/net/qede/base/ecore_dev.c     |   43 ++--
 drivers/net/qede/base/ecore_dev_api.h |   30 ++-
 drivers/net/qede/base/ecore_mcp.c     |  369 ++++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.h     |   40 ++--
 drivers/net/qede/base/mcp_public.h    |   56 ++++-
 drivers/net/qede/qede_main.c          |    2 +
 7 files changed, 482 insertions(+), 71 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index acf2244..60a8a6b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,6 +28,19 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
+#define ECORE_MAJOR_VERSION		8
+#define ECORE_MINOR_VERSION		18
+#define ECORE_REVISION_VERSION		7
+#define ECORE_ENGINEERING_VERSION	0
+
+#define ECORE_VERSION							\
+	((ECORE_MAJOR_VERSION << 24) | (ECORE_MINOR_VERSION << 16) |	\
+	 (ECORE_REVISION_VERSION << 8) | ECORE_ENGINEERING_VERSION)
+
+#define STORM_FW_VERSION						\
+	((FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) |	\
+	 (FW_REVISION_VERSION << 8) | FW_ENGINEERING_VERSION)
+
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define ECORE_WFQ_UNIT	100
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 6d75e60..29dd292 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1901,10 +1901,11 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
+	bool b_default_mtu = true;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1943,17 +1944,25 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		/* @@@TBD need to add here:
-		 * Check for fan failure
-		 * Prev_unload
-		 */
-		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_code);
-		if (rc) {
+		OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
+		load_req_params.drv_role = p_params->is_crash_kernel ?
+					   ECORE_DRV_ROLE_KDUMP :
+					   ECORE_DRV_ROLE_OS;
+		load_req_params.timeout_val = p_params->mfw_timeout_val;
+		load_req_params.avoid_eng_reset = p_params->avoid_eng_reset;
+		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt,
+					&load_req_params);
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_REQ command\n");
+				  "Failed sending a LOAD_REQ command\n");
 			return rc;
 		}
 
+		load_code = load_req_params.load_code;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load request was sent. Load code: 0x%x\n",
+			   load_code);
+
 		/* CQ75580:
 		 * When coming back from hibernate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
@@ -1966,10 +1975,6 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 */
 		ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "Load request was sent. Resp:0x%x, Load code: 0x%x\n",
-			   rc, load_code);
-
 		/* Only relevant for recovery:
 		 * Clear the indication after the LOAD_REQ command is responded
 		 * by the MFW.
@@ -1988,13 +1993,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		case FW_MSG_CODE_DRV_LOAD_ENGINE:
 			rc = ecore_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
 						  p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_PORT:
 			rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt,
 						p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_FUNCTION:
@@ -2006,6 +2011,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 					      p_params->allow_npar_tx_switch);
 			break;
 		default:
+			DP_NOTICE(p_hwfn, false,
+				  "Unexpected load code [0x%08x]", load_code);
 			rc = ECORE_NOTIMPL;
 			break;
 		}
@@ -2021,6 +2028,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				       0, &load_code, &param);
 		if (rc != ECORE_SUCCESS)
 			return rc;
+
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
 				  "Failed sending LOAD_DONE command\n");
@@ -2045,10 +2053,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 	if (IS_PF(p_dev)) {
 		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-		drv_mb_param = (FW_MAJOR_VERSION << 24) |
-			       (FW_MINOR_VERSION << 16) |
-			       (FW_REVISION_VERSION << 8) |
-			       (FW_ENGINEERING_VERSION);
+		drv_mb_param = STORM_FW_VERSION;
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 356c5e4..7e90778 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -58,16 +58,38 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev);
 void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
-	/* tunnelling parameters */
+	/* Tunnelling parameters */
 	struct ecore_tunnel_info *p_tunn;
+
 	bool b_hw_start;
-	/* interrupt mode [msix, inta, etc.] to use */
+
+	/* Interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
-/* npar tx switching to be used for vports configured for tx-switching */
 
+	/* NPAR tx switching to be used for vports configured for tx-switching
+	 */
 	bool allow_npar_tx_switch;
-	/* binary fw data pointer in binary fw file */
+
+	/* Binary fw data pointer in binary fw file */
 	const u8 *bin_fw_data;
+
+	/* Indicates whether the driver is running over a crash kernel.
+	 * As part of the load request, this will be used for providing the
+	 * driver role to the MFW.
+	 * In case of a crash kernel over PDA - this should be set to false.
+	 */
+	bool is_crash_kernel;
+
+	/* The timeout value that the MFW should use when locking the engine for
+	 * the driver load process.
+	 * A value of '0' means the default value, and '255' means no timeout.
+	 */
+	u8 mfw_timeout_val;
+#define ECORE_LOAD_REQ_LOCK_TO_DEFAULT	0
+#define ECORE_LOAD_REQ_LOCK_TO_NONE	255
+
+	/* Avoid engine reset when first PF loads on it */
+	bool avoid_eng_reset;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 30cb76e..6c5b5db 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -518,51 +518,368 @@ static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static bool ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role)
+{
+	return (drv_role == DRV_ROLE_OS &&
+		exist_drv_role == DRV_ROLE_PREBOOT) ||
+	       (drv_role == DRV_ROLE_KDUMP && exist_drv_role == DRV_ROLE_OS);
+}
+
+static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0,
+			   &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send cancel load request, rc = %d\n", rc);
+
+	return rc;
+}
+
+#define CONFIG_ECORE_L2_BITMAP_IDX	(0x1 << 0)
+#define CONFIG_ECORE_SRIOV_BITMAP_IDX	(0x1 << 1)
+#define CONFIG_ECORE_ROCE_BITMAP_IDX	(0x1 << 2)
+#define CONFIG_ECORE_IWARP_BITMAP_IDX	(0x1 << 3)
+#define CONFIG_ECORE_FCOE_BITMAP_IDX	(0x1 << 4)
+#define CONFIG_ECORE_ISCSI_BITMAP_IDX	(0x1 << 5)
+#define CONFIG_ECORE_LL2_BITMAP_IDX	(0x1 << 6)
+
+static u32 ecore_get_config_bitmap(void)
+{
+	u32 config_bitmap = 0x0;
+
+#ifdef CONFIG_ECORE_L2
+	config_bitmap |= CONFIG_ECORE_L2_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_SRIOV
+	config_bitmap |= CONFIG_ECORE_SRIOV_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ROCE
+	config_bitmap |= CONFIG_ECORE_ROCE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_IWARP
+	config_bitmap |= CONFIG_ECORE_IWARP_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_FCOE
+	config_bitmap |= CONFIG_ECORE_FCOE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ISCSI
+	config_bitmap |= CONFIG_ECORE_ISCSI_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_LL2
+	config_bitmap |= CONFIG_ECORE_LL2_BITMAP_IDX;
+#endif
+
+	return config_bitmap;
+}
+
+struct ecore_load_req_in_params {
+	u8 hsi_ver;
+#define ECORE_LOAD_REQ_HSI_VER_DEFAULT	0
+#define ECORE_LOAD_REQ_HSI_VER_1	1
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u8 drv_role;
+	u8 timeout_val;
+	u8 force_cmd;
+	bool avoid_eng_reset;
+};
+
+struct ecore_load_req_out_params {
+	u32 load_code;
+	u32 exist_drv_ver_0;
+	u32 exist_drv_ver_1;
+	u32 exist_fw_ver;
+	u8 exist_drv_role;
+	u8 mfw_hsi_ver;
+	bool drv_exists;
+};
+
+static enum _ecore_status_t
+__ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_load_req_in_params *p_in_params,
+		     struct ecore_load_req_out_params *p_out_params)
+{
+	union drv_union_data union_data_src, union_data_dst;
+	struct ecore_mcp_mb_params mb_params;
+	struct load_req_stc *p_load_req;
+	struct load_rsp_stc *p_load_rsp;
+	u32 hsi_ver;
+	enum _ecore_status_t rc;
+
+	p_load_req = &union_data_src.load_req;
+	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
+	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
+	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
+	p_load_req->fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+			    p_in_params->drv_role);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+			    p_in_params->timeout_val);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+			    p_in_params->force_cmd);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+			    p_in_params->avoid_eng_reset);
+
+	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
+		  DRV_ID_MCP_HSI_VER_CURRENT :
+		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_SHIFT);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
+	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
+	mb_params.p_data_src = &union_data_src;
+	mb_params.p_data_dst = &union_data_dst;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+		   mb_params.param,
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
+			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
+			   p_load_req->fw_ver, p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_LOCK_TO),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FLAGS0));
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send load request, rc = %d\n", rc);
+		return rc;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Response: resp 0x%08x\n", mb_params.mcp_resp);
+	p_out_params->load_code = mb_params.mcp_resp;
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		p_load_rsp = &union_data_dst.load_rsp;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
+			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
+			   p_load_rsp->fw_ver, p_load_rsp->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_FLAGS0));
+
+		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
+		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
+		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_role =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+		p_out_params->mfw_hsi_ver =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+		p_out_params->drv_exists =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					    LOAD_RSP_FLAGS0) &
+			LOAD_RSP_FLAGS0_DRV_EXISTS;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
+						   enum ecore_drv_role drv_role,
+						   u8 *p_mfw_drv_role)
+{
+	switch (drv_role) {
+	case ECORE_DRV_ROLE_OS:
+		*p_mfw_drv_role = DRV_ROLE_OS;
+		break;
+	case ECORE_DRV_ROLE_KDUMP:
+		*p_mfw_drv_role = DRV_ROLE_KDUMP;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected driver role %d\n", drv_role);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum ecore_load_req_force {
+	ECORE_LOAD_REQ_FORCE_NONE,
+	ECORE_LOAD_REQ_FORCE_PF,
+	ECORE_LOAD_REQ_FORCE_ALL,
+};
+
+static enum _ecore_status_t
+ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
+			enum ecore_load_req_force force_cmd,
+			u8 *p_mfw_force_cmd)
+{
+	switch (force_cmd) {
+	case ECORE_LOAD_REQ_FORCE_NONE:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_NONE;
+		break;
+	case ECORE_LOAD_REQ_FORCE_PF:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_PF;
+		break;
+	case ECORE_LOAD_REQ_FORCE_ALL:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_ALL;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected force value %d\n", force_cmd);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code)
+					struct ecore_load_req_params *p_params)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	struct ecore_mcp_mb_params mb_params;
+	struct ecore_load_req_out_params out_params;
+	struct ecore_load_req_in_params in_params;
+	u8 mfw_drv_role, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, p_load_code);
+		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
 		return ECORE_SUCCESS;
 	}
 #endif
 
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
-	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
-			  p_dev->drv_type;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
+	in_params.drv_ver_0 = ECORE_VERSION;
+	in_params.drv_ver_1 = ecore_get_config_bitmap();
+	in_params.fw_ver = STORM_FW_VERSION;
+	rc = ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	/* if mcp fails to respond we must abort */
-	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+	in_params.drv_role = mfw_drv_role;
+	in_params.timeout_val = p_params->timeout_val;
+	rc = ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
+				     &mfw_force_cmd);
+	if (rc != ECORE_SUCCESS)
 		return rc;
-	}
 
-	*p_load_code = mb_params.mcp_resp;
+	in_params.force_cmd = mfw_force_cmd;
+	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
-	/* If MFW refused (e.g. other port is in diagnostic mode) we
-	 * must abort. This can happen in the following cases:
-	 * - Other port is in diagnostic mode
-	 * - Previously loaded function on the engine is not compliant with
-	 *   the requester.
-	 * - MFW cannot cope with the requester's DRV_MFW_HSI_VERSION.
-	 *      -
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params, &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* First handle cases where another load request should/might be sent:
+	 * - MFW expects the old interface [HSI version = 1]
+	 * - MFW responds that a force load request is required
 	 */
-	if (!(*p_load_code) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_PDA) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG)) {
-		DP_ERR(p_hwfn, "MCP refused load request, aborting\n");
+	if (out_params.load_code == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		DP_INFO(p_hwfn,
+			"MFW refused a load request due to HSI > 1. Resending with HSI = 1.\n");
+
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_1;
+		OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+		rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+					  &out_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	} else if (out_params.load_code ==
+		   FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE) {
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		if (ecore_mcp_can_force_load(in_params.drv_role,
+					     out_params.exist_drv_role)) {
+			DP_INFO(p_hwfn,
+				"A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Sending a force load request.\n",
+				out_params.exist_drv_role,
+				out_params.exist_fw_ver,
+				out_params.exist_drv_ver_0,
+				out_params.exist_drv_ver_1);
+
+			rc = ecore_get_mfw_force_cmd(p_hwfn,
+						     ECORE_LOAD_REQ_FORCE_ALL,
+						     &mfw_force_cmd);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+
+			in_params.force_cmd = mfw_force_cmd;
+			OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+			rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+						  &out_params);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		} else {
+			DP_NOTICE(p_hwfn, false,
+				  "A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Avoiding to prevent disruption of active PFs.\n",
+				  out_params.exist_drv_role,
+				  out_params.exist_fw_ver,
+				  out_params.exist_drv_ver_0,
+				  out_params.exist_drv_ver_1);
+
+			ecore_mcp_cancel_load_req(p_hwfn, p_ptt);
+			return ECORE_BUSY;
+		}
+	}
+
+	/* Now handle the other types of responses.
+	 * The "REFUSED_HSI_1" and "REFUSED_REQUIRES_FORCE" responses are not
+	 * expected here after the additional revised load requests were sent.
+	 */
+	switch (out_params.load_code) {
+	case FW_MSG_CODE_DRV_LOAD_ENGINE:
+	case FW_MSG_CODE_DRV_LOAD_PORT:
+	case FW_MSG_CODE_DRV_LOAD_FUNCTION:
+		if (out_params.mfw_hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+		    out_params.drv_exists) {
+			/* The role and fw/driver version match, but the PF is
+			 * already loaded and has not been unloaded gracefully.
+			 * This is unexpected since a quasi-FLR request was
+			 * previously sent as part of ecore_hw_prepare().
+			 */
+			DP_NOTICE(p_hwfn, false,
+				  "PF is already loaded - shouldn't have got here since a quasi-FLR request was previously sent!\n");
+			return ECORE_INVAL;
+		}
+		break;
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_PDA:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_HSI:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT:
+		DP_NOTICE(p_hwfn, false,
+			  "MFW refused a load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
 		return ECORE_BUSY;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected response to load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
+		break;
 	}
 
+	p_params->load_code = out_params.load_code;
+
 	return ECORE_SUCCESS;
 }
 
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 7a81516..4138a12 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -136,32 +136,36 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn - hw function
  * @param p_ptt - PTT required for register access
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation
- * was successul.
+ * was successful.
  */
 enum _ecore_status_t ecore_issue_pulse(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt);
 
+enum ecore_drv_role {
+	ECORE_DRV_ROLE_OS,
+	ECORE_DRV_ROLE_KDUMP,
+};
+
+struct ecore_load_req_params {
+	enum ecore_drv_role drv_role;
+	u8 timeout_val; /* 1..254, '0' - default value, '255' - no timeout */
+	bool avoid_eng_reset;
+	u32 load_code;
+};
+
 /**
- * @brief Sends a LOAD_REQ to the MFW, and in case operation
- *        succeed, returns whether this PF is the first on the
- *        chip/engine/port or function. This function should be
- *        called when driver is ready to accept MFW events after
- *        Storms initializations are done.
- *
- * @param p_hwfn       - hw function
- * @param p_ptt        - PTT required for register access
- * @param p_load_code  - The MCP response param containing one
- *      of the following:
- *      FW_MSG_CODE_DRV_LOAD_ENGINE
- *      FW_MSG_CODE_DRV_LOAD_PORT
- *      FW_MSG_CODE_DRV_LOAD_FUNCTION
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successul.
- *      ECORE_BUSY - Operation failed
+ * @brief Sends a LOAD_REQ to the MFW, and in case the operation succeeds,
+ *        returns whether this PF is the first on the engine/port or function.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_params
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
  */
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code);
+					struct ecore_load_req_params *p_params);
 
 /**
  * @brief Read the MFW mailbox into Current buffer.
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index d3cbc96..7f94ba1 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -878,9 +878,11 @@ struct public_func {
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
 #define DRV_ID_PDA_COMP_VER_SHIFT	0
 
+#define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
 #define DRV_ID_MCP_HSI_VER_SHIFT	16
-#define DRV_ID_MCP_HSI_VER_CURRENT	(1 << DRV_ID_MCP_HSI_VER_SHIFT)
+#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
+					 DRV_ID_MCP_HSI_VER_SHIFT)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
 #define DRV_ID_DRV_TYPE_SHIFT		24
@@ -1040,8 +1042,47 @@ struct resource_info {
 #define RESOURCE_ELEMENT_STRICT (1 << 0)
 };
 
+#define DRV_ROLE_NONE		0
+#define DRV_ROLE_PREBOOT	1
+#define DRV_ROLE_OS		2
+#define DRV_ROLE_KDUMP		3
+
+struct load_req_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_REQ_ROLE_MASK		0x000000FF
+#define LOAD_REQ_ROLE_SHIFT		0
+#define LOAD_REQ_LOCK_TO_MASK		0x0000FF00
+#define LOAD_REQ_LOCK_TO_SHIFT		8
+#define LOAD_REQ_LOCK_TO_DEFAULT	0
+#define LOAD_REQ_LOCK_TO_NONE		255
+#define LOAD_REQ_FORCE_MASK		0x000F0000
+#define LOAD_REQ_FORCE_SHIFT		16
+#define LOAD_REQ_FORCE_NONE		0
+#define LOAD_REQ_FORCE_PF		1
+#define LOAD_REQ_FORCE_ALL		2
+#define LOAD_REQ_FLAGS0_MASK		0x00F00000
+#define LOAD_REQ_FLAGS0_SHIFT		20
+#define LOAD_REQ_FLAGS0_AVOID_RESET	(0x1 << 0)
+};
+
+struct load_rsp_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_RSP_ROLE_MASK		0x000000FF
+#define LOAD_RSP_ROLE_SHIFT		0
+#define LOAD_RSP_HSI_MASK		0x0000FF00
+#define LOAD_RSP_HSI_SHIFT		8
+#define LOAD_RSP_FLAGS0_MASK		0x000F0000
+#define LOAD_RSP_FLAGS0_SHIFT		16
+#define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
+};
+
 union drv_union_data {
-	u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD];    /* LOAD_REQ */
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
 /* This configuration should be set by the driver for the LINK_SET command. */
@@ -1068,6 +1109,9 @@ union drv_union_data {
 	struct bist_nvm_image_att nvm_image_att;
 	struct mdump_config_stc mdump_config;
 	u32 dword;
+
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	/* ... */
 };
 
@@ -1077,6 +1121,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_LOAD_REQ                   0x10000000
 #define DRV_MSG_CODE_LOAD_DONE                  0x11000000
 #define DRV_MSG_CODE_INIT_HW                    0x12000000
+#define DRV_MSG_CODE_CANCEL_LOAD_REQ            0x13000000
 #define DRV_MSG_CODE_UNLOAD_REQ		        0x20000000
 #define DRV_MSG_CODE_UNLOAD_DONE                0x21000000
 #define DRV_MSG_CODE_INIT_PHY			0x22000000
@@ -1448,8 +1493,11 @@ struct public_drv_mb {
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA        0x10200000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10210000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1      0x10210000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG       0x10220000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10230000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT     0x10310000
 #define FW_MSG_CODE_DRV_LOAD_DONE               0x11100000
 #define FW_MSG_CODE_DRV_UNLOAD_ENGINE           0x20110000
 #define FW_MSG_CODE_DRV_UNLOAD_PORT             0x20120000
@@ -1547,7 +1595,7 @@ struct public_drv_mb {
 
 
 	u32 fw_mb_param;
-	/* Resource Allocation params - MFW  version support*/
+/* Resource Allocation params - MFW version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 5c79055..326e56f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -276,6 +276,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
 	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
 	hw_init_params.bin_fw_data = data;
+	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
+	hw_init_params.avoid_eng_reset = false;
 	rc = ecore_hw_init(edev, &hw_init_params);
 	if (rc) {
 		DP_ERR(edev, "ecore_hw_init failed\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 42/61] net/qede/base: add non-L2 dcbx tlv application support
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (41 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 43/61] net/qede/base: update bulletin board during VF init Rasesh Mody
                         ` (19 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for non-L2 DCBX TLV applications. An application TLV is
matched against the TCP port configured for iWARP traffic, so iWARP
receives its DCBX protocol data alongside the L2 protocols.
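
For illustration, a standalone sketch of the port-based match this adds,
modelled on ecore_dcbx_iwarp_tlv(); the bitmap test below is a hypothetical
stand-in for the real DCBX_APP_SF_* field checks:

#include <stdbool.h>
#include <stdint.h>

/* Stand-in for ecore_dcbx_ieee_app_port()/ecore_dcbx_app_port(). */
static bool app_entry_is_tcp_port(uint32_t app_bitmap, bool ieee)
{
	return ieee ? (app_bitmap & 0x1) : (app_bitmap & 0x2);
}

/* An application TLV selects iWARP traffic only when iWARP is enabled
 * (a non-zero port was configured via rdma_pf_params.iwarp_port) and
 * the TLV is a TCP-port entry matching that port.
 */
static bool iwarp_tlv_matches(uint32_t app_bitmap, uint16_t proto_id,
			      bool ieee, uint16_t iwarp_port)
{
	if (!iwarp_port)
		return false;

	return app_entry_is_tcp_port(app_bitmap, ieee) &&
	       proto_id == iwarp_port;
}

int main(void)
{
	/* A TLV for TCP port 4444 matches when iWARP is configured on 4444 */
	return iwarp_tlv_matches(0x1, 4444, true, 4444) ? 0 : 1;
}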

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dcbx.c     |   30 ++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_dcbx.h     |    1 +
 drivers/net/qede/base/ecore_dcbx_api.h |    4 +++-
 drivers/net/qede/base/ecore_proto_if.h |    3 +++
 4 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 0e11927..5ecc6b0 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -72,6 +72,23 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
+static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
+				 u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (!p_hwfn->p_dcbx_info->iwarp_port)
+		return false;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_TCP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
+}
+
 static void
 ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
@@ -896,17 +913,18 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
 	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					  sizeof(*p_hwfn->p_dcbx_info));
 	if (!p_hwfn->p_dcbx_info) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_dcbx_info'");
-		rc = ECORE_NOMEM;
+		return ECORE_NOMEM;
 	}
 
-	return rc;
+	p_hwfn->p_dcbx_info->iwarp_port =
+		p_hwfn->pf_params.rdma_pf_params.iwarp_port;
+
+	return ECORE_SUCCESS;
 }
 
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn,
@@ -937,9 +955,13 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
 	p_dest->update_eth_dcb_data_mode = update_flag;
+	update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update;
+	p_dest->update_iwarp_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+	p_dcb_data = &p_dest->iwarp_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP);
 }
 
 enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 0830014..eba2d91 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -29,6 +29,7 @@ struct ecore_dcbx_info {
 	struct ecore_dcbx_set set;
 	struct ecore_dcbx_get get;
 	u8 dcbx_cap;
+	u16 iwarp_port;
 };
 
 struct ecore_dcbx_mib_meta_data {
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 3a1712f..2dc7679 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -37,6 +37,7 @@ enum dcbx_protocol_type {
 	DCBX_PROTOCOL_ROCE,
 	DCBX_PROTOCOL_ROCE_V2,
 	DCBX_PROTOCOL_ETH,
+	DCBX_PROTOCOL_IWARP,
 	DCBX_MAX_PROTOCOL_TYPE
 };
 
@@ -191,7 +192,8 @@ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
 	{DCBX_PROTOCOL_ROCE, "ROCE", ECORE_PCI_ETH_ROCE},
 	{DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", ECORE_PCI_ETH_ROCE},
-	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
+	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH},
+	{DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP}
 };
 
 #endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index e252d52..ed24019 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -76,6 +76,9 @@ struct ecore_rdma_pf_params {
 
 	/* Will allocate rate limiters to be used with QPs */
 	u8		enable_dcqcn;
+
+	/* TCP port number used for the iwarp traffic */
+	u16		iwarp_port;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 43/61] net/qede/base: update bulletin board during VF init
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (42 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 42/61] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 44/61] net/qede/base: add coalescing support for VFs Rasesh Mody
                         ` (18 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Update the bulletin board with the link configuration and state during
VF initialization, so that a VF sees valid link information as soon as
it is initialized.
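
A minimal sketch of the sequence this adds, with simplified stand-in types:
the real code snapshots the MCP link params/state/capabilities and publishes
them to the VF's bulletin through ecore_iov_set_link() before enabling VF
access:

#include <stdbool.h>

struct link_state  { bool link_up; unsigned int speed; };
struct vf_bulletin { bool link_up; unsigned int speed; };

/* Publish the PF's current link state into the VF's bulletin board at
 * init time, so the VF reads a valid link immediately instead of
 * waiting for the first link-change notification.
 */
static void vf_init_publish_link(struct vf_bulletin *b,
				 const struct link_state *cur)
{
	b->link_up = cur->link_up;
	b->speed = cur->speed;
}

int main(void)
{
	struct link_state cur = { .link_up = true, .speed = 25000 };
	struct vf_bulletin b;

	vf_init_publish_link(&b, &cur);
	return b.link_up ? 0 : 1;
}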

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   88 ++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index aab9925..703c1e8 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -954,11 +954,51 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->req_autoneg = params->speed.autoneg;
+	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+	p_bulletin->req_forced_speed = params->speed.forced_speed;
+	p_bulletin->req_autoneg_pause = params->pause.autoneg;
+	p_bulletin->req_forced_rx = params->pause.forced_rx;
+	p_bulletin->req_forced_tx = params->pause.forced_tx;
+	p_bulletin->req_loopback = params->loopback_mode;
+
+	p_bulletin->link_up = link->link_up;
+	p_bulletin->speed = link->speed;
+	p_bulletin->full_duplex = link->full_duplex;
+	p_bulletin->autoneg = link->an;
+	p_bulletin->autoneg_complete = link->an_complete;
+	p_bulletin->parallel_detection = link->parallel_detection;
+	p_bulletin->pfc_enabled = link->pfc_enabled;
+	p_bulletin->partner_adv_speed = link->partner_adv_speed;
+	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+	p_bulletin->partner_adv_pause = link->partner_adv_pause;
+	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+	p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 struct ecore_iov_vf_init_params *p_params)
 {
+	struct ecore_mcp_link_capabilities link_caps;
+	struct ecore_mcp_link_params link_params;
+	struct ecore_mcp_link_state link_state;
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
 	u16 qid, num_irqs;
@@ -1045,6 +1085,17 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			   p_queue->fw_cid);
 	}
 
+	/* Update the link configuration in bulletin.
+	 */
+	OSAL_MEMCPY(&link_params, ecore_mcp_get_link_params(p_hwfn),
+		    sizeof(link_params));
+	OSAL_MEMCPY(&link_state, ecore_mcp_get_link_state(p_hwfn),
+		    sizeof(link_state));
+	OSAL_MEMCPY(&link_caps, ecore_mcp_get_link_capabilities(p_hwfn),
+		    sizeof(link_caps));
+	ecore_iov_set_link(p_hwfn, p_params->rel_vf_id,
+			   &link_params, &link_state, &link_caps);
+
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
 
 	if (rc == ECORE_SUCCESS) {
@@ -1059,43 +1110,6 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps)
-{
-	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
-	struct ecore_bulletin_content *p_bulletin;
-
-	if (!p_vf)
-		return;
-
-	p_bulletin = p_vf->bulletin.p_virt;
-	p_bulletin->req_autoneg = params->speed.autoneg;
-	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
-	p_bulletin->req_forced_speed = params->speed.forced_speed;
-	p_bulletin->req_autoneg_pause = params->pause.autoneg;
-	p_bulletin->req_forced_rx = params->pause.forced_rx;
-	p_bulletin->req_forced_tx = params->pause.forced_tx;
-	p_bulletin->req_loopback = params->loopback_mode;
-
-	p_bulletin->link_up = link->link_up;
-	p_bulletin->speed = link->speed;
-	p_bulletin->full_duplex = link->full_duplex;
-	p_bulletin->autoneg = link->an;
-	p_bulletin->autoneg_complete = link->an_complete;
-	p_bulletin->parallel_detection = link->parallel_detection;
-	p_bulletin->pfc_enabled = link->pfc_enabled;
-	p_bulletin->partner_adv_speed = link->partner_adv_speed;
-	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
-	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
-	p_bulletin->partner_adv_pause = link->partner_adv_pause;
-	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
-
-	p_bulletin->capability_speed = p_caps->speed_capabilities;
-}
-
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 rel_vf_id)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 44/61] net/qede/base: add coalescing support for VFs
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (43 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 43/61] net/qede/base: update bulletin board during VF init Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 45/61] net/qede/base: add macro got resource value message Rasesh Mody
                         ` (17 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add Rx/Tx interrupt coalescing support for VFs. A new unified
ecore_set_queue_coalesce() API accepts a queue handle and, when running
as a VF, forwards the request to the PF over a new
CHANNEL_TLV_COALESCE_UPDATE mailbox message.
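
The shape of the new unified API, as a simplified standalone model: on a VF
the request is tunnelled to the PF over the mailbox, while on a PF the Rx/Tx
timesets are programmed directly. All names below are stand-ins for the
ecore equivalents, not driver code:

#include <stdbool.h>
#include <stdint.h>

struct queue_handle { uint16_t rel_qid; };

static int vf_mailbox_set_coalesce(uint16_t rx, uint16_t tx,
				   struct queue_handle *q)
{
	/* would build a CHANNEL_TLV_COALESCE_UPDATE request for the PF */
	(void)rx; (void)tx; (void)q;
	return 0;
}

static int pf_program_coalesce(uint16_t rx, uint16_t tx,
			       struct queue_handle *q)
{
	/* would write the rx/tx timesets for q's status block */
	(void)rx; (void)tx; (void)q;
	return 0;
}

/* A coalesce value of 0 leaves that direction's setting unchanged. */
static int set_queue_coalesce(bool is_vf, uint16_t rx_coal,
			      uint16_t tx_coal, struct queue_handle *q)
{
	if (is_vf)
		return vf_mailbox_set_coalesce(rx_coal, tx_coal, q);

	return pf_program_coalesce(rx_coal, tx_coal, q);
}

int main(void)
{
	struct queue_handle q = { .rel_qid = 0 };

	/* VF path: the request would travel over the PF mailbox */
	return set_queue_coalesce(true, 24, 48, &q);
}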

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   83 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_dev_api.h |   43 ++++++-----------
 drivers/net/qede/base/ecore_sriov.c   |   66 +++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_vf.c      |   42 +++++++++++++++++
 drivers/net/qede/base/ecore_vf.h      |   24 ++++++++++
 drivers/net/qede/base/ecore_vfpf_if.h |   10 ++++
 6 files changed, 209 insertions(+), 59 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 29dd292..7a876bc 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -30,6 +30,7 @@
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
 #include "ecore_dcbx.h"
+#include "ecore_l2.h"
 
 /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
  * registers involved are not split and thus configuration is a race where
@@ -4198,11 +4199,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 {
 	struct coalescing_timeset *p_coal_timeset;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
-		return ECORE_INVAL;
-	}
-
 	if (p_hwfn->p_dev->int_coalescing_mode != ECORE_COAL_MODE_ENABLE) {
 		DP_NOTICE(p_hwfn, true,
 			  "Coalescing configuration not enabled\n");
@@ -4218,13 +4214,53 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_ptt *p_ptt;
+
+	/* TODO - This configures a single queue's coalescing, yet
+	 * claims that all queues abide by the same configuration,
+	 * for both PF and VF.
+	 */
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
+						tx_coal, p_cid);
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	}
+
+	if (tx_coal) {
+		rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+	}
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4241,33 +4277,30 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, false);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, false);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_USDM_RAM + USTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_USDM_RAM +
+		  USTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct ustorm_eth_queue_zone), timeset);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
-out:
+out:
 	return rc;
 }
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4285,23 +4318,17 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, true);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, true);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_XSDM_RAM + XSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_XSDM_RAM +
+		  XSTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct xstorm_eth_queue_zone), timeset);
-	if (rc != ECORE_SUCCESS)
-		goto out;
-
-	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
-out:
+out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 7e90778..ce764d2 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -570,41 +570,24 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
 					 struct ecore_ptt	*p_ptt,
 					 u16			id,
 					 bool			is_vf);
-
-/**
- * @brief ecore_set_rxq_coalesce - Configure coalesce parameters for an Rx queue
- *    The fact that we can configure coalescing to up to 511, but on varying
- *    accuracy [the bigger the value the less accurate] up to a mistake of 3usec
- *    for the highest values.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
-
 /**
- * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
- *    While the API allows setting coalescing per-qid, all tx queues sharing a
- *    SB should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
+ * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
+ *    Tx queue. Coalescing can be configured up to 511 usec, though with
+ *    varying accuracy [the bigger the value the less accurate], up to an
+ *    error of 3 usec for the highest values.
+ *    While the API allows setting coalescing per-qid, all queues sharing a SB
+ *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
  *    otherwise configuration would break.
  *
  * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
+ * @param rx_coal - Rx Coalesce value in micro seconds.
+ * @param tx_coal - TX Coalesce value in micro seconds.
+ * @param p_handle
  *
  * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
+ **/
+enum _ecore_status_t
+ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
+			 u16 tx_coal, void *p_handle);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 703c1e8..4ffa8d0 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -52,6 +52,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
 	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
+	"CHANNEL_TLV_COALESCE_UPDATE",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -1939,6 +1940,8 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	vf->state = VF_ENABLED;
 	start = &mbx->req_virt->start_vport;
 
+	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
 	/* Initialize Status block in CAU */
 	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
 		if (!start->sb_addr[sb_id]) {
@@ -1953,7 +1956,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 				      vf->igu_sbs[sb_id],
 				      vf->abs_vf_id, 1);
 	}
-	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
 
 	vf->mtu = start->mtu;
 	vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
@@ -3226,6 +3228,65 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
 			       length, status);
 }
 
+static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct vfpf_update_coalesce *req;
+	u8 status = PFVF_STATUS_FAILURE;
+	struct ecore_queue_cid *p_cid;
+	u16 rx_coal, tx_coal;
+	u16 qid;
+
+	req = &mbx->req_virt->update_coalesce;
+
+	rx_coal = req->rx_coal;
+	tx_coal = req->tx_coal;
+	qid = req->qid;
+	p_cid = vf->vf_queues[qid].p_rx_cid;
+
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
+			goto out;
+		}
+	}
+	if (tx_coal) {
+		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
+			goto out;
+		}
+	}
+
+	status = PFVF_STATUS_SUCCESS;
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -3579,6 +3640,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
 			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_COALESCE_UPDATE:
+			ecore_iov_vf_pf_set_coalesce(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index a072a81..bf516cc 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1424,6 +1424,48 @@ exit:
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal,
+			 struct ecore_queue_cid *p_cid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_update_coalesce *req;
+	struct pfvf_def_resp_tlv *resp;
+	enum _ecore_status_t rc;
+
+	/* clear mailbox and prep header tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(*req));
+
+	req->rx_coal = rx_coal;
+	req->tx_coal = tx_coal;
+	req->qid = p_cid->rel.queue_id;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Setting coalesce rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   rx_coal, tx_coal, req->qid);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	resp = &p_iov->pf2vf_reply->default_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+	if (rc != ECORE_SUCCESS)
+		goto exit;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		goto exit;
+
+	p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id)
 {
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 0d67054..228bbf0 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -50,6 +50,20 @@ struct ecore_vf_iov {
 enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
 /**
+ * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
+ *	Coalesce value '0' will omit the configuration.
+ *
+ *	@param p_hwfn
+ *	@param rx_coal - coalesce value in micro second for rx queue
+ *	@param tx_coal - coalesce value in micro second for tx queue
+ *	@param qid
+ *
+ **/
+enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      struct ecore_queue_cid *p_cid);
+
+/**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
@@ -263,5 +277,15 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
+
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 82ed4f5..e0b63bf 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -457,6 +457,14 @@ struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
 
+struct vfpf_update_coalesce {
+	struct vfpf_first_tlv first_tlv;
+	u16 rx_coal;
+	u16 tx_coal;
+	u16 qid;
+	u8 padding[2];
+};
+
 union vfpf_tlvs {
 	struct vfpf_first_tlv			first_tlv;
 	struct vfpf_acquire_tlv			acquire;
@@ -469,6 +477,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
 	struct vfpf_update_tunn_param_tlv	tunn_param_update;
+	struct vfpf_update_coalesce		update_coalesce;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -592,6 +601,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 	CHANNEL_TLV_UPDATE_TUNN_PARAM,
+	CHANNEL_TLV_COALESCE_UPDATE,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
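
The two sides above form one mailbox transaction: the VF fills a
vfpf_update_coalesce TLV and the PF validates the queue id before
programming the coalescing values, with '0' meaning "leave that
direction unchanged". A minimal caller-side sketch, assuming only the
ecore_vf_pf_set_coalesce() declaration added to ecore_vf.h above; the
wrapper name is hypothetical and not part of the patch:

    /* Hypothetical wrapper, illustrative only. Values are in
     * microseconds; 0 for rx_usecs or tx_usecs skips that side.
     */
    static enum _ecore_status_t
    qede_vf_update_coalesce(struct ecore_hwfn *p_hwfn,
                            struct ecore_queue_cid *p_cid,
                            u16 rx_usecs, u16 tx_usecs)
    {
            /* Sends CHANNEL_TLV_COALESCE_UPDATE to the PF and waits
             * for the pfvf_def_resp_tlv status in the reply.
             */
            return ecore_vf_pf_set_coalesce(p_hwfn, rx_usecs, tx_usecs,
                                            p_cid);
    }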

* [PATCH v3 45/61] net/qede/base: add macro for resource value message
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (44 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 44/61] net/qede/base: add coalescing support for VFs Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 46/61] net/qede/base: add mailbox for resource allocation Rasesh Mody
                         ` (16 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for the resource value message

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 7f94ba1..6f0e2f9 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1137,16 +1137,15 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
 #define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	0x28000000
 #define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	0x29000000
+#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	0x31000000
 #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
 #define DRV_MSG_CODE_OV_UPDATE_MTU		0x33000000
-
-#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
-
 /* DRV_MB Param: driver version supp, FW_MB param: MFW version supp,
  * data: struct resource_info
  */
 #define DRV_MSG_GET_RESOURCE_ALLOC_MSG		0x34000000
+#define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
 
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
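
The DRV_MSG_CODE_* values live in the upper 16 bits of the mailbox
header, which is why moving DRV_MSG_CODE_NIG_DRAIN into numeric order
is purely cosmetic. A hedged sketch of issuing the new command through
the existing ecore_mcp_cmd() helper; the zero param is a placeholder,
since the real resource/value encoding only arrives with the
set-max-values patch later in this series, as does
FW_MSG_CODE_UNSUPPORTED:

    /* Illustrative only: the param encoding is not defined yet at
     * this point in the series, so 0 stands in for it.
     */
    static enum _ecore_status_t
    example_set_resc_value(struct ecore_hwfn *p_hwfn,
                           struct ecore_ptt *p_ptt)
    {
            u32 mcp_resp, mcp_param;
            enum _ecore_status_t rc;

            rc = ecore_mcp_cmd(p_hwfn, p_ptt,
                               DRV_MSG_SET_RESOURCE_VALUE_MSG,
                               0 /* placeholder */, &mcp_resp,
                               &mcp_param);
            if (rc != ECORE_SUCCESS)
                    return rc;

            /* FW_MSG_CODE_UNSUPPORTED is added two patches later */
            return (mcp_resp == FW_MSG_CODE_UNSUPPORTED) ? ECORE_NOTIMPL
                                                         : ECORE_SUCCESS;
    }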

* [PATCH v3 46/61] net/qede/base: add mailbox for resource allocation
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (45 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 45/61] net/qede/base: add macro for resource value message Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 47/61] net/qede/base: add macro for unsupported command Rasesh Mody
                         ` (15 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the Management FW mailbox for getting non-l2 resource allocation
information.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    1 +
 drivers/net/qede/base/ecore_dev.c  |   60 ++++++++++++++++++++++++------------
 drivers/net/qede/base/mcp_public.h |    1 +
 3 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 60a8a6b..25b6c4e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -291,6 +291,7 @@ enum ecore_resources {
 	ECORE_LL2_QUEUE,
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
+	ECORE_BDQ,
 	ECORE_MAX_RESC,			/* must be last */
 };
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a876bc..d5a8a90 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2463,6 +2463,9 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_RDMA_STATS_QUEUE:
 		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
 		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
 	default:
 		break;
 	}
@@ -2470,67 +2473,84 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	return mfw_res_id;
 }
 
-static u32 ecore_hw_get_dflt_resc_num(struct ecore_hwfn *p_hwfn,
-				      enum ecore_resources res_id)
+static
+enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
+					    enum ecore_resources res_id,
+					    u32 *p_resc_num,
+					    u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	struct ecore_sb_cnt_info sb_cnt_info;
-	u32 dflt_resc_num = 0;
 
 	switch (res_id) {
 	case ECORE_SB:
 		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
 		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-		dflt_resc_num = sb_cnt_info.sb_cnt;
+		*p_resc_num = sb_cnt_info.sb_cnt;
 		break;
 	case ECORE_L2_QUEUE:
-		dflt_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
 				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		dflt_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
+		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
 				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
 		break;
 	case ECORE_PQ:
-		dflt_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
 				 MAX_QM_TX_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_RL:
-		dflt_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
 		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		dflt_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
+		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		dflt_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
+		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
 				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		dflt_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
 		/* @DPDK */
-		dflt_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
 		/* @DPDK */
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
+	case ECORE_BDQ:
+		/* @DPDK */
+		*p_resc_num = 0;
+		break;
+	default:
+		break;
+	}
+
+
+	switch (res_id) {
+	case ECORE_BDQ:
+		if (!*p_resc_num)
+			*p_resc_start = 0;
+		break;
 	default:
+		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
 		break;
 	}
 
-	return dflt_resc_num;
+	return ECORE_SUCCESS;
 }
 
 static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
@@ -2562,6 +2582,8 @@ static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
 		return "RDMA_STATS_QUEUE";
+	case ECORE_BDQ:
+		return "BDQ";
 	default:
 		return "UNKNOWN_RESOURCE";
 	}
@@ -2579,14 +2601,14 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	dflt_resc_num = ecore_hw_get_dflt_resc_num(p_hwfn, res_id);
-	if (!dflt_resc_num) {
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
+				    &dflt_resc_num, &dflt_resc_start);
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
 			res_id, ecore_hw_get_resc_name(res_id));
-		return ECORE_INVAL;
+		return rc;
 	}
-	dflt_resc_start = dflt_resc_num * p_hwfn->enabled_func_idx;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6f0e2f9..333d147 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1025,6 +1025,7 @@ enum resource_id_enum {
 	RESOURCE_NUM_RSS_ENGINES_E	=	14,
 	RESOURCE_LL2_QUEUE_E		=	15,
 	RESOURCE_RDMA_STATS_QUEUE_E	=	16,
+	RESOURCE_BDQ_E			=	17,
 	RESOURCE_MAX_NUM,
 	RESOURCE_NUM_INVALID		=	0xFFFFFFFF
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
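
The default-allocation path above splits each engine-global resource
pool evenly across the PFs, with each function taking the contiguous
slice at its index; ECORE_BDQ is the exception and defaults to zero.
A simplified, self-contained model of that arithmetic (not the
driver's exact code, which also special-cases ECORE_SB and the shared
MAC/VLAN filters):

    #include <stdint.h>

    /* Simplified model of ecore_hw_get_dflt_resc(): divide the pool
     * evenly and give each enabled function a contiguous slice.
     */
    static void dflt_resc_split(uint32_t pool_size, uint8_t num_funcs,
                                uint8_t enabled_func_idx,
                                uint32_t *resc_num, uint32_t *resc_start)
    {
            *resc_num = pool_size / num_funcs;          /* per-PF share */
            *resc_start = *resc_num * enabled_func_idx; /* this PF's base */
    }

    /* e.g. a 256-entry pool on 4 PFs: PF2 owns entries 128..191 */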

* [PATCH v3 47/61] net/qede/base: add macro for unsupported command
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (46 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 46/61] net/qede/base: add mailbox for resource allocation Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 48/61] net/qede/base: set max values for soft resources Rasesh Mody
                         ` (14 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for an unsupported management FW command

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c  |    6 ++----
 drivers/net/qede/base/mcp_public.h |    1 +
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6c5b5db..15f3ea0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1424,8 +1424,7 @@ ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the mdump command is not supported */
-	if (!mcp_resp)
+	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (mcp_resp != FW_MSG_CODE_OK) {
@@ -2832,8 +2831,7 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the resource command is not supported */
-	if (!*p_mcp_resp)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 333d147..fcf9847 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1489,6 +1489,7 @@ struct public_drv_mb {
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
+#define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
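
The convention this macro captures: for a command the MFW does not
recognize, the FW_MSG_CODE_MASK bits of the response stay zero, so the
old zero check and the named macro are equivalent, but the macro states
the intent. A hedged sketch of the resulting caller pattern, using only
codes visible in this series; the helper name is hypothetical:

    /* Illustrative pattern only; 'mcp_resp' would come from a
     * completed mailbox exchange such as ecore_mcp_nvm_rd_cmd().
     */
    static enum _ecore_status_t check_mcp_resp(u32 mcp_resp)
    {
            if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
                    return ECORE_NOTIMPL;  /* old MFW: not implemented */

            if (mcp_resp != FW_MSG_CODE_OK)
                    return ECORE_INVAL;    /* supported, request failed */

            return ECORE_SUCCESS;
    }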

* [PATCH v3 48/61] net/qede/base: set max values for soft resources
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (47 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 47/61] net/qede/base: add macro for unsupported command Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 49/61] net/qede/base: add return code check Rasesh Mody
                         ` (13 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for the new Management FW interface for setting the max
values of "soft" resources.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    2 +
 drivers/net/qede/base/ecore_dev.c |  282 ++++++++++++++++++++++--------------
 drivers/net/qede/base/ecore_mcp.c |  287 +++++++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_mcp.h |  104 ++++++++++----
 4 files changed, 498 insertions(+), 177 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25b6c4e..7379b3f 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -856,4 +856,6 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
 
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d5a8a90..3191ee4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2420,64 +2420,109 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
-static enum resource_id_enum
-ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 {
-	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
-
 	switch (res_id) {
 	case ECORE_SB:
-		mfw_res_id = RESOURCE_NUM_SB_E;
-		break;
+		return "SB";
 	case ECORE_L2_QUEUE:
-		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
-		break;
+		return "L2_QUEUE";
 	case ECORE_VPORT:
-		mfw_res_id = RESOURCE_NUM_VPORT_E;
-		break;
+		return "VPORT";
 	case ECORE_RSS_ENG:
-		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
-		break;
+		return "RSS_ENG";
 	case ECORE_PQ:
-		mfw_res_id = RESOURCE_NUM_PQ_E;
-		break;
+		return "PQ";
 	case ECORE_RL:
-		mfw_res_id = RESOURCE_NUM_RL_E;
-		break;
+		return "RL";
 	case ECORE_MAC:
+		return "MAC";
 	case ECORE_VLAN:
-		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		mfw_res_id = RESOURCE_VFC_FILTER_E;
-		break;
+		return "VLAN";
+	case ECORE_RDMA_CNQ_RAM:
+		return "RDMA_CNQ_RAM";
 	case ECORE_ILT:
-		mfw_res_id = RESOURCE_ILT_E;
-		break;
+		return "ILT";
 	case ECORE_LL2_QUEUE:
-		mfw_res_id = RESOURCE_LL2_QUEUE_E;
-		break;
-	case ECORE_RDMA_CNQ_RAM:
+		return "LL2_QUEUE";
 	case ECORE_CMDQS_CQS:
-		/* CNQ/CMDQS are the same resource */
-		mfw_res_id = RESOURCE_CQS_E;
-		break;
+		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
-		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
-		break;
+		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
-		mfw_res_id = RESOURCE_BDQ_E;
-		break;
+		return "BDQ";
 	default:
-		break;
+		return "UNKNOWN_RESOURCE";
 	}
+}
 
-	return mfw_res_id;
+static enum _ecore_status_t
+__ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
+			      enum ecore_resources res_id, u32 resc_max_val,
+			      u32 *p_mcp_resp)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_hwfn->p_main_ptt, res_id,
+					resc_max_val, p_mcp_resp);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true,
+			  "MFW response failure for a max value setting of resource %d [%s]\n",
+			  res_id, ecore_hw_get_resc_name(res_id));
+		return rc;
+	}
+
+	if (*p_mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK)
+		DP_INFO(p_hwfn,
+			"Failed to set the max value of resource %d [%s]. mcp_resp = 0x%08x.\n",
+			res_id, ecore_hw_get_resc_name(res_id), *p_mcp_resp);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn)
+{
+	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	u32 resc_max_val, mcp_resp;
+	u8 res_id;
+	enum _ecore_status_t rc;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		/* @DPDK */
+		switch (res_id) {
+		case ECORE_LL2_QUEUE:
+		case ECORE_RDMA_CNQ_RAM:
+		case ECORE_RDMA_STATS_QUEUE:
+		case ECORE_BDQ:
+			resc_max_val = 0;
+			break;
+		default:
+			continue;
+		}
+
+		rc = __ecore_hw_set_soft_resc_size(p_hwfn, res_id,
+						   resc_max_val, &mcp_resp);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* There's no point to continue to the next resource if the
+		 * command is not supported by the MFW.
+		 * We do continue if the command is supported but the resource
+		 * is unknown to the MFW. Such a resource will be later
+		 * configured with the default allocation values.
+		 */
+		if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+			return ECORE_NOTIMPL;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static
 enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    enum ecore_resources res_id,
-					    u32 *p_resc_num,
-					    u32 *p_resc_start)
+					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
@@ -2553,56 +2598,19 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
-{
-	switch (res_id) {
-	case ECORE_SB:
-		return "SB";
-	case ECORE_L2_QUEUE:
-		return "L2_QUEUE";
-	case ECORE_VPORT:
-		return "VPORT";
-	case ECORE_RSS_ENG:
-		return "RSS_ENG";
-	case ECORE_PQ:
-		return "PQ";
-	case ECORE_RL:
-		return "RL";
-	case ECORE_MAC:
-		return "MAC";
-	case ECORE_VLAN:
-		return "VLAN";
-	case ECORE_RDMA_CNQ_RAM:
-		return "RDMA_CNQ_RAM";
-	case ECORE_ILT:
-		return "ILT";
-	case ECORE_LL2_QUEUE:
-		return "LL2_QUEUE";
-	case ECORE_CMDQS_CQS:
-		return "CMDQS_CQS";
-	case ECORE_RDMA_STATS_QUEUE:
-		return "RDMA_STATS_QUEUE";
-	case ECORE_BDQ:
-		return "BDQ";
-	default:
-		return "UNKNOWN_RESOURCE";
-	}
-}
-
-static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
-						   enum ecore_resources res_id,
-						   bool drv_resc_alloc)
+static enum _ecore_status_t
+__ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
+			 bool drv_resc_alloc)
 {
-	u32 dflt_resc_num = 0, dflt_resc_start = 0, mcp_resp, mcp_param;
-	u32 *p_resc_num, *p_resc_start;
-	struct resource_info resc_info;
+	u32 dflt_resc_num = 0, dflt_resc_start = 0;
+	u32 mcp_resp, *p_resc_num, *p_resc_start;
 	enum _ecore_status_t rc;
 
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
-				    &dflt_resc_num, &dflt_resc_start);
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id, &dflt_resc_num,
+				    &dflt_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
@@ -2618,17 +2626,8 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-	OSAL_MEM_ZERO(&resc_info, sizeof(resc_info));
-	resc_info.res_id = ecore_hw_get_mfw_res_id(res_id);
-	if (resc_info.res_id == RESOURCE_NUM_INVALID) {
-		DP_ERR(p_hwfn,
-		       "Failed to match resource %d with MFW resources\n",
-		       res_id);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, &resc_info,
-				     &mcp_resp, &mcp_param);
+	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id,
+				     &mcp_resp, p_resc_num, p_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true,
 			  "MFW response failure for an allocation request for"
@@ -2642,13 +2641,11 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	 * - There is an internal error in the MFW while processing the request
 	 * - The resource ID is unknown to the MFW
 	 */
-	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK &&
-	    mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED) {
-		/* @DPDK */
+	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: No allocation info was received"
-			" [mcp_resp 0x%x]. Applying default values"
-			" [num %d, start %d].\n",
+			"Failed to receive allocation info for resource %d [%s]."
+			" mcp_resp = 0x%x. Applying default values"
+			" [%d,%d].\n",
 			res_id, ecore_hw_get_resc_name(res_id), mcp_resp,
 			dflt_resc_num, dflt_resc_start);
 
@@ -2660,16 +2657,13 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	/* TBD - remove this when revising the handling of the SB resource */
 	if (res_id == ECORE_SB) {
 		/* Excluding the slowpath SB */
-		resc_info.size -= 1;
-		resc_info.offset -= p_hwfn->enabled_func_idx;
+		*p_resc_num -= 1;
+		*p_resc_start -= p_hwfn->enabled_func_idx;
 	}
 
-	*p_resc_num = resc_info.size;
-	*p_resc_start = resc_info.offset;
-
 	if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: MFW allocation [num %d, start %d] differs from default values [num %d, start %d]%s\n",
+			"MFW allocation for resource %d [%s] differs from default values [%d,%d vs. %d,%d]%s\n",
 			res_id, ecore_hw_get_resc_name(res_id), *p_resc_num,
 			*p_resc_start, dflt_resc_num, dflt_resc_start,
 			drv_resc_alloc ? " - Applying default values" : "");
@@ -2682,12 +2676,32 @@ out:
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
+						   bool drv_resc_alloc)
+{
+	enum _ecore_status_t rc;
+	u8 res_id;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		rc = __ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_RESC_ALLOC_LOCK_RETRY_CNT		10
+#define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
+
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 					      bool drv_resc_alloc)
 {
+	struct ecore_resc_unlock_params resc_unlock_params;
+	struct ecore_resc_lock_params resc_lock_params;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
-	enum _ecore_status_t rc;
 	u8 res_id;
+	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
 	u32 *resc_start = p_hwfn->hw_info.resc_start;
 	u32 *resc_num = p_hwfn->hw_info.resc_num;
@@ -2700,10 +2714,62 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	u32 roce_min_ilt_lines = PXP_NUM_ILT_RECORDS_BB / MAX_NUM_PFS_BB;
 #endif
 
-	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
-		rc = ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+	/* Setting the max values of the soft resources and the following
+	 * resource allocation queries should be atomic. Since several PFs can
+	 * run in parallel - a resource lock is needed.
+	 * If either the resource lock or resource set value commands are not
+	 * supported - skip the max values setting, release the lock if
+	 * needed, and proceed to the queries. Other failures, including a
+	 * failure to acquire the lock, will cause this function to fail.
+	 * Old drivers that don't acquire the lock can run in parallel, and
+	 * their allocation values won't be affected by the updated max values.
+	 */
+	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
+	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
+	resc_lock_params.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
+	resc_lock_params.sleep_b4_retry = true;
+	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
+	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+
+	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt, &resc_lock_params);
+	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+		return rc;
+	} else if (rc == ECORE_NOTIMPL) {
+		DP_INFO(p_hwfn,
+			"Skip the max values setting of the soft resources since the resource lock is not supported by the MFW\n");
+	} else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to acquire the resource lock for the resource allocation commands\n");
+		rc = ECORE_BUSY;
+		goto unlock_and_exit;
+	} else {
+		rc = ecore_hw_set_soft_resc_size(p_hwfn);
+		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to set the max values of the soft resources\n");
+			goto unlock_and_exit;
+		} else if (rc == ECORE_NOTIMPL) {
+			DP_INFO(p_hwfn,
+				"Skip the max values setting of the soft resources since it is not supported by the MFW\n");
+			rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+						   &resc_unlock_params);
+			if (rc != ECORE_SUCCESS)
+				DP_INFO(p_hwfn,
+					"Failed to release the resource lock for the resource allocation commands\n");
+		}
+	}
+
+	rc = ecore_hw_set_resc_info(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS)
+		goto unlock_and_exit;
+
+	if (resc_lock_params.b_granted && !resc_unlock_params.b_released) {
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+					   &resc_unlock_params);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			DP_INFO(p_hwfn,
+				"Failed to release the resource lock for the resource allocation commands\n");
 	}
 
 #ifndef ASIC_ONLY
@@ -2756,6 +2822,10 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			   RESC_START(p_hwfn, res_id));
 
 	return ECORE_SUCCESS;
+
+unlock_and_exit:
+	ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt, &resc_unlock_params);
+	return rc;
 }
 
 static enum _ecore_status_t
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 15f3ea0..3efe0a0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2768,7 +2768,60 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 			     0, &rsp, (u32 *)num_events);
 }
 
-#define ECORE_RESC_ALLOC_VERSION_MAJOR	1
+static enum resource_id_enum
+ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
+{
+	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
+
+	switch (res_id) {
+	case ECORE_SB:
+		mfw_res_id = RESOURCE_NUM_SB_E;
+		break;
+	case ECORE_L2_QUEUE:
+		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
+		break;
+	case ECORE_VPORT:
+		mfw_res_id = RESOURCE_NUM_VPORT_E;
+		break;
+	case ECORE_RSS_ENG:
+		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
+		break;
+	case ECORE_PQ:
+		mfw_res_id = RESOURCE_NUM_PQ_E;
+		break;
+	case ECORE_RL:
+		mfw_res_id = RESOURCE_NUM_RL_E;
+		break;
+	case ECORE_MAC:
+	case ECORE_VLAN:
+		/* Each VFC resource can accommodate both a MAC and a VLAN */
+		mfw_res_id = RESOURCE_VFC_FILTER_E;
+		break;
+	case ECORE_ILT:
+		mfw_res_id = RESOURCE_ILT_E;
+		break;
+	case ECORE_LL2_QUEUE:
+		mfw_res_id = RESOURCE_LL2_QUEUE_E;
+		break;
+	case ECORE_RDMA_CNQ_RAM:
+	case ECORE_CMDQS_CQS:
+		/* CNQ/CMDQS are the same resource */
+		mfw_res_id = RESOURCE_CQS_E;
+		break;
+	case ECORE_RDMA_STATS_QUEUE:
+		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
+		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
+	default:
+		break;
+	}
+
+	return mfw_res_id;
+}
+
+#define ECORE_RESC_ALLOC_VERSION_MAJOR	2
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
 #define ECORE_RESC_ALLOC_VERSION				\
 	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
@@ -2776,36 +2829,146 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
 	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
 
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param)
+struct ecore_resc_alloc_in_params {
+	u32 cmd;
+	enum ecore_resources res_id;
+	u32 resc_max_val;
+};
+
+struct ecore_resc_alloc_out_params {
+	u32 mcp_resp;
+	u32 mcp_param;
+	u32 resc_num;
+	u32 resc_start;
+	u32 vf_resc_num;
+	u32 vf_resc_start;
+	u32 flags;
+};
+
+static enum _ecore_status_t
+ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      struct ecore_resc_alloc_in_params *p_in_params,
+			      struct ecore_resc_alloc_out_params *p_out_params)
 {
+	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
+	p_mfw_resc_info = &union_data.resource;
+	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+
+	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+		DP_ERR(p_hwfn,
+		       "Failed to match resource %d [%s] with the MFW resources\n",
+		       p_in_params->res_id,
+		       ecore_hw_get_resc_name(p_in_params->res_id));
+		return ECORE_INVAL;
+	}
+
+	switch (p_in_params->cmd) {
+	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
+		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		/* Fallthrough */
+	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected resource alloc command [0x%08x]\n",
+		       p_in_params->cmd);
+		return ECORE_INVAL;
+	}
+
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	OSAL_MEMCPY(&union_data.resource, p_resc_info, sizeof(*p_resc_info));
 	mb_params.p_data_src = &union_data;
 	mb_params.p_data_dst = &union_data;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
+		   p_in_params->cmd, p_in_params->res_id,
+		   ecore_hw_get_resc_name(p_in_params->res_id),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_in_params->resc_max_val);
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	*p_mcp_param = mb_params.mcp_param;
-
-	OSAL_MEMCPY(p_resc_info, &union_data.resource, sizeof(*p_resc_info));
+	p_out_params->mcp_resp = mb_params.mcp_resp;
+	p_out_params->mcp_param = mb_params.mcp_param;
+	p_out_params->resc_num = p_mfw_resc_info->size;
+	p_out_params->resc_start = p_mfw_resc_info->offset;
+	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
+	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
+	p_out_params->flags = p_mfw_resc_info->flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x,"
-		   " offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
-		   *p_mcp_param, p_resc_info->res_id, p_resc_info->size,
-		   p_resc_info->offset, p_resc_info->vf_size,
-		   p_resc_info->vf_offset, p_resc_info->flags);
+		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_out_params->resc_num, p_out_params->resc_start,
+		   p_out_params->vf_resc_num, p_out_params->vf_resc_start,
+		   p_out_params->flags);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_SET_RESOURCE_VALUE_MSG;
+	in_params.res_id = res_id;
+	in_params.resc_max_val = resc_max_val;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	in_params.res_id = res_id;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	if (*p_mcp_resp == FW_MSG_CODE_RESOURCE_ALLOC_OK) {
+		*p_resc_num = out_params.resc_num;
+		*p_resc_start = out_params.resc_start;
+	}
 
 	return ECORE_SUCCESS;
 }
@@ -2831,8 +2994,11 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The resource command is unsupported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
 		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
@@ -2846,36 +3012,35 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner)
+enum _ecore_status_t
+__ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_lock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	switch (timeout) {
+	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
 		break;
 	}
 
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
-		   param, timeout, opcode, resource_num);
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
+		   param, p_params->timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2884,19 +3049,20 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Analyze the response */
-	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	p_params->owner = ECORE_MFW_GET_FIELD(mcp_param,
+					     RESOURCE_CMD_RSP_OWNER);
 	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
-		   mcp_param, opcode, *p_owner);
+		   mcp_param, opcode, p_params->owner);
 
 	switch (opcode) {
 	case RESOURCE_OPCODE_GNT:
-		*p_granted = true;
+		p_params->b_granted = true;
 		break;
 	case RESOURCE_OPCODE_BUSY:
-		*p_granted = false;
+		p_params->b_granted = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
@@ -2908,23 +3074,54 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released)
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params)
+{
+	u32 retry_cnt = 0;
+	enum _ecore_status_t rc;
+
+	do {
+		/* No need for an interval before the first iteration */
+		if (retry_cnt) {
+			if (p_params->sleep_b4_retry) {
+				u16 retry_interval_in_ms =
+					DIV_ROUND_UP(p_params->retry_interval,
+						     1000);
+
+				OSAL_MSLEEP(retry_interval_in_ms);
+			} else {
+				OSAL_UDELAY(p_params->retry_interval);
+			}
+		}
+
+		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		if (p_params->b_granted)
+			break;
+	} while (retry_cnt++ < p_params->retry_num);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
-		       : RESOURCE_OPCODE_RELEASE;
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	opcode = p_params->b_force ? RESOURCE_OPCODE_FORCE_RELEASE
+				   : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
-		   param, opcode, resource_num);
+		   "Resource unlock request: param 0x%08x [opcode %d, resource %d]\n",
+		   param, opcode, p_params->resource);
 
 	/* Attempt to release the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2942,14 +3139,14 @@ enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
 	switch (opcode) {
 	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
 		DP_INFO(p_hwfn,
-			"Resource unlock request for an already released resource [resc_num %d]\n",
-			resource_num);
+			"Resource unlock request for an already released resource [%d]\n",
+			p_params->resource);
 		/* Fallthrough */
 	case RESOURCE_OPCODE_RELEASED:
-		*p_released = true;
+		p_params->b_released = true;
 		break;
 	case RESOURCE_OPCODE_WRONG_OWNER:
-		*p_released = false;
+		p_params->b_released = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 4138a12..f5dac9d 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -11,6 +11,7 @@
 
 #include "bcm_osal.h"
 #include "mcp_public.h"
+#include "ecore.h"
 #include "ecore_mcp_api.h"
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
@@ -339,20 +340,37 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt);
 
 /**
+ * @brief - Sets the MFW's max value for the given resource
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param res_id
+ *  @param resc_max_val
+ *  @param p_mcp_resp
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp);
+
+/**
  * @brief - Gets the MFW allocation info for the given resource
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param p_resc_info
+ *  @param res_id
  *  @param p_mcp_resp
- *  @param p_mcp_param
+ *  @param p_resc_num
+ *  @param p_resc_start
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param);
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start);
 
 /**
  * @brief - Initiates PF FLR
@@ -365,45 +383,79 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_MIN_VAL	RESOURCE_DUMP /* 0 */
+#define ECORE_MCP_RESC_LOCK_MAX_VAL	31
+
+enum ecore_resc_lock {
+	ECORE_RESC_LOCK_DBG_DUMP = ECORE_MCP_RESC_LOCK_MIN_VAL,
+	/* Locks that the MFW is aware of should be added here downwards */
+
+	/* Ecore only locks should be added here upwards */
+	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL
+};
+
+struct ecore_resc_lock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Lock timeout value in seconds [default, none or 1..254] */
+	u8 timeout;
 #define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
 #define ECORE_MCP_RESC_LOCK_TO_NONE	255
 
+	/* Number of times to retry locking */
+	u8 retry_num;
+
+	/* The interval in usec between retries */
+	u16 retry_interval;
+
+	/* Use sleep or delay between retries */
+	bool sleep_b4_retry;
+
+	/* Will be set as true if the resource is free and granted */
+	bool b_granted;
+
+	/* Will be filled with the resource owner.
+	 * [0..15 = PF0-15, 16 = MFW, 17 = diag over serial]
+	 */
+	u8 owner;
+};
+
 /**
  * @brief Acquires MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num - valid values are 0..31
- *  @param timeout - lock timeout value in seconds
- *                   (1..254, '0' - default value, '255' - no timeout).
- *  @param p_granted - will be filled as true if the resource is free and
- *                     granted, or false if it is busy.
- *  @param p_owner - A pointer to a variable to be filled with the resource
- *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner);
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params);
+
+struct ecore_resc_unlock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Allow releasing a resource even if it belongs to another PF */
+	bool b_force;
+
+	/* Will be set as true if the resource is released */
+	bool b_released;
+};
 
 /**
  * @brief Releases MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num
- *  @param force -  allows to release a reeource even if belongs to another PF
- *  @param p_released - will be filled as true if the resource is released (or
- *			has been already released), and false if the resource is
- *			acquired by another PF and the `force' flag was not set.
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released);
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params);
 
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
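
ecore_hw_get_resc() above is the first user of the reworked lock API:
it takes the MFW resource-allocation lock so that setting the max
values and querying the resulting allocation happen atomically across
PFs, and treats ECORE_NOTIMPL as "old MFW, skip the new step". A
condensed, hedged sketch of the acquire/use/release pattern, assuming
the ECORE_RESC_ALLOC_LOCK_* defines above are visible:

    /* Illustrative only: condensed from the ecore_hw_get_resc() flow;
     * the real function also handles the partial-failure paths.
     */
    static enum _ecore_status_t
    example_locked_resc_flow(struct ecore_hwfn *p_hwfn,
                             struct ecore_ptt *p_ptt)
    {
            struct ecore_resc_unlock_params unlock;
            struct ecore_resc_lock_params lock;
            enum _ecore_status_t rc;

            OSAL_MEM_ZERO(&lock, sizeof(lock));
            lock.resource = ECORE_RESC_LOCK_RESC_ALLOC;
            lock.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
            lock.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
            lock.sleep_b4_retry = true;

            OSAL_MEM_ZERO(&unlock, sizeof(unlock));
            unlock.resource = ECORE_RESC_LOCK_RESC_ALLOC;

            rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &lock);
            if (rc != ECORE_SUCCESS)
                    return rc;      /* ECORE_NOTIMPL means an old MFW */
            if (!lock.b_granted)
                    return ECORE_BUSY;

            /* ... set soft-resource max values, query allocation ... */

            return ecore_mcp_resc_unlock(p_hwfn, p_ptt, &unlock);
    }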

* [PATCH v3 49/61] net/qede/base: add return code check
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (48 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 48/61] net/qede/base: set max values for soft resoruces Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 50/61] net/qede/base: zero out MFW mailbox data Rasesh Mody
                         ` (12 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a check of the return code of ecore_mcp_cmd_and_union() in
ecore_mcp_send_protocol_stats()

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 3efe0a0..0ebb5cd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1237,6 +1237,7 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	u32 hsi_param;
+	enum _ecore_status_t rc;
 
 	switch (type) {
 	case MFW_DRV_MSG_GET_LAN_STATS:
@@ -1255,7 +1256,9 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	mb_params.param = hsi_param;
 	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
 	mb_params.p_data_src = &union_data;
-	ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
 }
 
 static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 50/61] net/qede/base: zero out MFW mailbox data
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (49 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 49/61] net/qede/base: add return code check Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 51/61] net/qede/base: move code bits Rasesh Mody
                         ` (11 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Zero the whole union data of the Management FW mailbox before copying
the actual union member

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    4 +-
 drivers/net/qede/base/ecore_mcp.c |  296 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_mcp.h |   19 ++-
 3 files changed, 181 insertions(+), 138 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 3191ee4..e584058 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2311,9 +2311,7 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
 			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
 		}
 
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_DONE,
-				   0, &unload_resp, &unload_param);
+		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn,
 				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0ebb5cd..b53210f 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -364,6 +364,7 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct ecore_mcp_mb_params *p_mb_params)
 {
+	union drv_union_data union_data;
 	u32 union_data_addr;
 	enum _ecore_status_t rc;
 
@@ -373,6 +374,15 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
+	if (p_mb_params->data_src_size > sizeof(union_data) ||
+	    p_mb_params->data_dst_size > sizeof(union_data)) {
+		DP_ERR(p_hwfn,
+		       "The provided size is larger than the union data size [src_size %u, dst_size %u, union_data_size %zu]\n",
+		       p_mb_params->data_src_size, p_mb_params->data_dst_size,
+		       sizeof(union_data));
+		return ECORE_INVAL;
+	}
+
 	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
 			  OFFSETOF(struct public_drv_mb, union_data);
 
@@ -383,19 +393,21 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_mb_params->p_data_src != OSAL_NULL)
-		ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
-				p_mb_params->p_data_src,
-				sizeof(*p_mb_params->p_data_src));
+	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
+	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
+		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
+			    p_mb_params->data_src_size);
+	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
+			sizeof(union_data));
 
 	rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
 			      p_mb_params->param, &p_mb_params->mcp_resp,
 			      &p_mb_params->mcp_param);
 
-	if (p_mb_params->p_data_dst != OSAL_NULL)
+	if (p_mb_params->p_data_dst != OSAL_NULL &&
+	    p_mb_params->data_dst_size)
 		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
-				  union_data_addr,
-				  sizeof(*p_mb_params->p_data_dst));
+				  union_data_addr, p_mb_params->data_dst_size);
 
 	ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
 
@@ -443,14 +455,13 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 i_txn_size, u32 *i_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = i_buf;
+	mb_params.data_src_size = (u8)i_txn_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -470,13 +481,17 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_txn_size, u32 *o_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = raw_data;
+
+	/* Use the maximal value since the actual one is part of the response */
+	mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN;
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -485,7 +500,7 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_param = mb_params.mcp_param;
 
 	*o_txn_size = *o_mcp_param;
-	OSAL_MEMCPY(o_buf, (u32 *)&union_data.raw_data, *o_txn_size);
+	OSAL_MEMCPY(o_buf, raw_data, *o_txn_size);
 
 	return ECORE_SUCCESS;
 }
@@ -605,26 +620,23 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		     struct ecore_load_req_in_params *p_in_params,
 		     struct ecore_load_req_out_params *p_out_params)
 {
-	union drv_union_data union_data_src, union_data_dst;
 	struct ecore_mcp_mb_params mb_params;
-	struct load_req_stc *p_load_req;
-	struct load_rsp_stc *p_load_rsp;
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	u32 hsi_ver;
 	enum _ecore_status_t rc;
 
-	p_load_req = &union_data_src.load_req;
-	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
-	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
-	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
-	p_load_req->fw_ver = p_in_params->fw_ver;
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+	OSAL_MEM_ZERO(&load_req, sizeof(load_req));
+	load_req.drv_ver_0 = p_in_params->drv_ver_0;
+	load_req.drv_ver_1 = p_in_params->drv_ver_1;
+	load_req.fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_ROLE,
 			    p_in_params->drv_role);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
 			    p_in_params->timeout_val);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
-			    p_in_params->force_cmd);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
-			    p_in_params->avoid_eng_reset);
+
+	/* @DPDK */
+	load_req.misc0 |= LOAD_REQ_FORCE_NONE;
 
 	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
 		  DRV_ID_MCP_HSI_VER_CURRENT :
@@ -633,8 +645,10 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
-	mb_params.p_data_src = &union_data_src;
-	mb_params.p_data_dst = &union_data_dst;
+	mb_params.p_data_src = &load_req;
+	mb_params.data_src_size = sizeof(load_req);
+	mb_params.p_data_dst = &load_rsp;
+	mb_params.data_dst_size = sizeof(load_rsp);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
@@ -647,15 +661,13 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
-			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
-			   p_load_req->fw_ver, p_load_req->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   load_req.drv_ver_0, load_req.drv_ver_1,
+			   load_req.fw_ver, load_req.misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_LOCK_TO),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_FORCE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_FLAGS0));
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
@@ -671,28 +683,24 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
 	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
-		p_load_rsp = &union_data_dst.load_rsp;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
-			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
-			   p_load_rsp->fw_ver, p_load_rsp->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_HSI),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+			   load_rsp.drv_ver_0, load_rsp.drv_ver_1,
+			   load_rsp.fw_ver, load_rsp.misc0,
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0,
 					       LOAD_RSP_FLAGS0));
 
-		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
-		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
-		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_ver_0 = load_rsp.drv_ver_0;
+		p_out_params->exist_drv_ver_1 = load_rsp.drv_ver_1;
+		p_out_params->exist_fw_ver = load_rsp.fw_ver;
 		p_out_params->exist_drv_role =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
 		p_out_params->mfw_hsi_ver =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
 		p_out_params->drv_exists =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					    LOAD_RSP_FLAGS0) &
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
 			LOAD_RSP_FLAGS0_DRV_EXISTS;
 	}
 
@@ -883,6 +891,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac wol_mac;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE;
+
+	return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+}
+
 static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
@@ -924,7 +944,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
 				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 	int i;
 
@@ -935,8 +954,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
-	OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = vfs_to_ack;
+	mb_params.data_src_size = VF_MAX_STATIC / 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1122,8 +1141,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
-	struct eth_phy_cfg *p_phy_cfg;
+	struct eth_phy_cfg phy_cfg;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cmd;
 
@@ -1133,30 +1151,30 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Set the shmem configuration according to params */
-	p_phy_cfg = &union_data.drv_phy_cfg;
-	OSAL_MEMSET(p_phy_cfg, 0, sizeof(*p_phy_cfg));
+	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
 	cmd = b_up ? DRV_MSG_CODE_INIT_PHY : DRV_MSG_CODE_LINK_RESET;
 	if (!params->speed.autoneg)
-		p_phy_cfg->speed = params->speed.forced_speed;
-	p_phy_cfg->pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
-	p_phy_cfg->pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
-	p_phy_cfg->pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
-	p_phy_cfg->adv_speed = params->speed.advertised_speeds;
-	p_phy_cfg->loopback_mode = params->loopback_mode;
+		phy_cfg.speed = params->speed.forced_speed;
+	phy_cfg.pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
+	phy_cfg.pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
+	phy_cfg.pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
+	phy_cfg.adv_speed = params->speed.advertised_speeds;
+	phy_cfg.loopback_mode = params->loopback_mode;
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Configuring Link: Speed 0x%08x, Pause 0x%08x,"
 			   " adv_speed 0x%08x, loopback 0x%08x\n",
-			   p_phy_cfg->speed, p_phy_cfg->pause,
-			   p_phy_cfg->adv_speed, p_phy_cfg->loopback_mode);
+			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
+			   phy_cfg.loopback_mode);
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &phy_cfg;
+	mb_params.data_src_size = sizeof(phy_cfg);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
@@ -1235,7 +1253,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	enum ecore_mcp_protocol_type stats_type;
 	union ecore_mcp_protocol_stats stats;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 hsi_param;
 	enum _ecore_status_t rc;
 
@@ -1254,8 +1271,8 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_STATS;
 	mb_params.param = hsi_param;
-	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &stats;
+	mb_params.data_src_size = sizeof(stats);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
@@ -1353,28 +1370,38 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
 	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
 }
 
+struct ecore_mdump_cmd_params {
+	u32 cmd;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
+	u32 mcp_resp;
+};
+
 static enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		    u32 mdump_cmd, union drv_union_data *p_data_src,
-		    union drv_union_data *p_data_dst, u32 *p_mcp_resp)
+		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_MDUMP_CMD;
-	mb_params.param = mdump_cmd;
-	mb_params.p_data_src = p_data_src;
-	mb_params.p_data_dst = p_data_dst;
+	mb_params.param = p_mdump_cmd_params->cmd;
+	mb_params.p_data_src = p_mdump_cmd_params->p_data_src;
+	mb_params.data_src_size = p_mdump_cmd_params->data_src_size;
+	mb_params.p_data_dst = p_mdump_cmd_params->p_data_dst;
+	mb_params.data_dst_size = p_mdump_cmd_params->data_dst_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	if (*p_mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
+	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_NOTICE(p_hwfn, false,
 			  "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
-			  mdump_cmd);
+			  p_mdump_cmd_params->cmd);
 		rc = ECORE_INVAL;
 	}
 
@@ -1384,62 +1411,68 @@ ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 static enum _ecore_status_t ecore_mcp_mdump_ack(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_ACK;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_ACK,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 epoch)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	OSAL_MEMCPY(&union_data.raw_data, &epoch, sizeof(epoch));
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_SET_VALUES;
+	mdump_cmd_params.p_data_src = &epoch;
+	mdump_cmd_params.data_src_size = sizeof(epoch);
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_SET_VALUES,
-				   &union_data, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	p_hwfn->p_dev->mdump_en = true;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_TRIGGER,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static enum _ecore_status_t
 ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct mdump_config_stc *p_mdump_config)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_GET_CONFIG,
-				 OSAL_NULL, &union_data, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GET_CONFIG;
+	mdump_cmd_params.p_data_dst = p_mdump_config;
+	mdump_cmd_params.data_dst_size = sizeof(*p_mdump_config);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The mdump command is not supported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
-	if (mcp_resp != FW_MSG_CODE_OK) {
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
-			  mcp_resp);
+			  mdump_cmd_params.mcp_resp);
 		rc = ECORE_UNKNOWN_ERROR;
 	}
 
-	OSAL_MEMCPY(p_mdump_config, &union_data.mdump_config,
-		    sizeof(*p_mdump_config));
-
 	return rc;
 }
 
@@ -1489,10 +1522,12 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_CLEAR_LOGS,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
@@ -2001,9 +2036,8 @@ enum _ecore_status_t
 ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mcp_drv_version *p_ver)
 {
-	struct drv_version_stc *p_drv_version;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct drv_version_stc drv_version;
 	u32 num_words, i;
 	void *p_name;
 	OSAL_BE32 val;
@@ -2014,19 +2048,20 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return ECORE_SUCCESS;
 #endif
 
-	p_drv_version = &union_data.drv_version;
-	p_drv_version->version = p_ver->version;
+	OSAL_MEM_ZERO(&drv_version, sizeof(drv_version));
+	drv_version.version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
 		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
-		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+		*(u32 *)&drv_version.name[i * sizeof(u32)] = val;
 	}
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_SET_VERSION;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &drv_version;
+	mb_params.data_src_size = sizeof(drv_version);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
@@ -2695,28 +2730,25 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
 			       struct ecore_temperature_info *p_temp_info)
 {
 	struct ecore_temperature_sensor *p_temp_sensor;
-	struct temperature_status_stc *p_mfw_temp_info;
+	struct temperature_status_stc mfw_temp_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 val;
 	enum _ecore_status_t rc;
 	u8 i;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = &mfw_temp_info;
+	mb_params.data_dst_size = sizeof(mfw_temp_info);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_mfw_temp_info = &union_data.temp_info;
-
 	OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
-	p_temp_info->num_sensors = OSAL_MIN_T(u32,
-					      p_mfw_temp_info->num_of_sensors,
+	p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
 					      ECORE_MAX_NUM_OF_SENSORS);
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
-		val = p_mfw_temp_info->sensor[i];
+		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
 		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
 						 SENSOR_LOCATION_SHIFT;
@@ -2854,16 +2886,14 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_resc_alloc_in_params *p_in_params,
 			      struct ecore_resc_alloc_out_params *p_out_params)
 {
-	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct resource_info mfw_resc_info;
 	enum _ecore_status_t rc;
 
-	p_mfw_resc_info = &union_data.resource;
-	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+	OSAL_MEM_ZERO(&mfw_resc_info, sizeof(mfw_resc_info));
 
-	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
-	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+	mfw_resc_info.res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (mfw_resc_info.res_id == RESOURCE_NUM_INVALID) {
 		DP_ERR(p_hwfn,
 		       "Failed to match resource %d [%s] with the MFW resources\n",
 		       p_in_params->res_id,
@@ -2873,7 +2903,7 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	switch (p_in_params->cmd) {
 	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
-		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		mfw_resc_info.size = p_in_params->resc_max_val;
 		/* Fallthrough */
 	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
 		break;
@@ -2886,8 +2916,10 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	mb_params.p_data_src = &union_data;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_src = &mfw_resc_info;
+	mb_params.data_src_size = sizeof(mfw_resc_info);
+	mb_params.p_data_dst = mb_params.p_data_src;
+	mb_params.data_dst_size = mb_params.data_src_size;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
@@ -2905,11 +2937,11 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	p_out_params->mcp_resp = mb_params.mcp_resp;
 	p_out_params->mcp_param = mb_params.mcp_param;
-	p_out_params->resc_num = p_mfw_resc_info->size;
-	p_out_params->resc_start = p_mfw_resc_info->offset;
-	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
-	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
-	p_out_params->flags = p_mfw_resc_info->flags;
+	p_out_params->resc_num = mfw_resc_info.size;
+	p_out_params->resc_start = mfw_resc_info.offset;
+	p_out_params->vf_resc_num = mfw_resc_info.vf_size;
+	p_out_params->vf_resc_start = mfw_resc_info.vf_offset;
+	p_out_params->flags = mfw_resc_info.flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index f5dac9d..350d8a2 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -65,8 +65,10 @@ struct ecore_mcp_info {
 struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
-	union drv_union_data *p_data_src;
-	union drv_union_data *p_data_dst;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
 };
@@ -159,7 +161,7 @@ struct ecore_load_req_params {
  *        returns whether this PF is the first on the engine/port or function.
  *
  * @param p_hwfn
- * @param p_pt
+ * @param p_ptt
  * @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
@@ -169,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_DONE message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt);
+
+/**
  * @brief Read the MFW mailbox into Current buffer.
  *
  * @param p_hwfn
-- 
1.7.10.3
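
For reference, a minimal sketch (not part of the patch; p_hwfn/p_ptt are
assumed from the surrounding driver context) of driving the size-aware
mailbox interface the hunks above introduce, mirroring the set-link flow
from the diff:

	struct ecore_mcp_mb_params mb_params;
	struct eth_phy_cfg phy_cfg;
	enum _ecore_status_t rc;

	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
	mb_params.cmd = DRV_MSG_CODE_INIT_PHY;
	mb_params.p_data_src = &phy_cfg;
	mb_params.data_src_size = sizeof(phy_cfg); /* must fit the u8 size field */
	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);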

* [PATCH v3 51/61] net/qede/base: move code bits
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev
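
Move the Rx/Tx coalesce helper declarations in ecore_vf.h out of the
CONFIG_ECORE_SRIOV conditional block so that they are declared
regardless of that config option.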

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_vf.h |   41 +++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 228bbf0..f471388 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -38,17 +38,15 @@ struct ecore_vf_iov {
 	bool b_pre_fp_hsi;
 };
 
-#ifdef CONFIG_ECORE_SRIOV
-/**
- * @brief hw preparation for VF
- * sends ACQUIRE message
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 /**
  * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
  *	Coalesce value '0' will omit the configuration.
@@ -56,13 +54,24 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  *	@param p_hwfn
  *	@param rx_coal - coalesce value in micro second for rx queue
  *	@param tx_coal - coalesce value in micro second for tx queue
- *	@param qid
+ *	@param queue_cid
  *
  **/
 enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 rx_coal, u16 tx_coal,
 					      struct ecore_queue_cid *p_cid);
 
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief hw preparation for VF
+ *	sends ACQUIRE message
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
@@ -277,15 +286,5 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
-
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
-
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
-- 
1.7.10.3

* [PATCH v3 52/61] net/qede/base: add PF parameter
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a common enum to pf_params for RDMA.
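
As a minimal sketch (caller context assumed; only names from the hunk
below are used), a storage/RDMA client would pick the protocol when
filling its PF parameters:

	struct ecore_rdma_pf_params rdma_params;

	OSAL_MEM_ZERO(&rdma_params, sizeof(rdma_params));
	/* ECORE_RDMA_PROTOCOL_DEFAULT leaves the choice to the base driver */
	rdma_params.rdma_protocol = ECORE_RDMA_PROTOCOL_ROCE;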

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c      |    1 +
 drivers/net/qede/base/ecore_proto_if.h |    7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index aeeabf1..691d638 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -19,6 +19,7 @@
 #include "ecore_hw.h"
 #include "ecore_dev_api.h"
 #include "ecore_sriov.h"
+#include "ecore_mcp.h"
 
 /* Max number of connection types in HW (DQ/CDU etc.) */
 #define MAX_CONN_TYPES		PROTOCOLID_COMMON
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index ed24019..0ac153f 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -63,6 +63,12 @@ struct ecore_iscsi_pf_params {
 	u8		bdq_pbl_num_entries[2];
 };
 
+enum ecore_rdma_protocol {
+	ECORE_RDMA_PROTOCOL_DEFAULT,
+	ECORE_RDMA_PROTOCOL_ROCE,
+	ECORE_RDMA_PROTOCOL_IWARP,
+};
+
 struct ecore_rdma_pf_params {
 	/* Supplied to ECORE during resource allocation (may affect the ILT and
 	 * the doorbell BAR).
@@ -79,6 +85,7 @@ struct ecore_rdma_pf_params {
 
 	/* TCP port number used for the iwarp traffic */
 	u16		iwarp_port;
+	enum ecore_rdma_protocol rdma_protocol;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

* [PATCH v3 53/61] net/qede/base: allow PMD to control vport and RSS engine ids
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Let the PMD have control over the vport-id and rss-eng-id of a given VF
during initialization.
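
As a minimal sketch (caller context assumed; the argument list of
ecore_iov_init_hw_for_vf beyond p_hwfn follows the usual p_ptt/params
pattern), a PF initializing a VF can now choose both explicitly. The
sanity checks added below reject out-of-range ids and warn when vport 0
or RSS engine 0 is requested:

	struct ecore_iov_vf_init_params init_params;
	enum _ecore_status_t rc;

	OSAL_MEM_ZERO(&init_params, sizeof(init_params));
	init_params.rel_vf_id = vf_id;        /* vf_id/num_queues assumed */
	init_params.num_queues = num_queues;
	init_params.vport_id = vf_id + 1;     /* vport 0 is normally the PF's */
	init_params.rss_eng_id = vf_id + 1;   /* needed when num_queues > 1 */
	rc = ecore_iov_init_hw_for_vf(p_hwfn, p_ptt, &init_params);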

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |   15 ++++-------
 drivers/net/qede/base/ecore_sriov.c   |   46 +++++++++++++++++++++------------
 drivers/net/qede/base/ecore_sriov.h   |    2 +-
 3 files changed, 35 insertions(+), 28 deletions(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index b8dc47b..6a0fc5a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -103,6 +103,11 @@ struct ecore_iov_vf_init_params {
 	 */
 	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+
+	u8 vport_id;
+
+	/* Should be set in case RSS is going to be used for VF */
+	u8 rss_eng_id;
 };
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
@@ -417,16 +422,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 				  u16 *opaque_fid);
 
 /**
- * @brief Get VFs VPORT id.
- *
- * @param p_hwfn
- * @param vfid
- * @param vport id
- */
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vport_id);
-
-/**
  * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
  *        and configures FW/HW to support the configuration.
  *        Setting of pvid 0 would clear the feature.
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 4ffa8d0..20b51c4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -426,8 +426,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		return;
 	}
 
-	p_iov_info->base_vport_id = 1;	/* @@@TBD resource allocation */
-
 	for (idx = 0; idx < p_iov->total_vfs; idx++) {
 		struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
 		u32 concrete;
@@ -456,8 +454,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
 		    (vf->abs_vf_id << 8);
-		/* @@TBD MichalK - add base vport_id of VFs to equation */
-		vf->vport_id = p_iov_info->base_vport_id + idx;
 
 		vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 		vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -1019,6 +1015,34 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested vport/rss */
+	if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n",
+			  p_params->rel_vf_id, p_params->vport_id);
+		return ECORE_INVAL;
+	}
+
+	if ((p_params->num_queues > 1) &&
+	    (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n",
+			  p_params->rel_vf_id, p_params->rss_eng_id);
+		return ECORE_INVAL;
+	}
+
+	/* TODO - remove this once we get confidence of change */
+	if (!p_params->vport_id) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses vport0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	if ((!p_params->rss_eng_id) && (p_params->num_queues > 1)) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses RSS_eng0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	vf->vport_id = p_params->vport_id;
+	vf->rss_eng_id = p_params->rss_eng_id;
+
 	/* Perform sanity checking on the requested queue_id */
 	for (i = 0; i < p_params->num_queues; i++) {
 		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
@@ -2752,7 +2776,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 		VFPF_UPDATE_RSS_KEY_FLAG);
 
 	p_rss->rss_enable = p_rss_tlv->rss_enable;
-	p_rss->rss_eng_id = vf->relative_vf_id + 1;
+	p_rss->rss_eng_id = vf->rss_eng_id;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
@@ -3974,18 +3998,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 	*opaque_fid = vf_info->opaque_fid;
 }
 
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vort_id)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	*p_vort_id = vf_info->vport_id;
-}
-
 void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 					u16 pvid, int vfid)
 {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index d32f931..66e9271 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -111,6 +111,7 @@ struct ecore_vf_info {
 	u16			mtu;
 
 	u8			vport_id;
+	u8			rss_eng_id;
 	u8			relative_vf_id;
 	u8			abs_vf_id;
 #define ECORE_VF_ABS_ID(p_hwfn, p_vf)	(ECORE_PATH_ID(p_hwfn) ? \
@@ -155,7 +156,6 @@ struct ecore_pf_iov {
 	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
-	u16			base_vport_id;
 
 #ifndef REMOVE_DBG
 	/* This doesn't serve anything functionally, but it makes windows
-- 
1.7.10.3

* [PATCH v3 54/61] net/qede/base: add udp ports in bulletin board message
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the VXLAN and GENEVE UDP tunnel ports to the bulletin board message
so that the PF can publish them to its VFs.
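
On the VF side, the published values can then be read back from the
bulletin shadow with the accessor added below; a minimal sketch,
assuming a VF context where p_hwfn->vf_iov_info is valid:

	u16 vxlan_port = 0, geneve_port = 0;

	ecore_vf_bulletin_get_udp_ports(p_hwfn, &vxlan_port, &geneve_port);
	/* Both now hold whatever ports the PF last published */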

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |    2 ++
 drivers/net/qede/base/ecore_sriov.c   |   33 +++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c      |   12 ++++++++++++
 drivers/net/qede/base/ecore_vf_api.h  |    2 ++
 drivers/net/qede/base/ecore_vfpf_if.h |    5 ++++-
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 6a0fc5a..870c57e 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -716,6 +716,8 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
+				      u16 vxlan_port, u16 geneve_port);
 #endif /* CONFIG_ECORE_SRIOV */
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 20b51c4..532c492 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2253,6 +2253,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	bool b_update_required = false;
 	struct ecore_tunnel_info tunn;
 	u16 tunn_feature_mask = 0;
+	int i;
 
 	mbx->offset = (u8 *)mbx->reply_virt;
 
@@ -2300,11 +2301,20 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 
 	/* If ECORE client is willing to update anything ? */
 	if (b_update_required) {
+		u16 geneve_port;
+
 		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						 ECORE_SPQ_MODE_EBLOCK,
 						 OSAL_NULL);
 		if (rc != ECORE_SUCCESS)
 			status = PFVF_STATUS_FAILURE;
+
+		geneve_port = p_tun->geneve_port.port;
+		ecore_for_each_vf(p_hwfn, i) {
+			ecore_iov_bulletin_set_udp_ports(p_hwfn, i,
+							 p_tun->vxlan_port.port,
+							 geneve_port);
+		}
 	}
 
 send_resp:
@@ -4028,6 +4038,29 @@ void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
 }
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
+				      int vfid, u16 vxlan_port, u16 geneve_port)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set udp ports, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	if (vf_info->b_malicious) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can not set udp ports to malicious VF [%d]\n",
+			   vfid);
+		return;
+	}
+
+	vf_info->bulletin.p_virt->vxlan_udp_port = vxlan_port;
+	vf_info->bulletin.p_virt->geneve_udp_port = geneve_port;
+}
+
 bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index bf516cc..8ce9340 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1652,6 +1652,18 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 	return true;
 }
 
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port,
+				     u16 *p_geneve_port)
+{
+	struct ecore_bulletin_content *p_bulletin;
+
+	p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+
+	*p_vxlan_port = p_bulletin->vxlan_udp_port;
+	*p_geneve_port = p_bulletin->geneve_udp_port;
+}
+
 bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
 {
 	struct ecore_bulletin_content *bulletin;
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 77b93ff..a6e5f32 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -152,5 +152,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 			     u16 *fw_minor,
 			     u16 *fw_rev,
 			     u16 *fw_eng);
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port, u16 *p_geneve_port);
 #endif
 #endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e0b63bf..6618442 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -554,9 +554,12 @@ struct ecore_bulletin_content {
 	u8 pfc_enabled;
 	u8 partner_tx_flow_ctrl_en;
 	u8 partner_rx_flow_ctrl_en;
+
 	u8 partner_adv_pause;
 	u8 sfp_tx_fault;
-	u8 padding4[6];
+	u16 vxlan_udp_port;
+	u16 geneve_udp_port;
+	u8 padding4[2];
 
 	u32 speed;
 	u32 partner_adv_speed;
-- 
1.7.10.3

* [PATCH v3 55/61] net/qede/base: prevent DMAE transactions during recovery
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent DMA engine (DMAE) transactions during the recovery phase; such
requests are skipped and reported as successful so that flows can
complete without error handling.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_hw.c |   12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 396edc2..2bcc32d 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -773,6 +773,18 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t ecore_status = ECORE_SUCCESS;
 	u32 offset = 0;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
+			   (unsigned long)src_addr, src_type,
+			   (unsigned long)dst_addr, dst_type,
+			   size_in_dwords);
+		/* Return success to let the flow to be completed successfully
+		 * w/o any error handling.
+		 */
+		return ECORE_SUCCESS;
+	}
+
 	ecore_dmae_opcode(p_hwfn,
 			  (src_type == ECORE_DMAE_ADDRESS_GRC),
 			  (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
-- 
1.7.10.3

* [PATCH v3 56/61] net/qede/base: multi-Txq support on same queue-zone for VFs
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

A step toward having multi-Txq support on the same queue-zone for VFs.

This change takes care of:

 - VFs assume a single CID per-queue, where queue X receives CID X.
   Switch to a model similar to that of the PF, i.e., use different CIDs
   for Rx/Tx, and use mapping to acquire/release those. Each VF
   currently will have 32 CIDs available for it [for its possible 16
   Rx & 16 Tx queues].

 - To retain the same interface for PFs/VFs when initializing queues,
   the base driver would have to retain a unique number for each queue
   that would be communicated in some extended TLV [the current TLV
   interface allows the PF to send only the queue-id]. The new TLV isn't
   part of the current change, but the base driver would now start adding
   such unique keys internally to queue_cids. This would also force
   us to start having alloc/setup/free for L2 [we've refrained from
   doing so until now].
   The limit would be no more than 64 queues per qzone [this could be
   changed if needed, but hopefully no one needs so many queues].

 - In IOV, add infrastructure for up to 64 qids per qzone, although
   at the moment hard-code '0' for Rx and '1' for Tx [since the VF still
   isn't communicating via the new TLV which index to associate with a
   given queue in its queue-zone].
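
To make the acquire/release flow concrete, a minimal PF-side sketch
using the helpers added below (vfid 3 is arbitrary; passing
ECORE_CXT_PF_CID instead would use the PF's own map):

	u32 cid;
	enum _ecore_status_t rc;

	/* Take an ETH CID from VF 3's dedicated map... */
	rc = _ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &cid, 3);
	if (rc != ECORE_SUCCESS)
		return rc;
	/* ...and return it to the same per-VF map when done */
	_ecore_cxt_release_cid(p_hwfn, cid, 3);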

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    4 +
 drivers/net/qede/base/ecore_cxt.c     |  230 +++++++++++++++-----
 drivers/net/qede/base/ecore_cxt.h     |   53 ++++-
 drivers/net/qede/base/ecore_cxt_api.h |   13 --
 drivers/net/qede/base/ecore_dev.c     |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  248 ++++++++++++++++++---
 drivers/net/qede/base/ecore_l2.h      |   46 +++-
 drivers/net/qede/base/ecore_sriov.c   |  387 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_sriov.h   |   17 +-
 drivers/net/qede/base/ecore_vf.c      |    6 +
 drivers/net/qede/base/ecore_vf_api.h  |    9 +
 11 files changed, 794 insertions(+), 243 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 7379b3f..fab8193 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -200,6 +200,7 @@ struct ecore_cxt_mngr;
 struct ecore_dma_mem;
 struct ecore_sb_sp_info;
 struct ecore_ll2_info;
+struct ecore_l2_info;
 struct ecore_igu_info;
 struct ecore_mcp_info;
 struct ecore_dcbx_info;
@@ -598,6 +599,9 @@ struct ecore_hwfn {
 	/* If one of the following is set then EDPM shouldn't be used */
 	u8				dcbx_no_edpm;
 	u8				db_bar_no_edpm;
+
+	/* L2-related */
+	struct ecore_l2_info		*p_l2_info;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 691d638..f7b5672 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -8,6 +8,7 @@
 
 #include "bcm_osal.h"
 #include "reg_addr.h"
+#include "common_hsi.h"
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
 #include "ecore_rt_defs.h"
@@ -101,7 +102,6 @@ struct ecore_tid_seg {
 
 struct ecore_conn_type_cfg {
 	u32 cid_count;
-	u32 cid_start;
 	u32 cids_per_vf;
 	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
 };
@@ -197,6 +197,9 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
+	/* TBD - do we want this allocated to reserve space? */
+	struct ecore_cid_acquired_map
+		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1015,44 +1018,75 @@ ilt_shadow_fail:
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
+
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			OSAL_FREE(p_hwfn->p_dev,
+				  p_mngr->acquired_vf[type][vf].cid_map);
+			p_mngr->acquired_vf[type][vf].max_count = 0;
+			p_mngr->acquired_vf[type][vf].start_cid = 0;
+		}
 	}
 }
 
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+			   u32 cid_start, u32 cid_count,
+			   struct ecore_cid_acquired_map *p_map)
+{
+	u32 size;
+
+	if (!cid_count)
+		return ECORE_SUCCESS;
+
+	size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_count, BITS_PER_MAP_WORD);
+	p_map->cid_map = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
+	if (p_map->cid_map == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	p_map->max_count = cid_count;
+	p_map->start_cid = cid_start;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Type %08x start: %08x count %08x\n",
+		   type, p_map->start_cid, p_map->max_count);
+
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 start_cid = 0;
-	u32 type;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 size;
-
-		if (cid_cnt == 0)
-			continue;
+		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
+		struct ecore_cid_acquired_map *p_map;
 
-		size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD);
-		p_mngr->acquired[type].cid_map = OSAL_ZALLOC(p_hwfn->p_dev,
-							     GFP_KERNEL, size);
-		if (!p_mngr->acquired[type].cid_map)
+		/* Handle PF maps */
+		p_map = &p_mngr->acquired[type];
+		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					       p_cfg->cid_count, p_map))
 			goto cid_map_fail;
 
-		p_mngr->acquired[type].max_count = cid_cnt;
-		p_mngr->acquired[type].start_cid = start_cid;
-
-		p_hwfn->p_cxt_mngr->conn_cfg[type].cid_start = start_cid;
+		/* Handle VF maps */
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			if (ecore_cid_map_alloc_single(p_hwfn, type,
+						       vf_start_cid,
+						       p_cfg->cids_per_vf,
+						       p_map))
+				goto cid_map_fail;
+		}
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
-			   "Type %08x start: %08x count %08x\n",
-			   type, p_mngr->acquired[type].start_cid,
-			   p_mngr->acquired[type].max_count);
-		start_cid += cid_cnt;
+		start_cid += p_cfg->cid_count;
+		vf_start_cid += p_cfg->cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
@@ -1171,18 +1205,34 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
 	int type;
+	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 i;
+		u32 vf;
+
+		p_cfg = &p_mngr->conn_cfg[type];
+		if (p_cfg->cid_count) {
+			p_map = &p_mngr->acquired[type];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 
-		if (cid_cnt == 0)
+		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (i = 0; i < DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD); i++)
-			p_mngr->acquired[type].cid_map[i] = 0;
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 	}
 }
 
@@ -1723,93 +1773,150 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
 	ecore_prs_init_pf(p_hwfn);
 }
 
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
-					   enum protocol_type type, u32 *p_cid)
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
 	u32 rel_cid;
 
-	if (type >= MAX_CONN_TYPES || !p_mngr->acquired[type].cid_map) {
+	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_mngr->acquired[type].cid_map,
-					   p_mngr->acquired[type].max_count);
+	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Determine the right map to take this CID from */
+	if (vfid == ECORE_CXT_PF_CID)
+		p_map = &p_mngr->acquired[type];
+	else
+		p_map = &p_mngr->acquired_vf[type][vfid];
 
-	if (rel_cid >= p_mngr->acquired[type].max_count) {
+	if (p_map->cid_map == OSAL_NULL) {
+		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
+		return ECORE_INVAL;
+	}
+
+	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_map->cid_map,
+					   p_map->max_count);
+
+	if (rel_cid >= p_map->max_count) {
 		DP_NOTICE(p_hwfn, false, "no CID available for protocol %d\n",
 			  type);
 		return ECORE_NORESOURCES;
 	}
 
-	OSAL_SET_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	OSAL_SET_BIT(rel_cid, p_map->cid_map);
 
-	*p_cid = rel_cid + p_mngr->acquired[type].start_cid;
+	*p_cid = rel_cid + p_map->start_cid;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Acquired cid 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   *p_cid, rel_cid, vfid, type);
 
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid)
+{
+	return _ecore_cxt_acquire_cid(p_hwfn, type, p_cid, ECORE_CXT_PF_CID);
+}
+
 static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
-					u32 cid, enum protocol_type *p_type)
+					u32 cid, u8 vfid,
+					enum protocol_type *p_type,
+					struct ecore_cid_acquired_map **pp_map)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	struct ecore_cid_acquired_map *p_map;
-	enum protocol_type p;
 	u32 rel_cid;
 
 	/* Iterate over protocols and find matching cid range */
-	for (p = 0; p < MAX_CONN_TYPES; p++) {
-		p_map = &p_mngr->acquired[p];
+	for (*p_type = 0; *p_type < MAX_CONN_TYPES; (*p_type)++) {
+		if (vfid == ECORE_CXT_PF_CID)
+			*pp_map = &p_mngr->acquired[*p_type];
+		else
+			*pp_map = &p_mngr->acquired_vf[*p_type][vfid];
 
-		if (!p_map->cid_map)
+		if (!((*pp_map)->cid_map))
 			continue;
-		if (cid >= p_map->start_cid &&
-		    cid < p_map->start_cid + p_map->max_count) {
+		if (cid >= (*pp_map)->start_cid &&
+		    cid < (*pp_map)->start_cid + (*pp_map)->max_count) {
 			break;
 		}
 	}
-	*p_type = p;
-
-	if (p == MAX_CONN_TYPES) {
-		DP_NOTICE(p_hwfn, true, "Invalid CID %d", cid);
-		return false;
+	if (*p_type == MAX_CONN_TYPES) {
+		DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid);
+		goto fail;
 	}
-	rel_cid = cid - p_map->start_cid;
-	if (!OSAL_TEST_BIT(rel_cid, p_map->cid_map)) {
-		DP_NOTICE(p_hwfn, true, "CID %d not acquired", cid);
-		return false;
+
+	rel_cid = cid - (*pp_map)->start_cid;
+	if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) {
+		DP_NOTICE(p_hwfn, true,
+			  "CID %d [vifd %02x] not acquired", cid, vfid);
+		goto fail;
 	}
+
 	return true;
+fail:
+	*p_type = MAX_CONN_TYPES;
+	*pp_map = OSAL_NULL;
+	return false;
 }
 
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
 	u32 rel_cid;
 
+	if (vfid != ECORE_CXT_PF_CID && vfid > COMMON_MAX_NUM_VFS) {
+		DP_NOTICE(p_hwfn, true,
+			  "Trying to return incorrect CID belonging to VF %02x\n",
+			  vfid);
+		return;
+	}
+
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, vfid,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return;
 
-	rel_cid = cid - p_mngr->acquired[type].start_cid;
-	OSAL_CLEAR_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	rel_cid = cid - p_map->start_cid;
+	OSAL_CLEAR_BIT(rel_cid, p_map->cid_map);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Released CID 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   cid, rel_cid, vfid, type);
+}
+
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+{
+	_ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
 }
 
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	u32 conn_cxt_size, hw_p_size, cxts_per_p, line;
 	enum protocol_type type;
 	bool b_acquired;
 
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid,
+						 ECORE_CXT_PF_CID,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return ECORE_INVAL;
@@ -1865,9 +1972,14 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			struct ecore_eth_pf_params *p_params =
 			    &p_hwfn->pf_params.eth_pf_params;
 
+			/* TODO - we probably want to add VF number to the PF
+			 * params;
+			 * As of now, allocates 16 * 2 per-VF [to retain regular
+			 * functionality].
+			 */
 			ecore_cxt_set_proto_cid_count(p_hwfn,
 				PROTOCOLID_ETH,
-				p_params->num_cons, 1);	/* FIXME VF count... */
+				p_params->num_cons, 32);
 
 			break;
 		}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 5379d7b..1128051 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -130,14 +130,53 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn);
 enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
+#define ECORE_CXT_PF_CID (0xff)
+
+/**
+ * @brief ecore_cxt_release - Release a cid
+ *
+ * @param p_hwfn
+ * @param cid
+ */
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
+
 /**
-* @brief ecore_cxt_release - Release a cid
-*
-* @param p_hwfn
-* @param cid
-*/
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
-			   u32 cid);
+ * @brief ecore_cxt_release - Release a cid belonging to a vf-queue
+ *
+ * @param p_hwfn
+ * @param cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ */
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
+			    u32 cid, u8 vfid);
+
+/**
+ * @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid);
+
+/**
+ * @brief _ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *                             for a vf-queue
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid);
 
 /**
  * @brief ecore_cxt_get_tid_mem_info - function checks if the
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 6a50412..f154e0d 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -26,19 +26,6 @@ struct ecore_tid_mem {
 };
 
 /**
-* @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
-*
-* @param p_hwfn
-* @param type
-* @param p_cid
-*
-* @return enum _ecore_status_t
-*/
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn  *p_hwfn,
-					   enum protocol_type type,
-					   u32 *p_cid);
-
-/**
 * @brief ecoreo_cid_get_cxt_info - Returns the context info for a specific cid
 *
 *
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e584058..2a621f7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -146,8 +146,11 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_free(&p_dev->hwfns[i]);
 		return;
+	}
 
 	OSAL_FREE(p_dev, p_dev->fw_data);
 
@@ -163,6 +166,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
 		ecore_iov_free(p_hwfn);
+		ecore_l2_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
@@ -839,8 +843,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i) {
+			rc = ecore_l2_alloc(&p_dev->hwfns[i]);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		}
 		return rc;
+	}
 
 	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 				     sizeof(*p_dev->fw_data));
@@ -961,6 +971,10 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_l2_alloc(p_hwfn);
+		if (rc != ECORE_SUCCESS)
+			goto alloc_err;
+
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
@@ -999,8 +1013,11 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_setup(&p_dev->hwfns[i]);
 		return;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1018,6 +1035,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
+		ecore_l2_setup(p_hwfn);
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
 	}
 }
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 4d26e19..adb5e47 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,24 +29,172 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+struct ecore_l2_info {
+	u32 queues;
+	unsigned long **pp_qid_usage;
+
+	/* The lock is meant to synchronize access to the qid usage */
+	osal_mutex_t lock;
+};
+
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_l2_info *p_l2_info;
+	unsigned long **pp_qids;
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return ECORE_SUCCESS;
+
+	p_l2_info = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_l2_info));
+	if (!p_l2_info)
+		return ECORE_NOMEM;
+	p_hwfn->p_l2_info = p_l2_info;
+
+	if (IS_PF(p_hwfn->p_dev)) {
+		p_l2_info->queues = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
+	} else {
+		u8 rx = 0, tx = 0;
+
+		ecore_vf_get_num_rxqs(p_hwfn, &rx);
+		ecore_vf_get_num_txqs(p_hwfn, &tx);
+
+		p_l2_info->queues = (u32)OSAL_MAX_T(u8, rx, tx);
+	}
+
+	pp_qids = OSAL_VZALLOC(p_hwfn->p_dev,
+			       sizeof(unsigned long *) *
+			       p_l2_info->queues);
+	if (pp_qids == OSAL_NULL)
+		return ECORE_NOMEM;
+	p_l2_info->pp_qid_usage = pp_qids;
+
+	for (i = 0; i < p_l2_info->queues; i++) {
+		pp_qids[i] = OSAL_VZALLOC(p_hwfn->p_dev,
+					  MAX_QUEUES_PER_QZONE / 8);
+		if (pp_qids[i] == OSAL_NULL)
+			return ECORE_NOMEM;
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_MUTEX_ALLOC(p_hwfn, &p_l2_info->lock);
+#endif
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn)
+{
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	OSAL_MUTEX_INIT(&p_hwfn->p_l2_info->lock);
+}
+
+void ecore_l2_free(struct ecore_hwfn *p_hwfn)
+{
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	if (p_hwfn->p_l2_info == OSAL_NULL)
+		return;
+
+	if (p_hwfn->p_l2_info->pp_qid_usage == OSAL_NULL)
+		goto out_l2_info;
+
+	/* Free until hit first uninitialized entry */
+	for (i = 0; i < p_hwfn->p_l2_info->queues; i++) {
+		if (p_hwfn->p_l2_info->pp_qid_usage[i] == OSAL_NULL)
+			break;
+		OSAL_VFREE(p_hwfn->p_dev,
+			   p_hwfn->p_l2_info->pp_qid_usage[i]);
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	/* Lock is last to initialize, if everything else was */
+	if (i == p_hwfn->p_l2_info->queues)
+		OSAL_MUTEX_DEALLOC(&p_hwfn->p_l2_info->lock);
+#endif
+
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info->pp_qid_usage);
+
+out_l2_info:
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info);
+	p_hwfn->p_l2_info = OSAL_NULL;
+}
+
+/* TODO - we'll need locking around these... */
+static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	struct ecore_l2_info *p_l2_info = p_hwfn->p_l2_info;
+	u16 queue_id = p_cid->rel.queue_id;
+	bool b_rc = true;
+	u8 first;
+
+	OSAL_MUTEX_ACQUIRE(&p_l2_info->lock);
+
+	if (queue_id > p_l2_info->queues) {
+		DP_NOTICE(p_hwfn, true,
+			  "Requested to increase usage for qzone %04x out of %08x\n",
+			  queue_id, p_l2_info->queues);
+		b_rc = false;
+		goto out;
+	}
+
+	first = (u8)OSAL_FIND_FIRST_ZERO_BIT(p_l2_info->pp_qid_usage[queue_id],
+					     MAX_QUEUES_PER_QZONE);
+	if (first >= MAX_QUEUES_PER_QZONE) {
+		b_rc = false;
+		goto out;
+	}
+
+	OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
+	p_cid->qid_usage_idx = first;
+
+out:
+	OSAL_MUTEX_RELEASE(&p_l2_info->lock);
+	return b_rc;
+}
+
+static void ecore_eth_queue_qid_usage_del(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_l2_info->lock);
+
+	OSAL_CLEAR_BIT(p_cid->qid_usage_idx,
+		       p_hwfn->p_l2_info->pp_qid_usage[p_cid->rel.queue_id]);
+
+	OSAL_MUTEX_RELEASE(&p_hwfn->p_l2_info->lock);
+}
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid)
 {
+	/* For VF-queues, stuff is a bit complicated as:
+	 *  - They always maintain the qid_usage on their own.
+	 *  - In legacy mode, they also maintain their CIDs.
+	 */
+
 	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
-	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	if (IS_PF(p_hwfn->p_dev) && !p_cid->b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, p_cid->cid, p_cid->vfid);
+	if (!p_cid->b_legacy_vf)
+		ecore_eth_queue_qid_usage_del(p_hwfn, p_cid);
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
 /* The internal is only meant to be directly called by PFs initializing CIDs
  * for their VFs.
  */
-struct ecore_queue_cid *
+static struct ecore_queue_cid *
 _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params)
+			u16 opaque_fid, u32 cid,
+			struct ecore_queue_start_common_params *p_params,
+			struct ecore_queue_cid_vf_params *p_vf_params)
 {
-	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
@@ -56,13 +204,22 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
-	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
 	p_cid->p_owner = p_hwfn;
 
+	/* Fill-in bits related to VFs' queues if information was provided */
+	if (p_vf_params != OSAL_NULL) {
+		p_cid->vfid = p_vf_params->vfid;
+		p_cid->vf_qid = p_vf_params->vf_qid;
+		p_cid->b_legacy_vf = p_vf_params->b_legacy;
+	} else {
+		p_cid->vfid = ECORE_QUEUE_CID_PF;
+	}
+
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
 		p_cid->abs = p_cid->rel;
+
 		goto out;
 	}
 
@@ -82,7 +239,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	/* In case of a PF configuring its VF's queues, the stats-id is already
 	 * absolute [since there's a single index that's suitable per-VF].
 	 */
-	if (b_is_same) {
+	if (p_cid->vfid == ECORE_QUEUE_CID_PF) {
 		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
 				    &p_cid->abs.stats_id);
 		if (rc != ECORE_SUCCESS)
@@ -95,17 +252,23 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->abs.sb = p_cid->rel.sb;
 	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
 
-	/* This is tricky - we're actually interested in whehter this is a PF
-	 * entry meant for the VF.
-	 */
-	if (!b_is_same)
-		p_cid->is_vf = true;
 out:
+	/* VF-images have provided the qid_usage_idx on their own.
+	 * Otherwise, we need to allocate a unique one.
+	 */
+	if (!p_vf_params) {
+		if (!ecore_eth_queue_qid_usage_add(p_hwfn, p_cid))
+			goto fail;
+	} else {
+		p_cid->qid_usage_idx = p_vf_params->qid_usage_idx;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x.%02x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
 		   p_cid->opaque_fid, p_cid->cid,
 		   p_cid->rel.vport_id, p_cid->abs.vport_id,
-		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.queue_id,	p_cid->qid_usage_idx,
+		   p_cid->abs.queue_id,
 		   p_cid->rel.stats_id, p_cid->abs.stats_id,
 		   p_cid->abs.sb, p_cid->abs.sb_idx);
 
@@ -116,33 +279,56 @@ fail:
 	return OSAL_NULL;
 }
 
-static struct ecore_queue_cid *
-ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-		       u16 opaque_fid,
-		       struct ecore_queue_start_common_params *p_params)
+struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params)
 {
 	struct ecore_queue_cid *p_cid;
+	u8 vfid = ECORE_CXT_PF_CID;
+	bool b_legacy_vf = false;
 	u32 cid = 0;
 
+	/* In case of legacy VFs, the CID can be derived from the additional
+	 * VF parameters - the VF assumes queue X uses CID X, so we can simply
+	 * use the vf_qid for this purpose as well.
+	 */
+	if (p_vf_params) {
+		vfid = p_vf_params->vfid;
+
+		if (p_vf_params->b_legacy) {
+			b_legacy_vf = true;
+			cid = p_vf_params->vf_qid;
+		}
+	}
+
 	/* Get a unique firmware CID for this queue, in case it's a PF.
 	 * VFs don't need a CID as the queue configuration will be done
 	 * by PF.
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
-		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-					  &cid) != ECORE_SUCCESS) {
+	if (IS_PF(p_hwfn->p_dev) && !b_legacy_vf) {
+		if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					   &cid, vfid) != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
 			return OSAL_NULL;
 		}
 	}
 
-	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
-	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, cid);
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid,
+					p_params, p_vf_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev) && !b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, cid, vfid);
 
 	return p_cid;
 }
 
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			  struct ecore_queue_start_common_params *p_params)
+{
+	return ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params, OSAL_NULL);
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -741,7 +927,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_cid->is_vf) {
+	if (p_cid->vfid != ECORE_QUEUE_CID_PF) {
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
@@ -793,7 +979,7 @@ ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 
 	/* Allocate a CID for the queue */
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_NOMEM;
 
@@ -905,9 +1091,11 @@ ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+	p_ramrod->complete_cqe_flg = ((p_cid->vfid == ECORE_QUEUE_CID_PF) &&
+				      !b_eq_completion_only) ||
 				     b_cqe_completion;
-	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
+	p_ramrod->complete_event_flg = (p_cid->vfid != ECORE_QUEUE_CID_PF) ||
+				       b_eq_completion_only;
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
@@ -1007,7 +1195,7 @@ ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_INVAL;
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 4b0ccb4..3f86eac 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,6 +15,34 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
+#define MAX_QUEUES_PER_QZONE	(sizeof(unsigned long) * 8)
+#define ECORE_QUEUE_CID_PF	(0xff)
+
+/* Additional parameters required for initialization of the queue_cid
+ * and are relevant only for a PF initializing one for its VFs.
+ */
+struct ecore_queue_cid_vf_params {
+	/* Should match the VF's relative index */
+	u8 vfid;
+
+	/* 0-based queue index. Should reflect the relative qzone the
+	 * VF thinks is associated with it [in its range].
+	 */
+	u8 vf_qid;
+
+	/* Indicates a VF is legacy, making it differ in several things:
+	 *  - Producers would be placed in a different place.
+	 *  - Makes assumptions regarding the CIDs.
+	 */
+	bool b_legacy;
+
+	/* For VFs, this index arrives via TLV to differentiate between
+	 * different queues opened on the same qzone; it is passed as-is
+	 * [where the PF would have allocated one internally for its own].
+	 */
+	u8 qid_usage_idx;
+};
+
 struct ecore_queue_cid {
 	/* 'Relative' is a relative term ;-). Usually the indices [not counting
 	 * SBs] would be PF-relative, but there are some cases where that isn't
@@ -31,22 +59,32 @@ struct ecore_queue_cid {
 	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
 	 * and not on the VF itself.
 	 */
-	bool is_vf;
+	u8 vfid;
 	u8 vf_qid;
 
+	/* We need an additional index to differentiate between queues opened
+	 * for the same queue-zone, as VFs would have to communicate the info
+	 * to the PF [otherwise the PF has no way to differentiate].
+	 */
+	u8 qid_usage_idx;
+
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
 
 	struct ecore_hwfn *p_owner;
 };
 
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn);
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn);
+void ecore_l2_free(struct ecore_hwfn *p_hwfn);
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid);
 
 struct ecore_queue_cid *
-_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params);
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 532c492..39d3e88 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -192,28 +192,90 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 	return vf;
 }
 
+static struct ecore_queue_cid *
+ecore_iov_get_vf_rx_queue_cid(struct ecore_hwfn *p_hwfn,
+			      struct ecore_vf_info *p_vf,
+			      struct ecore_vf_queue *p_queue)
+{
+	int i;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		if (p_queue->cids[i].p_cid &&
+		    !p_queue->cids[i].b_is_tx)
+			return p_queue->cids[i].p_cid;
+	}
+
+	return OSAL_NULL;
+}
+
+enum ecore_iov_validate_q_mode {
+	ECORE_IOV_VALIDATE_Q_NA,
+	ECORE_IOV_VALIDATE_Q_ENABLE,
+	ECORE_IOV_VALIDATE_Q_DISABLE,
+};
+
+static bool ecore_iov_validate_queue_mode(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf,
+					  u16 qid,
+					  enum ecore_iov_validate_q_mode mode,
+					  bool b_is_tx)
+{
+	int i;
+
+	if (mode == ECORE_IOV_VALIDATE_Q_NA)
+		return true;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		struct ecore_vf_queue_cid *p_qcid;
+
+		p_qcid = &p_vf->vf_queues[qid].cids[i];
+
+		if (p_qcid->p_cid == OSAL_NULL)
+			continue;
+
+		if (p_qcid->b_is_tx != b_is_tx)
+			continue;
+
+		/* Found. It's enabled. */
+		return (mode == ECORE_IOV_VALIDATE_Q_ENABLE);
+	}
+
+	/* In case we haven't found any valid CID, the queue is disabled */
+	return (mode == ECORE_IOV_VALIDATE_Q_DISABLE);
+}
+
 static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 rx_qid)
+				   u16 rx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (rx_qid >= p_vf->num_rxqs)
+	if (rx_qid >= p_vf->num_rxqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Rx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
-	return rx_qid < p_vf->num_rxqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, rx_qid,
+					     mode, false);
 }
 
 static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 tx_qid)
+				   u16 tx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (tx_qid >= p_vf->num_txqs)
+	if (tx_qid >= p_vf->num_txqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Tx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
-	return tx_qid < p_vf->num_txqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, tx_qid,
+					     mode, true);
 }
 
 static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
@@ -234,13 +296,16 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+/* Is there at least 1 queue open? */
 static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_rx_cid)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  false))
 			return true;
 
 	return false;
@@ -251,8 +316,10 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 {
 	u8 i;
 
-	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_tx_cid)
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  true))
 			return true;
 
 	return false;
@@ -1095,19 +1162,15 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[i];
 
 		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
 		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
-		/* CIDs are per-VF, so no problem having them 0-based. */
-		p_queue->fw_cid = i;
-
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]\n",
 			   vf->relative_vf_id, i, vf->igu_sbs[i],
-			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
-			   p_queue->fw_cid);
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid);
 	}
 
 	/* Update the link configuration in bulletin.
@@ -1443,7 +1506,7 @@ struct ecore_public_vf_info
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
-	u32 i;
+	u32 i, j;
 	p_vf->vf_bulletin = 0;
 	p_vf->vport_instance = 0;
 	p_vf->configured_features = 0;
@@ -1455,18 +1518,15 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 
-		if (p_queue->p_rx_cid) {
-			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_rx_cid);
-			p_queue->p_rx_cid = OSAL_NULL;
-		}
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (!p_queue->cids[j].p_cid)
+				continue;
 
-		if (p_queue->p_tx_cid) {
 			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_tx_cid);
-			p_queue->p_tx_cid = OSAL_NULL;
+						    p_queue->cids[j].p_cid);
+			p_queue->cids[j].p_cid = OSAL_NULL;
 		}
 	}
 
@@ -1481,7 +1541,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 					struct vf_pf_resc_request *p_req,
 					struct pf_vf_resc *p_resp)
 {
-	int i;
+	u8 i;
 
 	/* Queue related information */
 	p_resp->num_rxqs = p_vf->num_rxqs;
@@ -1502,7 +1562,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_resp->num_rxqs; i++) {
 		ecore_fw_l2_queue(p_hwfn, p_vf->vf_queues[i].fw_rx_qid,
 				  (u16 *)&p_resp->hw_qid[i]);
-		p_resp->cid[i] = p_vf->vf_queues[i].fw_cid;
+		p_resp->cid[i] = i;
 	}
 
 	/* Filter related information */
@@ -1905,9 +1965,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			struct ecore_queue_cid *p_cid;
+			struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
+			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
-			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			/* There can be at most 1 Rx queue per qzone. Find it */
+			p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, p_vf,
+							      p_queue);
 			if (p_cid == OSAL_NULL)
 				continue;
 
@@ -2113,19 +2176,32 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
+	struct ecore_queue_cid *p_cid;
 	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid,
+				    ECORE_IOV_VALIDATE_Q_DISABLE) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Legacy VFs made assumptions about the CIDs their queues connected
+	 * to - namely, that queue X used CID X.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->rx_qid];
 
@@ -2136,39 +2212,42 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->rx_qid,
-						    &params);
-	if (p_queue->p_rx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '0' for Rx.
+	 */
+	qid_usage_idx = 0;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->rx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
-	else
+	if (!b_legacy_vf)
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
-	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-
-	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
-					p_queue->p_rx_cid,
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
 					req->bd_max_bytes,
 					req->rxq_addr,
 					req->cqe_pbl_addr,
 					req->cqe_pbl_size);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
-		p_queue->p_rx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = false;
 		status = PFVF_STATUS_SUCCESS;
 		vf->num_active_rxqs++;
 	}
@@ -2331,6 +2410,7 @@ send_resp:
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
+					    u32 cid,
 					    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
@@ -2359,12 +2439,8 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
-	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
-		u16 qid = mbx->req_virt->start_txq.tx_qid;
-
-		p_tlv->offset = DB_ADDR_VF(p_vf->vf_queues[qid].fw_cid,
-					   DQ_DEMS_LEGACY);
-	}
+	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy)
+		p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
 }
@@ -2374,20 +2450,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
+	struct ecore_queue_cid *p_cid;
+	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
+	u32 cid = 0;
 	enum _ecore_status_t rc;
 	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
+	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* In case this is a legacy VF - need to know in order to use the right CIDs.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->tx_qid];
 
@@ -2397,29 +2487,42 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->tx_qid,
-						    &params);
-	if (p_queue->p_tx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '1' for Tx.
+	 */
+	qid_usage_idx = 1;
+
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->tx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
 				    vf->relative_vf_id);
-	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
 					req->pbl_addr, req->pbl_size, pq);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn,
-					    p_queue->p_tx_cid);
-		p_queue->p_tx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
 		status = PFVF_STATUS_SUCCESS;
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = true;
+		cid = p_cid->cid;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf, status);
+	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf,
+					cid, status);
 }
 
 static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -2428,26 +2531,38 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
-	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int qid;
+	int qid, i;
 
+	/* TODO - improve validation [wrap around] */
 	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		p_queue = &vf->vf_queues[qid];
-
-		if (!p_queue->p_rx_cid)
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+		struct ecore_queue_cid **pp_cid = OSAL_NULL;
+
+		/* There can be at most a single Rx per qzone. Find it */
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid &&
+			    !p_queue->cids[i].b_is_tx) {
+				pp_cid = &p_queue->cids[i].p_cid;
+				break;
+			}
+		}
+		if (pp_cid == OSAL_NULL) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Ignoring VF[%02x] request to close Rx queue %04x - already closed\n",
+				   vf->relative_vf_id, qid);
 			continue;
+		}
 
-		rc = ecore_eth_rx_queue_stop(p_hwfn,
-					     p_queue->p_rx_cid,
+		rc = ecore_eth_rx_queue_stop(p_hwfn, *pp_cid,
 					     false, cqe_completion);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
+		*pp_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2459,24 +2574,33 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_vf_q_info *p_queue;
-	int qid;
+	struct ecore_vf_queue *p_queue;
+	int qid, j;
 
-	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+	if (!ecore_iov_validate_txq(p_hwfn, vf, txq_id,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
+	    !ecore_iov_validate_txq(p_hwfn, vf, txq_id + num_txqs,
+				    ECORE_IOV_VALIDATE_Q_NA))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
 		p_queue = &vf->vf_queues[qid];
-		if (!p_queue->p_tx_cid)
-			continue;
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (p_queue->cids[j].p_cid == OSAL_NULL)
+				continue;
 
-		rc = ecore_eth_tx_queue_stop(p_hwfn,
-					     p_queue->p_tx_cid);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+			if (!p_queue->cids[j].b_is_tx)
+				continue;
+
+			rc = ecore_eth_tx_queue_stop(p_hwfn,
+						     p_queue->cids[j].p_cid);
+			if (rc != ECORE_SUCCESS)
+				return rc;
 
-		p_queue->p_tx_cid = OSAL_NULL;
+			p_queue->cids[j].p_cid = OSAL_NULL;
+		}
 	}
+
 	return rc;
 }
 
@@ -2538,33 +2662,32 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
-	u16 qid;
 	enum _ecore_status_t rc;
-	u8 i;
+	u16 i;
 
 	req = &mbx->req_virt->update_rxq;
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
-	/* Validaute inputs */
-	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
-	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
-		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
-			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
-		goto out;
+	/* Validate inputs */
+	for (i = req->rx_qid; i < req->rx_qid + req->num_rxqs; i++) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, i,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+				   vf->relative_vf_id, req->rx_qid,
+				   req->num_rxqs);
+			goto out;
+		}
 	}
 
 	for (i = 0; i < req->num_rxqs; i++) {
-		qid = req->rx_qid + i;
-
-		if (!vf->vf_queues[qid].p_rx_cid) {
-			DP_INFO(p_hwfn,
-				"VF[%d] rx_qid = %d isn`t active!\n",
-				vf->relative_vf_id, qid);
-			goto out;
-		}
+		struct ecore_vf_queue *p_queue;
+		u16 qid = req->rx_qid + i;
 
-		handlers[i] = vf->vf_queues[qid].p_rx_cid;
+		p_queue = &vf->vf_queues[qid];
+		handlers[i] = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+							    p_queue);
 	}
 
 	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
@@ -2796,8 +2919,11 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				(1 << p_rss_tlv->rss_table_size_log));
 
 	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_cid;
+
 		q_idx = p_rss_tlv->rss_ind_table[i];
-		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
 				   vf->relative_vf_id, q_idx);
@@ -2805,15 +2931,9 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 
-		if (!vf->vf_queues[q_idx].p_rx_cid) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
-				   vf->relative_vf_id, q_idx);
-			b_reject = true;
-			goto out;
-		}
-
-		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[q_idx]);
+		p_rss->rss_ind_table[i] = p_cid;
 	}
 
 	p_data->rss_params = p_rss;
@@ -3272,22 +3392,26 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	struct ecore_queue_cid *p_cid;
 	u16 rx_coal, tx_coal;
-	u16  qid;
+	u16 qid;
+	int i;
 
 	req = &mbx->req_virt->update_coalesce;
 
 	rx_coal = req->rx_coal;
 	tx_coal = req->tx_coal;
 	qid = req->qid;
-	p_cid = vf->vf_queues[qid].p_rx_cid;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    rx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
 	}
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    tx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
@@ -3296,7 +3420,11 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+
 	if (rx_coal) {
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[qid]);
+
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -3305,13 +3433,28 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 	}
+
+	/* TODO - in the future it might be possible to pass this at per-CID
+	 * granularity. For now, do this for all Tx queues.
+	 */
 	if (tx_coal) {
-		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
-		if (rc != ECORE_SUCCESS) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
-				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
-			goto out;
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid == OSAL_NULL)
+				continue;
+
+			if (!p_queue->cids[i].b_is_tx)
+				continue;
+
+			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
+						    p_queue->cids[i].p_cid);
+			if (rc != ECORE_SUCCESS) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+					   "VF[%d]: Unable to set tx queue coalesce\n",
+					   vf->abs_vf_id);
+				goto out;
+			}
 		}
 	}
 
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 66e9271..3c2f58b 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -13,6 +13,7 @@
 #include "ecore_vfpf_if.h"
 #include "ecore_iov_api.h"
 #include "ecore_hsi_common.h"
+#include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
 	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
@@ -62,12 +63,18 @@ struct ecore_iov_vf_mbx {
 					 */
 };
 
-struct ecore_vf_q_info {
+struct ecore_vf_queue_cid {
+	bool b_is_tx;
+	struct ecore_queue_cid *p_cid;
+};
+
+/* Describes a qzone associated with the VF */
+struct ecore_vf_queue {
+	/* Input from upper-layer, mapping relative queue to queue-zone */
 	u16 fw_rx_qid;
-	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
-	struct ecore_queue_cid *p_tx_cid;
-	u8 fw_cid;
+
+	struct ecore_vf_queue_cid cids[MAX_QUEUES_PER_QZONE];
 };
 
 enum vf_state {
@@ -127,7 +134,7 @@ struct ecore_vf_info {
 	u8			num_mac_filters;
 	u8			num_vlan_filters;
 
-	struct ecore_vf_q_info	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_vf_queue	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
 	/* TODO - Only windows is using it - should be removed */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 8ce9340..ac72681 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1582,6 +1582,12 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
 	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
 }
 
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs)
+{
+	*num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs;
+}
+
 void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
 {
 	OSAL_MEMCPY(port_mac,
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index a6e5f32..be3a326 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -61,6 +61,15 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
 			   u8 *num_rxqs);
 
 /**
+ * @brief Get number of Tx queues allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_txqs - allocated TX queues
+ */
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs);
+
+/**
  * @brief Get port mac address for VF
  *
  * @param p_hwfn
-- 
1.7.10.3

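The qid-usage scheme introduced above boils down to one bitmap per
queue-zone, where each set bit marks one queue opened on that qzone.
Below is a minimal, self-contained sketch of the idea; the *_demo names
are illustrative only and are not part of the driver:

#include <stdio.h>

#define QZONE_BITS_DEMO (sizeof(unsigned long) * 8)

static unsigned long qzone_usage_demo; /* bit i set => index i taken */

static int qid_usage_add_demo(void)
{
	unsigned int i;

	for (i = 0; i < QZONE_BITS_DEMO; i++)
		if (!(qzone_usage_demo & (1UL << i))) {
			qzone_usage_demo |= 1UL << i; /* claim index */
			return (int)i;
		}
	return -1; /* qzone exhausted */
}

static void qid_usage_del_demo(int idx)
{
	qzone_usage_demo &= ~(1UL << idx);
}

int main(void)
{
	int rx = qid_usage_add_demo(); /* e.g. the Rx queue takes index 0 */
	int tx = qid_usage_add_demo(); /* a Tx queue on the same qzone    */

	printf("rx idx %d, tx idx %d\n", rx, tx);
	qid_usage_del_demo(tx);
	qid_usage_del_demo(rx);
	return 0;
}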

* [PATCH v3 57/61] net/qede/base: prevent race condition during unload
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (56 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 56/61] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 58/61] net/qede/base: semantic changes Rasesh Mody
                         ` (4 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Merge hw_stop and hw_reset into a single function.
Prevent a race between MFW attentions and the PF stop command during the
unload flow, which could otherwise cause an ASSERT.
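
The resulting per-hwfn teardown ordering is roughly the following (a
sketch only - error handling, recovery-mode checks and the NIG/IGU
cleanup steps are elided):

	ecore_mcp_unload_req(p_hwfn, p_ptt);  /* tell MFW unload begins  */
	OSAL_DPC_SYNC(p_hwfn);                /* flush in-flight DPCs    */
	/* no MFW attentions past this point - no dcbx vs. pf-stop race */
	ecore_sp_pf_stop(p_hwfn);             /* PF stop ramrod          */
	ecore_mcp_unload_done(p_hwfn, p_ptt); /* tell MFW unload is done */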

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    1 +
 drivers/net/qede/base/ecore_dev.c     |  175 ++++++++++++++++-----------------
 drivers/net/qede/base/ecore_dev_api.h |    9 --
 drivers/net/qede/base/ecore_mcp.c     |   12 +++
 drivers/net/qede/base/ecore_mcp.h     |   11 +++
 drivers/net/qede/base/ecore_spq.c     |    3 +
 drivers/net/qede/qede_main.c          |   18 +---
 7 files changed, 116 insertions(+), 113 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 052a0cf..32c9b25 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -168,6 +168,7 @@ typedef pthread_mutex_t osal_mutex_t;
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
 #define OSAL_DPC_INIT(dpc, hwfn) nothing
 #define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2a621f7..d8e4ca2 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2050,7 +2050,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_DONE command\n");
+				  "Failed sending a LOAD_DONE command\n");
 			return mfw_rc;
 		}
 
@@ -2139,32 +2139,77 @@ void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
 	}
 }
 
+static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u32 addr, u32 expected_val)
+{
+	u32 val = ecore_rd(p_hwfn, p_ptt, addr);
+
+	if (val != expected_val) {
+		DP_NOTICE(p_hwfn, true,
+			  "Value at address 0x%08x is 0x%08x while the expected value is 0x%08x\n",
+			  addr, val, expected_val);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, t_rc;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	enum _ecore_status_t rc, rc2 = ECORE_SUCCESS;
 	int j;
 
 	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		p_hwfn = &p_dev->hwfns[j];
+		p_ptt = p_hwfn->p_main_ptt;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Stopping hw/fw\n");
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_int_cleanup(p_hwfn);
+			rc = ecore_vf_pf_reset(p_hwfn);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "ecore_vf_pf_reset failed. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
 			continue;
 		}
 
 		/* mark the hw as uninitialized... */
 		p_hwfn->hw_init_done = false;
 
+		/* Send unload command to MCP */
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_req(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_REQ command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+
+		OSAL_DPC_SYNC(p_hwfn);
+
+		/* After this point no MFW attentions are expected, which
+		 * prevents e.g. a race between pf stop and dcbx pf update.
+		 */
+
 		rc = ecore_sp_pf_stop(p_hwfn);
-		if (rc)
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to close PF against FW. Continue to stop HW to prevent illegal host access by the device\n");
+				  "Failed to close PF against FW [rc = %d]. Continue to stop HW to prevent illegal host access by the device.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 
 		/* perform debug action after PF stop was sent */
-		OSAL_AFTER_PF_STOP((void *)p_hwfn->p_dev, p_hwfn->my_id);
+		OSAL_AFTER_PF_STOP((void *)p_dev, p_hwfn->my_id);
 
 		/* close NIG to BRB gate */
 		ecore_wr(p_hwfn, p_ptt,
@@ -2191,20 +2236,48 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, true);
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
-	}
+
+		if (!p_dev->recov_in_prog) {
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_TX, 0);
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_OTHER, 0);
+			/* @@@TBD - assert on incorrect xCFC values (10.b) */
+		}
+
+		/* Disable PF in HW blocks */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
+		ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
+
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_done(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_DONE command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+	} /* hwfn loop */
 
 	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		p_ptt = ECORE_LEADING_HWFN(p_dev)->p_main_ptt;
+
 		/* Disable DMAE in PXP - in CMT, this should only be done for
 		 * first hw-function, and only after all transactions have
 		 * stopped for all active hw-functions.
 		 */
-		t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
-					     p_dev->hwfns[0].p_main_ptt, false);
-		if (t_rc != ECORE_SUCCESS)
-			rc = t_rc;
+		rc = ecore_change_pci_hwfn(p_hwfn, p_ptt, false);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "ecore_change_pci_hwfn failed. rc = %d.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 	}
 
-	return rc;
+	return rc2;
 }
 
 void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
@@ -2265,82 +2338,6 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
 }
 
-static enum _ecore_status_t ecore_reg_assert(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt, u32 reg,
-					     bool expected)
-{
-	u32 assert_val = ecore_rd(p_hwfn, p_ptt, reg);
-
-	if (assert_val != expected) {
-		DP_NOTICE(p_hwfn, true, "Value at address 0x%08x != 0x%08x\n",
-			  reg, expected);
-		return ECORE_UNKNOWN_ERROR;
-	}
-
-	return 0;
-}
-
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 unload_resp, unload_param;
-	int i;
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
-		if (IS_VF(p_dev)) {
-			rc = ecore_vf_pf_reset(p_hwfn);
-			if (rc)
-				return rc;
-			continue;
-		}
-
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Resetting hw/fw\n");
-
-		/* Check for incorrect states */
-		if (!p_dev->recov_in_prog) {
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_TX, 0);
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_OTHER, 0);
-			/* @@@TBD - assert on incorrect xCFC values (10.b) */
-		}
-
-		/* Disable PF in HW blocks */
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, DORQ_REG_PF_DB_ENABLE, 0);
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, QM_REG_PF_EN, 0);
-
-		if (p_dev->recov_in_prog) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-				   "Recovery is in progress -> skip sending unload_req/done\n");
-			break;
-		}
-
-		/* Send unload command to MCP */
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_REQ,
-				   DRV_MB_PARAM_UNLOAD_WOL_MCP,
-				   &unload_resp, &unload_param);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, true,
-				  "ecore_hw_reset: UNLOAD_REQ failed\n");
-			/* @@TBD - what to do? for now, assume ENG. */
-			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
-		}
-
-		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn,
-				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
-			/* @@@TBD - Should it really ASSERT here ? */
-			return rc;
-		}
-	}
-
-	return rc;
-}
-
 /* Free hwfn memory and resources acquired in hw_hwfn_prepare */
 static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index ce764d2..e64a768 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -151,15 +151,6 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev);
  */
 void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
 
-/**
- * @brief ecore_hw_reset -
- *
- * @param p_dev
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
-
 enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_SUCCESS,
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index b53210f..1c5f24c 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -891,6 +891,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	u32 wol_param, mcp_resp, mcp_param;
+
+	/* @DPDK */
+	wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+
+	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+			     &mcp_resp, &mcp_param);
+}
+
 enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt)
 {
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 350d8a2..37d1835 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -171,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_REQ message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt);
+
+/**
  * @brief Sends a UNLOAD_DONE message to the MFW
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 016de74..3c1d05b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -190,6 +190,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
+	/* @@@TBD we zero the context until we have ilt_reset implemented. */
+	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
+
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 326e56f..74856c5 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -636,19 +636,6 @@ static int qed_nic_stop(struct ecore_dev *edev)
 	return rc;
 }
 
-static int qed_nic_reset(struct ecore_dev *edev)
-{
-	int rc;
-
-	rc = ecore_hw_reset(edev);
-	if (rc)
-		return rc;
-
-	ecore_resc_free(edev);
-
-	return 0;
-}
-
 static int qed_slowpath_stop(struct ecore_dev *edev)
 {
 #ifdef CONFIG_QED_SRIOV
@@ -667,10 +654,11 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
 		if (IS_QED_ETH_IF(edev))
 			qed_sriov_disable(edev, true);
 #endif
-		qed_nic_stop(edev);
 	}
 
-	qed_nic_reset(edev);
+	qed_nic_stop(edev);
+
+	ecore_resc_free(edev);
 	qed_stop_iov_task(edev);
 
 	return 0;
-- 
1.7.10.3

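For reference, the slowpath stop path after this patch reduces to the
call sequence below (a sketch based on the qede_main.c hunk above; the
SR-IOV teardown and the IS_PF checks are omitted):

	qed_slowpath_stop(edev)
	  -> qed_nic_stop(edev)      /* stop hw/fw (ecore_hw_stop)        */
	  -> ecore_resc_free(edev)   /* replaces the old qed_nic_reset()  */
	  -> qed_stop_iov_task(edev)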

* [PATCH v3 58/61] net/qede/base: semantic changes
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (57 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 57/61] net/qede/base: prevent race condition during unload Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 59/61] net/qede/base: add support for arfs mode Rasesh Mody
                         ` (3 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make APIs static and other semantic changes.
A step toward cleaning 'make C=1' with GCC 4.8.3.
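
With sparse ('make C=1'), a non-static function that has no prototype in
scope is typically flagged along these lines (illustrative output; the
exact wording depends on the sparse version):

	warning: symbol 'ecore_cxt_qm_iids' was not declared. Should it be static?

Making such file-local helpers static, as done below, silences the
warning and documents that the symbol has no external users.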

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c  |    5 +-
 drivers/net/qede/base/ecore_cxt.h  |   11 ----
 drivers/net/qede/base/ecore_dcbx.c |    2 +-
 drivers/net/qede/base/ecore_dev.c  |  109 ++++++++++++++++++------------------
 drivers/net/qede/base/ecore_l2.c   |   12 ++--
 drivers/net/qede/base/ecore_vf.c   |    2 +-
 6 files changed, 66 insertions(+), 75 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index f7b5672..1a2a701 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -327,7 +327,8 @@ static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
 	}
 }
 
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids)
+static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_qm_iids *iids)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_tid_seg *segs;
@@ -1945,7 +1946,7 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
+static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
 
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1128051..e678118 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -35,17 +35,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type);
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
 
-#ifndef LINUX_REMOVE
-/**
- * @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
- *
- * @param p_hwfn
- * @param iids [out], a structure holding all the counters
- */
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
-		       struct ecore_qm_iids *iids);
-#endif
-
 /**
  * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
  *
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 5ecc6b0..4f1b069 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -114,7 +114,7 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-void
+static void
 ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		      struct ecore_hwfn *p_hwfn,
 		      bool enable, u8 prio, u8 tc,
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d8e4ca2..865103c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -759,8 +759,8 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	enum _ecore_status_t rc;
 	bool b_rc;
+	enum _ecore_status_t rc;
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
@@ -1507,54 +1507,6 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
-					       struct ecore_ptt *p_ptt,
-					       int hw_mode)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
-			    hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
-
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (p_hwfn->p_dev->num_hwfns > 1) {
-			/* Activate OPTE in CMT */
-			u32 val;
-
-			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
-			val |= 0x10;
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
-				 0x55555555);
-		}
-
-		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
-	}
-#endif
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -1623,7 +1575,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
-	int rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
 	db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
@@ -1678,8 +1630,9 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
 	}
 
-	cond = ((rc) && (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
-	    (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
+	cond = ((rc != ECORE_SUCCESS) &&
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
 	if (cond || p_hwfn->dcbx_no_edpm) {
 		/* Either EDPM is disabled from user configuration, or it is
 		 * disabled via DCBx, or it is not mandatory and we failed to
@@ -1703,7 +1656,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		"disabled" : "enabled");
 
 	/* Check return codes from above calls */
-	if (rc) {
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to allocate enough DPIs\n");
 		return ECORE_NORESOURCES;
@@ -1721,6 +1674,54 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       int hw_mode)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
+			    hw_mode);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		if (ECORE_IS_AH(p_hwfn->p_dev))
+			return ECORE_SUCCESS;
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
+	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (p_hwfn->p_dev->num_hwfns > 1) {
+			/* Activate OPTE in CMT */
+			u32 val;
+
+			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
+			val |= 0x10;
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
+				 0x55555555);
+		}
+
+		ecore_emul_link_init(p_hwfn, p_ptt);
+	} else {
+		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
+	}
+#endif
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
@@ -1922,8 +1923,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	struct ecore_hwfn *p_hwfn;
 	bool b_default_mtu = true;
+	struct ecore_hwfn *p_hwfn;
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index adb5e47..c4af895 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -946,17 +946,17 @@ ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_producer)
+			    void OSAL_IOMEM * *pp_prod)
 {
 	u32 init_prod_val = 0;
 
-	*pp_producer = (u8 OSAL_IOMEM *)
-		       p_hwfn->regview +
-		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
+	*pp_prod = (u8 OSAL_IOMEM *)
+		    p_hwfn->regview +
+		    GTT_BAR0_MAP_REG_MSDM_RAM +
+		    MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
 	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index ac72681..f4d331c 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1285,8 +1285,8 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
 	struct vfpf_first_tlv *req;
-	enum _ecore_status_t rc;
 	u32 size;
+	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
-- 
1.7.10.3


* [PATCH v3 59/61] net/qede/base: add support for arfs mode
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (58 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 58/61] net/qede/base: semantic changes Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 60/61] net/qede: add ntuple and flow director filter support Rasesh Mody
                         ` (2 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add base driver APIs to enable accelerated RFS[aRFS] mode and ramrod
to configure rfs and ntuple filter.
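
A minimal usage sketch of the new interface follows; the struct, field
and function names here are best-effort assumptions based on this
patch's additions to ecore_l2_api.h and may differ in detail:

	struct ecore_arfs_config_params arfs;

	OSAL_MEMSET(&arfs, 0, sizeof(arfs));
	arfs.tcp = true;         /* steer TCP flows           */
	arfs.ipv4 = true;        /* ... over IPv4             */
	arfs.arfs_enable = true; /* turn the aRFS searcher on */
	ecore_arfs_mode_configure(p_hwfn, p_ptt, &arfs);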

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 drivers/net/qede/base/ecore_cxt.c           |   49 +++++++++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.c |   31 ++++++++++
 drivers/net/qede/base/ecore_init_fw_funcs.h |   11 ++++
 drivers/net/qede/base/ecore_l2.c            |   84 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_l2.h            |   27 +++++++++
 drivers/net/qede/base/ecore_l2_api.h        |   22 +++++++
 drivers/net/qede/base/ecore_proto_if.h      |    6 ++
 drivers/net/qede/base/ecore_spq.h           |    1 +
 8 files changed, 218 insertions(+), 13 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 1a2a701..80ad102 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -192,9 +192,6 @@ struct ecore_cxt_mngr {
 	 */
 	u32 vf_count;
 
-	/* total number of SRQ's for this hwfn */
-	u32				srq_count;
-
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
 	/* TBD - do we want this allocated to reserve space? */
@@ -213,10 +210,29 @@ struct ecore_cxt_mngr {
 	u32 t2_num_pages;
 	u64 first_free;
 	u64 last_free;
+
+	/* The infrastructure was originally very generic and context/task
+	 * oriented - per connection-type we would set how many of those
+	 * are needed, and later, when determining how much memory a given
+	 * block needs, we'd iterate over all the relevant connection-types.
+	 * But since then we've had some additional resources, some of which
+	 * require memory which is independent of the general context/task
+	 * scheme. We add those here explicitly per-feature.
+	 */
+
+	/* total number of SRQ's for this hwfn */
+	u32				srq_count;
+
+	/* Maximal number of L2 steering filters */
+	u32				arfs_count;
+
+	/* TODO - VF arfs filters ? */
 };
 
 /* check if resources/configuration is required according to protocol type */
-static OSAL_INLINE bool src_proto(enum protocol_type type)
+static OSAL_INLINE bool src_proto(struct ecore_hwfn *p_hwfn,
+				  enum protocol_type type)
 {
 	return type == PROTOCOLID_TOE;
 }
@@ -254,18 +270,22 @@ struct ecore_src_iids {
 	u32 per_vf_cids;
 };
 
-static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
+static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_hwfn *p_hwfn,
+					   struct ecore_cxt_mngr *p_mngr,
 					   struct ecore_src_iids *iids)
 {
 	u32 i;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		if (!src_proto(i))
+		if (!src_proto(p_hwfn, i))
 			continue;
 
 		iids->pf_cids += p_mngr->conn_cfg[i].cid_count;
 		iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf;
 	}
+
+	/* Add the L2 steering filters in addition */
+	iids->pf_cids += p_mngr->arfs_count;
 }
 
 /* counts the iids for the Timers block configuration */
@@ -686,7 +706,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	/* SRC */
 	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 
 	/* Both the PF and VFs searcher connections are stored in the per PF
 	 * database. Thus sum the PF searcher cids and all the VFs searcher
@@ -800,7 +820,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_src->active)
 		return ECORE_SUCCESS;
 
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	total_size = conn_num * sizeof(struct src_ent);
 
@@ -1619,7 +1639,7 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 	struct ecore_src_iids src_iids;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	if (!conn_num)
 		return;
@@ -1635,6 +1655,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 			 p_hwfn->p_cxt_mngr->first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
 			 p_hwfn->p_cxt_mngr->last_free);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+		   "Configured SEARCHER for 0x%08x connections\n",
+		   conn_num);
 }
 
 /* Timers PF */
@@ -1978,10 +2001,10 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			 * As of now, allocates 16 * 2 per-VF [to retain regular
 			 * functionality].
 			 */
-			ecore_cxt_set_proto_cid_count(p_hwfn,
-				PROTOCOLID_ETH,
-				p_params->num_cons, 32);
-
+			ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+						      p_params->num_cons, 32);
+			p_hwfn->p_cxt_mngr->arfs_count =
+						p_params->num_arfs_filters;
 			break;
 		}
 	default:
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index af0deaa..004ab35 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1497,6 +1497,37 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id)
+{
+	union gft_cam_line_union cam_line;
+	struct gft_ram_line ram_line;
+	u32 i, *ram_line_ptr;
+
+	ram_line_ptr = (u32 *)&ram_line;
+
+	/* Stop using gft logic, disable gft search */
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, 0x0);
+
+	/* Clean ram & cam for next rfs/gft session*/
+
+	/* Zero camline */
+	OSAL_MEMSET(&cam_line, 0, sizeof(cam_line));
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
+					cam_line.cam_line_mapped.camline);
+
+	/* Zero ramline */
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
+	/* Each iteration write to reg */
+	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+			 RAM_LINE_SIZE * pf_id +
+			 i * REG_SIZE, *(ram_line_ptr + i));
+}
 
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 2d1ab7c..4da3fc2 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -351,6 +351,17 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
+ * @brief ecore_set_rfs_mode_disable - Disable RFS and clear its HW config
+ *
+ * @param p_hwfn - HW device data
+ * @param p_ptt - ptt window used for writing the registers.
+ * @param pf_id - pf on which to disable RFS.
+ */
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id);
+
+/**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
 * @param p_ptt	- ptt window used for writing the registers.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index c4af895..4ab8fd5 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2018,3 +2018,87 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 	else
 		_ecore_get_vport_stats(p_dev, p_dev->reset_stats);
 }
+
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params)
+{
+	if (p_cfg_params->arfs_enable) {
+		ecore_set_rfs_mode_enable(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+					  p_cfg_params->tcp,
+					  p_cfg_params->udp,
+					  p_cfg_params->ipv4,
+					  p_cfg_params->ipv6);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "tcp = %s, udp = %s, ipv4 = %s, ipv6 = %s\n",
+			   p_cfg_params->tcp ? "Enable" : "Disable",
+			   p_cfg_params->udp ? "Enable" : "Disable",
+			   p_cfg_params->ipv4 ? "Enable" : "Disable",
+			   p_cfg_params->ipv6 ? "Enable" : "Disable");
+	} else {
+		ecore_set_rfs_mode_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode: %s\n",
+		   p_cfg_params->arfs_enable ? "Enable" : "Disable");
+}
+
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add)
+{
+	struct rx_update_gft_filter_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+	u8 abs_vport_id = 0;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+	if (p_cb) {
+		init_data.comp_mode = ECORE_SPQ_MODE_CB;
+		init_data.p_comp_data = p_cb;
+	} else {
+		init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+	}
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_GFT_UPDATE_FILTER,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_update_gft;
+
+	DMA_REGPAIR_LE(p_ramrod->pkt_hdr_addr, p_addr);
+	p_ramrod->pkt_hdr_length = OSAL_CPU_TO_LE16(length);
+	p_ramrod->rx_qid_or_action_icid = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->filter_type = RFS_FILTER_TYPE;
+	p_ramrod->filter_action = b_is_add ? GFT_ADD_FILTER
+					   : GFT_DELETE_FILTER;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "V[%0x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n",
+		   abs_vport_id, abs_rx_q_id,
+		   b_is_add ? "Adding" : "Removing",
+		   (unsigned long)p_addr, length);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 3f86eac..7fe4cbc 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -129,4 +129,31 @@ ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
+/**
+ * @brief - ecore_configure_rfs_ntuple_filter
+ *
+ * This ramrod should be used to add or remove an aRFS HW filter.
+ *
+ * @params p_hwfn
+ * @params p_ptt
+ * @params p_cb		Used for ECORE_SPQ_MODE_CB, where the client
+ *			initializes it with a cookie and a callback function
+ *			address; when not using this mode the client must
+ *			pass NULL.
+ * @params p_addr	DMA address of the actual packet header to filter
+ *			on. It has to be IO-mapped for reading prior to
+ *			calling this [contains the 4-tuple: src ip, dest ip,
+ *			src port, dest port].
+ * @params length	length of the p_addr header up to past the transport
+ *			header.
+ * @params qid		received packets that match will be directed to this
+ *			queue.
+ * @params vport_id
+ * @params b_is_add	flag to add or remove the filter.
+ *
+ */
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 5a7db76..d09f3c4 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -141,6 +141,14 @@ struct ecore_filter_accept_flags {
 #define ECORE_ACCEPT_BCAST		0x20
 };
 
+struct ecore_arfs_config_params {
+	bool tcp;
+	bool udp;
+	bool ipv4;
+	bool ipv6;
+	bool arfs_enable;	/* Enable or disable arfs mode */
+};
+
 /* Add / remove / move / remove-all unicast MAC-VLAN filters.
  * FW will assert in the following cases, so driver should take care...:
  * 1. Adding a filter to a full table.
@@ -414,4 +422,18 @@ void ecore_get_vport_stats(struct ecore_dev *p_dev,
 
 void ecore_reset_vport_stats(struct ecore_dev *p_dev);
 
+/**
+ * @brief ecore_arfs_mode_configure -
+ *
+ * Enable or disable RFS mode. To enable RFS mode, at least one of tcp or
+ * udp must be true and at least one of ipv4 or ipv6 must be true.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_cfg_params		arfs mode configuration parameters.
+ *
+ */
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params);
 #endif
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 0ac153f..226e3d2 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -21,6 +21,12 @@ struct ecore_eth_pf_params {
 	 * to update_pf_params routine invoked before slowpath start
 	 */
 	u16	num_cons;
+
+	/* To enable arfs, a positive number needs to be set prior to HW-init
+	 * [as filters require allocated searcher ILT memory].
+	 * This sets the maximal number of configured steering-filters.
+	 */
+	u32	num_arfs_filters;
 };
 
 /* Most of the the parameters below are described in the FW iSCSI / TCP HSI */
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index e2468b7..e530f83 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -26,6 +26,7 @@ union ramrod_data {
 	struct tx_queue_stop_ramrod_data		tx_queue_stop;
 	struct vport_start_ramrod_data			vport_start;
 	struct vport_stop_ramrod_data			vport_stop;
+	struct rx_update_gft_filter_data		rx_update_gft;
 	struct vport_update_ramrod_data			vport_update;
 	struct core_rx_start_ramrod_data		core_rx_queue_start;
 	struct core_rx_stop_ramrod_data			core_rx_queue_stop;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 60/61] net/qede: add ntuple and flow director filter support
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (59 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 59/61] net/qede/base: add support for arfs mode Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:28       ` [PATCH v3 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
  2017-03-24  7:45       ` [PATCH v2 00/61] net/qede/base: qede PMD enhancements Mody, Rasesh
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add limited support for ntuple filter and flow director configuration.
The filtering is based on a 4-tuple, viz. src-ip, dst-ip, src-port and
dst-port. The mask fields, tcp_flags, flex masks, priority fields,
Rx queue drop etc. are not supported.

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
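As a usage note, these filters are reached through the legacy filter API;
a minimal sketch from the application side (port id, addresses and queue
are illustrative; tuple values are in network byte order):

    struct rte_eth_ntuple_filter ntuple = {
        .flags = RTE_5TUPLE_FLAGS,
        .proto = IPPROTO_UDP,
        .src_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
        .dst_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 2)),
        .src_port = rte_cpu_to_be_16(4000),
        .dst_port = rte_cpu_to_be_16(4001),
        .queue = 1,    /* steer matching packets to rx queue 1 */
    };
    int rc;

    /* qede converts the ntuple into an fdir entry internally and
     * programs an aRFS filter through the base driver.
     */
    rc = rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
                                 RTE_ETH_FILTER_ADD, &ntuple);
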
 doc/guides/nics/features/qede.ini |    2 +
 doc/guides/nics/qede.rst          |    1 +
 drivers/net/qede/Makefile         |    1 +
 drivers/net/qede/base/ecore.h     |    3 +
 drivers/net/qede/qede_ethdev.c    |   16 +-
 drivers/net/qede/qede_ethdev.h    |   39 +++
 drivers/net/qede/qede_fdir.c      |  487 +++++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_main.c      |   23 +-
 8 files changed, 563 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/qede/qede_fdir.c

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 8858e5d..b688914 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -34,3 +34,5 @@ Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
 Usage doc            = Y
+N-tuple filter       = Y
+Flow director        = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 36b26b3..df0aaec 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -60,6 +60,7 @@ Supported Features
 - Multiprocess aware
 - Scatter-Gather
 - VXLAN tunneling offload
+- N-tuple filter and flow director (limited support)
 
 Non-supported Features
 ----------------------
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 29b443d..aae6bd2 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -99,6 +99,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_fdir.c
 
 # dependent libs:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index fab8193..31470b6 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -602,6 +602,9 @@ struct ecore_hwfn {
 
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
+
+	/* @DPDK */
+	struct ecore_ptt		*p_arfs_ptt;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index bd190d0..22b528d 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -924,6 +924,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		return -EINVAL;
 	}
 
+	/* Flow director mode check */
+	rc = qede_check_fdir_support(eth_dev);
+	if (rc) {
+		qdev->ops->vport_stop(edev, 0);
+		qede_dealloc_fp_resc(eth_dev);
+		return -EINVAL;
+	}
+	SLIST_INIT(&qdev->fdir_info.fdir_list_head);
+
 	SLIST_INIT(&qdev->vlan_list_head);
 
 	/* Add primary mac for PF */
@@ -1124,6 +1133,8 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
+	qede_fdir_dealloc_resc(eth_dev);
+
 	/* dev_stop() shall cleanup fp resources in hw but without releasing
 	 * dma memories and sw structures so that dev_start() can be called
 	 * by the app without reconfiguration. However, in dev_close() we
@@ -1962,11 +1973,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
 		}
 		break;
 	case RTE_ETH_FILTER_FDIR:
+		return qede_fdir_filter_conf(eth_dev, filter_op, arg);
+	case RTE_ETH_FILTER_NTUPLE:
+		return qede_ntuple_filter_conf(eth_dev, filter_op, arg);
 	case RTE_ETH_FILTER_MACVLAN:
 	case RTE_ETH_FILTER_ETHERTYPE:
 	case RTE_ETH_FILTER_FLEXIBLE:
 	case RTE_ETH_FILTER_SYN:
-	case RTE_ETH_FILTER_NTUPLE:
 	case RTE_ETH_FILTER_HASH:
 	case RTE_ETH_FILTER_L2_TUNNEL:
 	case RTE_ETH_FILTER_MAX:
@@ -2057,6 +2070,7 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 
 	memset(&pf_params, 0, sizeof(struct ecore_pf_params));
 	pf_params.eth_pf_params.num_cons = QEDE_PF_NUM_CONNS;
+	pf_params.eth_pf_params.num_arfs_filters = QEDE_RFS_MAX_FLTR;
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index be54f31..8342b99 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -34,6 +34,8 @@
 #include "base/nvm_cfg.h"
 #include "base/ecore_iov_api.h"
 #include "base/ecore_sp_commands.h"
+#include "base/ecore_l2.h"
+#include "base/ecore_dev_api.h"
 
 #include "qede_logs.h"
 #include "qede_if.h"
@@ -131,6 +133,9 @@ extern char fw_file[];
 /* Number of PF connections - 32 RX + 32 TX */
 #define QEDE_PF_NUM_CONNS		(64)
 
+/* Maximum number of flowdir filters */
+#define QEDE_RFS_MAX_FLTR		(256)
+
 /* Port/function states */
 enum qede_dev_state {
 	QEDE_DEV_INIT, /* Init the chip and Slowpath */
@@ -156,6 +161,21 @@ struct qede_ucast_entry {
 	SLIST_ENTRY(qede_ucast_entry) list;
 };
 
+struct qede_fdir_entry {
+	uint32_t soft_id; /* unused for now */
+	uint16_t pkt_len; /* actual packet length to match */
+	uint16_t rx_queue; /* queue to be steered to */
+	const struct rte_memzone *mz; /* mz used to hold L2 frame */
+	SLIST_ENTRY(qede_fdir_entry) list;
+};
+
+struct qede_fdir_info {
+	struct ecore_arfs_config_params arfs;
+	uint16_t filter_count;
+	SLIST_HEAD(fdir_list_head, qede_fdir_entry) fdir_list_head;
+};
+
 /*
  *  Structure to store private data for each port.
  */
@@ -190,6 +210,7 @@ struct qede_dev {
 	bool handle_hw_err;
 	uint16_t num_tunn_filters;
 	uint16_t vxlan_filter_type;
+	struct qede_fdir_info fdir_info;
 	char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 };
 
@@ -208,6 +229,11 @@ static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
 static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags);
 
+static uint16_t qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+					struct rte_eth_fdir_filter *fdir,
+					void *buff,
+					struct ecore_arfs_config_params *param);
+
 /* Non-static functions */
 void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
@@ -215,4 +241,17 @@ int qed_fill_eth_dev_info(struct ecore_dev *edev,
 				 struct qed_dev_eth_info *info);
 int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
 
+int qede_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type type,
+			 enum rte_filter_op op, void *arg);
+
+int qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+			  enum rte_filter_op filter_op, void *arg);
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op, void *arg);
+
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev);
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev);
+
 #endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_fdir.c b/drivers/net/qede/qede_fdir.c
new file mode 100644
index 0000000..f0dc73a
--- /dev/null
+++ b/drivers/net/qede/qede_fdir.c
@@ -0,0 +1,487 @@
+/*
+ * Copyright (c) 2017 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include <rte_udp.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_errno.h>
+
+#include "qede_ethdev.h"
+
+#define IP_VERSION				(0x40)
+#define IP_HDRLEN				(0x5)
+#define QEDE_FDIR_IP_DEFAULT_VERSION_IHL	(IP_VERSION | IP_HDRLEN)
+#define QEDE_FDIR_TCP_DEFAULT_DATAOFF		(0x50)
+#define QEDE_FDIR_IPV4_DEF_TTL			(64)
+
+/* Sum of the header lengths of L2, L3 and L4:
+ * L2 : ether_hdr (14) + vlan_hdr (4) + vxlan_hdr (8)
+ * L3 : ipv6_hdr (40)
+ * L4 : tcp_hdr (20)
+ */
+#define QEDE_MAX_FDIR_PKT_LEN			(86)
+
+#ifndef IPV6_ADDR_LEN
+#define IPV6_ADDR_LEN				(16)
+#endif
+
+#define QEDE_VALID_FLOW(flow_type) \
+	((flow_type) == RTE_ETH_FLOW_FRAG_IPV4		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP	|| \
+	(flow_type) == RTE_ETH_FLOW_FRAG_IPV6		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP)
+
+/* Note: Flowdir support is only partial.
+ * For example: drop_queue, FDIR masks and flex_conf are not supported.
+ * Parameters like pballoc/status fields are irrelevant here.
+ */
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+
+	/* check FDIR modes */
+	switch (fdir->mode) {
+	case RTE_FDIR_MODE_NONE:
+		qdev->fdir_info.arfs.arfs_enable = false;
+		DP_INFO(edev, "flowdir is disabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT:
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			qdev->fdir_info.arfs.arfs_enable = false;
+			return -ENOTSUP;
+		}
+		qdev->fdir_info.arfs.arfs_enable = true;
+		DP_INFO(edev, "flowdir is enabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT_TUNNEL:
+	case RTE_FDIR_MODE_SIGNATURE:
+	case RTE_FDIR_MODE_PERFECT_MAC_VLAN:
+		DP_ERR(edev, "Unsupported flowdir mode %d\n", fdir->mode);
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_fdir_entry *tmp = NULL;
+
+	SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+		if (tmp) {
+			if (tmp->mz)
+				rte_memzone_free(tmp->mz);
+			SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+				     qede_fdir_entry, list);
+			rte_free(tmp);
+		}
+	}
+}
+
+static int
+qede_config_cmn_fdir_filter(struct rte_eth_dev *eth_dev,
+			    struct rte_eth_fdir_filter *fdir_filter,
+			    bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	char mz_name[RTE_MEMZONE_NAMESIZE] = {0};
+	struct qede_fdir_entry *tmp = NULL;
+	struct qede_fdir_entry *fdir;
+	const struct rte_memzone *mz;
+	struct ecore_hwfn *p_hwfn;
+	enum _ecore_status_t rc;
+	uint16_t pkt_len;
+	uint16_t len;
+	void *pkt;
+
+	if (add) {
+		if (qdev->fdir_info.filter_count == QEDE_RFS_MAX_FLTR - 1) {
+			DP_ERR(edev, "Reached max flowdir filter limit\n");
+			return -EINVAL;
+		}
+		fdir = rte_malloc(NULL, sizeof(struct qede_fdir_entry),
+				  RTE_CACHE_LINE_SIZE);
+		if (!fdir) {
+			DP_ERR(edev, "Did not allocate memory for fdir\n");
+			return -ENOMEM;
+		}
+	}
+	/* soft_id could have been used as the memzone name, but soft_id is
+	 * not currently used so it has no significance.
+	 */
+	snprintf(mz_name, sizeof(mz_name) - 1, "%lx",
+		 (unsigned long)rte_get_timer_cycles());
+	mz = rte_memzone_reserve_aligned(mz_name, QEDE_MAX_FDIR_PKT_LEN,
+					 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+	if (!mz) {
+		DP_ERR(edev, "Failed to allocate memzone for fdir, err = %s\n",
+		       rte_strerror(rte_errno));
+		rc = -rte_errno;
+		goto err1;
+	}
+
+	pkt = mz->addr;
+	memset(pkt, 0, QEDE_MAX_FDIR_PKT_LEN);
+	pkt_len = qede_fdir_construct_pkt(eth_dev, fdir_filter, pkt,
+					  &qdev->fdir_info.arfs);
+	if (pkt_len == 0) {
+		rc = -EINVAL;
+		goto err2;
+	}
+	DP_INFO(edev, "pkt_len = %u memzone = %s\n", pkt_len, mz_name);
+	if (add) {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0) {
+				DP_ERR(edev, "flowdir filter already exists\n");
+				rc = -EEXIST;
+				goto err2;
+			}
+		}
+	} else {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0)
+				break;
+		}
+		if (!tmp) {
+			DP_ERR(edev, "flowdir filter does not exist\n");
+			rc = -ENOENT;
+			goto err2;
+		}
+	}
+	p_hwfn = ECORE_LEADING_HWFN(edev);
+	if (add) {
+		if (!qdev->fdir_info.arfs.arfs_enable) {
+			/* Force update */
+			eth_dev->data->dev_conf.fdir_conf.mode =
+						RTE_FDIR_MODE_PERFECT;
+			qdev->fdir_info.arfs.arfs_enable = true;
+			DP_INFO(edev, "Force enable flowdir in perfect mode\n");
+		}
+		/* Enable ARFS searcher with updated flow_types */
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	/* configure filter with ECORE_SPQ_MODE_EBLOCK */
+	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_hwfn->p_arfs_ptt, NULL,
+					       (dma_addr_t)mz->phys_addr,
+					       pkt_len,
+					       fdir_filter->action.rx_queue,
+					       0, add);
+	if (rc == ECORE_SUCCESS) {
+		if (add) {
+			fdir->rx_queue = fdir_filter->action.rx_queue;
+			fdir->pkt_len = pkt_len;
+			fdir->mz = mz;
+			SLIST_INSERT_HEAD(&qdev->fdir_info.fdir_list_head,
+					  fdir, list);
+			qdev->fdir_info.filter_count++;
+			DP_INFO(edev, "flowdir filter added, count = %d\n",
+				qdev->fdir_info.filter_count);
+		} else {
+			rte_memzone_free(tmp->mz);
+			SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+				     qede_fdir_entry, list);
+			rte_free(tmp); /* the node deleted */
+			rte_memzone_free(mz); /* temp node allocated */
+			qdev->fdir_info.filter_count--;
+			DP_INFO(edev, "Fdir filter deleted, count = %d\n",
+				qdev->fdir_info.filter_count);
+		}
+	} else {
+		DP_ERR(edev, "flowdir filter failed, rc=%d filter_count=%d\n",
+		       rc, qdev->fdir_info.filter_count);
+	}
+
+	/* Disable ARFS searcher if there are no more filters */
+	if (qdev->fdir_info.filter_count == 0) {
+		memset(&qdev->fdir_info.arfs, 0,
+		       sizeof(struct ecore_arfs_config_params));
+		DP_INFO(edev, "Disabling flowdir\n");
+		qdev->fdir_info.arfs.arfs_enable = false;
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	return 0;
+
+err2:
+	rte_memzone_free(mz);
+err1:
+	if (add)
+		rte_free(fdir);
+	return rc;
+}
+
+static int
+qede_fdir_filter_add(struct rte_eth_dev *eth_dev,
+		     struct rte_eth_fdir_filter *fdir,
+		     bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	if (!QEDE_VALID_FLOW(fdir->input.flow_type)) {
+		DP_ERR(edev, "invalid flow_type input\n");
+		return -EINVAL;
+	}
+
+	if (fdir->action.rx_queue >= QEDE_RSS_COUNT(qdev)) {
+		DP_ERR(edev, "invalid queue number %u\n",
+		       fdir->action.rx_queue);
+		return -EINVAL;
+	}
+
+	if (fdir->input.flow_ext.is_vf) {
+		DP_ERR(edev, "flowdir is not supported over VF\n");
+		return -EINVAL;
+	}
+
+	return qede_config_cmn_fdir_filter(eth_dev, fdir, add);
+}
+
+/* Fills the L3/L4 headers and returns the actual length of flowdir packet */
+static uint16_t
+qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+			struct rte_eth_fdir_filter *fdir,
+			void *buff,
+			struct ecore_arfs_config_params *params)
+
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	uint16_t *ether_type;
+	uint8_t *raw_pkt;
+	struct rte_eth_fdir_input *input;
+	static uint8_t vlan_frame[] = {0x81, 0, 0, 0};
+	struct ipv4_hdr *ip;
+	struct ipv6_hdr *ip6;
+	struct udp_hdr *udp;
+	struct tcp_hdr *tcp;
+	struct sctp_hdr *sctp;
+	uint8_t size, dst = 0;
+	uint16_t len;
+	static const uint8_t next_proto[] = {
+		[RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP,
+		[RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP,
+	};
+	raw_pkt = (uint8_t *)buff;
+	input = &fdir->input;
+	DP_INFO(edev, "flow_type %d\n", input->flow_type);
+
+	len = 2 * sizeof(struct ether_addr);
+	raw_pkt += 2 * sizeof(struct ether_addr);
+	if (input->flow_ext.vlan_tci) {
+		DP_INFO(edev, "adding VLAN header\n");
+		rte_memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
+		rte_memcpy(raw_pkt + sizeof(uint16_t),
+			   &input->flow_ext.vlan_tci,
+			   sizeof(uint16_t));
+		raw_pkt += sizeof(vlan_frame);
+		len += sizeof(vlan_frame);
+	}
+	ether_type = (uint16_t *)raw_pkt;
+	raw_pkt += sizeof(uint16_t);
+	len += sizeof(uint16_t);
+
+	/* fill the common ip header */
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV4:
+		ip = (struct ipv4_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		ip->version_ihl = QEDE_FDIR_IP_DEFAULT_VERSION_IHL;
+		ip->total_length = sizeof(struct ipv4_hdr);
+		ip->next_proto_id = input->flow.ip4_flow.proto ?
+				    input->flow.ip4_flow.proto :
+				    next_proto[input->flow_type];
+		ip->time_to_live = input->flow.ip4_flow.ttl ?
+				   input->flow.ip4_flow.ttl :
+				   QEDE_FDIR_IPV4_DEF_TTL;
+		ip->type_of_service = input->flow.ip4_flow.tos;
+		ip->dst_addr = input->flow.ip4_flow.dst_ip;
+		ip->src_addr = input->flow.ip4_flow.src_ip;
+		len += sizeof(struct ipv4_hdr);
+		params->ipv4 = true;
+		break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV6:
+		ip6 = (struct ipv6_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		ip6->proto = input->flow.ipv6_flow.proto ?
+					input->flow.ipv6_flow.proto :
+					next_proto[input->flow_type];
+		rte_memcpy(&ip6->src_addr, &input->flow.ipv6_flow.src_ip,
+			   IPV6_ADDR_LEN);
+		rte_memcpy(&ip6->dst_addr, &input->flow.ipv6_flow.dst_ip,
+			   IPV6_ADDR_LEN);
+		len += sizeof(struct ipv6_hdr);
+		break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %u\n",
+		       input->flow_type);
+		return 0;
+	}
+
+	/* fill the L4 header */
+	raw_pkt = (uint8_t *)buff;
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->dst_port = input->flow.udp4_flow.dst_port;
+		udp->src_port = input->flow.udp4_flow.src_port;
+		udp->dgram_len = sizeof(struct udp_hdr);
+		len += sizeof(struct udp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->src_port = input->flow.tcp4_flow.src_port;
+		tcp->dst_port = input->flow.tcp4_flow.dst_port;
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		len += sizeof(struct tcp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		tcp->src_port = input->flow.tcp6_flow.src_port;
+		tcp->dst_port = input->flow.tcp6_flow.dst_port;
+		len += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->src_port = input->flow.udp6_flow.src_port;
+		udp->dst_port = input->flow.udp6_flow.dst_port;
+		len += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %d\n", input->flow_type);
+		return 0;
+	}
+	return len;
+}
+
+int
+qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+		      enum rte_filter_op filter_op,
+		      void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_fdir_filter *fdir;
+	int ret;
+
+	fdir = (struct rte_eth_fdir_filter *)arg;
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query flowdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 1);
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 0);
+	break;
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_INFO:
+		return -ENOTSUP;
+	break;
+	default:
+		DP_ERR(edev, "unknown operation %u", filter_op);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op,
+			    void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_ntuple_filter *ntuple;
+	struct rte_eth_fdir_filter fdir_entry;
+	struct rte_eth_tcpv4_flow *tcpv4_flow;
+	struct rte_eth_udpv4_flow *udpv4_flow;
+	struct ecore_hwfn *p_hwfn;
+	bool add;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query fdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		add = true;
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		add = false;
+	break;
+	case RTE_ETH_FILTER_INFO:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_SET:
+	case RTE_ETH_FILTER_STATS:
+	case RTE_ETH_FILTER_OP_MAX:
+		DP_ERR(edev, "Unsupported filter_op %d\n", filter_op);
+		return -ENOTSUP;
+	}
+	ntuple = (struct rte_eth_ntuple_filter *)arg;
+	/* Internally convert ntuple to fdir entry */
+	memset(&fdir_entry, 0, sizeof(fdir_entry));
+	if (ntuple->proto == IPPROTO_TCP) {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_TCP;
+		tcpv4_flow = &fdir_entry.input.flow.tcp4_flow;
+		tcpv4_flow->ip.src_ip = ntuple->src_ip;
+		tcpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		tcpv4_flow->ip.proto = IPPROTO_TCP;
+		tcpv4_flow->src_port = ntuple->src_port;
+		tcpv4_flow->dst_port = ntuple->dst_port;
+	} else {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
+		udpv4_flow = &fdir_entry.input.flow.udp4_flow;
+		udpv4_flow->ip.src_ip = ntuple->src_ip;
+		udpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		udpv4_flow->ip.proto = IPPROTO_UDP;
+		udpv4_flow->src_port = ntuple->src_port;
+		udpv4_flow->dst_port = ntuple->dst_port;
+	}
+	return qede_config_cmn_fdir_filter(eth_dev, &fdir_entry, add);
+}
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 74856c5..307b33a 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -12,8 +12,6 @@
 
 #include "qede_ethdev.h"
 
-static uint8_t npar_tx_switching = 1;
-
 /* Alarm timeout. */
 #define QEDE_ALARM_TIMEOUT_US 100000
 
@@ -224,23 +222,34 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 static int qed_slowpath_start(struct ecore_dev *edev,
 			      struct qed_slowpath_params *params)
 {
-	bool allow_npar_tx_switching;
 	const uint8_t *data = NULL;
 	struct ecore_hwfn *hwfn;
 	struct ecore_mcp_drv_version drv_version;
 	struct ecore_hw_init_params hw_init_params;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
+	struct ecore_ptt *p_ptt;
 	int rc;
 
-#ifdef CONFIG_ECORE_BINARY_FW
 	if (IS_PF(edev)) {
+#ifdef CONFIG_ECORE_BINARY_FW
 		rc = qed_load_firmware_data(edev);
 		if (rc) {
 			DP_ERR(edev, "Failed to find fw file %s\n", fw_file);
 			goto err;
 		}
-	}
 #endif
+		hwfn = ECORE_LEADING_HWFN(edev);
+		if (edev->num_hwfns == 1) { /* skip aRFS for 100G device */
+			p_ptt = ecore_ptt_acquire(hwfn);
+			if (p_ptt) {
+				ECORE_LEADING_HWFN(edev)->p_arfs_ptt = p_ptt;
+			} else {
+				DP_ERR(edev, "Failed to acquire PTT for flowdir\n");
+				rc = -ENOMEM;
+				goto err;
+			}
+		}
+	}
 
 	rc = qed_nic_setup(edev);
 	if (rc)
@@ -268,13 +277,11 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		data = (const uint8_t *)edev->firmware + sizeof(u32);
 #endif
 
-	allow_npar_tx_switching = npar_tx_switching ? true : false;
-
 	/* Start the slowpath */
 	memset(&hw_init_params, 0, sizeof(hw_init_params));
 	hw_init_params.b_hw_start = true;
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
-	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
+	hw_init_params.allow_npar_tx_switch = true;
 	hw_init_params.bin_fw_data = data;
 	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
 	hw_init_params.avoid_eng_reset = false;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v3 61/61] net/qede: add LRO/TSO offloads support
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (60 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 60/61] net/qede: add ntuple and flow director filter support Rasesh Mody
@ 2017-03-24  7:28       ` Rasesh Mody
  2017-03-24  7:45       ` [PATCH v2 00/61] net/qede/base: qede PMD enhancements Mody, Rasesh
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-24  7:28 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

This patch includes slowpath configuration and fastpath changes
to support LRO and TSO. A bit of revamping is needed in order
to reuse the existing packet classification scheme in the Rx
fastpath and to handle SG element processing in the Tx fastpath.

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
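For context, a minimal sketch of how an application exercises both
offloads with the APIs of this era (the mbuf 'm', port id, queue counts
and MSS are illustrative):

    /* LRO: requested at configure time; the PMD maps it to TPA/RSC
     * aggregation and forces scattered rx.
     */
    struct rte_eth_conf conf = { 0 };
    conf.rxmode.enable_lro = 1;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);

    /* TSO: requested per packet in the tx fast path */
    m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
    m->l2_len = sizeof(struct ether_hdr);
    m->l3_len = sizeof(struct ipv4_hdr);
    m->l4_len = sizeof(struct tcp_hdr);
    m->tso_segsz = 1448;    /* MSS */
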
 doc/guides/nics/features/qede.ini    |    2 +
 doc/guides/nics/features/qede_vf.ini |    2 +
 doc/guides/nics/qede.rst             |    2 +-
 drivers/net/qede/qede_eth_if.c       |    6 +-
 drivers/net/qede/qede_eth_if.h       |    3 +-
 drivers/net/qede/qede_ethdev.c       |   29 +-
 drivers/net/qede/qede_ethdev.h       |    3 +-
 drivers/net/qede/qede_rxtx.c         |  733 +++++++++++++++++++++++++---------
 drivers/net/qede/qede_rxtx.h         |   30 ++
 9 files changed, 603 insertions(+), 207 deletions(-)

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index b688914..fba5dc3 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -36,3 +36,5 @@ x86-64               = Y
 Usage doc            = Y
 N-tuple filter       = Y
 Flow director        = Y
+LRO                  = Y
+TSO                  = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index acb1b99..21ec40f 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -31,4 +31,6 @@ Stats per queue      = Y
 Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
+LRO                  = Y
+TSO                  = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index df0aaec..eacb3da 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -61,13 +61,13 @@ Supported Features
 - Scatter-Gather
 - VXLAN tunneling offload
 - N-tuple filter and flow director (limited support)
+- LRO/TSO
 
 Non-supported Features
 ----------------------
 
 - SR-IOV PF
 - GENEVE and NVGRE Tunneling offloads
-- LRO/TSO
 - NPAR
 
 Supported QLogic Adapters
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index 8e4290c..86bb129 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -18,8 +18,8 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		u8 tx_switching = 0;
 		struct ecore_sp_vport_start_params start = { 0 };
 
-		start.tpa_mode = p_params->gro_enable ? ECORE_TPA_MODE_GRO :
-		    ECORE_TPA_MODE_NONE;
+		start.tpa_mode = p_params->enable_lro ? ECORE_TPA_MODE_RSC :
+				ECORE_TPA_MODE_NONE;
 		start.remove_inner_vlan = p_params->remove_inner_vlan;
 		start.tx_switching = tx_switching;
 		start.only_untagged = false;	/* untagged only */
@@ -29,7 +29,6 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		start.concrete_fid = p_hwfn->hw_info.concrete_fid;
 		start.handle_ptp_pkts = p_params->handle_ptp_pkts;
 		start.vport_id = p_params->vport_id;
-		start.max_buffers_per_cqe = 16;	/* TODO-is this right */
 		start.mtu = p_params->mtu;
 		/* @DPDK - Disable FW placement */
 		start.zero_placement_offset = 1;
@@ -120,6 +119,7 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
 	sp_params.update_accept_any_vlan_flg =
 	    params->update_accept_any_vlan_flg;
 	sp_params.mtu = params->mtu;
+	sp_params.sge_tpa_params = params->sge_tpa_params;
 
 	for_each_hwfn(edev, i) {
 		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 12dd828..d845bac 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -59,12 +59,13 @@ struct qed_update_vport_params {
 	uint8_t accept_any_vlan;
 	uint8_t update_rss_flg;
 	uint16_t mtu;
+	struct ecore_sge_tpa_params *sge_tpa_params;
 };
 
 struct qed_start_vport_params {
 	bool remove_inner_vlan;
 	bool handle_ptp_pkts;
-	bool gro_enable;
+	bool enable_lro;
 	bool drop_ttl0;
 	uint8_t vport_id;
 	uint16_t mtu;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 22b528d..0762111 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -769,7 +769,7 @@ static int qede_init_vport(struct qede_dev *qdev)
 	int rc;
 
 	start.remove_inner_vlan = 1;
-	start.gro_enable = 0;
+	start.enable_lro = qdev->enable_lro;
 	start.mtu = ETHER_MTU + QEDE_ETH_OVERHEAD;
 	start.vport_id = 0;
 	start.drop_ttl0 = false;
@@ -866,11 +866,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	if (rxmode->enable_scatter == 1)
 		eth_dev->data->scattered_rx = 1;
 
-	if (rxmode->enable_lro == 1) {
-		DP_ERR(edev, "LRO is not supported\n");
-		return -EINVAL;
-	}
-
 	if (!rxmode->hw_strip_crc)
 		DP_INFO(edev, "L2 CRC stripping is always enabled in hw\n");
 
@@ -878,6 +873,13 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
 			      "in hw\n");
 
+	if (rxmode->enable_lro) {
+		qdev->enable_lro = true;
+		/* Enable scatter mode for LRO */
+		if (!rxmode->enable_scatter)
+			eth_dev->data->scattered_rx = 1;
+	}
+
 	/* Check for the port restart case */
 	if (qdev->state != QEDE_DEV_INIT) {
 		rc = qdev->ops->vport_stop(edev, 0);
@@ -957,13 +959,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 static const struct rte_eth_desc_lim qede_rx_desc_lim = {
 	.nb_max = NUM_RX_BDS_MAX,
 	.nb_min = 128,
-	.nb_align = 128	/* lowest common multiple */
+	.nb_align = 128 /* lowest common multiple */
 };
 
 static const struct rte_eth_desc_lim qede_tx_desc_lim = {
 	.nb_max = NUM_TX_BDS_MAX,
 	.nb_min = 256,
-	.nb_align = 256
+	.nb_align = 256,
+	.nb_seg_max = ETH_TX_MAX_BDS_PER_LSO_PACKET,
+	.nb_mtu_seg_max = ETH_TX_MAX_BDS_PER_NON_LSO_PACKET
 };
 
 static void
@@ -1005,12 +1009,16 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 				     DEV_RX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_RX_OFFLOAD_UDP_CKSUM	|
 				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_RX_OFFLOAD_TCP_LRO);
+
 	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
 				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_TX_OFFLOAD_UDP_CKSUM	|
 				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_TX_OFFLOAD_TCP_TSO |
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
 
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
@@ -2107,6 +2115,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->rx_pkt_burst = qede_recv_pkts;
 	eth_dev->tx_pkt_burst = qede_xmit_pkts;
+	eth_dev->tx_pkt_prepare = qede_xmit_prep_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		DP_NOTICE(edev, false,
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 8342b99..799a3ba 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -193,8 +193,7 @@ struct qede_dev {
 	uint16_t rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	uint64_t rss_hf;
 	uint8_t rss_key_len;
-	uint32_t flags;
-	bool gro_disable;
+	bool enable_lro;
 	uint16_t num_queues;
 	uint8_t fp_num_tx;
 	uint8_t fp_num_rx;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 85134fb..380d8fb 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -6,10 +6,9 @@
  * See LICENSE.qede_pmd for copyright and licensing details.
  */
 
+#include <rte_net.h>
 #include "qede_rxtx.h"
 
-static bool gro_disable = 1;	/* mod_param */
-
 static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 {
 	struct rte_mbuf *new_mb = NULL;
@@ -352,7 +351,6 @@ static void qede_init_fp(struct qede_dev *qdev)
 		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
 	}
 
-	qdev->gro_disable = gro_disable;
 }
 
 void qede_free_fp_arrays(struct qede_dev *qdev)
@@ -509,6 +507,30 @@ qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
 	PMD_RX_LOG(DEBUG, rxq, "bd_prod %u  cqe_prod %u", bd_prod, cqe_prod);
 }
 
+static void
+qede_update_sge_tpa_params(struct ecore_sge_tpa_params *sge_tpa_params,
+			   uint16_t mtu, bool enable)
+{
+	/* Enable LRO in split mode */
+	sge_tpa_params->tpa_ipv4_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_en_flg = enable;
+	sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
+	/* set if tpa enable changes */
+	sge_tpa_params->update_tpa_en_flg = 1;
+	/* set if tpa parameters should be handled */
+	sge_tpa_params->update_tpa_param_flg = enable;
+
+	sge_tpa_params->max_buffers_per_cqe = 20;
+	sge_tpa_params->tpa_pkt_split_flg = 1;
+	sge_tpa_params->tpa_hdr_data_split_flg = 0;
+	sge_tpa_params->tpa_gro_consistent_flg = 0;
+	sge_tpa_params->tpa_max_aggs_num = ETH_TPA_MAX_AGGS_NUM;
+	sge_tpa_params->tpa_max_size = 0x7FFF;
+	sge_tpa_params->tpa_min_size_to_start = mtu / 2;
+	sge_tpa_params->tpa_min_size_to_cont = mtu / 2;
+}
+
 static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 {
 	struct qede_dev *qdev = eth_dev->data->dev_private;
@@ -516,6 +538,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	struct ecore_queue_start_common_params q_params;
 	struct qed_dev_info *qed_info = &qdev->dev_info.common;
 	struct qed_update_vport_params vport_update_params;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_tx_queue *txq;
 	struct qede_fastpath *fp;
 	dma_addr_t p_phys_table;
@@ -625,6 +648,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		vport_update_params.tx_switching_flg = 1;
 	}
 
+	/* TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Enabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, true);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
+
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
@@ -761,6 +792,94 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
 		return RTE_PTYPE_UNKNOWN;
 }
 
+static inline void
+qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
+			     struct qede_rx_queue *rxq,
+			     struct eth_fast_path_rx_tpa_cont_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA cont[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Allocate new RX mbufs on the RX BD ring, one per consumed BD */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for LRO cont\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+}
+
+static inline void
+qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
+			    struct qede_rx_queue *rxq,
+			    struct eth_fast_path_rx_tpa_end_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	struct rte_mbuf *rx_mb;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA End[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Allocate new RX mbufs on the RX BD ring, one per consumed BD */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for lro end\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+
+	/* Update total length and frags based on end TPA */
+	rx_mb = rxq->tpa_info[cqe->tpa_agg_index].mbuf;
+	/* TBD: Add sanity checks here */
+	rx_mb->nb_segs = cqe->num_of_bds;
+	rx_mb->pkt_len = cqe->total_packet_len;
+	tpa_info->state = QEDE_AGG_STATE_NONE;
+}
+
 static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 {
 	uint32_t val;
@@ -875,13 +994,20 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t pkt_len; /* Sum of all BD segments */
 	uint16_t len; /* Length of first BD */
 	uint8_t num_segs = 1;
-	uint16_t pad;
 	uint16_t preload_idx;
 	uint8_t csum_flag;
 	uint16_t parse_flag;
 	enum rss_hash_type htype;
 	uint8_t tunn_parse_flag;
 	uint8_t j;
+	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
+	uint64_t ol_flags;
+	uint32_t packet_type;
+	uint16_t vlan_tci;
+	bool tpa_start_flg;
+	uint8_t bitfield_val;
+	uint8_t offset;
+	struct qede_agg_info *tpa_info;
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
 	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -892,16 +1018,55 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return 0;
 
 	while (sw_comp_cons != hw_comp_cons) {
+		ol_flags = 0;
+		packet_type = RTE_PTYPE_UNKNOWN;
+		vlan_tci = 0;
+		tpa_start_flg = false;
+
 		/* Get the CQE from the completion ring */
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-
-		if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
-			PMD_RX_LOG(DEBUG, rxq, "Got a slowath CQE");
-
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+
+		switch (cqe_type) {
+		case ETH_RX_CQE_TYPE_REGULAR:
+			fp_cqe = &cqe->fast_path_regular;
+		break;
+		case ETH_RX_CQE_TYPE_TPA_START:
+			cqe_start_tpa = &cqe->fast_path_tpa_start;
+			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
+			tpa_start_flg = true;
+			PMD_RX_LOG(INFO, rxq,
+				   "TPA start[%u] - len %04x [header %02x]"
+				   " [bd_list[0] %04x], [seg_len %04x]\n",
+				    cqe_start_tpa->tpa_agg_index,
+				    rte_le_to_cpu_16(cqe_start_tpa->
+						     len_on_first_bd),
+				    cqe_start_tpa->header_len,
+				    rte_le_to_cpu_16(cqe_start_tpa->
+							ext_bd_len_list[0]),
+				    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+
+		break;
+		case ETH_RX_CQE_TYPE_TPA_CONT:
+			qede_rx_process_tpa_cont_cqe(qdev, rxq,
+						     &cqe->fast_path_tpa_cont);
+			continue;
+		case ETH_RX_CQE_TYPE_TPA_END:
+			qede_rx_process_tpa_end_cqe(qdev, rxq,
+						    &cqe->fast_path_tpa_end);
+			rx_mb = rxq->
+			tpa_info[cqe->fast_path_tpa_end.tpa_agg_index].mbuf;
+			PMD_RX_LOG(INFO, rxq, "TPA end reason %d\n",
+				   cqe->fast_path_tpa_end.end_reason);
+			goto tpa_end;
+		case ETH_RX_CQE_TYPE_SLOW_PATH:
+			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
 			qdev->ops->eth_cqe_completion(edev, fp->id,
 				(struct eth_slow_path_rx_cqe *)cqe);
+			/* fall-thru */
+		default:
 			goto next_cqe;
 		}
 
@@ -910,69 +1075,93 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rx_mb = rxq->sw_rx_ring[sw_rx_index].mbuf;
 		assert(rx_mb != NULL);
 
-		/* non GRO */
-		fp_cqe = &cqe->fast_path_regular;
-
-		len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
-		pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
-		pad = fp_cqe->placement_offset;
-		assert((len + pad) <= rx_mb->buf_len);
-
-		PMD_RX_LOG(DEBUG, rxq,
-			   "CQE type = 0x%x, flags = 0x%x, vlan = 0x%x"
-			   " len = %u, parsing_flags = %d",
-			   cqe_type, fp_cqe->bitfields,
-			   rte_le_to_cpu_16(fp_cqe->vlan_tag),
-			   len, rte_le_to_cpu_16(fp_cqe->pars_flags.flags));
-
-		/* If this is an error packet then drop it */
-		parse_flag =
-		    rte_le_to_cpu_16(cqe->fast_path_regular.pars_flags.flags);
-
-		rx_mb->ol_flags = 0;
-
+		/* Handle regular CQE or TPA start CQE */
+		if (!tpa_start_flg) {
+			parse_flag = rte_le_to_cpu_16(fp_cqe->pars_flags.flags);
+			bitfield_val = fp_cqe->bitfields;
+			offset = fp_cqe->placement_offset;
+			len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
+			pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
+		} else {
+			parse_flag = rte_le_to_cpu_16(cqe_start_tpa->
+							pars_flags.flags);
+			bitfield_val = cqe_start_tpa->bitfields;
+			offset = cqe_start_tpa->placement_offset;
+			/* seg_len = len_on_first_bd */
+			len = rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd);
+			tpa_info->start_cqe_bd_len = len +
+						cqe_start_tpa->header_len;
+			tpa_info->mbuf = rx_mb;
+		}
 		if (qede_tunn_exist(parse_flag)) {
-			PMD_RX_LOG(DEBUG, rxq, "Rx tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			} else {
-				tunn_parse_flag =
-						fp_cqe->tunnel_pars_flags.flags;
-				rx_mb->packet_type =
-					qede_rx_cqe_to_tunn_pkt_type(
-							tunn_parse_flag);
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				if (tpa_start_flg)
+					tunn_parse_flag = cqe_start_tpa->
+							tunnel_pars_flags.flags;
+				else
+					tunn_parse_flag = fp_cqe->
+							tunnel_pars_flags.flags;
+				packet_type =
+				qede_rx_cqe_to_tunn_pkt_type(tunn_parse_flag);
 			}
 		} else {
-			PMD_RX_LOG(DEBUG, rxq, "Rx non-tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx non-tunneled packet\n");
 			if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-			} else if (unlikely(qede_check_notunn_csum_l3(rx_mb,
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			} else {
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			}
+			if (unlikely(qede_check_notunn_csum_l3(rx_mb,
 							parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					   "IP csum failed, flags = 0x%x",
+					   "IP csum failed, flags = 0x%x\n",
 					   parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				ol_flags |= PKT_RX_IP_CKSUM_BAD;
 			} else {
-				rx_mb->packet_type =
+				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				packet_type =
 					qede_rx_cqe_to_pkt_type(parse_flag);
 			}
 		}
 
-		PMD_RX_LOG(INFO, rxq, "packet_type 0x%x", rx_mb->packet_type);
+		if (CQE_HAS_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_VLAN_PKT;
+		}
+
+		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_QINQ_PKT;
+			rx_mb->vlan_tci_outer = 0;
+		}
+
+		/* RSS Hash */
+		htype = (uint8_t)GET_FIELD(bitfield_val,
+					ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
+		if (qdev->rss_enable && htype) {
+			ol_flags |= PKT_RX_RSS_HASH;
+			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
+			PMD_RX_LOG(INFO, rxq, "Hash result 0x%x\n",
+				   rx_mb->hash.rss);
+		}
 
 		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
 			PMD_RX_LOG(ERR, rxq,
 				   "New buffer allocation failed,"
-				   "dropping incoming packet");
+				   "dropping incoming packet\n");
 			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
 			rte_eth_devices[rxq->port_id].
 			    data->rx_mbuf_alloc_failed++;
@@ -980,7 +1169,8 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 		}
 		qede_rx_bd_ring_consume(rxq);
-		if (fp_cqe->bd_num > 1) {
+
+		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
 			PMD_RX_LOG(DEBUG, rxq, "Jumbo-over-BD packet: %02x BDs"
 				   " len on first: %04x Total Len: %04x",
 				   fp_cqe->bd_num, len, pkt_len);
@@ -1008,40 +1198,24 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rte_prefetch0(rxq->sw_rx_ring[preload_idx].mbuf);
 
 		/* Update rest of the MBUF fields */
-		rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
-		rx_mb->nb_segs = fp_cqe->bd_num;
-		rx_mb->data_len = len;
-		rx_mb->pkt_len = pkt_len;
+		rx_mb->data_off = offset + RTE_PKTMBUF_HEADROOM;
 		rx_mb->port = rxq->port_id;
-
-		htype = (uint8_t)GET_FIELD(fp_cqe->bitfields,
-				ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
-		if (qdev->rss_enable && htype) {
-			rx_mb->ol_flags |= PKT_RX_RSS_HASH;
-			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
-			PMD_RX_LOG(DEBUG, rxq, "Hash result 0x%x",
-				   rx_mb->hash.rss);
+		rx_mb->ol_flags = ol_flags;
+		rx_mb->data_len = len;
+		rx_mb->vlan_tci = vlan_tci;
+		rx_mb->packet_type = packet_type;
+		PMD_RX_LOG(INFO, rxq, "pkt_type %04x len %04x flags %04lx\n",
+			   packet_type, len, (unsigned long)ol_flags);
+		if (!tpa_start_flg) {
+			rx_mb->nb_segs = fp_cqe->bd_num;
+			rx_mb->pkt_len = pkt_len;
 		}
-
 		rte_prefetch1(rte_pktmbuf_mtod(rx_mb, void *));
-
-		if (CQE_HAS_VLAN(parse_flag)) {
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_VLAN_PKT;
-		}
-
-		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
-			/* FW does not provide indication of Outer VLAN tag,
-			 * which is always stripped, so vlan_tci_outer is set
-			 * to 0. Here vlan_tag represents inner VLAN tag.
-			 */
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_QINQ_PKT;
-			rx_mb->vlan_tci_outer = 0;
+tpa_end:
+		if (!tpa_start_flg) {
+			rx_pkts[rx_pkt] = rx_mb;
+			rx_pkt++;
 		}
-
-		rx_pkts[rx_pkt] = rx_mb;
-		rx_pkt++;
 next_cqe:
 		ecore_chain_recycle_consumed(&rxq->rx_comp_ring);
 		sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -1062,101 +1236,91 @@ next_cqe:
 	return rx_pkt;
 }
 
-static inline int
-qede_free_tx_pkt(struct ecore_dev *edev, struct qede_tx_queue *txq)
+static inline void
+qede_free_tx_pkt(struct qede_tx_queue *txq)
 {
-	uint16_t nb_segs, idx = TX_CONS(txq);
-	struct eth_tx_bd *tx_data_bd;
-	struct rte_mbuf *mbuf = txq->sw_tx_ring[idx].mbuf;
-
-	if (unlikely(!mbuf)) {
-		PMD_TX_LOG(ERR, txq, "null mbuf");
-		PMD_TX_LOG(ERR, txq,
-			   "tx_desc %u tx_avail %u tx_cons %u tx_prod %u",
-			   txq->nb_tx_desc, txq->nb_tx_avail, idx,
-			   TX_PROD(txq));
-		return -1;
-	}
-
-	nb_segs = mbuf->nb_segs;
-	while (nb_segs) {
-		/* It's like consuming rxbuf in recv() */
+	struct rte_mbuf *mbuf;
+	uint16_t nb_segs;
+	uint16_t idx;
+	uint8_t nbds;
+
+	idx = TX_CONS(txq);
+	mbuf = txq->sw_tx_ring[idx].mbuf;
+	if (mbuf) {
+		nb_segs = mbuf->nb_segs;
+		PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
+		while (nb_segs) {
+			/* It's like consuming rxbuf in recv() */
+			ecore_chain_consume(&txq->tx_pbl);
+			txq->nb_tx_avail++;
+			nb_segs--;
+		}
+		rte_pktmbuf_free(mbuf);
+		txq->sw_tx_ring[idx].mbuf = NULL;
+		txq->sw_tx_cons++;
+		PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
+	} else {
 		ecore_chain_consume(&txq->tx_pbl);
 		txq->nb_tx_avail++;
-		nb_segs--;
 	}
-	rte_pktmbuf_free(mbuf);
-	txq->sw_tx_ring[idx].mbuf = NULL;
-
-	return 0;
 }
 
-static inline uint16_t
+static inline void
 qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
 {
-	uint16_t tx_compl = 0;
 	uint16_t hw_bd_cons;
+	uint16_t sw_tx_cons;
 
-	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
 	rte_compiler_barrier();
-
-	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl)) {
-		if (qede_free_tx_pkt(edev, txq)) {
-			PMD_TX_LOG(ERR, txq,
-				   "hw_bd_cons = %u, chain_cons = %u",
-				   hw_bd_cons,
-				   ecore_chain_get_cons_idx(&txq->tx_pbl));
-			break;
-		}
-		txq->sw_tx_cons++;	/* Making TXD available */
-		tx_compl++;
-	}
-
-	PMD_TX_LOG(DEBUG, txq, "Tx compl %u sw_tx_cons %u avail %u",
-		   tx_compl, txq->sw_tx_cons, txq->nb_tx_avail);
-	return tx_compl;
+	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
+	sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
+	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+		   abs(hw_bd_cons - sw_tx_cons));
+	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl))
+		qede_free_tx_pkt(txq);
 }
 
 /* Populate scatter gather buffer descriptor fields */
 static inline uint8_t
 qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
-		  struct eth_tx_1st_bd *bd1)
+		  struct eth_tx_2nd_bd **bd2, struct eth_tx_3rd_bd **bd3)
 {
 	struct qede_tx_queue *txq = p_txq;
-	struct eth_tx_2nd_bd *bd2 = NULL;
-	struct eth_tx_3rd_bd *bd3 = NULL;
 	struct eth_tx_bd *tx_bd = NULL;
 	dma_addr_t mapping;
-	uint8_t nb_segs = 1; /* min one segment per packet */
+	uint8_t nb_segs = 0;
 
 	/* Check for scattered buffers */
 	while (m_seg) {
-		if (nb_segs == 1) {
-			bd2 = (struct eth_tx_2nd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd2, 0, sizeof(*bd2));
+		if (nb_segs == 0) {
+			if (!*bd2) {
+				*bd2 = (struct eth_tx_2nd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd2, 0, sizeof(struct eth_tx_2nd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd2, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x",
-				   m_seg->data_len);
-		} else if (nb_segs == 2) {
-			bd3 = (struct eth_tx_3rd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd3, 0, sizeof(*bd3));
+			QEDE_BD_SET_ADDR_LEN(*bd2, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x", m_seg->data_len);
+		} else if (nb_segs == 1) {
+			if (!*bd3) {
+				*bd3 = (struct eth_tx_3rd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd3, 0, sizeof(struct eth_tx_3rd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd3, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x",
-				   m_seg->data_len);
+			QEDE_BD_SET_ADDR_LEN(*bd3, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x", m_seg->data_len);
 		} else {
 			tx_bd = (struct eth_tx_bd *)
 				ecore_chain_produce(&txq->tx_pbl);
 			memset(tx_bd, 0, sizeof(*tx_bd));
+			nb_segs++;
 			mapping = rte_mbuf_data_dma_addr(m_seg);
 			QEDE_BD_SET_ADDR_LEN(tx_bd, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD len %04x",
-				   m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD len %04x", m_seg->data_len);
 		}
-		nb_segs++;
 		m_seg = m_seg->next;
 	}
 
@@ -1164,59 +1328,209 @@ qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
 	return nb_segs;
 }
 
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+static inline void
+print_tx_bd_info(struct qede_tx_queue *txq,
+		 struct eth_tx_1st_bd *bd1,
+		 struct eth_tx_2nd_bd *bd2,
+		 struct eth_tx_3rd_bd *bd3,
+		 uint64_t tx_ol_flags)
+{
+	char ol_buf[256] = { 0 }; /* for verbose prints */
+
+	if (bd1)
+		PMD_TX_LOG(INFO, txq,
+			   "BD1: nbytes=%u nbds=%u bd_flags=%04x bf=%04x\n",
+			   rte_cpu_to_le_16(bd1->nbytes), bd1->data.nbds,
+			   bd1->data.bd_flags.bitfields,
+			   rte_cpu_to_le_16(bd1->data.bitfields));
+	if (bd2)
+		PMD_TX_LOG(INFO, txq,
+			   "BD2: nbytes=%u bf=%04x\n",
+			   rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1);
+	if (bd3)
+		PMD_TX_LOG(INFO, txq,
+			   "BD3: nbytes=%u bf=%04x mss=%u\n",
+			   rte_cpu_to_le_16(bd3->nbytes),
+			   rte_cpu_to_le_16(bd3->data.bitfields),
+			   rte_cpu_to_le_16(bd3->data.lso_mss));
+
+	rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
+	PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+}
+#endif
+
+/* TX prepare to check that packets meet TX conditions */
+uint16_t
+qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	struct qede_tx_queue *txq = p_txq;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+	uint16_t i;
+	int ret;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+		if (ol_flags & PKT_TX_TCP_SEG) {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+			/* TBD: confirm it's ~9700B for both? */
+			if (m->tso_segsz > ETH_TX_MAX_NON_LSO_PKT_LEN) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		} else {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_NON_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		}
+		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			break;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+#endif
+		/* TBD: pseudo csum calculation required iff
+		 * ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE not set?
+		 */
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	if (unlikely(i != nb_pkts))
+		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+			   nb_pkts - i);
+	return i;
+}
+
 uint16_t
 qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct qede_tx_queue *txq = p_txq;
 	struct qede_dev *qdev = txq->qdev;
 	struct ecore_dev *edev = &qdev->edev;
-	struct qede_fastpath *fp;
-	struct eth_tx_1st_bd *bd1;
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *m_seg = NULL;
 	uint16_t nb_tx_pkts;
 	uint16_t bd_prod;
 	uint16_t idx;
-	uint16_t tx_count;
 	uint16_t nb_frags;
 	uint16_t nb_pkt_sent = 0;
-
-	fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
+	uint8_t nbds;
+	bool ipv6_ext_flg;
+	bool lso_flg;
+	bool tunn_flg;
+	struct eth_tx_1st_bd *bd1;
+	struct eth_tx_2nd_bd *bd2;
+	struct eth_tx_3rd_bd *bd3;
+	uint64_t tx_ol_flags;
+	uint16_t hdr_size;
 
 	if (unlikely(txq->nb_tx_avail < txq->tx_free_thresh)) {
 		PMD_TX_LOG(DEBUG, txq, "send=%u avail=%u free_thresh=%u",
 			   nb_pkts, txq->nb_tx_avail, txq->tx_free_thresh);
-		(void)qede_process_tx_compl(edev, txq);
-	}
-
-	nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail /
-			ETH_TX_MAX_BDS_PER_NON_LSO_PACKET));
-	if (unlikely(nb_tx_pkts == 0)) {
-		PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u",
-			   nb_pkts, txq->nb_tx_avail);
-		return 0;
+		qede_process_tx_compl(edev, txq);
 	}
 
-	tx_count = nb_tx_pkts;
+	nb_tx_pkts = nb_pkts;
+	bd_prod = rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
 	while (nb_tx_pkts--) {
+		/* Init flags/values */
+		ipv6_ext_flg = false;
+		tunn_flg = false;
+		lso_flg = false;
+		nbds = 0;
+		bd1 = NULL;
+		bd2 = NULL;
+		bd3 = NULL;
+		hdr_size = 0;
+
+		mbuf = *tx_pkts;
+		assert(mbuf);
+
+		/* Check minimum TX BDS availability against available BDs */
+		if (unlikely(txq->nb_tx_avail < mbuf->nb_segs))
+			break;
+
+		tx_ol_flags = mbuf->ol_flags;
+
+#define RTE_ETH_IS_IPV6_HDR_EXT(ptype) ((ptype) & RTE_PTYPE_L3_IPV6_EXT)
+		if (RTE_ETH_IS_IPV6_HDR_EXT(mbuf->packet_type))
+			ipv6_ext_flg = true;
+
+		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type))
+			tunn_flg = true;
+
+		if (tx_ol_flags & PKT_TX_TCP_SEG)
+			lso_flg = true;
+
+		if (lso_flg) {
+			if (unlikely(txq->nb_tx_avail <
+						ETH_TX_MIN_BDS_PER_LSO_PKT))
+				break;
+		} else {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_NON_LSO_PKT))
+				break;
+		}
+
+		if (tunn_flg && ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+				ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		if (ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT))
+				break;
+		}
+
 		/* Fill the entry in the SW ring and the BDs in the FW ring */
 		idx = TX_PROD(txq);
-		mbuf = *tx_pkts++;
+		tx_pkts++;
 		txq->sw_tx_ring[idx].mbuf = mbuf;
+
+		/* BD1 */
 		bd1 = (struct eth_tx_1st_bd *)ecore_chain_produce(&txq->tx_pbl);
-		bd1->data.bd_flags.bitfields =
+		memset(bd1, 0, sizeof(struct eth_tx_1st_bd));
+		nbds++;
+
+		bd1->data.bd_flags.bitfields |=
 			1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
 		/* FW 8.10.x specific change */
-		bd1->data.bitfields =
+		if (!lso_flg) {
+			bd1->data.bitfields |=
 			(mbuf->pkt_len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK)
 				<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
-		/* Map MBUF linear data for DMA and set in the first BD */
-		QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
-				     mbuf->data_len);
-		PMD_TX_LOG(INFO, txq, "BD1 len %04x", mbuf->data_len);
+			/* Map MBUF linear data for DMA and set in the BD1 */
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     mbuf->data_len);
+		} else {
+			/* For LSO, packet header and payload must reside on
+			 * buffers pointed by different BDs. Using BD1 for HDR
+			 * and BD2 onwards for data.
+			 */
+			hdr_size = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     hdr_size);
+		}
 
-		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type)) {
-			PMD_TX_LOG(INFO, txq, "Tx tunnel packet");
+		if (tunn_flg) {
 			/* First indicate its a tunnel pkt */
 			bd1->data.bd_flags.bitfields |=
 				ETH_TX_DATA_1ST_BD_TUNN_FLAG_MASK <<
@@ -1231,8 +1545,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 					1 << ETH_TX_DATA_1ST_BD_TUNN_FLAG_SHIFT;
 
 			/* Outer IP checksum offload */
-			if (mbuf->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
-				PMD_TX_LOG(INFO, txq, "OuterIP csum offload");
+			if (tx_ol_flags & PKT_TX_OUTER_IP_CKSUM) {
 				bd1->data.bd_flags.bitfields |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -1245,43 +1558,79 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
-			PMD_TX_LOG(INFO, txq, "Insert VLAN 0x%x",
-				   mbuf->vlan_tci);
+		if (tx_ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
 			bd1->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
+		if (lso_flg)
+			bd1->data.bd_flags.bitfields |=
+				1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
+
 		/* Offload the IP checksum in the hardware */
-		if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
-			PMD_TX_LOG(INFO, txq, "IP csum offload");
+		if ((lso_flg) || (tx_ol_flags & PKT_TX_IP_CKSUM))
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			PMD_TX_LOG(INFO, txq, "L4 csum offload");
+		if ((lso_flg) || (tx_ol_flags & (PKT_TX_TCP_CKSUM |
+						PKT_TX_UDP_CKSUM)))
+			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
-			/* IPv6 + extn. -> later */
+
+		/* BD2 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd2 = (struct eth_tx_2nd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd2, 0, sizeof(struct eth_tx_2nd_bd));
+			nbds++;
+			QEDE_BD_SET_ADDR_LEN(bd2,
+					    (hdr_size +
+					    rte_mbuf_data_dma_addr(mbuf)),
+					    mbuf->data_len - hdr_size);
+			/* TBD: check pseudo csum iff tx_prepare not called? */
+			if (ipv6_ext_flg) {
+				bd2->data.bitfields1 |=
+				ETH_L4_PSEUDO_CSUM_ZERO_LENGTH <<
+				ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT;
+			}
+		}
+
+		/* BD3 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd3 = (struct eth_tx_3rd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd3, 0, sizeof(struct eth_tx_3rd_bd));
+			nbds++;
+			if (lso_flg) {
+				bd3->data.lso_mss =
+					rte_cpu_to_le_16(mbuf->tso_segsz);
+				/* Using one header BD */
+				bd3->data.bitfields |=
+					rte_cpu_to_le_16(1 <<
+					ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT);
+			}
 		}
 
 		/* Handle fragmented MBUF */
 		m_seg = mbuf->next;
 		/* Encode scatter gather buffer descriptors if required */
-		nb_frags = qede_encode_sg_bd(txq, m_seg, bd1);
-		bd1->data.nbds = nb_frags;
-		txq->nb_tx_avail -= nb_frags;
+		nb_frags = qede_encode_sg_bd(txq, m_seg, &bd2, &bd3);
+		bd1->data.nbds = nbds + nb_frags;
+		txq->nb_tx_avail -= bd1->data.nbds;
 		txq->sw_tx_prod++;
 		rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
 		bd_prod =
 		    rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+		print_tx_bd_info(txq, bd1, bd2, bd3, tx_ol_flags);
+		PMD_TX_LOG(INFO, txq, "lso=%d tunn=%d ipv6_ext=%d\n",
+			   lso_flg, tunn_flg, ipv6_ext_flg);
+#endif
 		nb_pkt_sent++;
 		txq->xmit_pkts++;
-		PMD_TX_LOG(INFO, txq, "nbds = %d pkt_len = %04x",
-			   bd1->data.nbds, mbuf->pkt_len);
 	}
 
 	/* Write value of prod idx into bd_prod */
@@ -1292,10 +1641,10 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	rte_wmb();
 
 	/* Check again for Tx completions */
-	(void)qede_process_tx_compl(edev, txq);
+	qede_process_tx_compl(edev, txq);
 
-	PMD_TX_LOG(DEBUG, txq, "to_send=%u can_send=%u sent=%u core=%d",
-		   nb_pkts, tx_count, nb_pkt_sent, rte_lcore_id());
+	PMD_TX_LOG(DEBUG, txq, "to_send=%u sent=%u bd_prod=%u core=%d",
+		   nb_pkts, nb_pkt_sent, TX_PROD(txq), rte_lcore_id());
 
 	return nb_pkt_sent;
 }
@@ -1380,8 +1729,7 @@ static int qede_drain_txq(struct qede_dev *qdev,
 		qede_process_tx_compl(edev, txq);
 		if (!cnt) {
 			if (allow_drain) {
-				DP_NOTICE(edev, false,
-					  "Tx queue[%u] is stuck,"
+				DP_ERR(edev, "Tx queue[%u] is stuck, "
 					  "requesting MCP to drain\n",
 					  txq->queue_id);
 				rc = qdev->ops->common->drain(edev);
@@ -1389,13 +1737,11 @@ static int qede_drain_txq(struct qede_dev *qdev,
 					return rc;
 				return qede_drain_txq(qdev, txq, false);
 			}
-
-			DP_NOTICE(edev, false,
-				  "Timeout waiting for tx queue[%d]:"
+			DP_ERR(edev, "Timeout waiting for tx queue[%d]: "
 				  "PROD=%d, CONS=%d\n",
 				  txq->queue_id, txq->sw_tx_prod,
 				  txq->sw_tx_cons);
-			return -ENODEV;
+			return -1;
 		}
 		cnt--;
 		DELAY(1000);
@@ -1412,6 +1758,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_fastpath *fp;
 	int rc, tc, i;
 
@@ -1421,9 +1768,15 @@ static int qede_stop_queues(struct qede_dev *qdev)
 	vport_update_params.update_vport_active_flg = 1;
 	vport_update_params.vport_active_flg = 0;
 	vport_update_params.update_rss_flg = 0;
+	/* Disable TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Disabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, false);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
 
 	DP_INFO(edev, "Deactivate vport\n");
-
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Failed to update vport\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 17a2f0c..c27632e 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -126,6 +126,19 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
+#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
+				   PKT_TX_TCP_CKSUM             | \
+				   PKT_TX_UDP_CKSUM             | \
+				   PKT_TX_OUTER_IP_CKSUM        | \
+				   PKT_TX_TCP_SEG)
+
+#define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
+			      PKT_TX_QINQ_PKT           | \
+			      PKT_TX_VLAN_PKT)
+
+#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
+	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+
 /*
  * RX BD descriptor ring
  */
@@ -135,6 +148,19 @@ struct qede_rx_entry {
 	/* allows expansion .. */
 };
 
+/* TPA related structures */
+enum qede_agg_state {
+	QEDE_AGG_STATE_NONE  = 0,
+	QEDE_AGG_STATE_START = 1,
+	QEDE_AGG_STATE_ERROR = 2
+};
+
+struct qede_agg_info {
+	struct rte_mbuf *mbuf;
+	uint16_t start_cqe_bd_len;
+	uint8_t state; /* for sanity check */
+};
+
 /*
  * Structure associated with each RX queue.
  */
@@ -155,6 +181,7 @@ struct qede_rx_queue {
 	uint64_t rx_segs;
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
+	struct qede_agg_info tpa_info[ETH_TPA_MAX_AGGS_NUM];
 	struct qede_dev *qdev;
 	void *handle;
 };
@@ -232,6 +259,9 @@ void qede_free_mem_load(struct rte_eth_dev *eth_dev);
 uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 
+uint16_t qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts);
 
-- 
1.7.10.3


* Re: [PATCH v2 00/61] net/qede/base: qede PMD enhancements
  2017-03-20 16:59     ` Ferruh Yigit
                         ` (61 preceding siblings ...)
  2017-03-24  7:28       ` [PATCH v3 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
@ 2017-03-24  7:45       ` Mody, Rasesh
  62 siblings, 0 replies; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-24  7:45 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Dept-Eng DPDK Dev

Hi Ferruh,
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Monday, March 20, 2017 9:59 AM
> To: Mody, Rasesh <Rasesh.Mody@cavium.com>; dev@dpdk.org
> Cc: Dept-Eng DPDK Dev <Dept-EngDPDKDev@cavium.com>
> Subject: Re: [PATCH v2 00/61] net/qede/base: qede PMD enhancements
> 
> On 3/18/2017 7:05 AM, Rasesh Mody wrote:
> > Hi,
> >
> > This patch set adds support for new firmware 8.18.9.0, new features
> > and bug fixes.
> >
> > Please apply to dpdk-net-next for 17.05 release. Note that this patch
> > set depends on http://dpdk.org/dev/patchwork/patch/21896.
> >
> > v1..v2
> >  - address all the review comments received so far
> >
> > Thanks!
> > Rasesh
> >
> > Harish Patil (3):
> >   net/qede/base: add support for arfs mode
> >   net/qede: add ntuple and flow director filter support
> >   net/qede: add LRO/TSO offloads support
> >
> > Rasesh Mody (58):
> >   net/qede/base: return an initialized return value
> >   net/qede/base: send FW version driver state to MFW
> >   net/qede/base: mask Rx buffer attention bits
> >   net/qede/base: print various indication on Tx-timeouts
> >   net/qede/base: utilize FW 8.18.9.0
> >   net/qede: upgrade the FW to 8.18.9.0
> >   net/qede/base: decrease maximum HW func per device
> >   net/qede/base: move mask constants defining NIC type
> >   net/qede/base: remove attribute from update current config
> >   net/qede/base: add nvram options
> >   net/qede/base: add comment
> >   net/qede/base: use default MTU from shared memory
> >   net/qede/base: change queue/sb-id from 8 bit to 16 bit
> >   net/qede/base: update MFW when default MTU is changed
> >   net/qede/base: prevent device init failure
> >   net/qede/base: read card personality via MFW commands
> >   net/qede/base: allow probe to succeed with minor HW-issues
> >   net/qede/base: remove unneeded step in HW init
> >   net/qede/base: allow only trusted VFs to be promisc
> >   net/qede/base: qm initialization revamp
> >   net/qede/base: print firmware MFW and MBI versions
> >   net/qede/base: check active VF queues before stopping
> >   net/qede/base: set driver type before sending load request
> >   net/qede/base: prevent driver laod with invalid resources
> >   net/qede/base: add interfaces for MFW TLV request processing
> >   net/qede/base: code refactoring of SP queues
> >   net/qede/base: make L2 queues handle based
> >   net/qede/base: add support for handling TLV request from MFW
> >   net/qede/base: optimize cache-line access
> >   net/qede/base: infrastructure changes for VF tunnelling
> >   net/qede/base: revise tunnel APIs/structs
> >   net/qede/base: add tunnelling support for VFs
> >   net/qede/base: formatting changes
> >   net/qede/base: prevent transmitter stuck condition
> >   net/qede/base: add mask/shift defines for resource command
> >   net/qede/base: add API for using MFW resource lock
> >   net/qede/base: remove clock slowdown option
> >   net/qede/base: add new image types
> >   net/qede/base: use L2-handles for RSS configuration
> >   net/qede/base: change valloc to vzalloc
> >   net/qede/base: add support for previous driver unload
> >   net/qede/base: add non-L2 dcbx tlv application support
> >   net/qede/base: update bulletin board during VF init
> >   net/qede/base: add coalescing support for VFs
> >   net/qede/base: add macro got resource value message
> >   net/qede/base: add mailbox for resource allocation
> >   net/qede/base: add macro for unsupported command
> >   net/qede/base: set max values for soft resoruces
> >   net/qede/base: add return code check
> >   net/qede/base: zero out MFW mailbox data
> >   net/qede/base: move code bits
> >   net/qede/base: add PF parameter
> >   net/qede/base: allow PMD to control vport and RSS engine ids
> >   net/qede/base: add udp ports in bulletin board message
> >   net/qede/base: prevent DMAE transactions during recovery
> >   net/qede/base: multi-Txq support on same queue-zone for VFs
> >   net/qede/base: prevent race condition during unload
> >   net/qede/base: semantic changes
> >
> 
> Hi Rasesh,
> 
> Getting the following build errors, one with clang [1] and the other with
> a 32-bit build [2]. I have not investigated which patch causes them; I am
> just copy-pasting the build errors.

We've addressed the clang and 32-bit errors in our v3 submission.

Thanks!
-Rasesh
> 
> These look like the same build errors as with the previous version of the patchset.
> 
> 
> [1]
> .../drivers/net/qede/qede_rxtx.c:1202:21: error: variable 'pad' is uninitialized
> when used here [-Werror,-Wuninitialized]
>                 rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
>                                   ^~~
> .../drivers/net/qede/qede_rxtx.c:997:14: note: initialize the variable 'pad' to
> silence this warning
>         uint16_t pad;
>                     ^
>                      = 0
> 
> 
> [2]
> .../drivers/net/qede/qede_fdir.c: In function 'qede_config_cmn_fdir_filter':
> .../drivers/net/qede/qede_fdir.c:126:44: error: format '%lx' expects
> argument of type 'long unsigned int', but argument 4 has type 'uint64_t {aka
> long long unsigned int}' [-Werror=format=]
>   snprintf(mz_name, sizeof(mz_name) - 1, "%lx", rte_get_timer_cycles());
>                                             ^


* Re: [PATCH v3 41/61] net/qede/base: add support for previous driver unload
  2017-03-24  7:28       ` [PATCH v3 41/61] net/qede/base: add support for previous driver unload Rasesh Mody
@ 2017-03-24 11:00         ` Ferruh Yigit
  2017-03-25  6:25           ` Mody, Rasesh
  0 siblings, 1 reply; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-24 11:00 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 3/24/2017 7:28 AM, Rasesh Mody wrote:
> New driver/management fw load request sequence for handling previous
> driver unload.
> 
> Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>

Hi Rasesh,

The patch-by-patch build is broken by this patch with the following
build error, and is fixed again by patch 50/61:

.../drivers/net/qede/base/ecore_mcp.c:624:2: error: signed shift result
(0xF00000000) requires 37 bits to represent, but 'int' only has 32 bits
[-Werror,-Wshift-overflow]
        ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../drivers/net/qede/base/ecore.h:107:31: note: expanded from macro
'ECORE_MFW_SET_FIELD'
        (name) &= ~((field ## _MASK) << (field ## _SHIFT));             \
                    ~~~~~~~~~~~~~~~~ ^  ~~~~~~~~~~~~~~~~~
.../drivers/net/qede/base/ecore_mcp.c:626:2: error: signed shift result
(0xF0000000000) requires 45 bits to represent, but 'int' only has 32
bits [-Werror,-Wshift-overflow]
        ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../drivers/net/qede/base/ecore.h:107:31: note: expanded from macro
'ECORE_MFW_SET_FIELD'
        (name) &= ~((field ## _MASK) << (field ## _SHIFT));             \
                    ~~~~~~~~~~~~~~~~ ^  ~~~~~~~~~~~~~~~~~
<...>
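
For reference, a minimal sketch of one way to avoid this class of
overflow, assuming the macro keeps its current shape: widen the mask to
an unsigned 64-bit type before shifting, so a 4-bit mask shifted by 32
or 40 bits is evaluated in 64 bits instead of in 'int'. The macro and
field names below mirror the error output and are illustrative:

	#include <stdint.h>

	/* Clear-and-set a field inside a 64-bit register value. The
	 * uint64_t cast keeps the shift in range even when
	 * field##_SHIFT is 32 or more.
	 */
	#define MFW_SET_FIELD(name, field, value)                             \
	do {                                                                  \
		(name) &= ~((uint64_t)(field ## _MASK) << (field ## _SHIFT)); \
		(name) |= ((uint64_t)(value) & (field ## _MASK)) <<           \
			  (field ## _SHIFT);                                  \
	} while (0)

With LOAD_REQ_FORCE_MASK 0xF and LOAD_REQ_FORCE_SHIFT 32, for example,
the cleared mask is then evaluated as 0xF00000000ULL without ever
overflowing a 32-bit int.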


* Re: [PATCH v3 00/61] net/qede/base: qede PMD enhancements
  2017-03-24  7:27       ` [PATCH v3 " Rasesh Mody
@ 2017-03-24 11:08         ` Ferruh Yigit
  2017-03-28  6:42           ` [PATCH 01/62] net/qede/base: return an initialized return value Rasesh Mody
                             ` (64 more replies)
  0 siblings, 65 replies; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-24 11:08 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 3/24/2017 7:27 AM, Rasesh Mody wrote:
> Hi Ferruh,
> 
> This patch set adds support for new firmware 8.18.9.0, new features and
> bug fixes.
> 
> Please apply to dpdk-net-next for 17.05 release.
> 
> v1..v3
>  - address all the review comments received so far, including fixes for
>    the clang and 32-bit compilation errors.
> 
> Thanks!
> Rasesh
> 
> Harish Patil (3):
>   net/qede/base: add support for arfs mode
>   net/qede: add ntuple and flow director filter support
>   net/qede: add LRO/TSO offloads support
> 
> Rasesh Mody (58):
>   net/qede/base: return an initialized return value
>   net/qede/base: send FW version driver state to MFW
>   net/qede/base: mask Rx buffer attention bits
>   net/qede/base: print various indication on Tx-timeouts
>   net/qede/base: utilize FW 8.18.9.0
>   net/qede: upgrade the FW to 8.18.9.0
>   net/qede/base: decrease maximum HW func per device
>   net/qede/base: move mask constants defining NIC type
>   net/qede/base: remove attribute from update current config
>   net/qede/base: add nvram options
>   net/qede/base: add comment
>   net/qede/base: use default MTU from shared memory
>   net/qede/base: change queue/sb-id from 8 bit to 16 bit
>   net/qede/base: update MFW when default MTU is changed
>   net/qede/base: prevent device init failure
>   net/qede/base: read card personality via MFW commands
>   net/qede/base: allow probe to succeed with minor HW-issues
>   net/qede/base: remove unneeded step in HW init
>   net/qede/base: allow only trusted VFs to be promisc
>   net/qede/base: qm initialization revamp
>   net/qede/base: print firmware MFW and MBI versions
>   net/qede/base: check active VF queues before stopping
>   net/qede/base: set driver type before sending load request
>   net/qede/base: prevent driver laod with invalid resources
>   net/qede/base: add interfaces for MFW TLV request processing
>   net/qede/base: code refactoring of SP queues
>   net/qede/base: make L2 queues handle based
>   net/qede/base: add support for handling TLV request from MFW
>   net/qede/base: optimize cache-line access
>   net/qede/base: infrastructure changes for VF tunnelling
>   net/qede/base: revise tunnel APIs/structs
>   net/qede/base: add tunnelling support for VFs
>   net/qede/base: formatting changes
>   net/qede/base: prevent transmitter stuck condition
>   net/qede/base: add mask/shift defines for resource command
>   net/qede/base: add API for using MFW resource lock
>   net/qede/base: remove clock slowdown option
>   net/qede/base: add new image types
>   net/qede/base: use L2-handles for RSS configuration
>   net/qede/base: change valloc to vzalloc
>   net/qede/base: add support for previous driver unload
>   net/qede/base: add non-L2 dcbx tlv application support
>   net/qede/base: update bulletin board during VF init
>   net/qede/base: add coalescing support for VFs
>   net/qede/base: add macro got resource value message
>   net/qede/base: add mailbox for resource allocation
>   net/qede/base: add macro for unsupported command
>   net/qede/base: set max values for soft resoruces
>   net/qede/base: add return code check
>   net/qede/base: zero out MFW mailbox data
>   net/qede/base: move code bits
>   net/qede/base: add PF parameter
>   net/qede/base: allow PMD to control vport and RSS engine ids
>   net/qede/base: add udp ports in bulletin board message
>   net/qede/base: prevent DMAE transactions during recovery
>   net/qede/base: multi-Txq support on same queue-zone for VFs
>   net/qede/base: prevent race condition during unload
>   net/qede/base: semantic changes

Can you also check the following commit log spellings [1] for the next
version, thanks.


[1]
--->  net/qede/base: allow PMD to control vport and RSS engine ids
initializaion

--->  net/qede/base: set max values for soft resoruces
resoruces

--->  net/qede/base: add macro for unsupported command
upsupported

--->  net/qede/base: prevent driver laod with invalid resources
laod


* Re: [PATCH v2 61/61] net/qede: add LRO/TSO offloads support
  2017-03-18  7:06   ` [PATCH v2 61/61] net/qede: add LRO/TSO offloads support Rasesh Mody
@ 2017-03-24 11:58     ` Ferruh Yigit
  2017-03-25  6:28       ` Mody, Rasesh
  0 siblings, 1 reply; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-24 11:58 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Harish Patil, Dept-EngDPDKDev

On 3/18/2017 7:06 AM, Rasesh Mody wrote:
> From: Harish Patil <harish.patil@qlogic.com>
> 
> This patch includes slowpath configuration and fastpath changes
> to support LRO and TSO. A bit of revamping is needed in order
> to make use of existing packet classification schemes in Rx fastpath
> and for SG element processing in Tx.
> 
> Signed-off-by: Harish Patil <harish.patil@qlogic.com>

This patch is giving the following checkpatch errors [1]. I can see the
reason for the multiline dereferences is to fit into the 80-column line
limit, and those lines are not easy to keep within that limit.

But if we are going to get a checkpatch warning either way, I prefer it
to come from a long line; a multiline dereference makes the code harder
to read. (A third option, hoisting the nested struct into a local
variable, is sketched after the warning list below.)

What do you think?



[1]
WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference - prefer
'cqe_start_tpa->len_on_first_bd'
#450: FILE: drivers/net/qede/qede_rxtx.c:1045:
+                                   rte_le_to_cpu_16(cqe_start_tpa->
+                                                    len_on_first_bd),

WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference - prefer
'cqe_start_tpa->ext_bd_len_list[0]'
#453: FILE: drivers/net/qede/qede_rxtx.c:1048:
+                                   rte_le_to_cpu_16(cqe_start_tpa->
+                                                       ext_bd_len_list[0]),

WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference - prefer
'rxq->tpa_info[cqe->fast_path_tpa_end.tpa_agg_index].mbuf'
#465: FILE: drivers/net/qede/qede_rxtx.c:1060:
+                       rx_mb = rxq->
+                       tpa_info[cqe->fast_path_tpa_end.tpa_agg_index].mbuf;

WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference - prefer
'cqe_start_tpa->pars_flags.flags'
#512: FILE: drivers/net/qede/qede_rxtx.c:1087:
+                       parse_flag = rte_le_to_cpu_16(cqe_start_tpa->
+                                                       pars_flags.flags);

WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference - prefer
'cqe_start_tpa->tunnel_pars_flags.flags'
#541: FILE: drivers/net/qede/qede_rxtx.c:1108:
+                                       tunn_parse_flag = cqe_start_tpa->
+                                                       tunnel_pars_flags.flags;

WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference - prefer
'fp_cqe->tunnel_pars_flags.flags'
#544: FILE: drivers/net/qede/qede_rxtx.c:1111:
+                                       tunn_parse_flag = fp_cqe->
+                                                       tunnel_pars_flags.flags;


<...>
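
A third option that sidesteps both warnings in most of these spots is to
hoist the nested struct into a short-named local so that the dereference
fits on one line. A minimal sketch; the struct layouts are stand-ins for
the HW CQE definitions, and only rte_le_to_cpu_16/<rte_byteorder.h> are
the real DPDK API:

	#include <stdint.h>
	#include <rte_byteorder.h>

	/* Illustrative stand-ins for the hardware CQE layout. */
	struct parsing_and_err_flags { uint16_t flags; };
	struct tpa_start_cqe {
		uint16_t len_on_first_bd;
		struct parsing_and_err_flags pars_flags;
	};

	static uint16_t tpa_start_parse_flags(const struct tpa_start_cqe *cqe)
	{
		/* One short local keeps the dereference on one line. */
		const struct parsing_and_err_flags *pf = &cqe->pars_flags;

		return rte_le_to_cpu_16(pf->flags);
	}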


* Re: [PATCH v3 41/61] net/qede/base: add support for previous driver unload
  2017-03-24 11:00         ` Ferruh Yigit
@ 2017-03-25  6:25           ` Mody, Rasesh
  0 siblings, 0 replies; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-25  6:25 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Dept-Eng DPDK Dev

Hi Ferruh,
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Friday, March 24, 2017 4:01 AM
> 
> On 3/24/2017 7:28 AM, Rasesh Mody wrote:
> > New driver/management fw load request sequence for handling previous
> > driver unload.
> >
> > Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
> 
> Hi Rasesh,
> 
> The patch-by-patch build is broken by this patch with the following build
> error, and is fixed again by patch 50/61:
> 
> .../drivers/net/qede/base/ecore_mcp.c:624:2: error: signed shift result
> (0xF00000000) requires 37 bits to represent, but 'int' only has 32 bits [-
> Werror,-Wshift-overflow]
>         ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
>         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> .../drivers/net/qede/base/ecore.h:107:31: note: expanded from macro
> 'ECORE_MFW_SET_FIELD'
>         (name) &= ~((field ## _MASK) << (field ## _SHIFT));             \
>                     ~~~~~~~~~~~~~~~~ ^  ~~~~~~~~~~~~~~~~~
> .../drivers/net/qede/base/ecore_mcp.c:626:2: error: signed shift result
> (0xF0000000000) requires 45 bits to represent, but 'int' only has 32 bits [-
> Werror,-Wshift-overflow]
>         ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
>         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> .../drivers/net/qede/base/ecore.h:107:31: note: expanded from macro
> 'ECORE_MFW_SET_FIELD'
>         (name) &= ~((field ## _MASK) << (field ## _SHIFT));             \
>                     ~~~~~~~~~~~~~~~~ ^  ~~~~~~~~~~~~~~~~~ <...>
We observed the same issue with patch 41. Unfortunately, the fix was unintentionally added only in patch 50, whereas it belonged in patch 41. We'll address this and resubmit.

Thanks!
-Rasesh


* Re: [PATCH v2 61/61] net/qede: add LRO/TSO offloads support
  2017-03-24 11:58     ` Ferruh Yigit
@ 2017-03-25  6:28       ` Mody, Rasesh
  0 siblings, 0 replies; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-25  6:28 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Harish Patil, Dept-Eng DPDK Dev

> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Friday, March 24, 2017 4:59 AM
> 
> On 3/18/2017 7:06 AM, Rasesh Mody wrote:
> > From: Harish Patil <harish.patil@qlogic.com>
> >
> > This patch includes slowpath configuration and fastpath changes to
> > support LRO and TSO. A bit of revamping is needed in order to make use
> > of existing packet classification schemes in Rx fastpath and for SG
> > element processing in Tx.
> >
> > Signed-off-by: Harish Patil <harish.patil@qlogic.com>
> 
> This patch is giving the following checkpatch errors [1]. I can see the reason
> for the multiline dereferences is to fit into the 80-column line limit, and
> those lines are not easy to keep within that limit.
> 
> But if we are going to get a checkpatch warning either way, I prefer it to come
> from a long line; a multiline dereference makes the code harder to read.
> 
> What do you think?

Will try to address this more efficiently.
 
> 
> 
> [1]
> WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference -
> prefer 'cqe_start_tpa->len_on_first_bd'
> #450: FILE: drivers/net/qede/qede_rxtx.c:1045:
> +                                   rte_le_to_cpu_16(cqe_start_tpa->
> +                                                    len_on_first_bd),
> 
> WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference -
> prefer 'cqe_start_tpa->ext_bd_len_list[0]'
> #453: FILE: drivers/net/qede/qede_rxtx.c:1048:
> +                                   rte_le_to_cpu_16(cqe_start_tpa->
> +                                                       ext_bd_len_list[0]),
> 
> WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference -
> prefer 'rxq->tpa_info[cqe->fast_path_tpa_end.tpa_agg_index].mbuf'
> #465: FILE: drivers/net/qede/qede_rxtx.c:1060:
> +                       rx_mb = rxq->
> +                       tpa_info[cqe->fast_path_tpa_end.tpa_agg_index].mbuf;
> 
> WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference -
> prefer 'cqe_start_tpa->pars_flags.flags'
> #512: FILE: drivers/net/qede/qede_rxtx.c:1087:
> +                       parse_flag = rte_le_to_cpu_16(cqe_start_tpa->
> +                                                       pars_flags.flags);
> 
> WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference -
> prefer 'cqe_start_tpa->tunnel_pars_flags.flags'
> #541: FILE: drivers/net/qede/qede_rxtx.c:1108:
> +                                       tunn_parse_flag = cqe_start_tpa->
> +                                                       tunnel_pars_flags.flags;
> 
> WARNING:MULTILINE_DEREFERENCE: Avoid multiple line dereference -
> prefer 'fp_cqe->tunnel_pars_flags.flags'
> #544: FILE: drivers/net/qede/qede_rxtx.c:1111:
> +                                       tunn_parse_flag = fp_cqe->
> +                                       tunn_parse_flag = fp_cqe->
> +                                                       tunnel_pars_flags.flags;
> 
> 
> <...>


* [PATCH 01/62] net/qede/base: return an initialized return value
  2017-03-24 11:08         ` Ferruh Yigit
@ 2017-03-28  6:42           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                             ` (63 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:42 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Make sure ecore_iov_mark_vf_flr() always returns an initialized return
value.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6912cf8..d1c809c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3164,7 +3164,7 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
-	bool found;
+	bool found = false;
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-- 
1.7.10.3


* [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1
  2017-03-24 11:08         ` Ferruh Yigit
  2017-03-28  6:42           ` [PATCH 01/62] net/qede/base: return an initialized return value Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 " Rasesh Mody
                               ` (62 more replies)
  2017-03-28  6:51           ` [PATCH v4 01/62] net/qede/base: return an initialized return value Rasesh Mody
                             ` (62 subsequent siblings)
  64 siblings, 63 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Hi Ferruh,

This patch set adds support for the new firmware 8.18.9.0, adds new
features and includes bug fixes. It also updates the PMD version to
2.4.0.1.

Please apply to dpdk-net-next for 17.05 release.

v1..v4
 - address all the review comments received so far

Thanks!
Rasesh

Harish Patil (3):
  net/qede/base: add support for arfs mode
  net/qede: add ntuple and flow director filter support
  net/qede: add LRO/TSO offloads support

Rasesh Mody (59):
  net/qede/base: return an initialized return value
  net/qede/base: send FW version driver state to MFW
  net/qede/base: mask Rx buffer attention bits
  net/qede/base: print various indication on Tx-timeouts
  net/qede/base: utilize FW 8.18.9.0
  net/qede: upgrade the FW to 8.18.9.0
  net/qede/base: decrease maximum HW func per device
  net/qede/base: move mask constants defining NIC type
  net/qede/base: remove attribute from update current config
  net/qede/base: add nvram options
  net/qede/base: add comment
  net/qede/base: use default MTU from shared memory
  net/qede/base: change queue/sb-id from 8 bit to 16 bit
  net/qede/base: update MFW when default MTU is changed
  net/qede/base: prevent device init failure
  net/qede/base: read card personality via MFW commands
  net/qede/base: allow probe to succeed with minor HW-issues
  net/qede/base: remove unneeded step in HW init
  net/qede/base: allow only trusted VFs to be promisc
  net/qede/base: qm initialization revamp
  net/qede/base: print firmware MFW and MBI versions
  net/qede/base: check active VF queues before stopping
  net/qede/base: set driver type before sending load request
  net/qede/base: prevent driver load with invalid resources
  net/qede/base: add interfaces for MFW TLV request processing
  net/qede/base: code refactoring of SP queues
  net/qede/base: make L2 queues handle based
  net/qede/base: add support for handling TLV request from MFW
  net/qede/base: optimize cache-line access
  net/qede/base: infrastructure changes for VF tunnelling
  net/qede/base: revise tunnel APIs/structs
  net/qede/base: add tunnelling support for VFs
  net/qede/base: formatting changes
  net/qede/base: prevent transmitter stuck condition
  net/qede/base: add mask/shift defines for resource command
  net/qede/base: add API for using MFW resource lock
  net/qede/base: remove clock slowdown option
  net/qede/base: add new image types
  net/qede/base: use L2-handles for RSS configuration
  net/qede/base: change valloc to vzalloc
  net/qede/base: add support for previous driver unload
  net/qede/base: add non-L2 dcbx tlv application support
  net/qede/base: update bulletin board during VF init
  net/qede/base: add coalescing support for VFs
  net/qede/base: add macro got resource value message
  net/qede/base: add mailbox for resource allocation
  net/qede/base: add macro for unsupported command
  net/qede/base: set max values for soft resources
  net/qede/base: add return code check
  net/qede/base: zero out MFW mailbox data
  net/qede/base: move code bits
  net/qede/base: add PF parameter
  net/qede/base: allow PMD to control vport and RSS engine ids
  net/qede/base: add udp ports in bulletin board message
  net/qede/base: prevent DMAE transactions during recovery
  net/qede/base: multi-Txq support on same queue-zone for VFs
  net/qede/base: prevent race condition during unload
  net/qede/base: semantic changes
  net/qede: update PMD version to 2.4.0.1

 doc/guides/nics/features/qede.ini             |    4 +
 doc/guides/nics/features/qede_vf.ini          |    2 +
 doc/guides/nics/qede.rst                      |   11 +-
 drivers/net/qede/Makefile                     |    1 +
 drivers/net/qede/base/bcm_osal.h              |   13 +-
 drivers/net/qede/base/common_hsi.h            |  191 ++-
 drivers/net/qede/base/ecore.h                 |  169 +-
 drivers/net/qede/base/ecore_chain.h           |  143 +-
 drivers/net/qede/base/ecore_cxt.c             |  297 +++-
 drivers/net/qede/base/ecore_cxt.h             |   64 +-
 drivers/net/qede/base/ecore_cxt_api.h         |   13 -
 drivers/net/qede/base/ecore_dcbx.c            |   42 +-
 drivers/net/qede/base/ecore_dcbx.h            |    4 +-
 drivers/net/qede/base/ecore_dcbx_api.h        |    4 +-
 drivers/net/qede/base/ecore_dev.c             | 2137 +++++++++++++++----------
 drivers/net/qede/base/ecore_dev_api.h         |  122 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |  816 +++++-----
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++-
 drivers/net/qede/base/ecore_hsi_eth.h         | 2069 ++++++++++++------------
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_hw.c              |   50 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1409 ++++++++++------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  172 +-
 drivers/net/qede/base/ecore_int.c             |   51 +-
 drivers/net/qede/base/ecore_int.h             |   10 -
 drivers/net/qede/base/ecore_int_api.h         |   21 +
 drivers/net/qede/base/ecore_iov_api.h         |   45 +-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_l2.c              |  853 +++++++---
 drivers/net/qede/base/ecore_l2.h              |  149 +-
 drivers/net/qede/base/ecore_l2_api.h          |  134 +-
 drivers/net/qede/base/ecore_mcp.c             | 1020 ++++++++++--
 drivers/net/qede/base/ecore_mcp.h             |  181 ++-
 drivers/net/qede/base/ecore_mcp_api.h         |  316 +++-
 drivers/net/qede/base/ecore_mng_tlv.c         | 1535 ++++++++++++++++++
 drivers/net/qede/base/ecore_proto_if.h        |   16 +
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++---
 drivers/net/qede/base/ecore_sp_api.h          |   19 +
 drivers/net/qede/base/ecore_sp_commands.c     |  372 +++--
 drivers/net/qede/base/ecore_sp_commands.h     |   23 +-
 drivers/net/qede/base/ecore_spq.c             |   86 +-
 drivers/net/qede/base/ecore_spq.h             |   36 +-
 drivers/net/qede/base/ecore_sriov.c           |  953 ++++++++---
 drivers/net/qede/base/ecore_sriov.h           |   23 +-
 drivers/net/qede/base/ecore_vf.c              |  348 +++-
 drivers/net/qede/base/ecore_vf.h              |   85 +-
 drivers/net/qede/base/ecore_vf_api.h          |   11 +
 drivers/net/qede/base/ecore_vfpf_if.h         |   55 +-
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/mcp_public.h            |  271 ++--
 drivers/net/qede/base/nvm_cfg.h               |  475 +++++-
 drivers/net/qede/base/reg_addr.h              |   59 +
 drivers/net/qede/qede_eth_if.c                |   56 +-
 drivers/net/qede/qede_eth_if.h                |   25 +-
 drivers/net/qede/qede_ethdev.c                |  115 +-
 drivers/net/qede/qede_ethdev.h                |   44 +-
 drivers/net/qede/qede_fdir.c                  |  487 ++++++
 drivers/net/qede/qede_if.h                    |   58 +-
 drivers/net/qede/qede_main.c                  |  126 +-
 drivers/net/qede/qede_rxtx.c                  |  781 ++++++---
 drivers/net/qede/qede_rxtx.h                  |   32 +
 63 files changed, 12375 insertions(+), 5191 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
 create mode 100644 drivers/net/qede/qede_fdir.c

-- 
1.7.10.3


* [PATCH v4 01/62] net/qede/base: return an initialized return value
  2017-03-24 11:08         ` Ferruh Yigit
  2017-03-28  6:42           ` [PATCH 01/62] net/qede/base: return an initialized return value Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 02/62] net/qede/base: send FW version driver state to MFW Rasesh Mody
                             ` (61 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Make sure ecore_iov_mark_vf_flr() always returns an initialized return
value.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6912cf8..d1c809c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3164,7 +3164,7 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
-	bool found;
+	bool found = false;
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-- 
1.7.10.3


* [PATCH v4 02/62] net/qede/base: send FW version driver state to MFW
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (2 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 01/62] net/qede/base: return an initialized return value Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 03/62] net/qede/base: mask Rx buffer attention bits Rasesh Mody
                             ` (60 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add support to send FW version and driver state to Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   31 ++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.c     |    7 +++++--
 drivers/net/qede/base/ecore_mcp_api.h |    3 ++-
 drivers/net/qede/qede_if.h            |    3 +++
 drivers/net/qede/qede_main.c          |   20 ++++++++++++++++++++
 5 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index da9cdc9..2d1e031 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1609,8 +1609,9 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc, mfw_rc;
-	u32 load_code, param;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	u32 load_code, param, drv_mb_param;
+	struct ecore_hwfn *p_hwfn;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1743,7 +1744,26 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		p_hwfn->hw_init_done = true;
 	}
 
-	return ECORE_SUCCESS;
+	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		drv_mb_param = (FW_MAJOR_VERSION << 24) |
+			       (FW_MINOR_VERSION << 16) |
+			       (FW_REVISION_VERSION << 8) |
+			       (FW_ENGINEERING_VERSION);
+		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+				   drv_mb_param, &load_code, &param);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_hwfn, "Failed to send firmware version\n");
+			return rc;
+		}
+
+		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
+						      p_hwfn->p_main_ptt,
+						ECORE_OV_DRIVER_STATE_DISABLED);
+	}
+
+	return rc;
 }
 
 #define ECORE_HW_STOP_RETRY_LIMIT	(10)
@@ -3130,8 +3150,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 void ecore_hw_remove(struct ecore_dev *p_dev)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	int i;
 
+	if (IS_PF(p_dev))
+		ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt,
+					ECORE_OV_DRIVER_STATE_NOT_LOADED);
+
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cb3e0bd..e236f39 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1723,6 +1723,9 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 	case ECORE_OV_CLIENT_USER:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
 		break;
+	case ECORE_OV_CLIENT_VENDOR_SPEC:
+		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
 		return ECORE_INVAL;
@@ -1761,9 +1764,9 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE,
-			   drv_state, &resp, &param);
+			   drv_mb_param, &resp, &param);
 	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+		DP_ERR(p_hwfn, "Failed to send driver state\n");
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 4e954bd..614cf67 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -181,7 +181,8 @@ enum ecore_ov_config_method {
 
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
-	ECORE_OV_CLIENT_USER
+	ECORE_OV_CLIENT_USER,
+	ECORE_OV_CLIENT_VENDOR_SPEC
 };
 
 enum ecore_ov_driver_state {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4289d0b..4b23bb9 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -150,8 +150,11 @@ struct qed_common_ops {
 			    uint16_t sb_id, enum qed_sb_type type);
 
 	bool (*can_link_change)(struct ecore_dev *edev);
+
 	void (*update_msglvl)(struct ecore_dev *edev,
 			      uint32_t dp_module, uint8_t dp_level);
+
+	int (*send_drv_state)(struct ecore_dev *edev, bool active);
 };
 
 #endif /* _QEDE_IF_H */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a4d68a..f0033a1 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -668,6 +668,25 @@ static void qed_remove(struct ecore_dev *edev)
 	ecore_hw_remove(edev);
 }
 
+static int qed_send_drv_state(struct ecore_dev *edev, bool active)
+{
+	struct ecore_hwfn *hwfn = ECORE_LEADING_HWFN(edev);
+	struct ecore_ptt *ptt;
+	int status = 0;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt)
+		return -EAGAIN;
+
+	status = ecore_mcp_ov_update_driver_state(hwfn, ptt, active ?
+						  ECORE_OV_DRIVER_STATE_ACTIVE :
+						ECORE_OV_DRIVER_STATE_DISABLED);
+
+	ecore_ptt_release(hwfn, ptt);
+
+	return status;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
@@ -681,4 +700,5 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(drain, &qed_drain),
 	INIT_STRUCT_FIELD(slowpath_stop, &qed_slowpath_stop),
 	INIT_STRUCT_FIELD(remove, &qed_remove),
+	INIT_STRUCT_FIELD(send_drv_state, &qed_send_drv_state),
 };
-- 
1.7.10.3


* [PATCH v4 03/62] net/qede/base: mask Rx buffer attention bits
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
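
The value written below is a single attention bit; assuming the usual
one-bit-per-source layout of the INT_MASK registers, it can be spelled out
as (this define is illustrative and not part of the patch):

	/* 0x4000000 is bit 26 of BRB_REG_INT_MASK_10; setting it
	 * suppresses the RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR attention on
	 * AH devices.
	 */
	#define BRB_RC0_EOP_OUT_SYNC_FIFO_PUSH_ERR_MASK	(1UL << 26)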

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    6 ++++++
 drivers/net/qede/base/reg_addr.h  |    3 +++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2d1e031..eef24cd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1051,6 +1051,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
+	/* @@@TMP:
+	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
+	 */
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
+
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3c369aa..21cbdbd 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1141,3 +1141,6 @@
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
 #define BAR0_MAP_REG_XSDM_RAM 0x1e00000UL
+
+/* 8.18.7.0 FW */
+#define BRB_REG_INT_MASK_10 0x3401b8UL
-- 
1.7.10.3


* [PATCH v4 04/62] net/qede/base: print various indication on Tx-timeouts
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Print various indications on Tx-timeouts. Add an API that reads the IGU
producer/consumer values and the CAU PI entries of a given status block,
so callers can dump them when a Tx queue times out.
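
A hedged sketch of how a Tx-timeout handler might dump this data (the
handler name and the use of printf for logging are assumptions made for
illustration):

	/* Sketch: read and print the SB state captured by the new API.
	 * qed_get_sb_info() below acquires a PTT and forwards to
	 * ecore_int_get_sb_dbg().
	 */
	static void qede_dump_sb_on_tx_timeout(struct ecore_dev *edev,
					       struct ecore_sb_info *sb,
					       u16 qid)
	{
		struct ecore_sb_info_dbg dbg;
		int i;

		if (qed_get_sb_info(edev, sb, qid, &dbg))
			return;

		printf("SB %u: igu_prod=%u igu_cons=%u\n",
		       sb->igu_sb_id, dbg.igu_prod, dbg.igu_cons);
		for (i = 0; i < PIS_PER_SB; i++)
			printf("  pi[%d]=%u\n", i, dbg.pi[i]);
	}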

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_int.c     |   27 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_int_api.h |   21 +++++++++++++++++++++
 drivers/net/qede/base/reg_addr.h      |    3 +++
 drivers/net/qede/qede_main.c          |   23 +++++++++++++++++++++++
 4 files changed, 74 insertions(+)

diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index b6b8e2d..e5a4359 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2255,3 +2255,30 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
 
 	return rc;
 }
+
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info)
+{
+	u16 sbid = p_sb->igu_sb_id;
+	int i;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_PRODUCER_MEMORY + sbid * 4);
+	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_CONSUMER_MEM + sbid * 4);
+
+	for (i = 0; i < PIS_PER_SB; i++)
+		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					      CAU_REG_PI_MEMORY +
+					      sbid * 4 * PIS_PER_SB +  i * 4);
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index a0d6a43..fdfcba8 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -41,6 +41,12 @@ struct ecore_sb_info {
 	struct ecore_dev *p_dev;
 };
 
+struct ecore_sb_info_dbg {
+	u32 igu_prod;
+	u32 igu_cons;
+	u16 pi[PIS_PER_SB];
+};
+
 struct ecore_sb_cnt_info {
 	int sb_cnt;
 	int sb_iov_cnt;
@@ -303,4 +309,19 @@ void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
  */
 void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable);
 
+/**
+ * @brief Read debug information regarding a given SB.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_sb - point to Status block for which we want to get info.
+ * @param p_info - pointer to struct to fill with information regarding SB.
+ *
+ * @return ECORE_SUCCESS if pointer is filled; failure otherwise.
+ */
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info);
+
 #endif
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 21cbdbd..3cc7fd4 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1144,3 +1144,6 @@
 
 /* 8.18.7.0 FW */
 #define BRB_REG_INT_MASK_10 0x3401b8UL
+
+#define IGU_REG_PRODUCER_MEMORY 0x182000UL
+#define IGU_REG_CONSUMER_MEM 0x183000UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index f0033a1..a604a5b 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -687,6 +687,29 @@ static int qed_send_drv_state(struct ecore_dev *edev, bool active)
 	return status;
 }
 
+static int qed_get_sb_info(struct ecore_dev *edev, struct ecore_sb_info *sb,
+			   u16 qid, struct ecore_sb_info_dbg *sb_dbg)
+{
+	struct ecore_hwfn *hwfn = &edev->hwfns[qid % edev->num_hwfns];
+	struct ecore_ptt *ptt;
+	int rc;
+
+	if (IS_VF(edev))
+		return -EINVAL;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "Can't acquire PTT\n");
+		return -EAGAIN;
+	}
+
+	memset(sb_dbg, 0, sizeof(*sb_dbg));
+	rc = ecore_int_get_sb_dbg(hwfn, ptt, sb, sb_dbg);
+
+	ecore_ptt_release(hwfn, ptt);
+	return rc;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
-- 
1.7.10.3


* [PATCH v4 05/62] net/qede/base: utilize FW 8.18.9.0
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

This change is in preparation for working with the new FW 8.18.9.0.
Rename the defines to use an E4_ prefix and the structs to use an e4_
prefix; the renaming makes room for support of future chipsets.
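
These MASK/SHIFT pairs are consumed through token-pasting field accessors;
a minimal sketch of that pattern (the GET_FIELD/SET_FIELD definitions below
follow the usual ecore style and are reproduced here as an assumption):

	#define GET_FIELD(value, name) \
		(((value) >> (name##_SHIFT)) & name##_MASK)

	#define SET_FIELD(value, name, flag) \
	do { \
		(value) &= ~((name##_MASK) << (name##_SHIFT)); \
		(value) |= (((u64)(flag)) & (u64)(name##_MASK)) << \
			   (name##_SHIFT); \
	} while (0)

	/* e.g. after this patch, enabling cf0 in an xstorm aggregative
	 * context names the E4_-prefixed define:
	 *	SET_FIELD(ctx->flags7, E4_XSTORM_CORE_CONN_AG_CTX_CF0EN, 1);
	 */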

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/common_hsi.h       |   15 +-
 drivers/net/qede/base/ecore_hsi_common.h |  770 +++++------
 drivers/net/qede/base/ecore_hsi_eth.h    | 2052 +++++++++++++++---------------
 drivers/net/qede/base/ecore_iov_api.h    |    4 +-
 drivers/net/qede/base/ecore_spq.c        |   20 +-
 drivers/net/qede/base/ecore_sriov.c      |    2 +-
 drivers/net/qede/base/ecore_sriov.h      |    4 +-
 7 files changed, 1447 insertions(+), 1420 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 2f84148..59e751f 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -107,20 +107,20 @@
 #define MAX_NUM_PFS	(MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
 
-#define MAX_NUM_VFS_K2	(192)
 #define MAX_NUM_VFS_BB	(120)
-#define MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2	(192)
+#define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 /* in both BB and K2, the VF number starts from 16. so for arrays containing all
  * possible PFs and VFs - we need a constant for this size
  */
 #define MAX_FUNCTION_NUMBER_BB	(MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2	(MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 #define MAX_NUM_VPORTS_K2	(208)
 #define MAX_NUM_VPORTS_BB	(160)
@@ -149,9 +149,10 @@
 #define MAX_PHYS_VOQS		(NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES	(8)
-#define NUM_OF_LCIDS		(320)
-#define NUM_OF_LTIDS		(320)
+#define E4_NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_TASK_TYPES		(8)
+#define NUM_OF_LCIDS			(320)
+#define NUM_OF_LTIDS			(320)
 
 /* Clock values */
 #define MASTER_CLK_FREQ_E4		(375e6)
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index d978bb0..f934e68 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -75,306 +75,306 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct xstorm_core_conn_ag_ctx {
+struct e4_xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
 /* exist_in_qm1 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
 /* exist_in_qm2 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
 /* exist_in_qm3 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
 /* bit4 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
 /* bit6 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
 /* bit7 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
 /* bit9 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
 /* bit10 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
 /* bit11 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
 /* bit12 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
 /* bit13 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
 /* bit14 */
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
 /* bit15 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
 /* timer1cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
 /* timer2cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
 /* cf10 */
-#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
 /* cf11 */
-#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
 /* cf13 */
-#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
 /* cf14 */
-#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
 /* cf15 */
-#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
 /* cf_array_cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
 /* cf18 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
 /* cf19 */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
 /* cf21 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
 /* cf22 */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
 /* cf0en */
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
 /* cf1en */
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
 /* cf3en */
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
 /* cf4en */
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
 /* cf5en */
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
 /* cf6en */
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
 /* cf7en */
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
 /* cf8en */
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
 /* cf9en */
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
 /* cf11en */
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
 /* cf12en */
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
 /* cf13en */
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
 /* cf14en */
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
 /* cf15en */
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
 /* cf16en */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
 /* cf19en */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
 /* cf20en */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
 /* cf21en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
 /* cf22en */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
 /* cf23en */
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
 /* rule0en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
 /* rule1en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
 /* rule3en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
 /* rule4en */
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
 /* rule5en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
 /* rule6en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
 /* rule7en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
 /* rule8en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
 /* rule9en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
 /* rule11en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
 /* rule12en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
 /* rule13en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
 /* rule14en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
 /* rule15en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
 /* rule16en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
 /* rule17en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
 /* rule19en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
 /* rule20en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
 /* rule21en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
 /* rule22en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
 /* rule23en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
 /* rule24en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
 /* rule25en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
 /* bit17 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
 /* bit18 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
 /* bit19 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
 /* bit20 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
 /* bit21 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
 /* cf23 */
-#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -410,7 +410,7 @@ struct xstorm_core_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -428,89 +428,89 @@ struct xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct tstorm_core_conn_ag_ctx {
+struct e4_tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -532,63 +532,63 @@ struct tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_core_conn_ag_ctx {
+struct e4_ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -628,11 +628,11 @@ struct core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -1934,6 +1934,92 @@ enum dmae_cmd_src_enum {
 };
 
 
+struct e4_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
 /*
  * IGU cleanup command
  */
@@ -2017,44 +2103,6 @@ struct igu_msix_vector {
 };
 
 
-struct mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
 /*
  * per encapsulation type enabling flags
  */
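
The E4_ prefix that this series adds to the aggregative-context field
macros scopes them to the E4 chip family, making room for the E5
definitions that later fields such as e5_reserved/e5_reserved1 hint at.
Functionally nothing changes: each *_MASK/*_SHIFT pair still describes
one sub-field packed into a flagsN byte, read and written through the
GET_FIELD/SET_FIELD helpers. Below is a minimal, self-contained sketch
of that access pattern; the helper macros are assumed to match the ones
in ecore.h, so treat this as illustration rather than driver code.

/* Sketch of the MASK/SHIFT bitfield pattern used by the HSI headers.
 * GET_FIELD/SET_FIELD are assumed to mirror the ecore.h helpers.
 */
#include <stdio.h>

typedef unsigned char u8;

#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))

#define SET_FIELD(value, name, flag)					\
do {									\
	(value) &= ~((name##_MASK) << (name##_SHIFT));			\
	(value) |= (((flag) & (name##_MASK)) << (name##_SHIFT));	\
} while (0)

/* Two sub-fields of e4_ystorm_core_conn_ag_ctx.flags0, as defined
 * above: a 1-bit field at bit 0 and a 2-bit cf0 field at bits 3:2.
 */
#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK  0x1
#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT 0
#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK   0x3
#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT  2

int main(void)
{
	u8 flags0 = 0;

	SET_FIELD(flags0, E4_YSTORM_CORE_CONN_AG_CTX_BIT0, 1);
	SET_FIELD(flags0, E4_YSTORM_CORE_CONN_AG_CTX_CF0, 2);

	/* prints: flags0=0x09 bit0=1 cf0=2 */
	printf("flags0=0x%02x bit0=%d cf0=%d\n",
	       (unsigned int)flags0,
	       GET_FIELD(flags0, E4_YSTORM_CORE_CONN_AG_CTX_BIT0),
	       GET_FIELD(flags0, E4_YSTORM_CORE_CONN_AG_CTX_CF0));
	return 0;
}

The same mechanical rename continues in ecore_hsi_eth.h below, so a
caller that manipulates these fields only needs its macro names
updated; the bit layout of the contexts themselves is unchanged.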
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index e8373d7..9d2a118 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,315 +34,315 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct xstorm_eth_conn_ag_ctx {
+struct e4_xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
 /* bit4 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
 /* bit6 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
 /* bit7 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
 /* bit9 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
 /* bit10 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
 /* bit11 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
 /* bit12 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
 /* bit13 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
 /* bit14 */
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
 /* bit15 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
 /* cf4 */
-#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
 /* cf5 */
-#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
 /* cf6 */
-#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
 /* cf7 */
-#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
 /* cf8 */
-#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
 /* cf9 */
-#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
 /* cf10 */
-#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
 /* cf11 */
-#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
 /* cf13 */
-#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
 /* cf14 */
-#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
 /* cf15 */
-#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
 /* cf18 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
 /* cf19 */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
 /* cf21 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
 /* cf22 */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
 /* cf0en */
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
 /* cf1en */
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
 /* cf3en */
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
 /* cf4en */
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
 /* cf5en */
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
 /* cf6en */
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
 /* cf7en */
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
 /* cf8en */
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
 /* cf9en */
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
 /* cf11en */
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
 /* cf12en */
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
 /* cf13en */
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
 /* cf14en */
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
 /* cf15en */
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
 /* cf16en */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
 /* cf19en */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
 /* cf20en */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
 /* cf21en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
 /* cf22en */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
 /* cf23en */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
 /* rule0en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
 /* rule1en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
 /* rule3en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
 /* rule4en */
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
 /* rule5en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
 /* rule6en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
 /* rule7en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
 /* rule8en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
 /* rule9en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
 /* bit17 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
 /* bit18 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
 /* bit19 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
 /* bit20 */
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
 /* bit21 */
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
 /* cf23 */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
+	__le16 e5_reserved1 /* physical_q1 */;
 	__le16 edpm_num_bds /* physical_q2 */;
 	__le16 tx_bd_cons /* word3 */;
 	__le16 tx_bd_prod /* word4 */;
@@ -375,7 +375,7 @@ struct xstorm_eth_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -400,47 +400,47 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct ystorm_eth_conn_ag_ctx {
+struct e4_ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -454,89 +454,89 @@ struct ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct tstorm_eth_conn_ag_ctx {
+struct e4_tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -558,88 +558,88 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_eth_conn_ag_ctx {
+struct e4_ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
 /* timer0cf */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
 /* timer1cf */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
 /* timer2cf */
-#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
 /* cf4 */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
 /* cf5 */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
 /* cf6 */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
 /* cf0en */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
 /* cf1en */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
 /* cf2en */
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
 /* cf3en */
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
 /* cf4en */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
 /* cf5en */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
 /* cf6en */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
 /* rule0en */
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
 /* rule1en */
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
 /* rule2en */
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
 /* rule3en */
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
 /* rule4en */
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
 /* rule5en */
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
 /* rule6en */
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
 /* rule7en */
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
 /* rule8en */
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -678,15 +678,15 @@ struct eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1480,6 +1480,668 @@ struct vport_update_ramrod_data {
 
 
 
+struct E4XstormEthConnAgCtxDqExtLdPart {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
+/* bit6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
+/* bit7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
+	u8 flags1;
+/* bit8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
+/* bit9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
+/* bit12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
+/* bit13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
+/* bit14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
+/* timer1cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
+/* timer2cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
+	u8 flags3;
+/* cf4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
+/* cf5 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
+/* cf6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
+/* cf7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
+	u8 flags4;
+/* cf8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
+/* cf9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
+/* cf10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
+/* cf11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
+	u8 flags5;
+/* cf12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
+/* cf13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
+/* cf14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
+/* cf15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
+	u8 flags6;
+/* cf16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
+/* cf18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
+/* cf19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+/* cf20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
+/* cf21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
+/* cf22 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
+/* cf23 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+};
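
(Aside, for readers skimming the generated HSI above: each flagsN byte packs
several sub-fields, and every field is described by a paired _MASK define,
giving the field width, and _SHIFT define, giving its bit offset within that
byte. A minimal sketch of the accessor idiom these defines exist for follows;
GET_FIELD/SET_FIELD here are illustrative stand-ins for the usual ecore-style
helpers, not a quote of the exact definitions in this tree.

	/* Sketch of the mask/shift accessor idiom; illustrative only. */
	#define GET_FIELD(value, name) \
		(((value) >> name##_SHIFT) & name##_MASK)

	#define SET_FIELD(value, name, flag)				\
		do {							\
			(value) &= ~(name##_MASK << name##_SHIFT);	\
			(value) |= ((flag) & name##_MASK) << name##_SHIFT; \
		} while (0)

	/* e.g. flag the DQ completion-flow active (bit 7 of flags1): */
	SET_FIELD(ctx->flags1,
		  E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE, 1);

Reading back is symmetric: GET_FIELD(ctx->flags1,
E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE) yields 0 or 1.)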
+
+
+struct e4_mstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_xstorm_eth_hw_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+	u8 flags1;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+/* bit12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
+/* bit13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
+/* bit14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+/* timer1cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+/* timer2cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+	u8 flags3;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+	u8 flags4;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+	u8 flags5;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+	u8 flags6;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+};
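
(Aside: e4_xstorm_eth_hw_conn_ag_ctx above is a truncated mirror of the full
xstorm aggregative context; it stops at conn_dpi and drops the trailing
byte/reg members. Summing its members — 2 state bytes, 15 flags bytes,
edpm_event_id, and seven 16-bit words — gives 32 bytes. A hypothetical
compile-time guard for that expectation, not part of the patch, could read:

	/* Hypothetical layout check: the member list above sums to 32
	 * bytes, so catch any packing drift at build time.
	 */
	_Static_assert(sizeof(struct e4_xstorm_eth_hw_conn_ag_ctx) == 32,
		       "e4_xstorm_eth_hw_conn_ag_ctx layout drift");

)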
+
+
+
 /*
  * GFT CAM line struct
  */
@@ -1730,690 +2392,4 @@ enum gft_vlan_select {
 };
 
 
-struct mstorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-/* exist_in_qm1 */
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
-	u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-struct xstormEthConnAgCtxDqExtLdPart {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-	u8 byte3 /* byte3 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	u8 byte6 /* byte6 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-};
-
-
-
-struct xstorm_eth_hw_conn_ag_ctx {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-};
-
-
 #endif /* __ECORE_HSI_ETH__ */
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 24a43d3..9775360 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -701,7 +701,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ * @return E4_MAX_NUM_VFS in case no further active VFs, otherwise index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -709,7 +709,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS;						\
+	     _i < E4_MAX_NUM_VFS;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 1f35d6c..9035d3b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -191,15 +191,17 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
-	SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-	SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-	/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-	 *           XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-	 */
-	SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-		  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
+		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
+		 */
+		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
+			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	}
 
 	/* CDU validation - FIXME currently disabled */
 
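For reference, the E4_-prefixed pairs used above follow the ecore convention of <FIELD>_MASK/<FIELD>_SHIFT defines driven through SET_FIELD()/GET_FIELD() helpers. A minimal, self-contained sketch of that convention (the macro bodies here are illustrative rather than the authoritative ecore.h definitions, and DEMO_FIELD is a made-up field):

#include <stdint.h>

/* Illustrative re-creation of the ecore-style field helpers: token
 * pasting picks up the matching <name>_MASK and <name>_SHIFT defines.
 */
#define SET_FIELD(value, name, flag)					\
	do {								\
		(value) &= ~((name##_MASK) << (name##_SHIFT));		\
		(value) |= ((flag) & (name##_MASK)) << (name##_SHIFT);	\
	} while (0)

#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))

#define DEMO_FIELD_MASK		0x1	/* hypothetical 1-bit field */
#define DEMO_FIELD_SHIFT	3

static void demo(void)
{
	uint8_t flags = 0;

	SET_FIELD(flags, DEMO_FIELD, 1);	/* flags is now 0x08 */
	(void)GET_FIELD(flags, DEMO_FIELD);	/* reads back 1 */
}
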
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d1c809c..b051678 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3487,7 +3487,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS;
+	return E4_MAX_NUM_VFS;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 884a90c..e9ccc79 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -15,7 +15,7 @@
 #include "ecore_hsi_common.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -152,7 +152,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS];
+	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 	u16			base_vport_id;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 06/62] net/qede: upgrade the FW to 8.18.9.0
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (6 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 05/62] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 07/62] net/qede/base: decrease maximum HW func per device Rasesh Mody
                             ` (56 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

This patch adds the changes needed to upgrade to the 8.18.9.0 FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 doc/guides/nics/qede.rst                      |    8 +-
 drivers/net/qede/base/bcm_osal.h              |    1 +
 drivers/net/qede/base/common_hsi.h            |  176 +++-
 drivers/net/qede/base/ecore_dcbx.c            |    4 +-
 drivers/net/qede/base/ecore_dev.c             |  204 ++--
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   46 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++--
 drivers/net/qede/base/ecore_hsi_eth.h         |   17 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1378 ++++++++++++++++---------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  161 ++-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++++-----
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/reg_addr.h              |   53 +
 drivers/net/qede/qede_main.c                  |    2 +-
 18 files changed, 1886 insertions(+), 1126 deletions(-)

diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 4694ec0..36b26b3 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -77,10 +77,10 @@ Supported QLogic Adapters
 Prerequisites
 -------------
 
-- Requires firmware version **8.14.x.** and management firmware
-  version **8.14.x or higher**. Firmware may be available
+- Requires firmware version **8.18.x** and management firmware
+  version **8.18.x or higher**. Firmware may be available
   inbox in certain newer Linux distros under the standard directory
-  ``E.g. /lib/firmware/qed/qed_init_values-8.14.6.0.bin``
+  ``E.g. /lib/firmware/qed/qed_init_values-8.18.9.0.bin``
 
 - If the required firmware files are not available then visit
   `QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
@@ -119,7 +119,7 @@ enabling debugging options may affect system performance.
 - ``CONFIG_RTE_LIBRTE_QEDE_FW`` (default **""**)
 
   Gives absolute path of firmware file.
-  ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.14.6.0.bin"``
+  ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.18.9.0.bin"``
   Empty string indicates driver will pick up the firmware file
   from the default location.
 
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 88246b7..0d239c9 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -398,6 +398,7 @@ u32 qede_osal_log2(u32);
 #define OSAL_STRCPY(dst, string) strcpy(dst, string)
 #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
 #define OSAL_STRCMP(str1, str2) strcmp(str1, str2)
+#define OSAL_STRTOUL(str, base, res) 0
 
 #define OSAL_INLINE inline
 #define OSAL_REG_ADDR(_p_hwfn, _offset) \
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 59e751f..cbcde22 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -78,8 +78,16 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-#define MAX_NUM_LL2_RX_QUEUES					32
-#define MAX_NUM_LL2_TX_STATS_COUNTERS			32
+/*
+ * Usually LL2 queues are opened in TX-RX pairs.
+ * There is a hard restriction on the number of RX queues (limited by Tstorm
+ * RAM) and TX counters (Pstorm RAM).
+ * The number of TX queues is almost unlimited.
+ * The constants are different so as to allow asymmetric LL2 connections.
+ */
+
+#define MAX_NUM_LL2_RX_QUEUES					48
+#define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
 /****************************************************************************/
@@ -89,8 +97,8 @@
 
 
 #define FW_MAJOR_VERSION		8
-#define FW_MINOR_VERSION		14
-#define FW_REVISION_VERSION		6
+#define FW_MINOR_VERSION		18
+#define FW_REVISION_VERSION		9
 #define FW_ENGINEERING_VERSION	0
 
 /***********************/
@@ -110,6 +118,7 @@
 #define MAX_NUM_VFS_BB	(120)
 #define MAX_NUM_VFS_K2	(192)
 #define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define COMMON_MAX_NUM_VFS (240)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
@@ -177,6 +186,13 @@
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
 
+#define	CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT				(0)
+#define	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT	(1)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE				(2)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_REGION				(3)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
+
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -472,7 +488,6 @@
 #define PXP_BAR_DQ                                          1
 
 /* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS		12
 #define PXP_PER_PF_ENTRY_SIZE		8
 #define PXP_NUM_GLOBAL_WINDOWS		243
 #define PXP_GLOBAL_ENTRY_SIZE		4
@@ -497,6 +512,8 @@
 #define PXP_PF_ME_OPAQUE_ADDR		0x1f8
 #define PXP_PF_ME_CONCRETE_ADDR		0x1fc
 
+#define PXP_NUM_PF_WINDOWS		12
+
 #define PXP_EXTERNAL_BAR_PF_WINDOW_START	0x1000
 #define PXP_EXTERNAL_BAR_PF_WINDOW_NUM		PXP_NUM_PF_WINDOWS
 #define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE	0x1000
@@ -519,8 +536,6 @@
 	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
 
 /* PF BAR */
-/*#define PXP_BAR0_START_GRC 0x1000 */
-/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */
 #define PXP_BAR0_START_GRC                      0x0000
 #define PXP_BAR0_GRC_LENGTH                     0x1C00000
 #define PXP_BAR0_END_GRC                        \
@@ -589,7 +604,7 @@
 #define SDM_OP_GEN_TRIG_AGG_INT			2
 #define SDM_OP_GEN_TRIG_LOADER			4
 #define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
-#define SDM_OP_GEN_TRIG_RELEASE_THREAD	7
+#define SDM_OP_GEN_TRIG_INC_ORDER_CNT	9
 
 /***********************************************************/
 /* Completion types                                        */
@@ -612,6 +627,7 @@
 #define SDM_COMP_TYPE_RELEASE_THREAD	7
 /* Write to local RAM as a completion */
 #define SDM_COMP_TYPE_RAM		8
+#define SDM_COMP_TYPE_INC_ORDER_CNT	9 /* Applicable only for E4 */
 
 
 /******************/
@@ -881,7 +897,7 @@ enum db_dest {
  */
 enum db_dpm_type {
 	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
-	DPM_ROCE /* RoCE DPM- to NIG */,
+	DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */,
 /* L2 DPM inline- to PBF, with packet data on doorbell */
 	DPM_L2_INLINE,
 	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
@@ -968,42 +984,42 @@ struct db_pwm_addr {
 };
 
 /*
- * Parameters to RoCE firmware, passed in EDPM doorbell
+ * Parameters to RDMA firmware, passed in EDPM doorbell
  */
-struct db_roce_dpm_params {
+struct db_rdma_dpm_params {
 	__le32 params;
 /* Size in QWORD-s of the DPM burst */
-#define DB_ROCE_DPM_PARAMS_SIZE_MASK            0x3F
-#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT           0
-/* Type of DPM transacation (DPM_ROCE) (use enum db_dpm_type) */
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK        0x3
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT       6
-/* opcode for ROCE operation */
-#define DB_ROCE_DPM_PARAMS_OPCODE_MASK          0xFF
-#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT         8
+#define DB_RDMA_DPM_PARAMS_SIZE_MASK            0x3F
+#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT           0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK        0x3
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT       6
+/* opcode for RDMA operation */
+#define DB_RDMA_DPM_PARAMS_OPCODE_MASK          0xFF
+#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT         8
 /* the size of the WQE payload in bytes */
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT       16
-#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK       0x1
-#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT      27
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT       16
+#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK       0x1
+#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT      27
 /* RoCE completion flag */
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
-#define DB_ROCE_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
-#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT          29
-#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK       0x3
-#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT      30
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_RDMA_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
+#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT          29
+#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK       0x3
+#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT      30
 };
 
 /*
- * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a
+ * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
  */
-struct db_roce_dpm_data {
+struct db_rdma_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 prod_val /* aggregated value to update */;
-/* parameters passed to RoCE firmware */
-	struct db_roce_dpm_params params;
+/* parameters passed to RDMA firmware */
+	struct db_rdma_dpm_params params;
 };
 
 /* Igu interrupt command */
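To make the bit layout above concrete, this sketch packs the params word of db_rdma_dpm_params by hand from the masks and shifts just defined (illustrative only; DPM_RDMA is value 1 in enum db_dpm_type, and a real caller would convert the result with the platform's cpu_to_le32() before building the doorbell):

#include <stdint.h>

static uint32_t rdma_dpm_params(uint8_t size_qwords, uint8_t opcode,
				uint16_t wqe_size_bytes)
{
	uint32_t params = 0;

	params |= ((uint32_t)size_qwords & 0x3F) << 0;	/* ..._SIZE */
	params |= ((uint32_t)1 & 0x3) << 6;	/* ..._DPM_TYPE = DPM_RDMA */
	params |= ((uint32_t)opcode & 0xFF) << 8;	/* ..._OPCODE */
	params |= ((uint32_t)wqe_size_bytes & 0x7FF) << 16; /* ..._WQE_SIZE */

	return params;	/* host order; convert to __le32 before use */
}
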
@@ -1136,6 +1152,68 @@ struct parsing_and_err_flags {
 
 
 /*
+ * Parsing error flags bitmap.
+ */
+struct parsing_err_flags {
+	__le16 flags;
+/* MAC error indication */
+#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1
+#define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT                         0
+/* truncation error indication */
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK                        0x1
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_SHIFT                       1
+/* packet too small indication */
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK                      0x1
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT                     2
+/* Header Missing Tag */
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT               3
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK             0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT            4
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK    0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT   5
+/* set this error if: 1. total-len is smaller than hdr-len. 2. total-ip-len
+ * indicates a number bigger than the real packet length. 3. tunneling: the
+ * total-ip-length of the outer header points to an offset smaller than the
+ * one pointed to by the total-ip-len of the inner hdr.
+ */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK           0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT          6
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK                  0x1
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT                 7
+/* from frame cracker output. for either TCP or UDP */
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK          0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT         8
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK               0x1
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT              9
+/* checksum was calculated and its value isn't 0xffff, or the L4 checksum
+ * wasn't calculated for any reason, e.g. a udp/ipv4 checksum of 0.
+ */
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK               0x1
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT              10
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK        0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT       11
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK  0x1
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12
+/* set if geneve option size was over 32 byte */
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK            0x1
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT           13
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK           0x1
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT          14
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK              0x1
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT             15
+};
+
+
+/*
  * Pb context
  */
 struct pb_context {
@@ -1492,49 +1570,57 @@ struct tdif_task_context {
 struct timers_context {
 	__le32 logical_client_0;
 /* Expiration time of logical client 0 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED0_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED0_SHIFT            27
 /* Valid bit of logical client 0 */
 #define TIMERS_CONTEXT_VALIDLC0_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
 /* Active bit of logical client 0 */
 #define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED0_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED0_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
 	__le32 logical_client_1;
 /* Expiration time of logical client 1 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED2_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED2_SHIFT            27
 /* Valid bit of logical client 1 */
 #define TIMERS_CONTEXT_VALIDLC1_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
 /* Active bit of logical client 1 */
 #define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED3_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED3_SHIFT            30
 	__le32 logical_client_2;
 /* Expiration time of logical client 2 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED4_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED4_SHIFT            27
 /* Valid bit of logical client 2 */
 #define TIMERS_CONTEXT_VALIDLC2_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
 /* Active bit of logical client 2 */
 #define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED2_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED2_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED5_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED5_SHIFT            30
 	__le32 host_expiration_fields;
 /* Expiration time on host (closest one) */
-#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0xFFFFFFF
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0x7FFFFFF
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_RESERVED6_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED6_SHIFT            27
 /* Valid bit of host expiration */
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
-#define TIMERS_CONTEXT_RESERVED3_MASK             0x7
-#define TIMERS_CONTEXT_RESERVED3_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED7_MASK             0x7
+#define TIMERS_CONTEXT_RESERVED7_SHIFT            29
 };
 
 
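The parsing_err_flags structure added above packs sixteen single-bit error indications into one little-endian word, so testing any of them is a byte-swap followed by a shift-and-mask, roughly as below (le16_to_cpu() stands in for the platform's byte-order helper; the identity implementation assumes a little-endian host):

#include <stdint.h>

#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK	0x1
#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT	7

static inline uint16_t le16_to_cpu(uint16_t v)
{
	return v;	/* little-endian host assumed */
}

static int rx_has_ipv4_csum_error(uint16_t err_flags_le)
{
	uint16_t flags = le16_to_cpu(err_flags_le);

	return (flags >> PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT) &
		PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK;
}
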
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 7380fd8..102774d 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -126,7 +126,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 	else if (enable)
 		p_data->arr[type].update = UPDATE_DCB;
 	else
-		p_data->arr[type].update = DONT_UPDATE_DCB_DHCP;
+		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
 	if (p_hwfn->hw_info.personality == personality) {
@@ -938,7 +938,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 	p_dest->pf_id = p_src->pf_id;
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
-	p_dest->update_eth_dcb_data_flag = update_flag;
+	p_dest->update_eth_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index eef24cd..f82f5e6 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -814,7 +814,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 	int hw_mode = 0;
 
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
-		hw_mode |= 1 << MODE_BB_B0;
+		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
 	} else {
@@ -886,29 +886,36 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u32 pl_hv = 1;
 	int i;
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		pl_hv |= 0x600;
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev))
+			pl_hv |= 0x600;
+	}
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
+	if (CHIP_REV_IS_EMUL(p_dev) &&
+	    (ECORE_IS_AH(p_dev)))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
+			 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB_B0, 4);
+	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-			 (p_hwfn->p_dev->num_ports_in_engines >> 1));
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev)) {
+			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+				 (p_dev->num_ports_in_engines >> 1));
 
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-			 p_hwfn->p_dev->num_ports_in_engines == 4 ? 0 : 3);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+				 p_dev->num_ports_in_engines == 4 ? 0 : 3);
+		}
 	}
 
 	/* Poll on RBC */
@@ -1051,12 +1058,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
-	/* @@@TMP:
-	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
-	 */
-	if (ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
-
 	return rc;
 }
 
@@ -1072,20 +1073,19 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n",
-		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) |
+		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) |
 		   (8 << PMEG_IF_BYTE_COUNT),
 		   (reg_type << 25) | (addr << 8) | port,
 		   (u32)((data >> 32) & 0xffffffff),
 		   (u32)(data & 0xffffffff));
 
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0,
-		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) &
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB,
+		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) &
 		  0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT));
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB,
 		 (reg_type << 25) | (addr << 8) | port);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
-		 data & 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff);
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB,
 		 (data >> 32) & 0xffffffff);
 }
 
@@ -1101,48 +1101,13 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 #define XLMAC_PAUSE_CTRL (0x60d)
 #define XLMAC_PFC_CTRL (0x60e)
 
-static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
-	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0 + (port << 2) * NWM_REG_MAC0_SIZE;
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_SHIFT) |
-		 (port << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_SHIFT)
-		 | (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS,
-		 (0xA << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_SHIFT) |
-		 (8 << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG, 0xa853);
-}
-
-static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt)
-{
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
 	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
 
-	if (ECORE_IS_AH(p_hwfn->p_dev)) {
-		ecore_emul_link_init_ah(p_hwfn, p_ptt);
-		return;
-	}
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -1171,8 +1136,53 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_link_init(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt, u8 port)
+static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	u8 port = p_hwfn->port_id;
+	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
+
+	DP_INFO(p_hwfn->p_dev, "Configuring Emulation Link %02x\n", port);
+
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+		 (port <<
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+		 (0xA <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		 (8 <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
+		 0xa853);
+}
+
+static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt)
+{
+	if (ECORE_IS_AH(p_hwfn->p_dev))
+		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
+	else /* BB */
+		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+}
+
+static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,  u8 port)
 {
 	int port_offset = port ? 0x800 : 0;
 	u32 xmac_rxctrl = 0;
@@ -1185,10 +1195,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
 		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Set */
 
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1);
 
 	/* Set the number of ports on the Warp Core to 10G */
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE, 3);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE_BB, 3);
 
 	/* Soft reset of XMAC */
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
@@ -1199,20 +1209,21 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 
 	/* FIXME: move to common end */
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE + port_offset, 0x20);
+		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE_BB + port_offset, 0x20);
 
 	/* Set Max packet size: initialize XMAC block register for port 0 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE + port_offset, 0x2710);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE_BB + port_offset, 0x2710);
 
 	/* CRC append for Tx packets: init XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO + port_offset, 0xC800);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO_BB + port_offset, 0xC800);
 
 	/* Enable TX and RX: initialize XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL + port_offset,
-		 XMAC_REG_CTRL_TX_EN | XMAC_REG_CTRL_RX_EN);
-	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset);
-	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE;
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset, xmac_rxctrl);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL_BB + port_offset,
+		 XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB);
+	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt,
+			       XMAC_REG_RX_CTRL_BB + port_offset);
+	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB;
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL_BB + port_offset, xmac_rxctrl);
 }
 #endif
 
@@ -1233,7 +1244,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
 		if (ECORE_IS_AH(p_hwfn->p_dev))
 			return ECORE_SUCCESS;
-		ecore_link_init(p_hwfn, p_ptt, p_hwfn->port_id);
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		if (p_hwfn->p_dev->num_hwfns > 1) {
 			/* Activate OPTE in CMT */
@@ -1667,7 +1679,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 * out that these registers get initialized during the call to
 		 * ecore_mcp_load_req request. So we need to reread them here
 		 * to get the proper shadow register value.
-		 * Note: This is a workaround for the missinginig MFW
+		 * Note: This is a workaround for the missing MFW
 		 * initialization. It may be removed once the implementation
 		 * is done.
 		 */
@@ -2033,22 +2045,22 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_88_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_8C_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_90_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_90_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_94_F0_BB, 0);
 	}
 
 	/* Clean Previous errors if such exist */
@@ -2643,7 +2655,12 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 	 * In case of CMT in BB, only the "even" functions are enabled, and thus
 	 * the number of functions for both hwfns is learnt from the same bits.
 	 */
-	reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
+	if (ECORE_IS_BB(p_dev) || ECORE_IS_AH(p_dev)) {
+		reg_function_hide = ecore_rd(p_hwfn, p_ptt,
+					     MISCS_REG_FUNCTION_HIDE_BB_K2);
+	} else { /* E5 */
+		reg_function_hide = 0;
+	}
 
 	if (reg_function_hide & 0x1) {
 		if (ECORE_IS_BB(p_dev)) {
@@ -2709,8 +2726,7 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 		port_mode = 1;
 	else
 #endif
-		port_mode = ecore_rd(p_hwfn, p_ptt,
-				     CNIG_REG_NW_PORT_MODE_BB_B0);
+	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
 
 	if (port_mode < 3) {
 		p_hwfn->p_dev->num_ports_in_engines = 1;
@@ -2725,8 +2741,8 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
+static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt)
 {
 	u32 port;
 	int i;
@@ -2755,7 +2771,8 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
+					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					(i * 4));
 			if (port & 1)
 				p_hwfn->p_dev->num_ports_in_engines++;
 		}
@@ -2767,7 +2784,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
 	else
-		ecore_hw_info_port_num_ah(p_hwfn, p_ptt);
+		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
 }
 
 static enum _ecore_status_t
@@ -3076,12 +3093,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7);
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
 	}
 #endif
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 070588d..2acd864 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,43 +10,43 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
 
 #endif
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index f934e68..3042ed5 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -836,7 +836,12 @@ struct core_rx_fast_path_cqe {
 	__le16 packet_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-	__le32 reserved[4];
+/* bitmap: each bit represents a specific error. error indications are
+ * provided by the cracker. see spec for detailed description
+ */
+	struct parsing_err_flags err_flags;
+	__le16 reserved0;
+	__le32 reserved1[3];
 };
 
 /*
@@ -1042,13 +1047,13 @@ struct core_tx_stop_ramrod_data {
 /*
  * Enum flag for what type of dcb data to update
  */
-enum dcb_dhcp_update_flag {
+enum dcb_dscp_update_mode {
 /* use when no change should be done to dcb data */
-	DONT_UPDATE_DCB_DHCP,
+	DONT_UPDATE_DCB_DSCP,
 	UPDATE_DCB /* use to update only l2 (vlan) priority */,
-	UPDATE_DSCP /* use to update only l3 dhcp */,
-	UPDATE_DCB_DSCP /* update vlan pri and dhcp */,
-	MAX_DCB_DHCP_UPDATE_FLAG
+	UPDATE_DSCP /* use to update only l3 dscp */,
+	UPDATE_DCB_DSCP /* update vlan pri and dscp */,
+	MAX_DCB_DSCP_UPDATE_FLAG
 };
 
 
@@ -1232,6 +1237,10 @@ enum iwarp_ll2_tx_queues {
 	IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
 /* LL2 queue for unaligned packets sent aligned by the driver */
 	IWARP_LL2_ALIGNED_TX_QUEUE,
+/* LL2 queue for unaligned packets sent aligned and right-trimmed by the
+ * driver
+ */
+	IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE,
 	IWARP_LL2_ERROR /* Error indication */,
 	MAX_IWARP_LL2_TX_QUEUES
 };
@@ -1446,13 +1455,13 @@ struct pf_update_tunnel_config {
  */
 struct pf_update_ramrod_data {
 	u8 pf_id;
-	u8 update_eth_dcb_data_flag /* Update Eth DCB  data indication */;
-	u8 update_fcoe_dcb_data_flag /* Update FCOE DCB  data indication */;
-	u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB  data indication */;
-	u8 update_roce_dcb_data_flag /* Update ROCE DCB  data indication */;
+	u8 update_eth_dcb_data_mode /* Update Eth DCB  data indication */;
+	u8 update_fcoe_dcb_data_mode /* Update FCOE DCB  data indication */;
+	u8 update_iscsi_dcb_data_mode /* Update iSCSI DCB  data indication */;
+	u8 update_roce_dcb_data_mode /* Update ROCE DCB  data indication */;
 /* Update RROCE (RoceV2) DCB  data indication */
-	u8 update_rroce_dcb_data_flag;
-	u8 update_iwarp_dcb_data_flag /* Update IWARP DCB  data indication */;
+	u8 update_rroce_dcb_data_mode;
+	u8 update_iwarp_dcb_data_mode /* Update IWARP DCB  data indication */;
 	u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
 	struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
 	struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
@@ -1611,6 +1620,8 @@ struct tstorm_per_port_stat {
 	struct regpair fcoe_irregular_pkt;
 /* packet is an ROCE irregular packet */
 	struct regpair roce_irregular_pkt;
+/* packet is an IWARP irregular packet */
+	struct regpair iwarp_irregular_pkt;
 /* packet is an ETH irregular packet */
 	struct regpair eth_irregular_pkt;
 /* packet is an TOE irregular packet */
@@ -1861,8 +1872,11 @@ struct dmae_cmd {
 #define DMAE_CMD_SRC_VF_ID_SHIFT       0
 #define DMAE_CMD_DST_VF_ID_MASK        0xFF /* Destination VF id */
 #define DMAE_CMD_DST_VF_ID_SHIFT       8
-	__le32 comp_addr_lo /* PCIe completion address low or grc address */;
-/* PCIe completion address high or reserved (if completion address is in GRC) */
+/* PCIe completion address low in bytes or GRC completion address in DW */
+	__le32 comp_addr_lo;
+/* PCIe completion address high in bytes or reserved (if completion address is
+ * GRC)
+ */
 	__le32 comp_addr_hi;
 	__le32 comp_val /* Value to write to completion address */;
 	__le32 crc32 /* crc16 result */;
@@ -2250,10 +2264,6 @@ struct sdm_op_gen {
 #define SDM_OP_GEN_RESERVED_SHIFT   20
 };
 
-
-
-
-
 struct ystorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index effb6ed..917e8f4 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -93,10 +93,12 @@ enum block_addr {
 	GRCBASE_PHY_PCIE = 0x620000,
 	GRCBASE_LED = 0x6b8000,
 	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_RGFS = 0x19d0000,
-	GRCBASE_TGFS = 0x19e0000,
-	GRCBASE_PTLD = 0x19f0000,
-	GRCBASE_YPLD = 0x1a10000,
+	GRCBASE_RGFS = 0x1fa0000,
+	GRCBASE_RGSRC = 0x1fa8000,
+	GRCBASE_TGFS = 0x1fb0000,
+	GRCBASE_TGSRC = 0x1fb8000,
+	GRCBASE_PTLD = 0x1fc0000,
+	GRCBASE_YPLD = 0x1fe0000,
 	GRCBASE_MISC_AEU = 0x8000,
 	GRCBASE_BAR0_MAP = 0x1c00000,
 	MAX_BLOCK_ADDR
@@ -184,7 +186,9 @@ enum block_id {
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
 	BLOCK_RGFS,
+	BLOCK_RGSRC,
 	BLOCK_TGFS,
+	BLOCK_TGSRC,
 	BLOCK_PTLD,
 	BLOCK_YPLD,
 	BLOCK_MISC_AEU,
@@ -208,6 +212,10 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
+	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
+	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
+	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -219,8 +227,8 @@ enum bin_dbg_buffer_type {
 struct dbg_attn_bit_mapping {
 	__le16 data;
 /* The index of an attention in the blocks attentions list
- * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits
- * (if is_unused_idx_cnt=1)
+ * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits
+ * (if is_unused_bit_cnt=1)
  */
 #define DBG_ATTN_BIT_MAPPING_VAL_MASK                0x7FFF
 #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT               0
@@ -269,10 +277,10 @@ struct dbg_attn_reg_result {
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT  0
 /* Number of attention indexes in this register */
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24
+/* The offset of this register's attentions within the block's attentions
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le16 reserved;
@@ -289,7 +297,7 @@ struct dbg_attn_block_result {
 /* Value from dbg_attn_type enum */
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
-/* Number of registers in the blok in which at least one attention bit is set */
+/* Number of registers in block in which at least one attention bit is set */
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK   0x3F
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT  2
 /* Offset of this registers block attention names in the attention name offsets
@@ -324,17 +332,17 @@ struct dbg_mode_hdr {
  */
 struct dbg_attn_reg {
 	struct dbg_mode_hdr mode /* Mode header */;
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+/* The offset of this register's attentions within the block's attentions
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le32 data;
 /* STS attention register GRC address (in dwords) */
 #define DBG_ATTN_REG_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_STS_ADDRESS_SHIFT  0
-/* Number of attention indexes in this register */
-#define DBG_ATTN_REG_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24
+/* Number of attentions in this register */
+#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24
 /* STS_CLR attention register GRC address (in dwords) */
 	__le32 sts_clr_address;
 /* MASK attention register GRC address (in dwords) */
@@ -354,6 +362,53 @@ enum dbg_attn_type {
 
 
 /*
+ * Debug Bus block data
+ */
+struct dbg_bus_block {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the Debug Bus lines array. */
+	__le16 lines_offset;
+};
+
+
+/*
+ * Debug Bus block user data
+ */
+struct dbg_bus_block_user_data {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the debug bus line name offsets array. */
+	__le16 names_offset;
+};
+
+
+/*
+ * Block Debug line data
+ */
+struct dbg_bus_line {
+	u8 data;
+/* Number of groups in the line (0-3) */
+#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF
+#define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0
+/* Indicates if this is a 128b line (0) or a 256b line (1). */
+#define DBG_BUS_LINE_IS_256B_MASK        0x1
+#define DBG_BUS_LINE_IS_256B_SHIFT       4
+#define DBG_BUS_LINE_RESERVED_MASK       0x7
+#define DBG_BUS_LINE_RESERVED_SHIFT      5
+/* Four 2-bit values, indicating the size of each group minus 1 (i.e.
+ * value=0 means size=1, value=1 means size=2, etc), starting from lsb.
+ * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1).
+ */
+	u8 group_sizes;
+};
+
+
+/*
  * condition header for registers dump
  */
 struct dbg_dump_cond_hdr {
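
Decoding the group_sizes byte documented above amounts to extracting four 2-bit fields LSB-first and adding one to each; dbg_bus_group_size() below is a made-up helper name for illustration:

#include <stdint.h>

/* Size of debug-bus group i (0-3), in dwords when is_256b=0 or qwords
 * when is_256b=1, per the comment on dbg_bus_line.group_sizes.
 */
static unsigned int dbg_bus_group_size(uint8_t group_sizes, unsigned int i)
{
	return ((group_sizes >> (2 * i)) & 0x3) + 1;
}
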
@@ -377,8 +432,11 @@ struct dbg_dump_mem {
 /* register size (in dwords) */
 #define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF
 #define DBG_DUMP_MEM_LENGTH_SHIFT       0
-#define DBG_DUMP_MEM_RESERVED_MASK      0xFF
-#define DBG_DUMP_MEM_RESERVED_SHIFT     24
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1
+#define DBG_DUMP_MEM_WIDE_BUS_SHIFT     24
+#define DBG_DUMP_MEM_RESERVED_MASK      0x7F
+#define DBG_DUMP_MEM_RESERVED_SHIFT     25
 };
 
 
@@ -388,10 +446,13 @@ struct dbg_dump_mem {
 struct dbg_dump_reg {
 	__le32 data;
 /* register address (in dwords) */
-#define DBG_DUMP_REG_ADDRESS_MASK  0xFFFFFF
-#define DBG_DUMP_REG_ADDRESS_SHIFT 0
-#define DBG_DUMP_REG_LENGTH_MASK   0xFF /* register size (in dwords) */
-#define DBG_DUMP_REG_LENGTH_SHIFT  24
+#define DBG_DUMP_REG_ADDRESS_MASK   0x7FFFFF /* register address (in dwords) */
+#define DBG_DUMP_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1
+#define DBG_DUMP_REG_WIDE_BUS_SHIFT 23
+#define DBG_DUMP_REG_LENGTH_MASK    0xFF /* register size (in dwords) */
+#define DBG_DUMP_REG_LENGTH_SHIFT   24
 };
 
 
@@ -424,8 +485,11 @@ struct dbg_idle_chk_cond_hdr {
 struct dbg_idle_chk_cond_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
@@ -441,8 +505,11 @@ struct dbg_idle_chk_cond_reg {
 struct dbg_idle_chk_info_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
@@ -544,17 +611,21 @@ enum dbg_idle_chk_severity_types {
  * Debug Bus block data
  */
 struct dbg_bus_block_data {
-/* Indicates if the block is enabled for recording (0/1) */
-	u8 enabled;
-	u8 hw_id /* HW ID associated with the block */;
+	__le16 data;
+/* 4-bit value: bit i set -> dword/qword i is enabled. */
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
+/* Number of dwords/qwords to shift right the debug data (0-3) */
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
+/* 4-bit value: bit i set -> dword/qword i is forced valid. */
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
+/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
 	u8 line_num /* Debug line number to select */;
-	u8 right_shift /* Number of units to  right the debug data (0-3) */;
-	u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
-/* 4-bit value: bit i set -> unit i is forced valid. */
-	u8 force_valid;
-/* 4-bit value: bit i set -> unit i frame bit is forced. */
-	u8 force_frame;
-	u8 reserved;
+	u8 hw_id /* HW ID associated with the block */;
 };
 
 
@@ -604,6 +675,21 @@ enum dbg_bus_constraint_ops {
 
 
 /*
+ * Debug Bus trigger state data
+ */
+struct dbg_bus_trigger_state_data {
+	u8 data;
+/* 4-bit value: bit i set -> dword i of the trigger state block
+ * (after right shift) is enabled.
+ */
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
+/* 4-bit value: bit i set -> dword i is compared by a constraint */
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+};
+
+/*
  * Debug Bus memory address
  */
 struct dbg_bus_mem_addr {
@@ -650,14 +736,8 @@ union dbg_bus_storm_eid_params {
  * Debug Bus Storm data
  */
 struct dbg_bus_storm_data {
-/* Indicates if the Storm is enabled for fast debug recording (0/1) */
-	u8 fast_enabled;
-/* Fast debug Storm mode, valid only if fast_enabled is set */
-	u8 fast_mode;
-/* Indicates if the Storm is enabled for slow debug recording (0/1) */
-	u8 slow_enabled;
-/* Slow debug Storm mode, valid only if slow_enabled is set */
-	u8 slow_mode;
+	u8 enabled /* indicates if the Storm is enabled for recording */;
+	u8 mode /* Storm debug mode, valid only if the Storm is enabled */;
 	u8 hw_id /* HW ID associated with the Storm */;
 	u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
 /* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
@@ -667,7 +747,6 @@ struct dbg_bus_storm_data {
 	u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
 /* EID filter params to filter on. Valid only if eid_filter_en is set. */
 	union dbg_bus_storm_eid_params eid_filter_params;
-	__le16 reserved;
 /* CID to filter on. Valid only if cid_filter_en is set. */
 	__le32 cid;
 };
@@ -679,20 +758,18 @@ struct dbg_bus_data {
 	__le32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
 	u8 hw_dwords /* HW dwords per cycle */;
-	u8 next_hw_id /* Next HW ID to be associated with an input */;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	__le16 hw_id_mask;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
-	u8 next_trigger_state /* ID of next trigger state to be added */;
-/* ID of next filter/trigger constraint to be added */
-	u8 next_constraint_id;
 	u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
 	u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
 /* Indicates if timestamp recording is enabled (0/1) */
 	u8 timestamp_input_en;
 	u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
-/* Indicates if the recording trigger is enabled (0/1) */
-	u8 trigger_en;
 /* If true, the next added constraint belong to the filter. Otherwise,
  * it belongs to the last added trigger state. Valid only if either filter or
  * triggers are enabled.
@@ -706,6 +783,14 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
+	__le16 reserved;
+/* Indicates if the recording trigger is enabled (0/1) */
+	u8 trigger_en;
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+	u8 next_trigger_state /* ID of next trigger state to be added */;
+/* ID of next filter/trigger constraint to be added */
+	u8 next_constraint_id;
 /* If true, all inputs are associated with HW ID 0. Otherwise, each input is
  * assigned a different HW ID (0/1)
  */
@@ -716,7 +801,6 @@ struct dbg_bus_data {
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
-	__le16 reserved;
 /* Debug Bus data for each block */
 	struct dbg_bus_block_data blocks[88];
 /* Debug Bus data for each block */
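Per the hw_id_mask comment above, each recorded dword/qword owns a 3-bit HW
ID group. A one-line decoding sketch (illustrative, endianness handling
omitted):

	/* HW ID of dword/qword i, from bits i*3..i*3+2 of hw_id_mask */
	u8 hw_id = (hw_id_mask >> (i * 3)) & 0x7;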
@@ -748,17 +832,6 @@ enum dbg_bus_frame_modes {
 
 
 /*
- * Debug bus input types
- */
-enum dbg_bus_input_types {
-	DBG_BUS_INPUT_TYPE_STORM,
-	DBG_BUS_INPUT_TYPE_BLOCK,
-	MAX_DBG_BUS_INPUT_TYPES
-};
-
-
-
-/*
  * Debug bus other engine mode
  */
 enum dbg_bus_other_engine_modes {
@@ -852,6 +925,7 @@ enum dbg_bus_targets {
 };
 
 
+
 /*
  * GRC Dump data
  */
@@ -987,7 +1061,10 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_MULTI_BLOCKS_WITH_FILTER,
+	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_NON_MATCHING_LINES,
+	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_DBG_BUS_IN_USE,
 	MAX_DBG_STATUS
 };
 
@@ -1028,7 +1105,7 @@ struct dbg_tools_data {
 /* Indicates if a block is in reset state (0/1) */
 	u8 block_in_reset[88];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID (from enum platform_ids) */;
+	u8 platform_id /* Platform ID */;
 	u8 initialized /* Indicates if the data was initialized */;
 	u8 reserved;
 };
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 9d2a118..397c408 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -739,6 +739,7 @@ enum eth_error_code {
 	ETH_FILTERS_VNI_ADD_FAIL_FULL,
 /* vni add filters command failed due to duplicate VNI filter */
 	ETH_FILTERS_VNI_ADD_FAIL_DUP,
+	ETH_FILTERS_GFT_UPDATE_FAIL /* Fail update GFT filter. */,
 	MAX_ETH_ERROR_CODE
 };
 
@@ -982,8 +983,10 @@ struct eth_vport_rss_config {
 	u8 rss_id;
 	u8 rss_mode /* The RSS mode for this function */;
 	u8 update_rss_key /* if set update the rss key */;
-	u8 update_rss_ind_table /* if set update the indirection table */;
-	u8 update_rss_capabilities /* if set update the capabilities */;
+/* if set update the indirection table values */
+	u8 update_rss_ind_table;
+/* if set update the capabilities and indirection table size. */
+	u8 update_rss_capabilities;
 	u8 tbl_size /* rss mask (Tbl size) */;
 	__le32 reserved2[2];
 /* RSS indirection table */
@@ -1267,7 +1270,10 @@ struct rx_update_gft_filter_data {
 /* Use enum to set type of flow using gft HW logic blocks */
 	u8 filter_type;
 	u8 filter_action /* Use to set type of action on filter */;
-	u8 reserved;
+/* 0 - don't assert in case of error. Just return an error code. 1 - assert
+ * in case of error.
+ */
+	u8 assert_on_error;
 };
 
 
@@ -2290,8 +2296,7 @@ enum gft_profile_upper_protocol_type {
  * GFT RAM line struct
  */
 struct gft_ram_line {
-	__le32 low32bits;
-/*  (use enum gft_vlan_select) */
+	__le32 lo;
 #define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3
 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT             0
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK          0x1
@@ -2354,7 +2359,7 @@ struct gft_ram_line {
 #define GFT_RAM_LINE_DST_PORT_SHIFT                30
 #define GFT_RAM_LINE_SRC_PORT_MASK                 0x1
 #define GFT_RAM_LINE_SRC_PORT_SHIFT                31
-	__le32 high32bits;
+	__le32 hi;
 #define GFT_RAM_LINE_DSCP_MASK                     0x1
 #define GFT_RAM_LINE_DSCP_SHIFT                    0
 #define GFT_RAM_LINE_OVER_IP_PROTOCOL_MASK         0x1
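With the halves renamed to plain lo/hi bit vectors, fields are set through
the mask/shift pairs. A minimal sketch, assuming the same SET_FIELD helper
used by the QM code in this series (illustration only):

	struct gft_ram_line line = { 0 };

	/* Match on destination port (lo word) and DSCP (hi word) */
	SET_FIELD(line.lo, GFT_RAM_LINE_DST_PORT, 1);
	SET_FIELD(line.hi, GFT_RAM_LINE_DSCP, 1);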
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index d07549c..1f57e9b 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -22,43 +22,13 @@
 /* Max size in dwords of a zipped array */
 #define MAX_ZIPPED_SIZE			8192
 
-enum init_modes {
-	MODE_BB_A0_DEPRECATED,
-	MODE_BB_B0,
-	MODE_K2,
-	MODE_ASIC,
-	MODE_EMUL_REDUCED,
-	MODE_EMUL_FULL,
-	MODE_FPGA,
-	MODE_CHIPSIM,
-	MODE_SF,
-	MODE_MF_SD,
-	MODE_MF_SI,
-	MODE_PORTS_PER_ENG_1,
-	MODE_PORTS_PER_ENG_2,
-	MODE_PORTS_PER_ENG_4,
-	MODE_100G,
-	MODE_E5,
-	MAX_INIT_MODES
-};
-
-enum init_phases {
-	PHASE_ENGINE,
-	PHASE_PORT,
-	PHASE_PF,
-	PHASE_VF,
-	PHASE_QM_PF,
-	MAX_INIT_PHASES
+enum chip_ids {
+	CHIP_BB,
+	CHIP_K2,
+	CHIP_E5,
+	MAX_CHIP_IDS
 };
 
-enum init_split_types {
-	SPLIT_TYPE_NONE,
-	SPLIT_TYPE_PORT,
-	SPLIT_TYPE_PF,
-	SPLIT_TYPE_PORT_PF,
-	SPLIT_TYPE_VF,
-	MAX_INIT_SPLIT_TYPES
-};
 
 struct fw_asserts_ram_section {
 /* The offset of the section in the RAM in RAM lines (64-bit units) */
@@ -196,8 +166,46 @@ union init_array_hdr {
 };
 
 
+enum init_modes {
+	MODE_BB_A0_DEPRECATED,
+	MODE_BB,
+	MODE_K2,
+	MODE_ASIC,
+	MODE_EMUL_REDUCED,
+	MODE_EMUL_FULL,
+	MODE_FPGA,
+	MODE_CHIPSIM,
+	MODE_SF,
+	MODE_MF_SD,
+	MODE_MF_SI,
+	MODE_PORTS_PER_ENG_1,
+	MODE_PORTS_PER_ENG_2,
+	MODE_PORTS_PER_ENG_4,
+	MODE_100G,
+	MODE_E5,
+	MAX_INIT_MODES
+};
 
 
+enum init_phases {
+	PHASE_ENGINE,
+	PHASE_PORT,
+	PHASE_PF,
+	PHASE_VF,
+	PHASE_QM_PF,
+	MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+	SPLIT_TYPE_NONE,
+	SPLIT_TYPE_PORT,
+	SPLIT_TYPE_PF,
+	SPLIT_TYPE_PORT_PF,
+	SPLIT_TYPE_VF,
+	MAX_INIT_SPLIT_TYPES
+};
+
 
 /*
  * init array types
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 77f9152..af0deaa 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -17,112 +17,156 @@
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-enum CmInterfaceEnum {
-	MCM_SEC,
-	MCM_PRI,
-	UCM_SEC,
-	UCM_PRI,
-	TCM_SEC,
-	TCM_PRI,
-	YCM_SEC,
-	YCM_PRI,
-	XCM_SEC,
-	XCM_PRI,
-	NUM_OF_CM_INTERFACES
+
+#define CDU_VALIDATION_DEFAULT_CFG 61
+
+static u16 con_region_offsets[3][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+};
+static u16 task_region_offsets[1][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
-/* general constants */
-#define QM_PQ_MEM_4KB(pq_size) \
-(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) \
-(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
-#define QM_INVALID_PQ_ID			0xffff
-/* feature enable */
-#define QM_BYPASS_EN				1
-#define QM_BYTE_CRD_EN				1
-/* other PQ constants */
-#define QM_OTHER_PQS_PER_PF			4
-/* WFQ constants */
-#define QM_WFQ_UPPER_BOUND			62500000
+
+/* General constants */
+#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
+				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
+				  0)
+#define QM_INVALID_PQ_ID		0xffff
+
+/* Feature enable */
+#define QM_BYPASS_EN			1
+#define QM_BYTE_CRD_EN			1
+
+/* Other PQ constants */
+#define QM_OTHER_PQS_PER_PF		4
+
+/* WFQ constants: */
+
+/* Upper bound in MB, 10 * burst size of 1ms in 50Gbps */
+#define QM_WFQ_UPPER_BOUND		62500000
+
+/* Bit of VOQ in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
+
+/* Bit of PF in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_PF_SHIFT		5
+
+/* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
-#define QM_WFQ_MAX_INC_VAL			43750000
-/* RL constants */
-#define QM_RL_UPPER_BOUND			62500000
-#define QM_RL_PERIOD				5
+
+/* 0.7 * upper bound (62500000) */
+#define QM_WFQ_MAX_INC_VAL		43750000
+
+/* RL constants: */
+
+/* Upper bound is set to 10 * burst size of 1ms in 50Gbps */
+#define QM_RL_UPPER_BOUND		62500000
+
+/* Period in us */
+#define QM_RL_PERIOD			5
+
+/* Period in 25MHz cycles */
 #define QM_RL_PERIOD_CLK_25M		(25 * QM_RL_PERIOD)
-#define QM_RL_MAX_INC_VAL			43750000
-/* RL increment value - the factor of 1.01 was added after seeing only
- * 99% factor reached in a 25Gbps port with DPDK RFC 2544 test.
- * In this scenario the PF RL was reducing the line rate to 99% although
- * the credit increment value was the correct one and FW calculated
- * correct packet sizes. The reason for the inaccuracy of the RL is
- * unknown at this point.
+
+/* 0.7 * upper bound (62500000) */
+#define QM_RL_MAX_INC_VAL		43750000
+
+/* RL increment value - rate is specified in Mbps. The factor of 1.01 was
+ * added after seeing only 99% factor reached in a 25Gbps port with DPDK RFC
+ * 2544 test. In this scenario the PF RL was reducing the line rate to 99%
+ * although the credit increment value was the correct one and FW calculated
+ * correct packet sizes. The reason for the inaccuracy of the RL is unknown at
+ * this point.
  */
-/* rate in mbps */
 #define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? rate : 1000000) * \
-					QM_RL_PERIOD * 101) / (8 * 100)), 1)
+				       QM_RL_PERIOD * 101) / (8 * 100)), 1)
+
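As a worked example of the macro above: a 25Gbps PF limit (rate = 25000, in
Mbps) gives QM_RL_INC_VAL(25000) = (25000 * 5 * 101) / (8 * 100) =
12625000 / 800 = 15781 after integer division, i.e. the per-period byte
credit inflated by the 1.01 compensation factor.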
 /* AFullOprtnstcCrdMask constants */
 #define QM_OPPOR_LINE_VOQ_DEF		1
 #define QM_OPPOR_FW_STOP_DEF		0
 #define QM_OPPOR_PQ_EMPTY_DEF		1
-/* Command Queue constants */
-#define PBF_CMDQ_PURE_LB_LINES			150
+
+/* Command Queue constants: */
+
+/* Pure LB CmdQ lines (+spare) */
+#define PBF_CMDQ_PURE_LB_LINES		150
+
 #define PBF_CMDQ_LINES_RT_OFFSET(voq) \
-(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
-voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
-- PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \
+	  PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+
 #define PBF_BTB_GUARANTEED_RT_OFFSET(voq) \
-(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
-(PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \
+	  PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+
 #define QM_VOQ_LINE_CRD(pbf_cmd_lines) \
 ((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
+
 /* BTB: blocks constants (block size = 256B) */
-#define BTB_JUMBO_PKT_BLOCKS 38	/* 256B blocks in 9700B packet */
-/* headroom per-port */
-#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS
+
+/* 256B blocks in 9700B packet */
+#define BTB_JUMBO_PKT_BLOCKS		38
+
+/* Headroom per-port */
+#define BTB_HEADROOM_BLOCKS		BTB_JUMBO_PKT_BLOCKS
 #define BTB_PURE_LB_FACTOR		10
-#define BTB_PURE_LB_RATIO		7 /* factored (hence really 0.7) */
+
+/* Factored (hence really 0.7) */
+#define BTB_PURE_LB_RATIO		7
+
 /* QM stop command constants */
-#define QM_STOP_PQ_MASK_WIDTH			32
-#define QM_STOP_CMD_ADDR				0x2
-#define QM_STOP_CMD_STRUCT_SIZE			2
+#define QM_STOP_PQ_MASK_WIDTH		32
+#define QM_STOP_CMD_ADDR		2
+#define QM_STOP_CMD_STRUCT_SIZE		2
 #define QM_STOP_CMD_PAUSE_MASK_OFFSET	0
 #define QM_STOP_CMD_PAUSE_MASK_SHIFT	0
-#define QM_STOP_CMD_PAUSE_MASK_MASK		0xffffffff /* @DPDK */
-#define QM_STOP_CMD_GROUP_ID_OFFSET		1
-#define QM_STOP_CMD_GROUP_ID_SHIFT		16
-#define QM_STOP_CMD_GROUP_ID_MASK		15
-#define QM_STOP_CMD_PQ_TYPE_OFFSET		1
-#define QM_STOP_CMD_PQ_TYPE_SHIFT		24
-#define QM_STOP_CMD_PQ_TYPE_MASK		1
-#define QM_STOP_CMD_MAX_POLL_COUNT		100
-#define QM_STOP_CMD_POLL_PERIOD_US		500
+#define QM_STOP_CMD_PAUSE_MASK_MASK	0xffffffff /* @DPDK */
+#define QM_STOP_CMD_GROUP_ID_OFFSET	1
+#define QM_STOP_CMD_GROUP_ID_SHIFT	16
+#define QM_STOP_CMD_GROUP_ID_MASK	15
+#define QM_STOP_CMD_PQ_TYPE_OFFSET	1
+#define QM_STOP_CMD_PQ_TYPE_SHIFT	24
+#define QM_STOP_CMD_PQ_TYPE_MASK	1
+#define QM_STOP_CMD_MAX_POLL_COUNT	100
+#define QM_STOP_CMD_POLL_PERIOD_US	500
+
 /* QM command macros */
-#define QM_CMD_STRUCT_SIZE(cmd)	cmd##_STRUCT_SIZE
+#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
-SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+
 /* QM: VOQ macros */
 #define PHYS_VOQ(port, tc, max_phys_tcs_per_port) \
-((port) * (max_phys_tcs_per_port) + (tc))
-#define LB_VOQ(port)				(MAX_PHYS_VOQS + (port))
+	((port) * (max_phys_tcs_per_port) + (tc))
+#define LB_VOQ(port)				 (MAX_PHYS_VOQS + (port))
 #define VOQ(port, tc, max_phys_tcs_per_port) \
-((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : LB_VOQ(port))
+	((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : \
+				 LB_VOQ(port))
+
+
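For illustration, with max_phys_tcs_per_port = 4 a physical TC (tc < LB_TC)
maps as VOQ(1, 2, 4) = 1 * 4 + 2 = 6, while the pure-LB TC of the same port
maps to LB_VOQ(1) = MAX_PHYS_VOQS + 1.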
 /******************** INTERNAL IMPLEMENTATION *********************/
+
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		/* enable RLs for all VOQs */
+		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
 			     (1 << MAX_NUM_VOQS) - 1);
-		/* write RL period */
+
+		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET,
 				     QM_RL_UPPER_BOUND);
@@ -133,7 +177,8 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (pf_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -145,12 +190,13 @@ static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
 		     vport_rl_en ? 1 : 0);
 	if (vport_rl_en) {
-		/* write RL period (use timer 0 only) */
+		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
@@ -163,7 +209,8 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET,
 		     vport_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (vport_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -176,7 +223,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
 					 u8 voq, u16 cmdq_lines)
 {
 	u32 qm_line_crd;
+
 	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
+
 	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
 	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
@@ -192,38 +241,43 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u8 tc, voq, port_id, num_tcs_in_port;
-	/* clear PBF lines for all VOQs */
+
+	/* Clear PBF lines for all VOQs */
 	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
 		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			u16 phys_lines, phys_lines_per_tc;
-			/* find #lines to divide between active physical TCs */
-			phys_lines =
-			    port_params[port_id].num_pbf_cmd_lines -
-			    PBF_CMDQ_PURE_LB_LINES;
-			/* find #lines per active physical TC */
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-						tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			}
-			phys_lines_per_tc = phys_lines / num_tcs_in_port;
-			/* init registers per active TC */
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-							max_phys_tcs_per_port);
-					ecore_cmdq_lines_voq_rt_init(p_hwfn,
-							voq, phys_lines_per_tc);
-				}
+		u16 phys_lines, phys_lines_per_tc;
+
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Find #lines to divide between the active physical TCs */
+		phys_lines = port_params[port_id].num_pbf_cmd_lines -
+			     PBF_CMDQ_PURE_LB_LINES;
+
+		/* Find #lines per active physical TC */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+		phys_lines_per_tc = phys_lines / num_tcs_in_port;
+
+		/* Init registers per active TC */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
+							     phys_lines_per_tc);
 			}
-			/* init registers for pure LB TC */
-			ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
-						     PBF_CMDQ_PURE_LB_LINES);
 		}
+
+		/* Init registers for pure LB TC */
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
+					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
 
@@ -253,50 +307,51 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, voq, port_id, num_tcs_in_port;
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
+	u8 tc, voq, port_id, num_tcs_in_port;
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			/* subtract headroom blocks */
-			usable_blocks =
-			    port_params[port_id].num_btb_blocks -
-			    BTB_HEADROOM_BLOCKS;
-/* find blocks per physical TC. use factor to avoid floating arithmethic */
-
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
-				if (((port_params[port_id].active_phys_tcs >>
-								tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			pure_lb_blocks =
-			    (usable_blocks * BTB_PURE_LB_FACTOR) /
-			    (num_tcs_in_port *
-			     BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
-			pure_lb_blocks =
-			    OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
-				       pure_lb_blocks / BTB_PURE_LB_FACTOR);
-			phys_blocks =
-			    (usable_blocks -
-			     pure_lb_blocks) /
-			     num_tcs_in_port;
-			/* init physical TCs */
-			for (tc = 0;
-			     tc < NUM_OF_PHYS_TCS;
-			     tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-						       max_phys_tcs_per_port);
-					STORE_RT_REG(p_hwfn,
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Subtract headroom blocks */
+		usable_blocks = port_params[port_id].num_btb_blocks -
+				BTB_HEADROOM_BLOCKS;
+
+		/* Find blocks per physical TC. Use factor to avoid floating
+		 * arithmetic.
+		 */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+
+		pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) /
+				  (num_tcs_in_port * BTB_PURE_LB_FACTOR +
+				   BTB_PURE_LB_RATIO);
+		pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
+					    pure_lb_blocks /
+					    BTB_PURE_LB_FACTOR);
+		phys_blocks = (usable_blocks - pure_lb_blocks) /
+			      num_tcs_in_port;
+
+		/* Init physical TCs */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				STORE_RT_REG(p_hwfn,
 					     PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					     phys_blocks);
-				}
 			}
-			/* init pure LB TC */
-			STORE_RT_REG(p_hwfn,
-				     PBF_BTB_GUARANTEED_RT_OFFSET(
-					LB_VOQ(port_id)), pure_lb_blocks);
 		}
+
+		/* Init pure LB TC */
+		STORE_RT_REG(p_hwfn,
+			     PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ(port_id)),
+			     pure_lb_blocks);
 	}
 }
 
@@ -317,57 +372,69 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
 {
-	u16 i, pq_id, pq_group;
-	u16 num_pqs = num_pf_pqs + num_vf_pqs;
-	u16 first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
-	u16 last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
-	/* a bit per Tx PQ indicating if the PQ is associated with a VF */
+	/* A bit per Tx PQ indicating if the PQ is associated with a VF */
 	u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 };
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
-	u32 vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* set mapping from PQ group to PF */
+	u16 num_pqs, first_pq_group, last_pq_group, i, pq_id, pq_group;
+	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+
+	num_pqs = num_pf_pqs + num_vf_pqs;
+
+	first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
+	last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
+
+	pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
+	vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Set mapping from PQ group to PF */
 	for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++)
 		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group,
 			     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_pf_cids));
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_vf_cids));
-	/* go over all Tx PQs */
+
+	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		struct qm_rf_pq_map tx_pq_map;
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
-		bool is_vf_pq = (i >= num_pf_pqs);
-		/* added to avoid compilation warning */
 		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		bool rl_valid = pq_params[i].rl_valid &&
-				pq_params[i].vport_id < max_qm_global_rls;
-		/* update first Tx PQ of VPORT/TC */
-		u8 vport_id_in_pf = pq_params[i].vport_id - start_vport;
-		u16 first_tx_pq_id =
-		    vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].
-								tc_id];
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq, rl_valid;
+		u8 voq, vport_id_in_pf;
+		u16 first_tx_pq_id;
+
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		is_vf_pq = (i >= num_pf_pqs);
+		rl_valid = pq_params[i].rl_valid && pq_params[i].vport_id <
+			   max_qm_global_rls;
+
+		/* Update first Tx PQ of VPORT/TC */
+		vport_id_in_pf = pq_params[i].vport_id - start_vport;
+		first_tx_pq_id =
+		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			/* create new VP PQ */
+			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
 			    first_tx_pq_id[pq_params[i].tc_id] = pq_id;
 			first_tx_pq_id = pq_id;
-			/* map VP PQ to VOQ and PF */
+
+			/* Map VP PQ to VOQ and PF */
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id,
 				     (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id <<
 							QM_WFQ_VP_PQ_PF_SHIFT));
 		}
-		/* check RL ID */
+
+		/* Check RL ID */
 		if (pq_params[i].rl_valid && pq_params[i].vport_id >=
 							max_qm_global_rls)
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config");
-		/* fill PQ map entry */
+				  "Invalid VPORT ID for rate limiter config\n");
+
+		/* Fill PQ map entry */
 		OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map));
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID,
@@ -378,17 +445,17 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP,
 			  pq_params[i].wrr_group);
-		/* write PQ map entry to CAM */
+
+		/* Write PQ map entry to CAM */
 		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
 			     *((u32 *)&tx_pq_map));
-		/* set base address */
+
+		/* Set base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
 			     mem_addr_4kb);
-		/* check if VF PQ */
+
+		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
-			/* if PQ is associated with a VF, add indication to PQ
-			 * VF mask
-			 */
 			tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |=
 				(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
 			mem_addr_4kb += vport_pq_mem_4kb;
@@ -396,12 +463,12 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 			mem_addr_4kb += pq_mem_4kb;
 		}
 	}
-	/* store Tx PQ VF mask to size select register */
-	for (i = 0; i < num_tx_pq_vf_masks; i++) {
+
+	/* Store Tx PQ VF mask to size select register */
+	for (i = 0; i < num_tx_pq_vf_masks; i++)
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
-	}
 }
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -411,20 +478,26 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				       u32 num_pf_cids,
 				       u32 num_tids, u32 base_mem_addr_4kb)
 {
-	u16 i, pq_id;
-/* a single other PQ grp is used in each PF, where PQ group i is used in PF i */
-
-	u16 pq_group = pf_id;
-	u32 pq_size = num_pf_cids + num_tids;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* map PQ group to PF */
+	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
+	u16 i, pq_id, pq_group;
+
+	/* A single other PQ group is used in each PF, where PQ group i is used
+	 * in PF i.
+	 */
+	pq_group = pf_id;
+	pq_size = num_pf_cids + num_tids;
+	pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Map PQ group to PF */
 	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group,
 		     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET,
 		     QM_PQ_SIZE_256B(pq_size));
-	/* set base address */
+
+	/* Set base address */
 	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE;
 	     i < QM_OTHER_PQS_PER_PF; i++, pq_id++) {
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id,
@@ -432,7 +505,10 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		mem_addr_4kb += pq_mem_4kb;
 	}
 }
-/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF WFQ runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 port_id,
 				u8 pf_id,
@@ -441,76 +517,89 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u16 num_tx_pqs,
 				struct init_qm_pq_params *pq_params)
 {
+	u32 inc_val, crd_reg_offset;
+	u8 voq;
 	u16 i;
-	u32 inc_val;
-	u32 crd_reg_offset =
-	    (pf_id <
-	     MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
-	     QM_REG_WFQPFCRD_MSB_RT_OFFSET) + (pf_id % MAX_NUM_PFS_BB);
+
+	crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
+			  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			 (pf_id % MAX_NUM_PFS_BB);
+
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (i = 0; i < num_tx_pqs; i++) {
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
 	return 0;
 }
-/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF RL runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
 		     (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
+
 	return 0;
 }
-/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on
- * error.
+
+/* Prepare VPORT WFQ runtime init values for the specified VPORTs.
+ * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u8 tc, i;
+	u16 vport_pq_id;
 	u32 inc_val;
-	/* go over all PF VPORTs */
+	u8 tc, i;
+
+	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (vport_params[i].vport_wfq) {
-			inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
-			if (inc_val > QM_WFQ_MAX_INC_VAL) {
-				DP_NOTICE(p_hwfn, true,
-					  "Invalid VPORT WFQ weight config");
-				return -1;
-			}
-			/* each VPORT can have several VPORT PQ IDs for
-			 * different TCs
-			 */
-			for (tc = 0; tc < NUM_OF_TCS; tc++) {
-				u16 vport_pq_id =
-				    vport_params[i].first_tx_pq_id[tc];
-				if (vport_pq_id != QM_INVALID_PQ_ID) {
-					STORE_RT_REG(p_hwfn,
-						  QM_REG_WFQVPCRD_RT_OFFSET +
-						  vport_pq_id,
-						  (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-					STORE_RT_REG(p_hwfn,
-						QM_REG_WFQVPWEIGHT_RT_OFFSET
-						     + vport_pq_id, inc_val);
-				}
+		if (!vport_params[i].vport_wfq)
+			continue;
+
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		if (inc_val > QM_WFQ_MAX_INC_VAL) {
+			DP_NOTICE(p_hwfn, true,
+				  "Invalid VPORT WFQ weight configuration\n");
+			return -1;
+		}
+
+		/* Each VPORT can have several VPORT PQ IDs for various TCs */
+		for (tc = 0; tc < NUM_OF_TCS; tc++) {
+			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
+			if (vport_pq_id != QM_INVALID_PQ_ID) {
+				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+					     vport_pq_id,
+					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+				STORE_RT_REG(p_hwfn,
+					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
+					     vport_pq_id, inc_val);
 			}
 		}
 	}
@@ -526,19 +615,23 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 				  struct init_qm_vport_params *vport_params)
 {
 	u8 i, vport_id;
+	u32 inc_val;
+
 	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
-	/* go over all PF VPORTs */
+
+	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
 		u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
 		if (inc_val > QM_RL_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration");
+				  "Invalid VPORT rate-limit configuration\n");
 			return -1;
 		}
+
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
 			     (u32)QM_RL_CRD_REG_SIGN_BIT);
 		STORE_RT_REG(p_hwfn,
@@ -547,6 +640,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
 			     inc_val);
 	}
+
 	return 0;
 }
 
@@ -554,17 +648,20 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
 	u32 reg_val, i;
-	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && reg_val == 0;
+
+	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val;
 	     i++) {
 		OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US);
 		reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY);
 	}
-	/* check if timeout while waiting for SDM command ready */
+
+	/* Check if timeout while waiting for SDM command ready */
 	if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
 			   "Timeout waiting for QM SDM cmd ready signal\n");
 		return false;
 	}
+
 	return true;
 }
 
@@ -574,15 +671,19 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 {
 	if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt))
 		return false;
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDADDR, cmd_addr);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATALSB, cmd_data_lsb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATAMSB, cmd_data_msb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 1);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 0);
+
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
+
 /******************** INTERFACE IMPLEMENTATION *********************/
+
 u32 ecore_qm_pf_mem_size(u8 pf_id,
 			 u32 num_pf_cids,
 			 u32 num_vf_cids,
@@ -603,32 +704,42 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    struct init_qm_port_params
 			    port_params[MAX_NUM_PORTS])
 {
-	/* init AFullOprtnstcCrdMask */
-	u32 mask =
-	    (QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-	    (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-	    (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-	    (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-	    (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-	    (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-	    (QM_OPPOR_FW_STOP_DEF << QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-	    (QM_OPPOR_PQ_EMPTY_DEF <<
-	     QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	u32 mask;
+
+	/* Init AFullOprtnstcCrdMask */
+	mask = (QM_OPPOR_LINE_VOQ_DEF <<
+		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
+		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
+		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
+		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
+		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
+		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
+		(QM_OPPOR_FW_STOP_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
+		(QM_OPPOR_PQ_EMPTY_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
-	/* enable/disable PF RL */
+
+	/* Enable/disable PF RL */
 	ecore_enable_pf_rl(p_hwfn, pf_rl_en);
-	/* enable/disable PF WFQ */
+
+	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
-	/* enable/disable VPORT RL */
+
+	/* Enable/disable VPORT RL */
 	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
-	/* enable/disable VPORT WFQ */
+
+	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
-	/* init PBF CMDQ line credit */
+
+	/* Init PBF CMDQ line credit */
 	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
-	/* init BTB blocks in PBF */
+
+	/* Init BTB blocks in PBF */
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
+
 	return 0;
 }
 
@@ -651,66 +762,86 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
+	u32 other_mem_size_4kb;
 	u8 tc, i;
-	u32 other_mem_size_4kb =
-	    QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF;
-	/* clear first Tx PQ ID array for each VPORT */
+
+	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
+			     QM_OTHER_PQS_PER_PF;
+
+	/* Clear first Tx PQ ID array for each VPORT */
 	for (i = 0; i < num_vports; i++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
-	/* map Other PQs (if any) */
+
+	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
 	ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids,
 				   num_tids, 0);
 #endif
-	/* map Tx PQs */
+
+	/* Map Tx PQs */
 	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id,
 				max_phys_tcs_per_port, is_first_pf, num_pf_cids,
 				num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
 				start_vport, other_mem_size_4kb, pq_params,
 				vport_params);
-	/* init PF WFQ */
+
+	/* Init PF WFQ */
 	if (pf_wfq)
 		if (ecore_pf_wfq_rt_init
 		    (p_hwfn, port_id, pf_id, pf_wfq, max_phys_tcs_per_port,
-		     num_pf_pqs + num_vf_pqs, pq_params) != 0)
+		     num_pf_pqs + num_vf_pqs, pq_params))
 			return -1;
-	/* init PF RL */
-	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl) != 0)
+
+	/* Init PF RL */
+	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
-	/* set VPORT WFQ */
-	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params) != 0)
+
+	/* Set VPORT WFQ */
+	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
-	/* set VPORT RL */
+
+	/* Set VPORT RL */
 	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, vport_params) != 0)
+	    (p_hwfn, start_vport, num_vports, vport_params))
 		return -1;
+
 	return 0;
 }
 
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq)
 {
-	u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	u32 inc_val;
+
+	inc_val = QM_WFQ_INC_VAL(pf_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+
 	return 0;
 }
 
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -718,20 +849,25 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
 {
+	u16 vport_pq_id;
+	u32 inc_val;
 	u8 tc;
-	u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
+
+	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT WFQ weight configuration");
+			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		u16 vport_pq_id = first_tx_pq_id[tc];
+		vport_pq_id = first_tx_pq_id[tc];
 		if (vport_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
 				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
 		}
 	}
+
 	return 0;
 }
 
@@ -739,20 +875,24 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl)
 {
 	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+
 	if (vport_id >= max_qm_global_rls) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
+
 	inc_val = QM_RL_INC_VAL(vport_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration");
+			  "Invalid VPORT rate-limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -762,15 +902,20 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
 {
 	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
-	u32 pq_mask = 0, last_pq = start_pq + num_pqs - 1, pq_id;
-	/* set command's PQ type */
+	u32 pq_mask = 0, last_pq, pq_id;
+
+	last_pq = start_pq + num_pqs - 1;
+
+	/* Set command's PQ type */
 	QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PQ_TYPE, is_tx_pq ? 0 : 1);
-	/* go over requested PQs */
+
+	/* Go over requested PQs */
 	for (pq_id = start_pq; pq_id <= last_pq; pq_id++) {
-		/* set PQ bit in mask (stop command only) */
+		/* Set PQ bit in mask (stop command only) */
 		if (!is_release_cmd)
 			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));
-		/* if last PQ or end of PQ mask, write command */
+
+		/* If last PQ or end of PQ mask, write command */
 		if ((pq_id == last_pq) ||
 		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
 		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
@@ -785,68 +930,92 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			pq_mask = 0;
 		}
 	}
+
 	return true;
 }
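The loop above batches PQ indications into 32-bit masks, flushing one stop
command per filled mask or at the last PQ. A hypothetical caller pausing 40
Tx PQs starting at PQ 64 would therefore trigger two command writes, one per
32-PQ group:

	/* Sketch only: pause (not release) Tx PQs 64..103 */
	if (!ecore_send_qm_stop_cmd(p_hwfn, p_ptt, false, true, 64, 40))
		DP_NOTICE(p_hwfn, true, "QM stop command failed\n");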
 
+
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
 #define NIG_LB_ETS_CLIENT_OFFSET	1
 #define NIG_ETS_MIN_WFQ_BYTES		1600
+
 /* NIG: ETS constants */
 #define NIG_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
 /* NIG: RL constants */
-#define NIG_RL_BASE_TYPE			1	/* byte base type */
-#define NIG_RL_PERIOD				1	/* in us */
+
+/* Byte base type value */
+#define NIG_RL_BASE_TYPE		1
+
+/* Period in us */
+#define NIG_RL_PERIOD			1
+
+/* Period in 25MHz cycles */
 #define NIG_RL_PERIOD_CLK_25M		(25 * NIG_RL_PERIOD)
+
+/* Rate in mbps */
 #define NIG_RL_INC_VAL(rate)		(((rate) * NIG_RL_PERIOD) / 8)
+
 #define NIG_RL_MAX_VAL(inc_val, mtu) \
-(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+
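For illustration: a 10Gbps limit (rate = 10000, in Mbps) over the 1 us
period gives NIG_RL_INC_VAL(10000) = (10000 * 1) / 8 = 1250 bytes of credit
per period, and NIG_RL_MAX_VAL caps the bucket at twice the larger of that
increment and the MTU.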
 /* NIG: packet prioritry configuration constants */
-#define NIG_PRIORITY_MAP_TC_BITS 4
+#define NIG_PRIORITY_MAP_TC_BITS	4
+
+
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct init_ets_req *req, bool is_lb)
 {
-	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u8 num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	u8 tc_client_offset =
-	    is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_weight_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-	u32 tc_bound_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
+	u32 tc_bound_base_addr, tc_bound_addr_diff;
+	u8 sp_tc_map = 0, wfq_tc_map = 0;
+	u8 tc, num_tc, tc_client_offset;
+
+	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
+				   NIG_TX_ETS_CLIENT_OFFSET;
+	min_weight = 0xffffffff;
+	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < num_tc; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
-	/* write SP map */
+
+	/* Write SP map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
 		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
 		 (sp_tc_map << tc_client_offset));
-	/* write WFQ map */
+
+	/* Write WFQ map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
 		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
@@ -854,22 +1023,23 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	/* write WFQ weights */
 	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_weight_base_addr +
-				 tc_weight_addr_diff * tc_client_offset,
-				 byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_bound_base_addr +
-				 tc_bound_addr_diff * tc_client_offset,
-				 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
+			 tc_weight_addr_diff * tc_client_offset, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
+			 tc_bound_addr_diff * tc_client_offset,
+			 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
 	}
 }
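As a worked example of the weight translation above: two WFQ TCs with
weights 10 and 30 give min_weight = 10, so the programmed byte weights are
(1600 * 10) / 10 = 1600 and (1600 * 30) / 10 = 4800, each bounded by
NIG_ETS_UP_BOUND(byte_weight, mtu) = 2 * max(byte_weight, mtu).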
 
@@ -877,16 +1047,18 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  struct init_nig_lb_rl_req *req)
 {
-	u8 tc;
 	u32 ctrl, inc_val, reg_offset;
-	/* disable global MAC+LB RL */
+	u8 tc;
+
+	/* Disable global MAC+LB RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global MAC+LB RL */
+
+	/* Configure and enable global MAC+LB RL */
 	if (req->lb_mac_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
@@ -894,20 +1066,23 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 <<
 		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 	}
-	/* disable global LB-only RL */
+
+	/* Disable global LB-only RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global LB-only RL */
+
+	/* Configure and enable global LB-only RL */
 	if (req->lb_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_rate);
@@ -915,41 +1090,41 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 	}
-	/* per-TC RLs */
+
+	/* Per-TC RLs */
 	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
 	     tc++, reg_offset += 4) {
-		/* disable TC RL */
+		/* Disable TC RL */
 		ctrl =
 		    NIG_RL_BASE_TYPE <<
 		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-		/* configure and enable TC RL */
-		if (req->tc_rate[tc]) {
-			/* configure */
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
-				 reg_offset, NIG_RL_PERIOD_CLK_25M);
-			inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-				 reg_offset, inc_val);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
-				 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-			/* enable */
-			ctrl |=
-			    1 <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset,
-				 ctrl);
-		}
+
+		/* Configure and enable TC RL */
+		if (!req->tc_rate[tc])
+			continue;
+
+		/* Configure */
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
+			 reg_offset, NIG_RL_PERIOD_CLK_25M);
+		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
+			 reg_offset, inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
+			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
+
+		/* Enable */
+		ctrl |= 1 <<
+			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
+			 reg_offset, ctrl);
 	}
 }
 
@@ -957,20 +1132,23 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct init_nig_pri_tc_map_req *req)
 {
-	u8 pri, tc;
-	u32 pri_tc_mask = 0;
 	u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
+	u32 pri_tc_mask = 0;
+	u8 pri, tc;
+
 	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
-		if (req->pri[pri].valid) {
-			pri_tc_mask |=
-			    (req->pri[pri].
-			     tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS));
-			tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
-		}
+		if (!req->pri[pri].valid)
+			continue;
+
+		pri_tc_mask |= (req->pri[pri].tc_id <<
+				(pri * NIG_PRIORITY_MAP_TC_BITS));
+		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
 	}
-	/* write priority -> TC mask */
+
+	/* Write priority -> TC mask */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-	/* write TC -> priority mask */
+
+	/* Write TC -> priority mask */
 	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
 			 tc_pri_mask[tc]);
@@ -979,110 +1157,133 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+
 /* PRS: ETS configuration constants */
-#define PRS_ETS_MIN_WFQ_BYTES			1600
+#define PRS_ETS_MIN_WFQ_BYTES		1600
 #define PRS_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
+
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_ets_req *req)
 {
+	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
 	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
+	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
+			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
+	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
+			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
+
 	/* write SP map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
+
 	/* write WFQ map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
 		 wfq_tc_map);
+
 	/* write WFQ weights */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 +
-				 tc * tc_weight_addr_diff, byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-				 tc * tc_bound_addr_diff,
-				 PRS_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
+			 tc_weight_addr_diff, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
+			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
+								   req->mtu));
 	}
 }
 
+
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
 #define BRB_TOTAL_RAM_BLOCKS_K2	5632
-#define BRB_BLOCK_SIZE			128	/* in bytes */
+#define BRB_BLOCK_SIZE		128
 #define BRB_MIN_BLOCKS_PER_TC	9
-#define BRB_HYST_BYTES			10240
-#define BRB_HYST_BLOCKS			(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-/*
- * temporary big RAM allocation - should be updated
- */
+#define BRB_HYST_BYTES		10240
+#define BRB_HYST_BLOCKS		(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
+
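From the constants above, BRB_HYST_BLOCKS works out to 10240 / 128 = 80
blocks; on K2 with, say, two active ports, the split below yields
active_port_blocks = 5632 / 2 = 2816 blocks per port.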
+/* Temporary big RAM allocation - should be updated */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
 {
-	u8 port, active_ports = 0;
+	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
 	u32 active_port_blocks, reg_offset = 0;
-	u32 tc_headroom_blocks =
-	    (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE);
-	u32 min_pkt_size_blocks =
-	    (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE);
-	u32 total_blocks =
-	    ECORE_IS_K2(p_hwfn->
-			p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
-	    BRB_TOTAL_RAM_BLOCKS_BB;
-	/* find number of active ports */
+	u8 port, active_ports = 0;
+
+	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
+					       BRB_BLOCK_SIZE);
+	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
+						BRB_BLOCK_SIZE);
+	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
+						    BRB_TOTAL_RAM_BLOCKS_BB;
+
+	/* Find number of active ports */
 	for (port = 0; port < MAX_NUM_PORTS; port++)
 		if (req->num_active_tcs[port])
 			active_ports++;
+
 	active_port_blocks = (u32)(total_blocks / active_ports);
+
 	for (port = 0; port < req->max_ports_per_engine; port++) {
-		/* calculate per-port sizes */
-		u32 tc_guaranteed_blocks =
-		    (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE);
-		u32 port_blocks =
-		    req->num_active_tcs[port] ? active_port_blocks : 0;
-		u32 port_guaranteed_blocks =
-		    req->num_active_tcs[port] * tc_guaranteed_blocks;
-		u32 port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		u32 full_xoff_th =
-		    req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC;
-		u32 full_xon_th = full_xoff_th + min_pkt_size_blocks;
-		u32 pause_xoff_th = tc_headroom_blocks;
-		u32 pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+		u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
+		u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
+		u32 tc_guaranteed_blocks;
 		u8 tc;
-		/* init total size per port */
+
+		/* Calculate per-port sizes */
+		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
+							 BRB_BLOCK_SIZE);
+		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
+							  0;
+		port_guaranteed_blocks = req->num_active_tcs[port] *
+					 tc_guaranteed_blocks;
+		port_shared_blocks = port_blocks - port_guaranteed_blocks;
+		full_xoff_th = req->num_active_tcs[port] *
+			       BRB_MIN_BLOCKS_PER_TC;
+		full_xon_th = full_xoff_th + min_pkt_size_blocks;
+		pause_xoff_th = tc_headroom_blocks;
+		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+
+		/* Init total size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
 			 port_blocks);
-		/* init shared size per port */
+
+		/* Init shared size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
 			 port_shared_blocks);
+
 		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
-			/* clear init values for non-active TCs */
+			/* Clear init values for non-active TCs */
 			if (tc == req->num_active_tcs[port]) {
 				tc_guaranteed_blocks = 0;
 				full_xoff_th = 0;
@@ -1090,15 +1291,18 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 				pause_xoff_th = 0;
 				pause_xon_th = 0;
 			}
-			/* init guaranteed size per TC */
+
+			/* Init guaranteed size per TC */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
 				 tc_guaranteed_blocks);
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
 				 BRB_HYST_BLOCKS);
-/* init pause/full thresholds per physical TC - for loopback traffic */
 
+			/* Init pause/full thresholds per physical TC - for
+			 * loopback traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1111,7 +1315,10 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
-/* init pause/full thresholds per physical TC - for main traffic */
+
+			/* Init pause/full thresholds per physical TC - for
+			 * main traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1128,23 +1335,25 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/*In MF should be called once per engine to set EtherType of OuterTag*/
+/* In MF should be called once per engine to set EtherType of OuterTag */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update NIG register */
+
+	/* Update NIG register */
 	STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update PBF register */
+
+	/* Update PBF register */
 	STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
 }
 
-/*In MF should be called once per port to set EtherType of OuterTag*/
+/* In MF should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update DORQ register */
+	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
@@ -1154,11 +1363,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
 }
 
@@ -1166,23 +1377,26 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt, bool vxlan_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 			   PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT,
 			   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 				   NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT,
 				   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ register */
+
+	/* Update DORQ register */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN,
 		 vxlan_enable ? 1 : 0);
 }
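
A note on the pattern here and in the GRE/GENEVE paths below: each enable flag
is a read-modify-write of one bit in the encapsulation-type register. A
stand-alone sketch of the bit operation SET_TUNNEL_TYPE_ENABLE_BIT is used for,
assuming it performs a clear-then-set at the given shift (the macro itself is
defined elsewhere in the base driver, so this is an illustration, not driver
code):

#include <stdint.h>

/* Illustrative only: clear the bit at 'shift', then set it when
 * 'enable' is true, i.e. the pattern applied to
 * PRS_REG_ENCAPSULATION_TYPE_EN and NIG_REG_ENC_TYPE_ENABLE above.
 */
static uint32_t set_tunnel_enable_bit(uint32_t reg_val, uint32_t shift,
				      int enable)
{
	reg_val &= ~(1u << shift);
	reg_val |= (enable ? 1u : 0u) << shift;
	return reg_val;
}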
@@ -1192,7 +1406,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  bool eth_gre_enable, bool ip_gre_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1202,10 +1417,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1214,7 +1430,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ registers */
+
+	/* Update DORQ registers */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN,
 		 eth_gre_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN,
@@ -1224,11 +1441,13 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port);
 }
 
@@ -1237,7 +1456,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable, bool ip_geneve_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT,
@@ -1247,37 +1467,44 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		   ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
 		 eth_geneve_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE,
 		 ip_geneve_enable ? 1 : 0);
-	/* EDPM with geneve tunnel not supported in BB_B0 */
+
+	/* EDPM with geneve tunnel not supported in BB */
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev))
 		return;
-	/* update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN,
+
+	/* Update DORQ registers */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
 		 ip_geneve_enable ? 1 : 0);
 }
 
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
-#define PARSER_ETH_CONN_CM_HDR (0x0)
+#define PARSER_ETH_CONN_CM_HDR 0
 #define CAM_LINE_SIZE sizeof(u32)
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
-	/* set RFS event ID to be awakened i Tstorm By Prs */
-	u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+	u32 rfs_cm_hdr_event_id;
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
+	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
 	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
@@ -1298,39 +1525,48 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct gft_ram_line ramLine;
 	u32 *ramLinePointer = (u32 *)&ramLine;
 	int i;
+
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - ipv4 or ipv6");
+
 	if (!tcp && !udp)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - udp or tcp");
-	/* set RFS event ID to be awakened i Tstorm By Prs */
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
 	rfs_cm_hdr_event_id |=  T_ETH_PACKET_MATCH_RFS_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |=  PARSER_ETH_CONN_CM_HDR <<
 	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+
 	/* Configure Registers for RFS mode */
-/* enable gft search */
+
+	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0); /* do not load
 							     * context only cid
 							     * in PRS on match
 							     */
 	camLine.cam_line_mapped.camline = 0;
-	/* cam line is now valid!! */
+
+	/* CAM line is now valid!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_VALID, 1);
-	/* filters are per PF!! */
+
+	/* Filters are per PF!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1);
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
+
 	if (!(tcp && udp)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1);
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
 		if (tcp)
 			SET_FIELD(camLine.cam_line_mapped.camline,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
@@ -1340,6 +1576,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
 				  GFT_PROFILE_UDP_PROTOCOL);
 	}
+
 	if (!(ipv4 && ipv6)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
 			  GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1);
@@ -1352,44 +1589,53 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_IP_VERSION,
 				  GFT_PROFILE_IPV6);
 	}
-	/* write characteristics to cam */
+
+	/* Write characteristics to cam */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
 	    camLine.cam_line_mapped.camline);
 	camLine.cam_line_mapped.camline =
 	    ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
-	/* write line to RAM - compare to filter 4 tuple */
-	ramLine.low32bits = 0;
-	ramLine.high32bits = 0;
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1);
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1);
-	/* each iteration write to reg */
+
+	/* Write line to RAM - compare to filter 4 tuple */
+	ramLine.lo = 0;
+	ramLine.hi = 0;
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_DST_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_SRC_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_SRC_PORT, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_DST_PORT, 1);
+
+	/* Each iteration write to reg */
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * pf_id +
 			 i * REG_SIZE, *(ramLinePointer + i));
-	/* set default profile so that no filter match will happen */
-	ramLine.low32bits = 0xffff;
-	ramLine.high32bits = 0xffff;
+
+	/* Set default profile so that no filter match will happen */
+	ramLine.lo = 0xffff;
+	ramLine.hi = 0xffff;
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH +
 			 i * REG_SIZE, *(ramLinePointer + i));
 }
 
-/* Configure VF zone size mode*/
+/* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
 				    bool runtime_init)
 {
 	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
 	u32 msdm_vf_offset_mask;
+
 	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
 		msdm_vf_size_log += 1;
 	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
 		msdm_vf_size_log += 2;
+
 	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
+
 	if (runtime_init) {
 		STORE_RT_REG(p_hwfn,
 			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
@@ -1405,12 +1651,13 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* get mstorm statistics for offset by VF zone size mode*/
+/* Get mstorm statistics for offset by VF zone size mode */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id,
 				       u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
+
 	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
 	    (stat_cnt_id > MAX_NUM_PFS)) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
@@ -1420,16 +1667,18 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 			    (stat_cnt_id - MAX_NUM_PFS);
 	}
+
 	return offset;
 }
 
-/* get mstorm VF producer offset by VF zone size mode*/
+/* Get mstorm VF producer offset by VF zone size mode */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 					 u8 vf_id,
 					 u8 vf_queue_id,
 					 u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
+
 	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
@@ -1438,5 +1687,166 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 				  vf_id;
 	}
+
 	return offset;
 }
+
+/* Calculate CRC8 of first 4 bytes in buf */
+static u8 ecore_calc_crc8(const u8 *buf)
+{
+	u32 i, j, crc = 0xff << 8;
+
+	/* CRC-8 polynomial */
+	#define POLY 0x1070
+
+	for (j = 0; j < 4; j++, buf++) {
+		crc ^= (*buf << 8);
+		for (i = 0; i < 8; i++) {
+			if (crc & 0x8000)
+				crc ^= (POLY << 3);
+
+			crc <<= 1;
+		}
+	}
+
+	return (u8)(crc >> 8);
+}
+
+/* Calculate and return CDU validation byte per connection type / region /
+ * cid
+ */
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region,
+					 u32 cid)
+{
+	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
+	u8 crc, validation_byte = 0;
+	u32 validation_string = 0;
+	const u8 *data_to_crc_rev;
+	u8 data_to_crc[4];
+
+	data_to_crc_rev = (const u8 *)&validation_string;
+
+	/*
+	 * The CRC is calculated on the String-to-compress:
+	 * [31:8]  = {CID[31:20],CID[11:0]}
+	 * [7:4]   = Region
+	 * [3:0]   = Type
+	 */
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+		validation_string |= ((region & 0xF) << 4);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+		validation_string |= (conn_type & 0xF);
+
+	/* Convert to big-endian (ntoh()) */
+	data_to_crc[0] = data_to_crc_rev[3];
+	data_to_crc[1] = data_to_crc_rev[2];
+	data_to_crc[2] = data_to_crc_rev[1];
+	data_to_crc[3] = data_to_crc_rev[0];
+
+	crc = ecore_calc_crc8(data_to_crc);
+
+	validation_byte |= ((validation_cfg >>
+			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
+
+	if ((validation_cfg >>
+	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+	else
+		validation_byte |= crc & 0x7F;
+
+	return validation_byte;
+}
+
+/* Calculate and set validation bytes for session context */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+}
+
+/* Calculate and set validation bytes for task context */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
+{
+	u8 *p_ctx, *region1_val_ptr;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+}
+
+/* Memset session context to 0 while preserving validation bytes */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+	u8 x_val, t_val, u_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	x_val = *x_val_ptr;
+	t_val = *t_val_ptr;
+	u_val = *u_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = x_val;
+	*t_val_ptr = t_val;
+	*u_val_ptr = u_val;
+}
+
+/* Memset task context to 0 while preserving validation bytes */
+void ecore_memset_task_ctx(void *p_ctx_mem, const u32 ctx_size,
+			   const u8 ctx_type)
+{
+	u8 *p_ctx, *region1_val_ptr;
+	u8 region1_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	region1_val = *region1_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = region1_val;
+}
+
+/* Enable and configure context validation */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	u32 ctx_validation;
+
+	/* Enable validation for connection region 3 - bits [31:24] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
+
+	/* Enable validation for connection region 5 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
+
+	/* Enable validation for connection region 1 - bits [15: 8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
+}
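
To make the string-to-compress layout above concrete, here is a self-contained
sketch mirroring ecore_calc_crc8() and the byte assembly in
ecore_calc_cdu_validation_byte(), assuming a configuration in which the CID,
region and type fields are all used and the type-aware validation format is
selected (the real behavior is governed by CDU_VALIDATION_DEFAULT_CFG; the
cid/region/type values below are arbitrary):

#include <stdio.h>
#include <stdint.h>

#define POLY 0x1070	/* same CRC-8 polynomial as above */

static uint8_t calc_crc8(const uint8_t *buf)
{
	uint32_t i, j, crc = 0xff << 8;

	for (j = 0; j < 4; j++, buf++) {
		crc ^= (uint32_t)*buf << 8;
		for (i = 0; i < 8; i++) {
			if (crc & 0x8000)
				crc ^= (POLY << 3);
			crc <<= 1;
		}
	}

	return (uint8_t)(crc >> 8);
}

int main(void)
{
	uint32_t cid = 0x12345, str = 0;
	uint8_t region = 3, conn_type = 1, be[4], crc;

	/* [31:8] = {CID[31:20], CID[11:0]}, [7:4] = region, [3:0] = type */
	str |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
	str |= (uint32_t)(region & 0xF) << 4;
	str |= conn_type & 0xF;

	/* CRC is taken over the big-endian byte order of the string */
	be[0] = str >> 24;
	be[1] = str >> 16;
	be[2] = str >> 8;
	be[3] = str;

	crc = calc_crc8(be);

	/* bit 7 = active, [6:3] = type, [2:0] = CRC (type-aware format) */
	printf("validation byte = 0x%02x\n",
	       0x80 | ((conn_type & 0xF) << 3) | (crc & 0x7));
	return 0;
}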
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 9df0e7d..2d1ab7c 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -8,20 +8,22 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* forward declarations */
+/* Forward declarations */
+
 struct init_qm_pq_params;
+
 /**
- * @brief ecore_qm_pf_mem_size - prepare QM ILT sizes
+ * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes
  *
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
- * @param pf_id			- physical function ID
- * @param num_pf_cids	- number of connections used by this PF
- * @param num_vf_cids	- number of connections used by VFs of this PF
- * @param num_tids		- number of tasks used by this PF
- * @param num_pf_pqs	- number of PQs used by this PF
- * @param num_vf_pqs	- number of PQs used by VFs of this PF
+ * @param pf_id -	physical function ID
+ * @param num_pf_cids - number of connections used by this PF
+ * @param num_vf_cids -	number of connections used by VFs of this PF
+ * @param num_tids -	number of tasks used by this PF
+ * @param num_pf_pqs -	number of PQs used by this PF
+ * @param num_vf_pqs -	number of PQs used by VFs of this PF
  *
  * @return The required host memory size in 4KB units.
  */
@@ -31,6 +33,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
 						 u16 num_vf_pqs);
+
 /**
  * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
  *                                  phase
@@ -38,10 +41,10 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
  * @param p_hwfn
  * @param max_ports_per_engine	- max number of ports per engine in HW
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param pf_rl_en				- enable per-PF rate limiters
- * @param pf_wfq_en				- enable per-PF WFQ
- * @param vport_rl_en			- enable per-VPORT rate limiters
- * @param vport_wfq_en			- enable per-VPORT WFQ
+ * @param pf_rl_en		- enable per-PF rate limiters
+ * @param pf_wfq_en		- enable per-PF WFQ
+ * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param vport_wfq_en		- enable per-VPORT WFQ
  * @param port_params - array of size MAX_NUM_PORTS with params for each port
  *
  * @return 0 on success, -1 on error.
@@ -54,22 +57,24 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			 bool vport_rl_en,
 			 bool vport_wfq_en,
 			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
  *
  * @param p_hwfn
  * @param p_ptt			- ptt window used for writing the registers
- * @param port_id				- port ID
- * @param pf_id					- PF ID
+ * @param port_id		- port ID
+ * @param pf_id			- PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param is_first_pf			- 1 = first PF in engine, 0 = othwerwise
- * @param num_pf_cids			- number of connections used by this PF
+ * @param is_first_pf		- 1 = first PF in engine, 0 = otherwise
+ * @param num_pf_cids		- number of connections used by this PF
  * @param num_vf_cids		- number of connections used by VFs of this PF
- * @param num_tids			- number of tasks used by this PF
- * @param start_pq			- first Tx PQ ID associated with this PF
- * @param num_pf_pqs	- number of Tx PQs associated with this PF (non-VF)
- * @param num_vf_pqs			- number of Tx PQs associated with a VF
- * @param start_vport			- first VPORT ID associated with this PF
+ * @param num_tids		- number of tasks used by this PF
+ * @param start_pq		- first Tx PQ ID associated with this PF
+ * @param num_pf_pqs		- number of Tx PQs associated with this PF
+ *                                (non-VF)
+ * @param num_vf_pqs		- number of Tx PQs associated with a VF
+ * @param start_vport		- first VPORT ID associated with this PF
  * @param num_vports - number of VPORTs associated with this PF
  * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must
  *		   be 0. otherwise, the weight must be non-zero.
@@ -100,6 +105,7 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 				u32 pf_rl,
 				struct init_qm_pq_params *pq_params,
 				struct init_qm_vport_params *vport_params);
+
 /**
  * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
  *
@@ -114,11 +120,12 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u8 pf_id,
 					  u16 pf_wfq);
+
 /**
- * @brief ecore_init_pf_rl  Initializes the rate limit of the specified PF
+ * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF
  *
  * @param p_hwfn
- * @param p_ptt	- ptt window used for writing the registers
+ * @param p_ptt - ptt window used for writing the registers
  * @param pf_id	- PF ID
  * @param pf_rl	- rate limit in Mb/sec units
  *
@@ -128,6 +135,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 u8 pf_id,
 					 u32 pf_rl);
+
 /**
  * @brief ecore_init_vport_wfq  Initializes the WFQ weight of specified VPORT
  *
@@ -144,10 +152,12 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
 						 u16 vport_wfq);
+
 /**
- * @brief ecore_init_vport_rl  Initializes the rate limit of the specified VPORT
+ * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
+ * VPORT.
  *
- * @param p_hwfn
+ * @param p_hwfn	- HW device data
  * @param p_ptt		- ptt window used for writing the registers
  * @param vport_id	- VPORT ID
  * @param vport_rl	- rate limit in Mb/sec units
@@ -158,6 +168,7 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u8 vport_id,
 						u32 vport_rl);
+
 /**
  * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
  *
@@ -178,6 +189,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 							u16 start_pq,
 							u16 num_pqs);
 #ifndef UNUSED_HSI_FUNC
+
 /**
  * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
  *
@@ -193,6 +205,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req,
 						bool is_lb);
+
 /**
  * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
  *
@@ -205,6 +218,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
 				  struct init_nig_lb_rl_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
  *
@@ -216,6 +230,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt,
 					   struct init_nig_pri_tc_map_req *req);
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
@@ -229,6 +244,7 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
@@ -242,6 +258,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_brb_ram_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
@@ -250,22 +267,24 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
  *                                             if engine
  *  is in BD mode.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
+
 /**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
  *                                           input ethType should Be called
  *                                           once per port.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
  *                                    port
@@ -276,15 +295,17 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       u16 dest_port);
+
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param vxlan_enable - vxlan enable flag.
+ * @param p_ptt		- ptt window used for writing the registers.
+ * @param vxlan_enable	- vxlan enable flag.
  */
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool vxlan_enable);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -296,6 +317,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  bool eth_gre_enable,
 			  bool ip_gre_enable);
+
 /**
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
@@ -306,6 +328,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
 				u16 dest_port);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -318,6 +341,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable,
 			     bool ip_geneve_enable);
 #ifndef UNUSED_HSI_FUNC
+
 /**
 * @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
 *
@@ -325,16 +349,16 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
+
 /**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
-*
-* @param p_ptt             - ptt window used for writing the registers.
-* @param pf_id - pf on which to enable RFS.
-* @param tcp -  set profile tcp packets.
-* @param udp -  set profile udp  packet.
-* @param ipv4 - set profile ipv4 packet.
-* @param ipv6 - set profile ipv6 packet.
+* @param p_ptt	- ptt window used for writing the registers.
+* @param pf_id	- pf on which to enable RFS.
+* @param tcp	- set profile tcp packets.
+* @param udp	- set profile udp packets.
+* @param ipv4	- set profile ipv4 packets.
+* @param ipv6	- set profile ipv6 packets.
 */
 void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt,
@@ -344,6 +368,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	bool ipv4,
 	bool ipv6);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
@@ -357,18 +382,20 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
+
 /**
-* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF
-*                                             zone size mode.
+ * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
+ * VF zone size mode.
 *
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id, u16 vf_zone_size_mode);
+
 /**
-* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
-*                                               size mode.
+ * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
+ * size mode.
 *
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
@@ -376,4 +403,58 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
 					 vf_queue_id, u16 vf_zone_size_mode);
+/**
+ * @brief ecore_enable_context_validation - Enable and configure context
+ *                                          validation.
+ *
+ * @param p_ptt - ptt window used for writing the registers.
+ */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt);
+/**
+ * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
+ *                                            session context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param cid                 -  context cid.
+ */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid);
+/**
+ * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
+ *                                         context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param tid                 -  context tid.
+ */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid);
+/**
+ * @brief ecore_memset_session_ctx - Memset session context to 0 while
+ *                                   preserving validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size,
+			      u8 ctx_type);
+/**
+ * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
+ *                                validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size,
+			   u8 ctx_type);
 #endif
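
The two memset helpers declared above exist because the CDU validation bytes
must survive a context wipe. A stand-alone illustration of the
save/wipe/restore pattern they implement (the offset here is hypothetical; the
driver reads the real offsets from con_region_offsets/task_region_offsets):

#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: wipe a context buffer while keeping one
 * validation byte intact, as ecore_memset_task_ctx() does for
 * region 1 (the session variant preserves three bytes: X, T, U).
 */
static void memset_ctx_preserve(uint8_t *ctx, size_t ctx_size,
				size_t val_offset)
{
	uint8_t val = ctx[val_offset];	/* save validation byte */

	memset(ctx, 0, ctx_size);	/* zero the whole context */
	ctx[val_offset] = val;		/* restore validation byte */
}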
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index aad9012..b4bfe89 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -185,5 +185,13 @@
 #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
 	((rdma_stat_counter_id) * IRO[46].m1))
 #define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+/* Xstorm iWARP rxmit stats */
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \
+	((pf_id) * IRO[47].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size)
+/* Tstorm RoCE Event Statistics */
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \
+	((roce_pf_id) * IRO[48].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size)
 
 #endif /* __IRO_H__ */
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 4ff7e95..6764bfa 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -9,13 +9,13 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[47] = {
+static const struct iro iro_arr[49] = {
 /* YSTORM_FLOW_CONTROL_MODE_OFFSET */
 	{      0x0,      0x0,      0x0,      0x0,      0x8},
 /* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb0,     0x78,      0x0,      0x0,     0x78},
+	{   0x4cb0,     0x80,      0x0,      0x0,     0x80},
 /* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6318,     0x20,      0x0,      0x0,     0x20},
+	{   0x6518,     0x20,      0x0,      0x0,     0x20},
 /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
 	{    0xb00,      0x8,      0x0,      0x0,      0x4},
 /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
@@ -41,7 +41,7 @@ static const struct iro iro_arr[47] = {
 /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
 	{    0xa28,      0x8,      0x0,      0x0,      0x8},
 /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x60f8,     0x10,      0x0,      0x0,     0x10},
+	{   0x61f8,     0x10,      0x0,      0x0,     0x10},
 /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
 	{   0xb820,     0x30,      0x0,      0x0,     0x30},
 /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
@@ -53,7 +53,7 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
 	{   0x53a0,     0x80,      0x4,      0x0,      0x4},
 /* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc8f0,      0x0,      0x0,      0x0,      0x4},
+	{   0xc7c8,      0x0,      0x0,      0x0,      0x4},
 /* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
 	{   0x4ba0,     0x80,      0x0,      0x0,     0x20},
 /* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
@@ -63,13 +63,13 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
 	{   0x2b48,     0x80,      0x0,      0x0,     0x38},
 /* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xf188,     0x78,      0x0,      0x0,     0x78},
+	{   0xf1b0,     0x78,      0x0,      0x0,     0x78},
 /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
 	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
 /* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xacf0,      0x0,      0x0,      0x0,     0xf0},
+	{   0xaef8,      0x0,      0x0,      0x0,     0xf0},
 /* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xade0,      0x8,      0x0,      0x0,      0x8},
+	{   0xafe8,      0x8,      0x0,      0x0,      0x8},
 /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
 	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
 /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
@@ -85,9 +85,9 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
 	{    0xb78,     0x10,      0x8,      0x0,      0x2},
 /* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd888,     0x38,      0x0,      0x0,     0x24},
+	{   0xd9a8,     0x38,      0x0,      0x0,     0x24},
 /* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12c38,     0x10,      0x0,      0x0,      0x8},
+	{  0x12988,     0x10,      0x0,      0x0,      0x8},
 /* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
 	{  0x11aa0,     0x38,      0x0,      0x0,     0x18},
 /* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
@@ -97,13 +97,17 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
 	{  0x101f8,     0x10,      0x0,      0x0,     0x10},
 /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xdd08,     0x48,      0x0,      0x0,     0x38},
+	{   0xde28,     0x48,      0x0,      0x0,     0x38},
 /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
 	{  0x10660,     0x20,      0x0,      0x0,     0x20},
 /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
 	{   0x2b80,     0x80,      0x0,      0x0,     0x10},
 /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5000,     0x10,      0x0,      0x0,     0x10},
+	{   0x5020,     0x10,      0x0,      0x0,     0x10},
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
+	{   0xc9b0,     0x30,      0x0,      0x0,     0x10},
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
+	{   0xeec0,     0x10,      0x0,      0x0,     0x10},
 };
 
 #endif /* __IRO_VALUES_H__ */
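
As a quick sanity check of the two new entries: the offset macros added in
ecore_iro.h expand against IRO[47] and IRO[48] above, so with the values from
iro_arr (iWARP: base 0xc9b0, stride 0x30; RoCE: base 0xeec0, stride 0x10) and
an arbitrary pf_id of 2:

#include <stdio.h>
#include <stdint.h>

/* Worked expansion of the new IRO entries (values from iro_arr above) */
int main(void)
{
	uint32_t iwarp_base = 0xc9b0, iwarp_m1 = 0x30;	/* IRO[47] */
	uint32_t roce_base = 0xeec0, roce_m1 = 0x10;	/* IRO[48] */
	uint32_t pf_id = 2;

	/* XSTORM_IWARP_RXMIT_STATS_OFFSET(2) = 0xc9b0 + 2 * 0x30 = 0xca10 */
	printf("iwarp rxmit stats @ 0x%x\n", iwarp_base + pf_id * iwarp_m1);
	/* TSTORM_ROCE_EVENTS_STAT_OFFSET(2) = 0xeec0 + 2 * 0x10 = 0xeee0 */
	printf("roce events stat  @ 0x%x\n", roce_base + pf_id * roce_m1);
	return 0;
}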
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 01a29e3..846dc6d 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -115,339 +115,338 @@
 #define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            28716
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
 #define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            29132
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29644
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29645
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29646
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29647
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29648
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29649
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29650
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29651
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29652
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29653
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29654
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29655
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29656
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29657
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29658
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29659
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29660
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29661
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29662
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29663
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29664
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29665
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29666
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29667
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29668
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29669
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29670
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29671
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29672
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29673
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29674
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29675
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29676
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29677
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29678
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29679
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29680
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29681
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29682
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29683
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29684
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29685
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29686
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29687
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29688
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29689
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29690
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29691
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29692
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29693
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29694
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29695
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29696
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29697
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29698
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29699
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29700
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29701
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29702
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29703
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29704
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29705
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29706
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29707
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29708
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29709
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29710
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29711
-#define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29839
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29859
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29879
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29880
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29881
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29882
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29883
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29884
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29885
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29886
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29887
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29888
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29889
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29890
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29891
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29892
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29893
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29894
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29895
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29896
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29897
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29898
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29899
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29900
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29901
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29902
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29903
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29904
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29905
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29906
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29907
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29908
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29909
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29910
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29911
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29912
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29913
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29914
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29915
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29916
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29917
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29918
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29919
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29920
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29921
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29922
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29923
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29924
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29925
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29926
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29927
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29928
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29929
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29930
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29931
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29932
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29933
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29934
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29935
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29936
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29937
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29938
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29939
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29940
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29941
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29942
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29943
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29944
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29945
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29946
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29947
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29948
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29949
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29950
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29951
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29952
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29953
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29954
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29955
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29956
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29957
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29958
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29959
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29960
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29961
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29962
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29963
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29964
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29965
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29966
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29967
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29968
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29969
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29970
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29971
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29972
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29973
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29974
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29975
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29976
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29977
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29978
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29979
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29980
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29981
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29982
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29983
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29984
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29985
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29986
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29987
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29988
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29989
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29990
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29991
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29992
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29993
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29994
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29995
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29996
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29997
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29998
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29740
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29741
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29742
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29743
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29744
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29745
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29746
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29747
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29748
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29749
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29750
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29751
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29752
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29753
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29754
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29755
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29756
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29757
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29758
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29759
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29760
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29761
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29762
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29763
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29764
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29765
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29766
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29767
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29768
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29769
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29770
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29771
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29772
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29773
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29774
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29775
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29776
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29777
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29778
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29779
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29780
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29781
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29782
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29783
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29784
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29785
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29786
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29787
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29788
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29789
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29790
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29791
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29792
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29793
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29794
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29795
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29796
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29797
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29798
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29799
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29800
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29801
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29802
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29803
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29804
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29805
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29806
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29807
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29935
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29936
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29937
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29938
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29939
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29940
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29941
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29942
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29943
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29944
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29945
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29946
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29947
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29948
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29949
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29950
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29951
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29952
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29953
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29954
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29955
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29956
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29957
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29958
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29959
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29960
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29961
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29962
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29963
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29964
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29965
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29966
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29967
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29968
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29969
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29970
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29971
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29972
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29973
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29974
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29975
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29976
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29977
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29978
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29979
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29980
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29981
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29982
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29983
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29984
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29985
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29986
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29987
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29988
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29989
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29990
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29991
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29992
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29993
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29994
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29995
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29996
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29997
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29998
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29999
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 30000
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 30001
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 30002
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 30003
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 30004
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 30005
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 30006
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 30007
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 30008
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 30009
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 30010
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 30011
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 30012
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 30013
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 30014
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 30015
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 30016
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 30017
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 30018
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 30019
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 30020
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 30021
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 30022
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 30023
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 30024
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 30025
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               30026
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               30027
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               30028
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               30029
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               30030
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               30031
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               30032
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               30033
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               30034
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               30035
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              30036
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              30037
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              30038
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              30039
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              30040
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              30041
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             30042
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             30043
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        30044
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        30045
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          30046
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          30047
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          30048
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          30049
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          30050
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          30051
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          30052
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          30053
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               30054
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30254
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30310
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30510
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30566
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30766
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30767
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30768
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30769
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30822
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30823
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30824
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30825
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30785
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30841
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    30801
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30857
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 30817
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30818
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30819
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30873
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30874
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30875
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30835
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30891
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   30851
-#define QM_REG_WFQPFCRD_RT_SIZE                                     160
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                31011
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                31012
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31013
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30907
+#define QM_REG_WFQPFCRD_RT_SIZE                                     256
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                31163
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                31164
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31165
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    31525
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31677
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32037
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32189
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   32549
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32701
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   33061
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   33213
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33573
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33733
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     33734
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     33735
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     33736
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     33737
-#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      33738
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  33739
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           33740
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33725
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 34045
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             34081
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34117
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34118
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34119
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34120
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34121
+#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      34122
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34123
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34124
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      33744
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      34128
 #define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE                        4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        33748
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34132
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           33752
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     33753
+#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           34136
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34137
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        33785
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34169
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      33801
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34185
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             33817
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34201
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   33833
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34217
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              33849
-#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    33850
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           33851
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           33852
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           33853
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       33854
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       33855
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       33856
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       33857
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    33858
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    33859
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    33860
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    33861
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        33862
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     33863
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33864
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      33865
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    33866
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       33867
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                33868
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    33869
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       33870
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                33871
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    33872
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       33873
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                33874
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    33875
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       33876
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                33877
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    33878
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       33879
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                33880
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    33881
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       33882
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                33883
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    33884
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       33885
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                33886
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    33887
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       33888
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                33889
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    33890
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       33891
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                33892
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    33893
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       33894
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                33895
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   33896
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      33897
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               33898
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   33899
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      33900
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               33901
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   33902
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      33903
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               33904
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   33905
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      33906
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               33907
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   33908
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      33909
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               33910
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   33911
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      33912
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               33913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   33914
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      33915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               33916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   33917
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      33918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               33919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   33920
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      33921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               33922
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   33923
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      33924
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               33925
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                33926
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34233
+#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    34234
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34235
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34236
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34237
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34238
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34239
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34240
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34241
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34242
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34243
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34244
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34245
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34246
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34247
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34248
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34249
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34250
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34251
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34252
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34253
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34254
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34255
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34256
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34257
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34258
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34259
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34260
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34261
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34262
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34263
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34264
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34265
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34266
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34267
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34268
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34269
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34270
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34271
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34272
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34273
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34274
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34275
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34276
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34277
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34278
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34279
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34280
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34281
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34282
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34283
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34284
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34285
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34286
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34287
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34288
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34289
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34290
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34291
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34292
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34293
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34294
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34295
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34296
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34297
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34298
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34299
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34300
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34301
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34302
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34303
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34304
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34305
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34306
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34307
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34308
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34309
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34310
 
-#define RUNTIME_ARRAY_SIZE 33927
+#define RUNTIME_ARRAY_SIZE 34311
 
 #endif /* __RT_DEFS_H__ */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index d2ebce8..6dc969b 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -182,7 +182,7 @@ struct eth_tx_1st_bd_flags {
 struct eth_tx_data_1st_bd {
 /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */
 	__le16 vlan;
-/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least
+/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least
  * 3 in LSO (or Tunnel with IPv6+ext) packet.
  */
 	u8 nbds;
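
The comment change above relaxes the minimum BD count for non-LSO
packets from 2 to 1. As a minimal sketch of the rule (not part of the
patch; qede_calc_nbds(), num_frags and is_lso are illustrative names):

	#include <stdint.h>

	/* At least 1 BD for a non-LSO packet; at least 3 for LSO
	 * (or tunnel with IPv6+ext), per the updated comment. */
	static uint8_t qede_calc_nbds(uint8_t num_frags, int is_lso)
	{
		uint8_t nbds = num_frags ? num_frags : 1;

		if (is_lso && nbds < 3)
			nbds = 3;

		return nbds;
	}
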
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3cc7fd4..f9920f3 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1147,3 +1147,56 @@
 
 #define IGU_REG_PRODUCER_MEMORY 0x182000UL
 #define IGU_REG_CONSUMER_MEM 0x183000UL
+
+#define CDU_REG_CCFC_CTX_VALID0 0x580400UL
+#define CDU_REG_CCFC_CTX_VALID1 0x580404UL
+#define CDU_REG_TCFC_CTX_VALID0 0x580408UL
+
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
+#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
+#define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
+#define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
+#define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
+#define NWM_REG_MAC0_K2_E5 0x800400UL
+#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
+#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
+#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
+#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
+#define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
+#define XMAC_REG_MODE_BB 0x210008UL
+#define XMAC_REG_RX_MAX_SIZE_BB  0x210040UL
+#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL
+#define XMAC_REG_CTRL_BB 0x210000UL
+#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0)
+#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1)
+#define XMAC_REG_RX_CTRL_BB 0x210030UL
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12)
+
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
+#define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
+#define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
+#define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
+#define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
+#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+
+#define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
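
Note that the new XMAC_REG_CTRL_*_BB defines are bit flags rather than
register addresses. As a hedged illustration of how they combine (the
reg_rd()/reg_wr() helpers and the backing array are stand-ins, not
driver APIs), enabling the BB XMAC is a read-modify-write of
XMAC_REG_CTRL_BB:

	#include <stdint.h>

	#define XMAC_REG_CTRL_BB	0x210000UL
	#define XMAC_REG_CTRL_TX_EN_BB	(0x1 << 0)
	#define XMAC_REG_CTRL_RX_EN_BB	(0x1 << 1)

	/* Stand-in register file so the sketch is self-contained. */
	static uint32_t fake_regs[0x400000 / 4];
	static uint32_t reg_rd(unsigned long a) { return fake_regs[a / 4]; }
	static void reg_wr(unsigned long a, uint32_t v) { fake_regs[a / 4] = v; }

	static void xmac_enable_bb(void)
	{
		uint32_t ctrl = reg_rd(XMAC_REG_CTRL_BB);

		ctrl |= XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB;
		reg_wr(XMAC_REG_CTRL_BB, ctrl);
	}
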
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a604a5b..332b1f8 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -21,7 +21,7 @@ static uint8_t npar_tx_switching = 1;
 char fw_file[PATH_MAX];
 
 const char *QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.14.6.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.18.9.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
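
The string above is only a built-in default. Judging from the fw_file
global in the surrounding context, the driver presumably prefers an
explicitly configured path and falls back to this default otherwise; a
hypothetical sketch of that selection (qed_select_fw() is illustrative,
not the driver's actual logic):

	static const char *qede_default_fw =
		"/lib/firmware/qed/qed_init_values-8.18.9.0.bin";

	static const char *qed_select_fw(const char *configured)
	{
		/* Prefer an explicit path; else the 8.18.9.0 default. */
		return (configured && configured[0] != '\0') ?
			configured : qede_default_fw;
	}
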
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 07/62] net/qede/base: decrease maximum HW func per device
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (7 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 06/62] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 08/62] net/qede/base: move mask constants defining NIC type Rasesh Mody
                             ` (55 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Decrease MAX_HWFNS_PER_DEVICE from 4 to 2.
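
As context, a sketch of how the bound is typically consumed (assuming
the constant sizes the per-device HW-function storage; the struct names
here are placeholders, not the real ecore types):

	#define MAX_HWFNS_PER_DEVICE	2	/* value after this patch */

	struct ecore_hwfn_sketch { int dummy; };	/* placeholder */

	/* The constant bounds per-device HW-function storage, so the
	 * change halves this array from 4 entries to 2. */
	struct ecore_dev_sketch {
		int num_hwfns;	/* functions actually in use */
		struct ecore_hwfn_sketch hwfns[MAX_HWFNS_PER_DEVICE];
	};
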

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b2f4910..d14f99c 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,7 +28,7 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
-#define MAX_HWFNS_PER_DEVICE	(4)
+#define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 08/62] net/qede/base: move mask constants defining NIC type
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (8 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 07/62] net/qede/base: decrease maximum HW func per device Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 09/62] net/qede/base: remove attribute from update current config Rasesh Mody
                             ` (54 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Move the mask constants defining the NIC type from ecore_dev.c to ecore.h.
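
For reference, these masks presumably drive the family check in
ecore_get_dev_info(), the function the second hunk removes them from; a
standalone sketch of that classification (the enum and function names
are illustrative):

	#include <stdint.h>

	#define ECORE_DEV_ID_MASK	0xff00
	#define ECORE_DEV_ID_MASK_BB	0x1600
	#define ECORE_DEV_ID_MASK_AH	0x8000

	enum nic_family { NIC_BB, NIC_AH, NIC_UNKNOWN };

	/* Classify the NIC family from the upper byte of the device ID. */
	static enum nic_family classify_device(uint16_t device_id)
	{
		switch (device_id & ECORE_DEV_ID_MASK) {
		case ECORE_DEV_ID_MASK_BB:
			return NIC_BB;
		case ECORE_DEV_ID_MASK_AH:
			return NIC_AH;
		default:
			return NIC_UNKNOWN;
		}
	}
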

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    4 ++++
 drivers/net/qede/base/ecore_dev.c |    4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d14f99c..a6cf52e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -625,6 +625,10 @@ struct ecore_dev {
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
 
+#define ECORE_DEV_ID_MASK	0xff00
+#define ECORE_DEV_ID_MASK_BB	0x1600
+#define ECORE_DEV_ID_MASK_AH	0x8000
+
 	u16 vendor_id;
 	u16 device_id;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f82f5e6..ee50090 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2888,10 +2888,6 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
 }
 
-#define ECORE_DEV_ID_MASK	0xff00
-#define ECORE_DEV_ID_MASK_BB	0x1600
-#define ECORE_DEV_ID_MASK_AH	0x8000
-
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 09/62] net/qede/base: remove attribute from update current config
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (9 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 08/62] net/qede/base: move mask constants defining NIC type Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 10/62] net/qede/base: add nvram options Rasesh Mody
                             ` (53 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Remove the attribute field from the update_current_config() API; the
Management FW needs to know only the last entity that configured the device.
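
With the argument gone, a caller passes only the client type. A hedged
sketch of a post-change call site (notify_os_example() is illustrative;
the other names are taken from the diff below, and the types come from
the ecore headers):

	static void notify_os_example(struct ecore_hwfn *p_hwfn,
				      struct ecore_ptt *p_ptt)
	{
		enum _ecore_status_t rc;

		rc = ecore_mcp_ov_update_current_config(p_hwfn, p_ptt,
							ECORE_OV_CLIENT_DRV);
		if (rc != ECORE_SUCCESS)
			DP_NOTICE(p_hwfn, true,
				  "Failed to update current config\n");
	}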

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    5 ++---
 drivers/net/qede/base/ecore_mcp_api.h |    8 --------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index e236f39..245d478 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1709,14 +1709,13 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client)
 {
 	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
 
-	switch (config) {
+	switch (client) {
 	case ECORE_OV_CLIENT_DRV:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
 		break;
@@ -1727,7 +1726,7 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
+		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 614cf67..72a58e4 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -173,12 +173,6 @@ union ecore_mcp_protocol_stats {
 };
 #endif
 
-enum ecore_ov_config_method {
-	ECORE_OV_CONFIG_MTU,
-	ECORE_OV_CONFIG_MAC,
-	ECORE_OV_CONFIG_WOL
-};
-
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
 	ECORE_OV_CLIENT_USER,
@@ -453,7 +447,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param config - Configuation that has been updated
  *  @param client - ecore client type
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
@@ -461,7 +454,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client);
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 10/62] net/qede/base: add nvram options
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (10 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 09/62] net/qede/base: remove attribute from update current config Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 11/62] net/qede/base: add comment Rasesh Mody
                             ` (52 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add several NVRAM options, such as MCOT, FEC selection, temperature
threshold, Reset On LAN, etc.
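
Every option in the file follows the same MASK/OFFSET convention, so
one hedged accessor shows how any of them is read (nvm_cfg_get() is
illustrative, not a driver API; the example pair is from the diff
below):

	#include <stdint.h>

	#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK		0x80000000
	#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET	31

	/* Extract one option field from its containing config dword. */
	static uint32_t nvm_cfg_get(uint32_t dword, uint32_t mask,
				    uint32_t offset)
	{
		return (dword & mask) >> offset;
	}

	/* e.g. nvm_cfg_get(dw, NVM_CFG1_GLOB_RESET_ON_LAN_MASK,
	 *		    NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET)
	 * yields 0x0 (DISABLED) or 0x1 (ENABLED). */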

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |  465 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 461 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 68abc2d..4202337 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,13 +13,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     9/6/2016
+ * Created:     12/15/2016
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
+#define NVM_CFG_version 0x81805
+
+#define NVM_CFG_new_option_seq 15
+
+#define NVM_CFG_removed_option_seq 0
+
+#define NVM_CFG_updated_value_seq 1
+
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
 		#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
@@ -242,6 +250,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+	/*  ROL enable */
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1
 	u32 f_lane_cfg1; /* 0x38 */
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
@@ -470,6 +483,15 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
 		#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
 		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+	/*  Select package id method */
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1
 	u32 manufacture_time; /* 0x70 */
 		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
 		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
@@ -480,6 +502,11 @@ struct nvm_cfg1_glob {
 	/*  Max MSIX for Ethernet in default mode */
 		#define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000
 		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18
+	/*  PF Mapping */
+		#define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000
+		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
+		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
+		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -489,6 +516,47 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
 		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
 		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+	/*  Max. continuous operating temperature */
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16
+	/*  GPIO which triggers run-time port swap according to the map
+	 *  specified in option 205
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20
 	u32 generic_cont1; /* 0x78 */
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
@@ -508,6 +576,17 @@ struct nvm_cfg1_glob {
 	/*  PCIe Preset value - applies only if option 194 is enabled */
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19
+	/*  Port mapping to be used when the run-time GPIO for port-swap is
+	 *  defined and set.
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -515,6 +594,44 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
+	/*  If set to other than NA, 0 - Normal operation, 1 - Thermal event
+	 *  occurred
+	 */
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20
 	u32 mbi_date; /* 0x80 */
 	u32 misc_sig; /* 0x84 */
 	/*  Define the GPIO mapping to switch i2c mux */
@@ -555,6 +672,81 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+	/*  Interrupt signal used for SMBus/I2C management interface
+	 *  0 = Interrupt event occurred
+	 *  1 = Normal
+	 */
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20
+	/*  Set aLOM FAN on GPIO */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20
 	u32 device_capabilities; /* 0x88 */
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
@@ -591,11 +783,262 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
 			0x80
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100
-	u32 reserved[41]; /* 0x9C */
+	/* @DPDK */
+	u32 reserved1[12]; /* 0x9C */
+	u32 oem1_number[8]; /* 0xCC */
+	u32 oem2_number[8]; /* 0xEC */
+	u32 mps25_active_txfir_pre; /* 0x10C */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
+	u32 mps25_active_txfir_main; /* 0x110 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
+	u32 mps25_active_txfir_post; /* 0x114 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
+	u32 features; /* 0x118 */
+	/*  Set the Aux Fan on temperature  */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+	/*  Set NC-SI package ID */
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+	/*  PMBUS Clock GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+	/*  PMBUS Data GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
+	u32 tx_rx_eq_25g_hlpc; /* 0x11C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
+	u32 tx_rx_eq_25g_llpc; /* 0x120 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
+	u32 tx_rx_eq_25g_ac; /* 0x124 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
+	u32 tx_rx_eq_10g_pc; /* 0x128 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
+	u32 tx_rx_eq_10g_ac; /* 0x12C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
+	u32 tx_rx_eq_1g; /* 0x130 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
+	u32 tx_rx_eq_25g_bt; /* 0x134 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
+	u32 tx_rx_eq_10g_bt; /* 0x138 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
+	u32 generic_cont4; /* 0x13C */
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	u32 reserved[58]; /* 0x140 */
 };
 
 struct nvm_cfg1_path {
-	u32 reserved[30]; /* 0x0 */
+	u32 reserved[1]; /* 0x0 */
 };
 
 struct nvm_cfg1_port {
@@ -749,6 +1192,15 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
+		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
+		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
+		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
+		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1451,12 +1903,17 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
 	u32 reserved[8]; /* 0x30 */
 };
 
 struct nvm_cfg1 {
 	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
 	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
 	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
 };
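
All of the option fields above follow the same MASK/OFFSET packing
convention: each NVM option lives inside a 32-bit container word and is
recovered by masking and shifting. A minimal decode sketch (the helper
name and the sample value are illustrative, not part of this file):

#include <stdint.h>

/* Extract one packed NVM option from its 32-bit container word. */
static uint32_t nvm_cfg_get_field(uint32_t word, uint32_t mask,
				  uint32_t offset)
{
	return (word & mask) >> offset;
}

/* Example: a generic_cont4 word of 0x00000003 decodes the
 * THERMAL_ALARM_GPIO field as 0x3, which the defines above map to
 * NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2.
 */
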
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 11/62] net/qede/base: add comment
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (11 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 10/62] net/qede/base: add nvram options Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 12/62] net/qede/base: use default MTU from shared memory Rasesh Mody
                             ` (51 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a comment for the endianness manipulation in
ecore_mcp_send_drv_version().
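
For context, the conversion the new comment documents amounts to copying
the name string into 32-bit words while forcing each word to big-endian
byte order. A standalone sketch (htonl() stands in for OSAL_CPU_TO_BE32;
the caller must supply num_words * 4 valid bytes):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Pack an ASCII name into u32 words, each stored big-endian, which is
 * the layout the MFW expects for the driver name.
 */
static void pack_name_be32(uint32_t *dst, const char *name, int num_words)
{
	int i;

	for (i = 0; i < num_words; i++) {
		uint32_t val;

		memcpy(&val, name + i * sizeof(uint32_t), sizeof(val));
		dst[i] = htonl(val); /* CPU order -> big-endian */
	}
}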

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 245d478..df6ebd2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1662,6 +1662,7 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	p_drv_version->version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
+		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
 		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 12/62] net/qede/base: use default MTU from shared memory
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (12 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 11/62] net/qede/base: add comment Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 13/62] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
                             ` (50 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Read and use the default MTU value from shared memory.
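
The fallback logic is tiny; a minimal sketch of what this patch
implements (the 1500 default is taken from the patch itself):

#include <stdint.h>

/* Prefer the MTU that management FW published in shared memory; fall
 * back to the standard Ethernet default when the shmem value is unset.
 */
static uint16_t pick_default_mtu(uint16_t shmem_mtu_size)
{
	return shmem_mtu_size ? shmem_mtu_size : 1500;
}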

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    2 ++
 drivers/net/qede/base/ecore_dev.c     |    3 +++
 drivers/net/qede/base/ecore_mcp.c     |    5 +++++
 drivers/net/qede/base/ecore_mcp_api.h |    2 ++
 drivers/net/qede/qede_if.h            |    1 +
 drivers/net/qede/qede_main.c          |    2 ++
 6 files changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index a6cf52e..25c96f8 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -377,6 +377,8 @@ struct ecore_hw_info {
 
 	/* Default DCBX mode */
 	u8 dcbx_mode;
+
+	u16 mtu;
 };
 
 struct ecore_hw_cid_data {
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index ee50090..87c1c23 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2879,6 +2879,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_get_num_funcs(p_hwfn, p_ptt);
 
+	if (ecore_mcp_is_init(p_hwfn))
+		p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu;
+
 	/* In case of forcing the driver's default resource allocation, calling
 	 * ecore_hw_get_resc() should come after initializing the personality
 	 * and after getting the number of functions, since the calculation of
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index df6ebd2..8720ae7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1431,6 +1431,11 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
+	info->mtu = (u16)shmem_info.mtu_size;
+
+	if (info->mtu == 0)
+		info->mtu = 1500;
+
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 72a58e4..1be22dd 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -84,6 +84,8 @@ struct ecore_mcp_function_info {
 
 #define ECORE_MCP_VLAN_UNSET		(0xffff)
 	u16 ovlan;
+
+	u16 mtu;
 };
 
 struct ecore_mcp_nvm_common {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4b23bb9..18404fb 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -34,6 +34,7 @@ struct qed_dev_info {
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
+	u16 mtu;
 	/* To be added... */
 };
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 332b1f8..e76346e 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -365,6 +365,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 				      &dev_info->mfw_rev, NULL);
 	}
 
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	return 0;
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 13/62] net/qede/base: change queue/sb-id from 8 bit to 16 bit
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (13 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 12/62] net/qede/base: use default MTU from shared memory Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 14/62] net/qede/base: update MFW when default MTU is changed Rasesh Mody
                             ` (49 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Change the queue/sb-id values from 8-bit fields to 16-bit fields.
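
The motivation is that once FW queue counts may exceed 255, an 8-bit id
silently truncates. A self-contained illustration (300 is an arbitrary
example value, not taken from the driver):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t fw_tx_qid = 300;             /* legal with 16-bit ids */
	uint8_t old_qid = (uint8_t)fw_tx_qid; /* the former 8-bit field */

	/* prints "300 truncates to 44" - the wrong queue would be used */
	printf("%u truncates to %u\n", fw_tx_qid, old_qid);
	return 0;
}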

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |    8 ++++----
 drivers/net/qede/base/ecore_dev_api.h |    4 ++--
 drivers/net/qede/base/ecore_l2.c      |    2 +-
 drivers/net/qede/base/ecore_l2_api.h  |    2 +-
 drivers/net/qede/base/ecore_sriov.c   |    4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 87c1c23..7a501bb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3876,7 +3876,7 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3897,7 +3897,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3919,7 +3919,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3941,7 +3941,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 0dee68a..e7332ac 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -535,7 +535,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
  */
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 /**
  * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
@@ -553,6 +553,6 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
  */
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 22bb43d..1379a1b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -212,7 +212,7 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
 		rc = ecore_fw_l2_queue(p_hwfn,
-				       (u8)p_rss->rss_ind_table[i],
+				       p_rss->rss_ind_table[i],
 				       &abs_l2_queue);
 		if (rc != ECORE_SUCCESS)
 			return rc;
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 247316b..8f7b614 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -37,7 +37,7 @@ struct ecore_queue_start_common_params {
 	/* q_zone_id is relative, may be different from queue id
 	 * currently used by Tx-only, upper-bounded by number of FW-queues
 	 */
-	u8 qzone_id;
+	u16 qzone_id;
 
 	/* stats_id is relative or absolute depends on function */
 	u8 stats_id;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b051678..6e86966 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2118,8 +2118,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 14/62] net/qede/base: update MFW when default MTU is changed
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (14 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 13/62] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 15/62] net/qede/base: prevent device init failure Rasesh Mody
                             ` (48 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Send a mailbox command to the management FW when the default MTU changes.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   11 +++++++++++
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a501bb..13e13ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1629,6 +1629,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	u32 load_code, param, drv_mb_param;
+	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
 	int i;
 
@@ -1648,6 +1649,12 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		/* If management didn't provide a default, set one of our own */
+		if (!p_hwfn->hw_info.mtu) {
+			p_hwfn->hw_info.mtu = 1500;
+			b_default_mtu = false;
+		}
+
 		if (IS_VF(p_dev)) {
 			p_hwfn->b_int_enabled = 1;
 			continue;
@@ -1776,6 +1783,10 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			return rc;
 		}
 
+		if (!b_default_mtu)
+			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						p_hwfn->hw_info.mtu);
+
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 8720ae7..0338576 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1438,9 +1438,6 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->mtu = (u16)shmem_info.mtu_size;
 
-	if (info->mtu == 0)
-		info->mtu = 1500;
-
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 15/62] net/qede/base: prevent device init failure
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (15 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 14/62] net/qede/base: update MFW when default MTU is changed Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 16/62] net/qede/base: read card personality via MFW commands Rasesh Mody
                             ` (47 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

The device initialization flow should not fail because a FW interface
command is not available.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 13e13ba..7494f93 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1778,18 +1778,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(p_hwfn, "Failed to send firmware version\n");
-			return rc;
-		}
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update firmware version\n");
 
 		if (!b_default_mtu)
-			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
-						p_hwfn->hw_info.mtu);
+			rc = ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						      p_hwfn->hw_info.mtu);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update default mtu\n");
 
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update driver state\n");
 	}
 
 	return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 16/62] net/qede/base: read card personality via MFW commands
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (16 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 15/62] net/qede/base: prevent device init failure Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 17/62] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
                             ` (46 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add support for reading the NIC personality via management FW commands
for non-L2 protocols.
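
The classification implied by the new helper macros in ecore.h can be
summarized as follows (derived directly from the diff below):

/* personality           IS_L2  IS_RDMA  IS_ROCE  IS_IWARP
 * ECORE_PCI_ETH         yes    no       no       no
 * ECORE_PCI_ETH_ROCE    yes    yes      yes      no
 * ECORE_PCI_ETH_IWARP   yes    yes      no       yes
 * ECORE_PCI_ETH_RDMA    yes    yes      yes      yes
 *
 * so a single check such as ECORE_IS_L2_PERSONALITY(p_hwfn) now covers
 * every Ethernet-capable personality.
 */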

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h       |   16 +++++++++++++-
 drivers/net/qede/base/ecore_dev.c   |   17 +++++----------
 drivers/net/qede/base/ecore_mcp.c   |   41 +++++++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_sriov.c |    1 +
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25c96f8..842a3b5 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -243,7 +243,8 @@ enum ecore_pci_personality {
 	ECORE_PCI_FCOE,
 	ECORE_PCI_ISCSI,
 	ECORE_PCI_ETH_ROCE,
-	ECORE_PCI_IWARP,
+	ECORE_PCI_ETH_IWARP,
+	ECORE_PCI_ETH_RDMA,
 	ECORE_PCI_DEFAULT /* default in shmem */
 };
 
@@ -328,6 +329,19 @@ enum ecore_hw_err_type {
 struct ecore_hw_info {
 	/* PCI personality */
 	enum ecore_pci_personality personality;
+#define ECORE_IS_RDMA_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE ||  \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_ROCE_PERSONALITY(dev)			   \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_IWARP_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_L2_PERSONALITY(dev)		      \
+	((dev)->hw_info.personality == ECORE_PCI_ETH || \
+	 ECORE_IS_RDMA_PERSONALITY(dev))
 
 	/* Resource Allocation scheme results */
 	u32 resc_start[ECORE_MAX_RESC];
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7494f93..1b033b7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -219,9 +219,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	 * don't have a good recycle flow. Non ethernet PFs require only a
 	 * single physical queue.
 	 */
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_ETH)
+	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
 		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
 	else
 		protocol_pqs = 1;
@@ -229,7 +227,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
 	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 		num_pqs++;	/* for RoCE queue */
 		init_rdma_offload_pq = true;
 		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
@@ -259,7 +257,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		qm_info->num_pf_rls = (u8)num_pf_rls;
 	}
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_IWARP) {
+	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
 		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
 		init_rdma_offload_pq = true;
 		init_pure_ack_pq = true;
@@ -335,9 +333,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		struct init_qm_pq_params *params =
 		    &qm_info->qm_pq_params[curr_queue++];
 
-		if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
+		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
 			params->vport_id = vport_id;
 			params->tc_id = i;
 			/* Note: this assumes that if we had a configuration
@@ -612,8 +608,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 		/* EQ */
 		n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
-		if ((p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) ||
-		    (p_hwfn->hw_info.personality == ECORE_PCI_IWARP)) {
+		if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
 			/* Calculate the EQ size
 			 * ---------------------
 			 * Each ICID may generate up to one event at a time i.e.
@@ -636,7 +631,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 *          smaller than RoCE's so we avoid exact
 			 *          calculation.
 			 */
-			if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
 				    ecore_cxt_get_proto_cid_count(
 						p_hwfn,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0338576..9f897b5 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1373,16 +1373,47 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+/* @DPDK */
+/* Old MFW has a global configuration for all PFs regarding RDMA support */
+static void
+ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
+				 enum ecore_pci_personality *p_proto)
+{
+	*p_proto = ECORE_PCI_ETH;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to Legacy capabilities, L2 personality is %08x\n",
+		   (u32)*p_proto);
+}
+
+/* @DPDK */
+static enum _ecore_status_t
+ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      enum ecore_pci_personality *p_proto)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
+		   (u32)*p_proto, resp, param);
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 			  struct public_func *p_info,
+			  struct ecore_ptt *p_ptt,
 			  enum ecore_pci_personality *p_proto)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	switch (p_info->config & FUNC_MF_CFG_PROTOCOL_MASK) {
 	case FUNC_MF_CFG_PROTOCOL_ETHERNET:
-		*p_proto = ECORE_PCI_ETH;
+		if (ecore_mcp_get_shmem_proto_mfw(p_hwfn, p_ptt, p_proto) !=
+		    ECORE_SUCCESS)
+			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
 	default:
 		rc = ECORE_INVAL;
@@ -1403,7 +1434,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	info->pause_on_host = (shmem_info.config &
 			       FUNC_MF_CFG_PAUSE_ON_HOST_RING) ? 1 : 0;
 
-	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, &info->protocol)) {
+	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+				      &info->protocol)) {
 		DP_ERR(p_hwfn, "Unknown personality %08x\n",
 		       (u32)(shmem_info.config & FUNC_MF_CFG_PROTOCOL_MASK));
 		return ECORE_INVAL;
@@ -1559,8 +1591,9 @@ int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 		if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
 			continue;
 
-		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info,
-					      &protocol) != ECORE_SUCCESS)
+		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+					      &protocol) !=
+		    ECORE_SUCCESS)
 			continue;
 
 		if ((1 << ((u32)protocol)) & personalities)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6e86966..578899c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -86,6 +86,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 17/62] net/qede/base: allow probe to succeed with minor HW-issues
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (17 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 16/62] net/qede/base: read card personality via MFW commands Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 18/62] net/qede/base: remove unneeded step in HW init Rasesh Mody
                             ` (45 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Allow probe to succeed, if requested, even in the presence of various
'minor' HW issues.
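
A hypothetical caller-side use of the new knobs (the surrounding code is
illustrative; the field and enum names come from this patch):

	struct ecore_hw_prepare_params params;
	enum _ecore_status_t rc;

	OSAL_MEMSET(&params, 0, sizeof(params));
	params.b_relaxed_probe = true;

	rc = ecore_hw_prepare(p_dev, &params);
	/* With relaxed probe, rc may be ECORE_SUCCESS while p_relaxed_res
	 * still records a BAD_* condition that is worth logging.
	 */
	if (rc == ECORE_SUCCESS &&
	    params.p_relaxed_res != ECORE_HW_PREPARE_SUCCESS)
		DP_INFO(p_dev, "probe passed with result %d\n",
			params.p_relaxed_res);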

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   71 +++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_dev_api.h |   40 ++++++++++++++++---
 2 files changed, 94 insertions(+), 17 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1b033b7..907566c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2445,12 +2445,15 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
+static enum _ecore_status_t
+ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      struct ecore_hw_prepare_params *p_params)
 {
 	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
 	struct ecore_mcp_link_params *link;
+	enum _ecore_status_t rc;
 
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
@@ -2458,6 +2461,8 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	/* Verify MCP has initialized it */
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_NVM;
 		return ECORE_INVAL;
 	}
 
@@ -2643,7 +2648,13 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
 			     &p_hwfn->hw_info.device_capabilities);
 
-	return ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
@@ -2797,15 +2808,22 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		  enum ecore_pci_personality personality, bool drv_resc_alloc)
+		  enum ecore_pci_personality personality,
+		  struct ecore_hw_prepare_params *p_params)
 {
+	bool drv_resc_alloc = p_params->drv_resc_alloc;
 	enum _ecore_status_t rc;
 
 	/* Since all information is common, only first hwfns should do this */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_iov_hw_info(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_BAD_IOV;
+			else
+				return rc;
+		}
 	}
 
 	/* TODO In get_hw_info, amongst others:
@@ -2820,7 +2838,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
 #endif
-	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt);
+	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 #ifndef ASIC_ONLY
@@ -2828,8 +2846,12 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	if (rc != ECORE_SUCCESS) {
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_IGU;
+		else
+			return rc;
+	}
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) {
@@ -2896,7 +2918,13 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	 * the resources/features depends on them.
 	 * This order is not harmful if not forcing.
 	 */
-	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	rc = ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
@@ -3028,6 +3056,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
 		DP_ERR(p_hwfn,
 		       "Reading the ME register returns all Fs; Preventing further chip access\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME;
 		return ECORE_INVAL;
 	}
 
@@ -3037,6 +3067,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_ptt_pool_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to prepare hwfn's hw\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err0;
 	}
 
@@ -3046,8 +3078,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	/* First hwfn learns basic information, e.g., number of hwfns */
 	if (!p_hwfn->my_id) {
 		rc = ecore_get_dev_info(p_dev);
-		if (rc != ECORE_SUCCESS)
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+					ECORE_HW_PREPARE_FAILED_DEV;
 			goto err1;
+		}
 	}
 
 	ecore_hw_hwfn_prepare(p_hwfn);
@@ -3056,12 +3092,14 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_mcp_cmd_init(p_hwfn, p_hwfn->p_main_ptt);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed initializing mcp command\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err1;
 	}
 
 	/* Read the device configuration information from the HW and SHMEM */
 	rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
-			       p_params->personality, p_params->drv_resc_alloc);
+			       p_params->personality, p_params);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to get HW information\n");
 		goto err2;
@@ -3094,6 +3132,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_init_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate the init array\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err2;
 	}
 #ifndef ASIC_ONLY
@@ -3129,6 +3169,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
 
+	if (p_params->b_relaxed_probe)
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
+
 	/* Store the precompiled init data ptrs */
 	if (IS_PF(p_dev))
 		ecore_init_iro_array(p_dev);
@@ -3164,6 +3207,10 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 		 * initiliazed hwfn 0.
 		 */
 		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_FAILED_ENG2;
+
 			if (IS_PF(p_dev)) {
 				ecore_init_free(p_hwfn);
 				ecore_mcp_free(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index e7332ac..74a15ef 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -138,17 +138,47 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
  */
 enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
 
+enum ecore_hw_prepare_result {
+	ECORE_HW_PREPARE_SUCCESS,
+
+	/* FAILED results indicate probe has failed & cleaned up */
+	ECORE_HW_PREPARE_FAILED_ENG2,
+	ECORE_HW_PREPARE_FAILED_ME,
+	ECORE_HW_PREPARE_FAILED_MEM,
+	ECORE_HW_PREPARE_FAILED_DEV,
+	ECORE_HW_PREPARE_FAILED_NVM,
+
+	/* BAD results indicate probe passed even though something went
+	 * wrong; trying to actually use the device [i.e., hw_init()] might
+	 * have dire repercussions.
+	 */
+	ECORE_HW_PREPARE_BAD_IOV,
+	ECORE_HW_PREPARE_BAD_MCP,
+	ECORE_HW_PREPARE_BAD_IGU,
+};
+
 struct ecore_hw_prepare_params {
-	/* personality to initialize */
+	/* Personality to initialize */
 	int personality;
-	/* force the driver's default resource allocation */
+
+	/* Force the driver's default resource allocation */
 	bool drv_resc_alloc;
-	/* check the reg_fifo after any register access */
+
+	/* Check the reg_fifo after any register access */
 	bool chk_reg_fifo;
-	/* request the MFW to initiate PF FLR */
+
+	/* Request the MFW to initiate PF FLR */
 	bool initiate_pf_flr;
-	/* the OS Epoch time in seconds */
+
+	/* The OS Epoch time in seconds */
 	u32 epoch;
+
+	/* Allow prepare to pass even if some initializations are failing.
+	 * If set, the `p_prepare_res' field would be set with the return,
+	 * and might allow probe to pass even if there are certain issues.
+	 */
+	bool b_relaxed_probe;
+	enum ecore_hw_prepare_result p_relaxed_res;
 };
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 18/62] net/qede/base: remove unneeded step in HW init
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (18 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 17/62] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 19/62] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
                             ` (44 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

There is no need to close the NIG-to-BRB/Storm gates (the OUT_EN
registers) during HW init, so remove that step.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 907566c..e2d4132 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -999,18 +999,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
-	/* Close gate from NIG to BRB/Storm; By default they are open, but
-	 * we close them to prevent NIG from passing data to reset blocks.
-	 * Should have been done in the ENGINE phase, but init-tool lacks
-	 * proper port-pretend capabilities.
-	 */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_pretend(p_hwfn, p_ptt, p_hwfn->port_id ^ 1);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_unpretend(p_hwfn, p_ptt);
-
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_ENGINE, ANY_PHASE_ID, hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 19/62] net/qede/base: allow only trusted VFs to be promisc
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (19 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 18/62] net/qede/base: remove unneeded step in HW init Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 20/62] net/qede/base: qm initialization revamp Rasesh Mody
                             ` (43 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Allow only trusted VFs to become promiscuous or multicast-promiscuous.
The reasonable approach is to key this off the 'trusted' attribute
instead of simply allowing any VF to become promiscuous.
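
A sketch of the kind of gating this enables inside the
OSAL_IOV_VF_VPORT_UPDATE hook (the 'is_trusted' field is illustrative;
promiscuous rx corresponds to the unmatched-unicast/multicast accept
flags):

	if (!vf_info->is_trusted &&
	    (rx_accept_flags & (ECORE_ACCEPT_UCAST_UNMATCHED |
				ECORE_ACCEPT_MCAST_UNMATCHED)))
		/* strip promiscuous bits for untrusted VFs */
		rx_accept_flags &= ~(ECORE_ACCEPT_UCAST_UNMATCHED |
				     ECORE_ACCEPT_MCAST_UNMATCHED);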

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c    |    8 ++++----
 drivers/net/qede/base/ecore_sriov.c |    2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 1379a1b..d2e1719 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -274,8 +274,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->rx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->rx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 
 	/* Set Tx mode accept flags */
@@ -298,8 +298,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->tx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->tx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->tx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 578899c..a302e9e 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2626,7 +2626,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	 */
 	tlvs_accepted = tlvs_mask;
 
-#ifndef LINUX_REMOVE
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2634,7 +2633,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_NOT_SUPPORTED;
 		goto out;
 	}
-#endif
 
 	if (!tlvs_accepted) {
 		if (tlvs_mask)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 20/62] net/qede/base: qm initialization revamp
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (20 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 19/62] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 21/62] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
                             ` (42 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

This patch revamps QM (physical queue) initialization.
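
The core idea is visible in the diff below: the set of required physical
queues is described by a PQ_FLAGS_* bitmap, and consumers look up their
PQ index by flag instead of filling per-protocol union parameters. A toy
model of the flag-ordered allocation (block sizes simplified to one
queue each; the real code sizes the MCOS and VF blocks by TC and VF
counts):

#include <stdint.h>

#define PQ_FLAGS_RLS	(1 << 0)
#define PQ_FLAGS_MCOS	(1 << 1)
#define PQ_FLAGS_LB	(1 << 2)
#define PQ_FLAGS_OOO	(1 << 3)
#define PQ_FLAGS_ACK	(1 << 4)
#define PQ_FLAGS_OFLD	(1 << 5)
#define PQ_FLAGS_VFS	(1 << 6)

/* Toy lookup: every active flag lower than 'want' reserves one PQ, so
 * the wanted flag's index is the count of active lower bits.
 */
static uint16_t get_pq_idx(uint32_t active_flags, uint32_t want)
{
	uint16_t idx = 0;
	uint32_t bit;

	for (bit = 1; bit < want; bit <<= 1)
		if (active_flags & bit)
			idx++;
	return idx;
}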

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h    |    2 +
 drivers/net/qede/base/ecore.h       |   34 +-
 drivers/net/qede/base/ecore_cxt.c   |   14 +-
 drivers/net/qede/base/ecore_dev.c   |  869 ++++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_hw.c    |   38 --
 drivers/net/qede/base/ecore_l2.c    |   12 +-
 drivers/net/qede/base/ecore_l2.h    |    2 +-
 drivers/net/qede/base/ecore_spq.c   |    9 +-
 drivers/net/qede/base/ecore_sriov.c |   13 +-
 9 files changed, 645 insertions(+), 348 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 0d239c9..63ee6d5 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -320,6 +320,8 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
 #define OSAL_BUILD_BUG_ON(cond)		nothing
 #define ETH_ALEN			ETHER_ADDR_LEN
 
+#define OSAL_BITMAP_WEIGHT(bitmap, count) 0
+
 #define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 842a3b5..58c97a3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -445,11 +445,13 @@ struct ecore_qm_info {
 	struct init_qm_port_params  *qm_port_params;
 	u16			start_pq;
 	u8			start_vport;
-	u8			pure_lb_pq;
-	u8			offload_pq;
-	u8			pure_ack_pq;
-	u8			ooo_pq;
-	u8			vf_queues_offset;
+	u16			pure_lb_pq;
+	u16			offload_pq;
+	u16			pure_ack_pq;
+	u16			ooo_pq;
+	u16			first_vf_pq;
+	u16			first_mcos_pq;
+	u16			first_rl_pq;
 	u16			num_pqs;
 	u16			num_vf_pqs;
 	u8			num_vports;
@@ -828,6 +830,28 @@ int ecore_device_num_ports(struct ecore_dev *p_dev);
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 			   u8 *mac);
 
+/* Flags for indication of required queues */
+#define PQ_FLAGS_RLS	(1 << 0)
+#define PQ_FLAGS_MCOS	(1 << 1)
+#define PQ_FLAGS_LB	(1 << 2)
+#define PQ_FLAGS_OOO	(1 << 3)
+#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_OFLD	(1 << 5)
+#define PQ_FLAGS_VFS	(1 << 6)
+
+/* physical queue index for cm context initialization */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags);
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc);
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 qpid);
+
+/* amount of resources used in qm init */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
+
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 2635030..aeeabf1 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1372,18 +1372,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
 }
 
 /* CM PF */
-static enum _ecore_status_t ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
 {
-	union ecore_qm_pq_params pq_params;
-	u16 pq;
-
-	/* XCM pure-LB queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, pq);
-
-	return ECORE_SUCCESS;
+	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET,
+		     ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB));
 }
 
 /* DQ PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e2d4132..380c5ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -178,282 +178,575 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	}
 }
 
-static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
-					       bool b_sleepable)
+/******************** QM initialization *******************/
+
+/* bitmaps for indicating active traffic classes.
+ * Special case for Arrowhead 4 port
+ */
+/* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
+#define ACTIVE_TCS_BMAP 0x9f
+/* 0..3 actually used, OOO and high priority stuff all use 3 */
+#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
+
+/* determines the physical queue flags for a given PF. */
+static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 {
-	u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue;
-	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct init_qm_port_params *p_qm_port;
-	bool init_rdma_offload_pq = false;
-	bool init_pure_ack_pq = false;
-	bool init_ooo_pq = false;
-	u16 num_pqs, protocol_pqs;
-	u16 num_pf_rls = 0;
-	u16 num_vfs = 0;
-	u32 pf_rl;
-	u8 pf_wfq;
-
-	/* @TMP - saving the existing min/max bw config before resetting the
-	 * qm_info to restore them.
-	 */
-	pf_rl = qm_info->pf_rl;
-	pf_wfq = qm_info->pf_wfq;
+	u32 flags;
 
-#ifdef CONFIG_ECORE_SRIOV
-	if (p_hwfn->p_dev->p_iov_info)
-		num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
-#endif
-	OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
+	/* common flags */
+	flags = PQ_FLAGS_LB;
 
-#ifndef ASIC_ONLY
-	/* @TMP - Don't allocate QM queues for VFs on emulation */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "Emulation - skip configuring QM queues for VFs\n");
-		num_vfs = 0;
+	/* feature flags */
+	if (IS_ECORE_SRIOV(p_hwfn->p_dev))
+		flags |= PQ_FLAGS_VFS;
+
+	/* protocol flags */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_ETH:
+		flags |= PQ_FLAGS_MCOS;
+		break;
+	case ECORE_PCI_FCOE:
+		flags |= PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ISCSI:
+		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_ROCE:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_IWARP:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_ACK | PQ_FLAGS_OOO |
+			 PQ_FLAGS_OFLD;
+		break;
+	default:
+		DP_ERR(p_hwfn, "unknown personality %d\n",
+		       p_hwfn->hw_info.personality);
+		return 0;
 	}
-#endif
+	return flags;
+}
 
-	/* ethernet PFs require a pq per tc. Even if only a subset of the TCs
-	 * active, we want physical queues allocated for all of them, since we
-	 * don't have a good recycle flow. Non ethernet PFs require only a
-	 * single physical queue.
-	 */
-	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
-		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
-	else
-		protocol_pqs = 1;
-
-	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
-	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
-
-	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
-		num_pqs++;	/* for RoCE queue */
-		init_rdma_offload_pq = true;
-		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
-			/* Due to FW assumption that rl==vport, we limit the
-			 * number of rate limiters by the minimum between its
-			 * allocated number and the allocated number of vports.
-			 * Another limitation is the number of supported qps
-			 * with rate limiters in FW.
-			 */
-			num_pf_rls =
-			    (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
-					     RESC_NUM(p_hwfn, ECORE_VPORT));
+/* Getters for resource amounts necessary for qm initialization */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->hw_info.num_hw_tc;
+}
 
-			/* we subtract num_vfs because each one requires a rate
-			 * limiter, and one default rate limiter.
-			 */
-			if (num_pf_rls < num_vfs + 1) {
-				DP_ERR(p_hwfn, "No RL for DCQCN");
-				DP_ERR(p_hwfn, "[num_pf_rls %d num_vfs %d]\n",
-				       num_pf_rls, num_vfs);
-				return ECORE_INVAL;
-			}
-			num_pf_rls -= num_vfs + 1;
-		}
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn)
+{
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
+			p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+}
 
-		num_pqs += num_pf_rls;
-		qm_info->num_pf_rls = (u8)num_pf_rls;
-	}
+#define NUM_DEFAULT_RLS 1
 
-	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
-		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
-		init_rdma_offload_pq = true;
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
-		num_pqs += 2;	/* for iSCSI pure-ACK / OOO queue */
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+	/* @DPDK */
+	/* num RLs can't exceed resource amount of rls or vports or the
+	 * dcqcn qps
+	 */
+	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
+				     (u16)RESC_NUM(p_hwfn, ECORE_VPORT));
 
-	/* Sanity checking that setup requires legal number of resources */
-	if (num_pqs > RESC_NUM(p_hwfn, ECORE_PQ)) {
-		DP_ERR(p_hwfn,
-		       "Need too many Physical queues - 0x%04x avail %04x",
-		       num_pqs, RESC_NUM(p_hwfn, ECORE_PQ));
-		return ECORE_INVAL;
+	/* make sure after we reserve the default and VF rls we'll have
+	 * something left
+	 */
+	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) {
+		DP_NOTICE(p_hwfn, false,
+			  "no rate limiters left for PF rate limiting"
+			  " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs);
+		return 0;
 	}
 
-	/* PQs will be arranged as follows: First per-TC PQ, then pure-LB queue,
-	 * then special queues (iSCSI pure-ACK / RoCE), then per-VF PQ.
+	/* subtract rls necessary for VFs and one default one for the PF */
+	num_pf_rls -= num_vfs + NUM_DEFAULT_RLS;
+
+	return num_pf_rls;
+}
+
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	/* all pqs share the same vport (hence the 1 below), except for vfs
+	 * and pf_rl pqs
 	 */
-	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					    b_sleepable ? GFP_KERNEL :
-					    GFP_ATOMIC,
-					    sizeof(struct init_qm_pq_params) *
-					    num_pqs);
-	if (!qm_info->qm_pq_params)
-		goto alloc_err;
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+}
 
-	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					       b_sleepable ? GFP_KERNEL :
-					       GFP_ATOMIC,
-					       sizeof(struct
-						      init_qm_vport_params) *
-					       num_vports);
-	if (!qm_info->qm_vport_params)
-		goto alloc_err;
+/* calc amount of PQs according to the requested flags */
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_MCOS & pq_flags)) *
+		ecore_init_qm_get_num_tcs(p_hwfn) +
+	       (!!(PQ_FLAGS_LB & pq_flags)) +
+	       (!!(PQ_FLAGS_OOO & pq_flags)) +
+	       (!!(PQ_FLAGS_ACK & pq_flags)) +
+	       (!!(PQ_FLAGS_OFLD & pq_flags)) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn);
+}
 
-	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					      b_sleepable ? GFP_KERNEL :
-					      GFP_ATOMIC,
-					      sizeof(struct init_qm_port_params)
-					      * MAX_NUM_PORTS);
-	if (!qm_info->qm_port_params)
-		goto alloc_err;
+/* initialize the top level QM params */
+static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev,
-					b_sleepable ? GFP_KERNEL :
-					GFP_ATOMIC,
-					sizeof(struct ecore_wfq_data) *
-					num_vports);
+	/* pq and vport bases for this PF */
+	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
 
-	if (!qm_info->wfq_data)
-		goto alloc_err;
+	/* rate limiting and weighted fair queueing are always enabled */
+	qm_info->vport_rl_en = 1;
+	qm_info->vport_wfq_en = 1;
 
-	vport_id = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	/* in AH 4 port we have fewer TCs per port */
+	qm_info->max_phys_tcs_per_port =
+		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
+			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+}
 
-	/* First init rate limited queues ( Due to RoCE assumption of
-	 * qpid=rlid )
-	 */
-	for (curr_queue = 0; curr_queue < num_pf_rls; curr_queue++) {
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id++;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-	};
-
-	/* Protocol PQs */
-	for (i = 0; i < protocol_pqs; i++) {
-		struct init_qm_pq_params *params =
-		    &qm_info->qm_pq_params[curr_queue++];
-
-		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
-			params->vport_id = vport_id;
-			params->tc_id = i;
-			/* Note: this assumes that if we had a configuration
-			 * with N tcs and subsequently another configuration
-			 * With Fewer TCs, the in flight traffic (in QM queues,
-			 * in FW, from driver to FW) will still trickle out and
-			 * not get "stuck" in the QM. This is determined by the
-			 * NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ. Unused TCs are
-			 * supposed to be cleared in this map, allowing traffic
-			 * to flush out. If this is not the case, we would need
-			 * to set the TC of unused queues to 0, and reconfigure
-			 * QM every time num of TCs changes. Unused queues in
-			 * this context would mean those intended for TCs where
-			 * tc_id > hw_info.num_active_tcs.
-			 */
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		} else {
-			params->vport_id = vport_id;
-			params->tc_id = p_hwfn->hw_info.offload_tc;
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		}
-	}
+/* initialize qm vport params */
+static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 i;
 
-	/* Then init pure-LB PQ */
-	qm_info->pure_lb_pq = curr_queue;
-	qm_info->qm_pq_params[curr_queue].vport_id =
-	    (u8)RESC_START(p_hwfn, ECORE_VPORT);
-	qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
-	qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-	curr_queue++;
-
-	qm_info->offload_pq = 0;	/* Already initialized for iSCSI/FCoE */
-	if (init_rdma_offload_pq) {
-		qm_info->offload_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_pure_ack_pq) {
-		qm_info->pure_ack_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_ooo_pq) {
-		qm_info->ooo_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id = DCBX_ISCSI_OOO_TC;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	/* Then init per-VF PQs */
-	vf_offset = curr_queue;
-	for (i = 0; i < num_vfs; i++) {
-		/* First vport is used by the PF */
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
-		/* @@@TBD VF Multi-cos */
-		qm_info->qm_pq_params[curr_queue].tc_id = 0;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-		curr_queue++;
-	};
-
-	qm_info->vf_queues_offset = vf_offset;
-	qm_info->num_pqs = num_pqs;
-	qm_info->num_vports = num_vports;
+	/* all vports participate in weighted fair queueing */
+	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
+		qm_info->qm_vport_params[i].vport_wfq = 1;
+}
 
+/* initialize qm port params */
+static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
+{
 	/* Initialize qm port parameters */
-	num_ports = p_hwfn->p_dev->num_ports_in_engines;
+	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engines;
+
+	/* indicate how ooo and high pri traffic is dealt with */
+	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
+		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP;
+
 	for (i = 0; i < num_ports; i++) {
-		p_qm_port = &qm_info->qm_port_params[i];
+		struct init_qm_port_params *p_qm_port =
+			&p_hwfn->qm_info.qm_port_params[i];
+
 		p_qm_port->active = 1;
-		/* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
-		 * be in place
-		 */
-		if (num_ports == 4)
-			p_qm_port->active_phys_tcs = 0xf;
-		else
-			p_qm_port->active_phys_tcs = 0x9f;
+		p_qm_port->active_phys_tcs = active_phys_tcs;
 		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
+}
 
-	if (ECORE_IS_AH(p_hwfn->p_dev) && (num_ports == 4))
-		qm_info->max_phys_tcs_per_port = NUM_PHYS_TCS_4PORT_K2;
-	else
-		qm_info->max_phys_tcs_per_port = NUM_OF_PHYS_TCS;
+/* Reset the params which must be reset for qm init. QM init may be called as
+ * a result of flows other than driver load (e.g. dcbx renegotiation). Other
+ * params may be affected by the init but would simply recalculate to the same
+ * values. The allocations made for QM init, ports, vports, pqs and vfqs are not
+ * affected as these amounts stay the same.
+ */
+static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->num_pqs = 0;
+	qm_info->num_vports = 0;
+	qm_info->num_pf_rls = 0;
+	qm_info->num_vf_pqs = 0;
+	qm_info->first_vf_pq = 0;
+	qm_info->first_mcos_pq = 0;
+	qm_info->first_rl_pq = 0;
+}
+
+static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	qm_info->num_vports++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+}
+
+/* initialize a single pq and manage qm_info resources accounting.
+ * The pq_init_flags param determines whether the PQ is rate limited
+ * (for VF or PF)
+ * and whether a new vport is allocated to the pq or not (i.e. vport will be
+ * shared)
+ */
+
+/* flags for pq init */
+#define PQ_INIT_SHARE_VPORT	(1 << 0)
+#define PQ_INIT_PF_RL		(1 << 1)
+#define PQ_INIT_VF_RL		(1 << 2)
+
+/* defines for pq init */
+#define PQ_INIT_DEFAULT_WRR_GROUP	1
+#define PQ_INIT_DEFAULT_TC		0
+#define PQ_INIT_OFLD_TC			(p_hwfn->hw_info.offload_tc)
+
+static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
+			     struct ecore_qm_info *qm_info,
+			     u8 tc, u32 pq_init_flags)
+{
+	u16 pq_idx = qm_info->num_pqs, max_pq =
+					ecore_init_qm_get_num_pqs(p_hwfn);
+
+	if (pq_idx > max_pq)
+		DP_ERR(p_hwfn,
+		       "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+
+	/* init pq params */
+	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport +
+						 qm_info->num_vports;
+	qm_info->qm_pq_params[pq_idx].tc_id = tc;
+	qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP;
+	qm_info->qm_pq_params[pq_idx].rl_valid =
+		(pq_init_flags & PQ_INIT_PF_RL ||
+		 pq_init_flags & PQ_INIT_VF_RL);
+
+	/* qm params accounting */
+	qm_info->num_pqs++;
+	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
+		qm_info->num_vports++;
+
+	if (pq_init_flags & PQ_INIT_PF_RL)
+		qm_info->num_pf_rls++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+
+	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn))
+		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d,"
+		       " qm_init_get_num_pf_rls() %d\n",
+		       qm_info->num_pf_rls,
+		       ecore_init_qm_get_num_pf_rls(p_hwfn));
+}
+
+/* get pq index according to PQ_FLAGS */
+static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
+					     u32 pq_flags)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	/* Can't have multiple flags set here */
+	if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags,
+				sizeof(pq_flags)) > 1)
+		goto err;
+
+	switch (pq_flags) {
+	case PQ_FLAGS_RLS:
+		return &qm_info->first_rl_pq;
+	case PQ_FLAGS_MCOS:
+		return &qm_info->first_mcos_pq;
+	case PQ_FLAGS_LB:
+		return &qm_info->pure_lb_pq;
+	case PQ_FLAGS_OOO:
+		return &qm_info->ooo_pq;
+	case PQ_FLAGS_ACK:
+		return &qm_info->pure_ack_pq;
+	case PQ_FLAGS_OFLD:
+		return &qm_info->offload_pq;
+	case PQ_FLAGS_VFS:
+		return &qm_info->first_vf_pq;
+	default:
+		goto err;
+	}
+
+err:
+	DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
+	return OSAL_NULL;
+}
+
+/* save pq index in qm info */
+static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn,
+				  u32 pq_flags, u16 pq_val)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	*base_pq_idx = p_hwfn->qm_info.start_pq + pq_val;
+}
+
+/* get tx pq index, with the PQ TX base already set (ready for context init) */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	return *base_pq_idx + CM_TX_PQ_BASE;
+}
+
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
+{
+	u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn);
+
+	if (tc > max_tc)
+		DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+}
+
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (vf > max_vf)
+		DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+}
+
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 rl)
+{
+	u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
+
+	if (rl > max_rl)
+		DP_ERR(p_hwfn, "rl %d must be smaller than %d\n", rl, max_rl);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + rl;
+}
+
+/* Functions for creating specific types of pqs */
+static void ecore_init_qm_lb_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LB))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LB, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PURE_LB_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OOO))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
+			 PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_ACK))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OFLD))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 tc_idx;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_MCOS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_MCOS, qm_info->num_pqs);
+	for (tc_idx = 0; tc_idx < ecore_init_qm_get_num_tcs(p_hwfn); tc_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs);
 
 	qm_info->num_vf_pqs = num_vfs;
-	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC,
+				 PQ_INIT_VF_RL);
+}
 
-	for (i = 0; i < qm_info->num_vports; i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn);
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->vport_rl_en = 1;
-	qm_info->vport_wfq_en = 1;
-	qm_info->pf_rl = pf_rl;
-	qm_info->pf_wfq = pf_wfq;
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs);
+	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC,
+				 PQ_INIT_PF_RL);
+}
+
+static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
+{
+	/* rate limited pqs, must come first (FW assumption) */
+	ecore_init_qm_rl_pqs(p_hwfn);
+
+	/* pqs for multi cos */
+	ecore_init_qm_mcos_pqs(p_hwfn);
+
+	/* pure loopback pq */
+	ecore_init_qm_lb_pq(p_hwfn);
+
+	/* out of order pq */
+	ecore_init_qm_ooo_pq(p_hwfn);
+
+	/* pure ack pq */
+	ecore_init_qm_pure_ack_pq(p_hwfn);
+
+	/* pq for offloaded protocol */
+	ecore_init_qm_offload_pq(p_hwfn);
+
+	/* done sharing vports */
+	ecore_init_qm_advance_vport(p_hwfn);
+
+	/* pqs for vfs */
+	ecore_init_qm_vf_pqs(p_hwfn);
+}
+
+/* compare values of getters against resources amounts */
+static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn)
+{
+	if (ecore_init_qm_get_num_vports(p_hwfn) >
+	    RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n");
+		return ECORE_INVAL;
+	}
+
+	if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) {
+		DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n");
+		return ECORE_INVAL;
+	}
 
 	return ECORE_SUCCESS;
+}
 
- alloc_err:
-	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
-	ecore_qm_info_free(p_hwfn);
-	return ECORE_NOMEM;
+/*
+ * Function for verbose printing of the qm initialization results
+ */
+static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	struct init_qm_vport_params *vport;
+	struct init_qm_port_params *port;
+	struct init_qm_pq_params *pq;
+	int i, tc;
+
+	/* top level params */
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "qm init top level params: start_pq %d, start_vport %d,"
+		   " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n",
+		   qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq,
+		   qm_info->offload_pq, qm_info->pure_ack_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d,"
+		   " num_vports %d, max_phys_tcs_per_port %d\n",
+		   qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs,
+		   qm_info->num_vf_pqs, qm_info->num_vports,
+		   qm_info->max_phys_tcs_per_port);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d,"
+		   " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n",
+		   qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en,
+		   qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl,
+		   qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn));
+
+	/* port table */
+	for (i = 0; i < p_hwfn->p_dev->num_ports_in_engines; i++) {
+		port = &qm_info->qm_port_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "port idx %d, active %d, active_phys_tcs %d,"
+			   " num_pbf_cmd_lines %d, num_btb_blocks %d,"
+			   " reserved %d\n",
+			   i, port->active, port->active_phys_tcs,
+			   port->num_pbf_cmd_lines, port->num_btb_blocks,
+			   port->reserved);
+	}
+
+	/* vport table */
+	for (i = 0; i < qm_info->num_vports; i++) {
+		vport = &qm_info->qm_vport_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "vport idx %d, vport_rl %d, wfq %d,"
+			   " first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->vport_rl,
+			   vport->vport_wfq);
+		for (tc = 0; tc < NUM_OF_TCS; tc++)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
+				   vport->first_tx_pq_id[tc]);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n");
+	}
+
+	/* pq table */
+	for (i = 0; i < qm_info->num_pqs; i++) {
+		pq = &qm_info->qm_pq_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "pq idx %d, vport_id %d, tc %d, wrr_grp %d,"
+			   " rl_valid %d\n",
+			   qm_info->start_pq + i, pq->vport_id, pq->tc_id,
+			   pq->wrr_group, pq->rl_valid);
+	}
+}
+
+static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn)
+{
+	/* reset params required for init run */
+	ecore_init_qm_reset_params(p_hwfn);
+
+	/* init QM top level params */
+	ecore_init_qm_params(p_hwfn);
+
+	/* init QM port params */
+	ecore_init_qm_port_params(p_hwfn);
+
+	/* init QM vport params */
+	ecore_init_qm_vport_params(p_hwfn);
+
+	/* init QM physical queue params */
+	ecore_init_qm_pq_params(p_hwfn);
+
+	/* display all that init */
+	ecore_dp_init_qm_params(p_hwfn);
 }
 
 /* This function reconfigures the QM pf on the fly.
  * For this purpose we:
  * 1. reconfigure the QM database
- * 2. set new values to runtime arrat
+ * 2. set new values to runtime array
  * 3. send an sdm_qm_cmd through the rbc interface to stop the QM
  * 4. activate init tool in QM_PF stage
  * 5. send an sdm_qm_cmd through rbc interface to release the QM
@@ -462,20 +755,11 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	bool b_rc;
 	enum _ecore_status_t rc;
-
-	/* qm_info is allocated in ecore_init_qm_info() which is already called
-	 * from ecore_resc_alloc() or previous call of ecore_qm_reconf().
-	 * The allocated size may change each init, so we free it before next
-	 * allocation.
-	 */
-	ecore_qm_info_free(p_hwfn);
+	bool b_rc;
 
 	/* initialize ecore's qm data structure */
-	rc = ecore_init_qm_info(p_hwfn, false);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
 	OSAL_SPIN_LOCK(&qm_lock);
@@ -508,6 +792,48 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	enum _ecore_status_t rc;
+
+	rc = ecore_init_qm_sanity(p_hwfn);
+	if (rc != ECORE_SUCCESS)
+		goto alloc_err;
+
+	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					    sizeof(struct init_qm_pq_params) *
+					    ecore_init_qm_get_num_pqs(p_hwfn));
+	if (!qm_info->qm_pq_params)
+		goto alloc_err;
+
+	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				       sizeof(struct init_qm_vport_params) *
+				       ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->qm_vport_params)
+		goto alloc_err;
+
+	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				      sizeof(struct init_qm_port_params) *
+				      p_hwfn->p_dev->num_ports_in_engines);
+	if (!qm_info->qm_port_params)
+		goto alloc_err;
+
+	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					sizeof(struct ecore_wfq_data) *
+					ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->wfq_data)
+		goto alloc_err;
+
+	return ECORE_SUCCESS;
+
+alloc_err:
+	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
+	ecore_qm_info_free(p_hwfn);
+	return ECORE_NOMEM;
+}
+/******************** End QM initialization ***************/
+
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
 	struct ecore_consq *p_consq;
@@ -572,11 +898,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		/* Prepare and process QM requirements */
-		rc = ecore_init_qm_info(p_hwfn, true);
+		rc = ecore_alloc_qm_data(p_hwfn);
 		if (rc)
 			goto alloc_err;
 
+		/* init qm info */
+		ecore_init_qm_info(p_hwfn);
+
 		/* Compute the ILT client partition */
 		rc = ecore_cxt_cfg_ilt_compute(p_hwfn);
 		if (rc)
@@ -618,18 +946,18 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 * worst case:
 			 * - Core - according to SPQ.
 			 * - RoCE - per QP there are a couple of ICIDs, one
-			 *          responder and one requester, each can
-			 *          generate an EQE => n_eqes_qp = 2 * n_qp.
-			 *          Each CQ can generate an EQE. There are 2 CQs
-			 *          per QP => n_eqes_cq = 2 * n_qp.
-			 *          Hence the RoCE total is 4 * n_qp or
-			 *          2 * num_cons.
+			 *	  responder and one requester, each can
+			 *	  generate an EQE => n_eqes_qp = 2 * n_qp.
+			 *	  Each CQ can generate an EQE. There are 2 CQs
+			 *	  per QP => n_eqes_cq = 2 * n_qp.
+			 *	  Hence the RoCE total is 4 * n_qp or
+			 *	  2 * num_cons.
 			 * - ENet - There can be up to two events per VF. One
-			 *          for VF-PF channel and another for VF FLR
-			 *          initial cleanup. The number of VFs is
-			 *          bounded by MAX_NUM_VFS_BB, and is much
-			 *          smaller than RoCE's so we avoid exact
-			 *          calculation.
+			 *	  for VF-PF channel and another for VF FLR
+			 *	  initial cleanup. The number of VFs is
+			 *	  bounded by MAX_NUM_VFS_BB, and is much
+			 *	  smaller than RoCE's so we avoid exact
+			 *	  calculation.
 			 */
 			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
@@ -683,7 +1011,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for dmae_info structure\n");
+				  "Failed to allocate memory for dmae_info"
+				  " structure\n");
 			goto alloc_err;
 		}
 
@@ -705,9 +1034,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 	return ECORE_SUCCESS;
 
- alloc_no_mem:
+alloc_no_mem:
 	rc = ECORE_NOMEM;
- alloc_err:
+alloc_err:
 	ecore_resc_free(p_dev);
 	return rc;
 }
@@ -2353,7 +2682,7 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 			*p_resc_start = dflt_resc_start;
 		}
 	}
- out:
+out:
 	return ECORE_SUCCESS;
 }
 
@@ -3139,13 +3468,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 #endif
 
 	return rc;
- err2:
+err2:
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_iov_free_hw_info(p_dev);
 	ecore_mcp_free(p_hwfn);
- err1:
+err1:
 	ecore_hw_hwfn_free(p_hwfn);
- err0:
+err0:
 	return rc;
 }
 
@@ -3309,7 +3638,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 	if (!p_chain->pbl.external)
 		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
 				       p_chain->pbl.p_phys_table, pbl_size);
- out:
+out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3521,7 +3850,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 
 	return ECORE_SUCCESS;
 
- nomem:
+nomem:
 	ecore_chain_free(p_dev, p_chain);
 	return rc;
 }
@@ -3956,7 +4285,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
@@ -4000,7 +4329,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 49d52c0..396edc2 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -905,44 +905,6 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
-		    enum protocol_type proto,
-		    union ecore_qm_pq_params *p_params)
-{
-	u16 pq_id = 0;
-
-	if ((proto == PROTOCOLID_CORE ||
-	     proto == PROTOCOLID_ETH) && !p_params) {
-		DP_NOTICE(p_hwfn, true,
-			  "Protocol %d received NULL PQ params\n", proto);
-		return 0;
-	}
-
-	switch (proto) {
-	case PROTOCOLID_CORE:
-		if (p_params->core.tc == LB_TC)
-			pq_id = p_hwfn->qm_info.pure_lb_pq;
-		else if (p_params->core.tc == PKT_LB_TC)
-			pq_id = p_hwfn->qm_info.ooo_pq;
-		else
-			pq_id = p_hwfn->qm_info.offload_pq;
-		break;
-	case PROTOCOLID_ETH:
-		pq_id = p_params->eth.tc;
-		/* TODO - multi-CoS for VFs? */
-		if (p_params->eth.is_vf)
-			pq_id += p_hwfn->qm_info.vf_queues_offset +
-			    p_params->eth.vf_id;
-		break;
-	default:
-		pq_id = 0;
-	}
-
-	pq_id = CM_TX_PQ_BASE + pq_id + RESC_START(p_hwfn, ECORE_PQ);
-
-	return pq_id;
-}
-
 void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
 			 enum ecore_hw_err_type err_type)
 {
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index d2e1719..0220d19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -834,13 +834,13 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params)
+			      u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
 	struct ecore_hw_cid_data *p_tx_cid;
-	u16 pq_id, abs_tx_qzone_id = 0;
+	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 abs_vport_id;
 
@@ -882,7 +882,6 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
 
-	pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
 	p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
@@ -898,7 +897,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
 	struct ecore_hw_cid_data *p_tx_cid;
-	union ecore_qm_pq_params pq_params;
 	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
@@ -918,9 +916,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 
 	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
 	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-
-	pq_params.eth.tc = tc;
 
 	/* Allocate a CID for the queue */
 	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
@@ -944,7 +939,8 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 					   p_params,
 					   pbl_addr,
 					   pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_mcos(p_hwfn,
+								    tc));
 
 	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
 	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 9c1bd38..b598eda 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -81,7 +81,7 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params);
+			      u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 9035d3b..ba26d45 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -173,11 +173,10 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	u16 pq;
 	struct ecore_cxt_info cxt_info;
 	struct core_conn_context *p_cxt;
-	union ecore_qm_pq_params pq_params;
 	enum _ecore_status_t rc;
+	u16 physical_q;
 
 	cxt_info.iid = p_spq->cid;
 
@@ -206,10 +205,8 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(pq);
+	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
+	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
 	p_cxt->xstorm_st_context.spq_base_lo =
 	    DMA_LO_LE(p_spq->chain.p_phys_addr);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index a302e9e..365be25 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -632,8 +632,8 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
-				bool b_fail_malicious)
+static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
+				       bool b_fail_malicious)
 {
 	/* Check PF supports sriov */
 	if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
@@ -2103,15 +2103,9 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	union ecore_qm_pq_params pq_params;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
 
-	/* Prepare the parameters which would choose the right PQ */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.eth.is_vf = 1;
-	pq_params.eth.vf_id = vf->relative_vf_id;
-
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
@@ -2132,7 +2126,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 					   &params,
 					   req->pbl_addr,
 					   req->pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_vf(p_hwfn,
+							vf->relative_vf_id));
 
 	if (rc)
 		status = PFVF_STATUS_FAILURE;
-- 
1.7.10.3


* [PATCH v4 21/62] net/qede/base: print firmware MFW and MBI versions
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (21 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 20/62] net/qede/base: qm initialization revamp Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 22/62] net/qede/base: check active VF queues before stopping Rasesh Mody
                             ` (41 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a printout of the FW, Management FW and MBI versions.
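
As an aside (not part of the patch), the packed mfw_rev word can be
decoded with the masks added below; treating field 3 as the
most-significant version component is an assumption for illustration:

  #include <stdint.h>
  #include <stdio.h>

  /* Decode a packed MFW version word into its four byte-wide fields. */
  static void print_mfw_ver(uint32_t mfw_rev)
  {
      printf("MFW %u.%u.%u.%u\n",
             (mfw_rev & QED_MFW_VERSION_3_MASK) >> QED_MFW_VERSION_3_OFFSET,
             (mfw_rev & QED_MFW_VERSION_2_MASK) >> QED_MFW_VERSION_2_OFFSET,
             (mfw_rev & QED_MFW_VERSION_1_MASK) >> QED_MFW_VERSION_1_OFFSET,
             (mfw_rev & QED_MFW_VERSION_0_MASK) >> QED_MFW_VERSION_0_OFFSET);
  }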

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_if.h   |    9 ++++++++-
 drivers/net/qede/qede_main.c |   14 ++++++--------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 18404fb..1e27428 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -30,12 +30,19 @@ struct qed_dev_info {
 
 	/* MFW version */
 	uint32_t mfw_rev;
+#define QED_MFW_VERSION_0_MASK		0x000000FF
+#define QED_MFW_VERSION_0_OFFSET	0
+#define QED_MFW_VERSION_1_MASK		0x0000FF00
+#define QED_MFW_VERSION_1_OFFSET	8
+#define QED_MFW_VERSION_2_MASK		0x00FF0000
+#define QED_MFW_VERSION_2_OFFSET	16
+#define QED_MFW_VERSION_3_MASK		0xFF000000
+#define QED_MFW_VERSION_3_OFFSET	24
 
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
-	/* To be added... */
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e76346e..1d4f336 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -327,6 +327,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
 
@@ -337,13 +339,7 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 		dev_info->fw_eng = FW_ENGINEERING_VERSION;
 		dev_info->mf_mode = edev->mf_mode;
 		dev_info->tx_switching = false;
-	} else {
-		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
-					&dev_info->fw_minor, &dev_info->fw_rev,
-					&dev_info->fw_eng);
-	}
 
-	if (IS_PF(edev)) {
 		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
 		if (ptt) {
 			ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
@@ -361,12 +357,14 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 			ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
 		}
 	} else {
+		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
+					&dev_info->fw_minor, &dev_info->fw_rev,
+					&dev_info->fw_eng);
+
 		ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
 				      &dev_info->mfw_rev, NULL);
 	}
 
-	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-
 	return 0;
 }
 
-- 
1.7.10.3


* [PATCH v4 22/62] net/qede/base: check active VF queues before stopping
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (22 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 21/62] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 23/62] net/qede/base: set driver type before sending load request Rasesh Mody
                             ` (40 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Make sure VF queues are closed before stopping the vport. A VF that
still has active Rx/Tx queues when requesting a vport stop is now
marked malicious.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   37 ++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 365be25..73c4015 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -232,6 +232,30 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].rxq_active)
+			return true;
+
+	return false;
+}
+
+static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].txq_active)
+			return true;
+
+	return false;
+}
+
 /* TODO - this is linux crc32; Need a way to ifdef it out for linux */
 u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
 {
@@ -1365,8 +1389,10 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 
 	p_vf->num_active_rxqs = 0;
 
-	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++)
+	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
 		p_vf->vf_queues[i].rxq_active = 0;
+		p_vf->vf_queues[i].txq_active = 0;
+	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
 	OSAL_MEMSET(&p_vf->acquire, 0, sizeof(p_vf->acquire));
@@ -1943,6 +1969,15 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->vport_instance--;
 	vf->spoof_chk = false;
 
+	if ((ecore_iov_validate_active_rxq(p_hwfn, vf)) ||
+	    (ecore_iov_validate_active_txq(p_hwfn, vf))) {
+		vf->b_malicious = true;
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] - considered malicious;"
+			  " Unable to stop RX/TX queuess\n",
+			  vf->abs_vf_id);
+	}
+
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
-- 
1.7.10.3


* [PATCH v4 23/62] net/qede/base: set driver type before sending load request
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (23 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 22/62] net/qede/base: check active VF queues before stopping Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 24/62] net/qede/base: prevent driver load with invalid resources Rasesh Mody
                             ` (39 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Set the drv_type before sending the LOAD_REQ mailbox command, and
remove the ver_str field, which is not used by the MFW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    3 +--
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 drivers/net/qede/qede_ethdev.c    |    2 +-
 drivers/net/qede/qede_if.h        |    3 +--
 drivers/net/qede/qede_main.c      |   10 ++++------
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 58c97a3..b8c8bfd 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -30,7 +30,6 @@
 
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
-#define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
 #include "../qede_logs.h" /* @DPDK */
 
@@ -706,7 +705,7 @@ struct ecore_dev {
 
 	int				pcie_width;
 	int				pcie_speed;
-	u8				ver_str[NAME_SIZE]; /* @DPDK */
+
 	/* Add MF related configuration */
 	u8				mcp_rev;
 	u8				boot_mode;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 9f897b5..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -524,7 +524,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
@@ -538,8 +537,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
 			  p_dev->drv_type;
-	OSAL_MEMCPY(&union_data.ver_str, p_dev->ver_str, MCP_DRV_VER_STR_SIZE);
-	mb_params.p_data_src = &union_data;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c372181..d52e1be 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2175,7 +2175,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
-	adapter->ops->common->set_id(edev, edev->name, QEDE_PMD_VERSION);
+	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
 		adapter->dev_info.num_mac_filters =
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1e27428..0a1f7db 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -116,8 +116,7 @@ struct qed_common_ops {
 		     struct rte_pci_device *pci_dev,
 		     enum qed_protocol protocol,
 		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
-	void (*set_id)(struct ecore_dev *edev,
-		char name[], const char ver_str[]);
+	void (*set_name)(struct ecore_dev *edev, char name[]);
 	enum _ecore_status_t
 		(*chain_alloc)(struct ecore_dev *edev,
 			       enum ecore_chain_use_mode
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 1d4f336..a932c5f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -50,7 +50,9 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
 	int rc;
 
 	ecore_init_struct(edev);
+	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 	qdev->protocol = protocol;
+
 	if (is_vf)
 		edev->b_is_vf = true;
 
@@ -420,9 +422,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 	return 0;
 }
 
-static void
-qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
-	   const char ver_str[NAME_SIZE])
+static void qed_set_name(struct ecore_dev *edev, char name[NAME_SIZE])
 {
 	int i;
 
@@ -430,8 +430,6 @@ qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
 	for_each_hwfn(edev, i) {
 		snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
 	}
-	memcpy(edev->ver_str, ver_str, NAME_SIZE);
-	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 }
 
 static uint32_t
@@ -714,7 +712,7 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
 	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
-	INIT_STRUCT_FIELD(set_id, &qed_set_id),
+	INIT_STRUCT_FIELD(set_name, &qed_set_name),
 	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
 	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
 	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
-- 
1.7.10.3


* [PATCH v4 24/62] net/qede/base: prevent driver load with invalid resources
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (24 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 23/62] net/qede/base: set driver type before sending load request Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 25/62] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
                             ` (38 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Prevent storage drivers from attempting to load with invalid resources.
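
For example (numbers hypothetical): with 64 status blocks and 32
command-queue CQs available, both ECORE_FCOE_CQ and ECORE_ISCSI_CQ are
capped at OSAL_MIN_T(u32, 64, 32) = 32, so a storage personality can
never be granted more CQs than either underlying resource allows.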

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 380c5ba..7fce4fd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2437,13 +2437,19 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 			   FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 			   sb_cnt_info.sb_iov_cnt);
 
+	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #SBS=%d num_features=%d\n",
+		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
 		   (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
-		   RESC_NUM(p_hwfn, ECORE_SB),
-		   num_features);
+		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
+		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
+		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
 static enum resource_id_enum
-- 
1.7.10.3


* [PATCH v4 25/62] net/qede/base: add interfaces for MFW TLV request processing
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (25 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 24/62] net/qede/base: prevent driver load with invalid resources Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 26/62] net/qede/base: code refactoring of SP queues Rasesh Mody
                             ` (37 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add new base driver interfaces for Management FW TLV request processing.
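
Each TLV value below is paired with a bool "<field>_set" flag so a
response carries only the fields the driver actually filled in. A
hypothetical filler, for illustration only (the helper name and the
values are not part of the patch):

  /* Populate two generic TLV fields; untouched *_set flags stay false. */
  static void fill_generic_tlv(struct ecore_mfw_tlv_generic *tlv)
  {
      tlv->prom_mode = 1;          /* promiscuous mode active */
      tlv->prom_mode_set = true;   /* mark the value as valid */
      tlv->rx_frames = 123456ULL;  /* sample counter */
      tlv->rx_frames_set = true;
  }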

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    6 +
 drivers/net/qede/base/ecore_mcp_api.h |  301 +++++++++++++++++++++++++++++++++
 2 files changed, 307 insertions(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..79a907b 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,9 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 1be22dd..8cad43d 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -232,6 +232,295 @@ struct ecore_mba_vers {
 	u32 mba_vers[ECORE_MAX_NUM_OF_ROMIMG];
 };
 
+enum ecore_mfw_tlv_type {
+	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x4,	/* iSCSI protocol TLVs */
+};
+
+struct ecore_mfw_tlv_generic {
+	u16 feat_flags;
+	bool feat_flags_set;
+	u64 local_mac;
+	bool local_mac_set;
+	u64 additional_mac1;
+	bool additional_mac1_set;
+	u64 additional_mac2;
+	bool additional_mac2_set;
+	u16 lso_maxoff_size;
+	bool lso_maxoff_size_set;
+	u16 lso_minseg_size;
+	bool lso_minseg_size_set;
+	u8 prom_mode;
+	bool prom_mode_set;
+	u16 tx_descr_size;
+	bool tx_descr_size_set;
+	u16 rx_descr_size;
+	bool rx_descr_size_set;
+	u16 netq_count;
+	bool netq_count_set;
+	u16 flex_vlan;
+	bool flex_vlan_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u32 tcp4_offloads;
+	bool tcp4_offloads_set;
+	u32 tcp6_offloads;
+	bool tcp6_offloads_set;
+	u16 tx_descr_qdepth;
+	bool tx_descr_qdepth_set;
+	u16 rx_descr_qdepth;
+	bool rx_descr_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u8 iov_offload;
+	bool iov_offload_set;
+	u8 txqs_empty;
+	bool txqs_empty_set;
+	u8 rxqs_empty;
+	bool rxqs_empty_set;
+	u8 num_txqs_full;
+	bool num_txqs_full_set;
+	u8 num_rxqs_full;
+	bool num_rxqs_full_set;
+};
+
+struct ecore_mfw_tlv_fcoe {
+	u8 scsi_timeout;
+	bool scsi_timeout_set;
+	u32 rt_tov;
+	bool rt_tov_set;
+	u32 ra_tov;
+	bool ra_tov_set;
+	u32 ed_tov;
+	bool ed_tov_set;
+	u32 cr_tov;
+	bool cr_tov_set;
+	u8 boot_type;
+	bool boot_type_set;
+	u8 npiv_state;
+	bool npiv_state_set;
+	u32 num_npiv_ids;
+	bool num_npiv_ids_set;
+	u8 switch_name[8];
+	bool switch_name_set;
+	u16 switch_portnum;
+	bool switch_portnum_set;
+	u8 switch_portid[3];
+	bool switch_portid_set;
+	u8 vendor_name[8];
+	bool vendor_name_set;
+	u8 switch_model[8];
+	bool switch_model_set;
+	u8 switch_fw_version[8];
+	bool switch_fw_version_set;
+	u8 qos_pri;
+	bool qos_pri_set;
+	u8 port_alias[3];
+	bool port_alias_set;
+	u8 port_state;
+	bool port_state_set;
+	u16 fip_tx_descr_size;
+	bool fip_tx_descr_size_set;
+	u16 fip_rx_descr_size;
+	bool fip_rx_descr_size_set;
+	u16 link_failures;
+	bool link_failures_set;
+	u8 fcoe_boot_progress;
+	bool fcoe_boot_progress_set;
+	u64 rx_bcast;
+	bool rx_bcast_set;
+	u64 tx_bcast;
+	bool tx_bcast_set;
+	u16 fcoe_txq_depth;
+	bool fcoe_txq_depth_set;
+	u16 fcoe_rxq_depth;
+	bool fcoe_rxq_depth_set;
+	u64 fcoe_rx_frames;
+	bool fcoe_rx_frames_set;
+	u64 fcoe_rx_bytes;
+	bool fcoe_rx_bytes_set;
+	u64 fcoe_tx_frames;
+	bool fcoe_tx_frames_set;
+	u64 fcoe_tx_bytes;
+	bool fcoe_tx_bytes_set;
+	u16 crc_count;
+	bool crc_count_set;
+	u32 crc_err_src_fcid[5];
+	bool crc_err_src_fcid_set[5];
+	u8 crc_err_tstamp[5][14];
+	bool crc_err_tstamp_set[5];
+	u16 losync_err;
+	bool losync_err_set;
+	u16 losig_err;
+	bool losig_err_set;
+	u16 primitive_err;
+	bool primitive_err_set;
+	u16 disparity_err;
+	bool disparity_err_set;
+	u16 code_violation_err;
+	bool code_violation_err_set;
+	u32 flogi_param[4];
+	bool flogi_param_set[4];
+	u8 flogi_tstamp[14];
+	bool flogi_tstamp_set;
+	u32 flogi_acc_param[4];
+	bool flogi_acc_param_set[4];
+	u8 flogi_acc_tstamp[14];
+	bool flogi_acc_tstamp_set;
+	u32 flogi_rjt;
+	bool flogi_rjt_set;
+	u8 flogi_rjt_tstamp[14];
+	bool flogi_rjt_tstamp_set;
+	u32 fdiscs;
+	bool fdiscs_set;
+	u8 fdisc_acc;
+	bool fdisc_acc_set;
+	u8 fdisc_rjt;
+	bool fdisc_rjt_set;
+	u8 plogi;
+	bool plogi_set;
+	u8 plogi_acc;
+	bool plogi_acc_set;
+	u8 plogi_rjt;
+	bool plogi_rjt_set;
+	u32 plogi_dst_fcid[5];
+	bool plogi_dst_fcid_set[5];
+	u8 plogi_tstamp[5][14];
+	bool plogi_tstamp_set[5];
+	u32 plogi_acc_src_fcid[5];
+	bool plogi_acc_src_fcid_set[5];
+	u8 plogi_acc_tstamp[5][14];
+	bool plogi_acc_tstamp_set[5];
+	u8 tx_plogos;
+	bool tx_plogos_set;
+	u8 plogo_acc;
+	bool plogo_acc_set;
+	u8 plogo_rjt;
+	bool plogo_rjt_set;
+	u32 plogo_src_fcid[5];
+	bool plogo_src_fcid_set[5];
+	u8 plogo_tstamp[5][14];
+	bool plogo_tstamp_set[5];
+	u8 rx_logos;
+	bool rx_logos_set;
+	u8 tx_accs;
+	bool tx_accs_set;
+	u8 tx_prlis;
+	bool tx_prlis_set;
+	u8 rx_accs;
+	bool rx_accs_set;
+	u8 tx_abts;
+	bool tx_abts_set;
+	u8 rx_abts_acc;
+	bool rx_abts_acc_set;
+	u8 rx_abts_rjt;
+	bool rx_abts_rjt_set;
+	u32 abts_dst_fcid[5];
+	bool abts_dst_fcid_set[5];
+	u8 abts_tstamp[5][14];
+	bool abts_tstamp_set[5];
+	u8 rx_rscn;
+	bool rx_rscn_set;
+	u32 rx_rscn_nport[4];
+	bool rx_rscn_nport_set[4];
+	u8 tx_lun_rst;
+	bool tx_lun_rst_set;
+	u8 abort_task_sets;
+	bool abort_task_sets_set;
+	u8 tx_tprlos;
+	bool tx_tprlos_set;
+	u8 tx_nos;
+	bool tx_nos_set;
+	u8 rx_nos;
+	bool rx_nos_set;
+	u8 ols;
+	bool ols_set;
+	u8 lr;
+	bool lr_set;
+	u8 llr;
+	bool llr_set;
+	u8 tx_lip;
+	bool tx_lip_set;
+	u8 rx_lip;
+	bool rx_lip_set;
+	u8 eofa;
+	bool eofa_set;
+	u8 eofni;
+	bool eofni_set;
+	u8 scsi_chks;
+	bool scsi_chks_set;
+	u8 scsi_cond_met;
+	bool scsi_cond_met_set;
+	u8 scsi_busy;
+	bool scsi_busy_set;
+	u8 scsi_inter;
+	bool scsi_inter_set;
+	u8 scsi_inter_cond_met;
+	bool scsi_inter_cond_met_set;
+	u8 scsi_rsv_conflicts;
+	bool scsi_rsv_conflicts_set;
+	u8 scsi_tsk_full;
+	bool scsi_tsk_full_set;
+	u8 scsi_aca_active;
+	bool scsi_aca_active_set;
+	u8 scsi_tsk_abort;
+	bool scsi_tsk_abort_set;
+	u32 scsi_rx_chk[5];
+	bool scsi_rx_chk_set[5];
+	u8 scsi_chk_tstamp[5][14];
+	bool scsi_chk_tstamp_set[5];
+};
+
+struct ecore_mfw_tlv_iscsi {
+	u8 target_llmnr;
+	bool target_llmnr_set;
+	u8 header_digest;
+	bool header_digest_set;
+	u8 data_digest;
+	bool data_digest_set;
+	u8 auth_method;
+	bool auth_method_set;
+	u16 boot_target_portal;
+	bool boot_target_portal_set;
+	u16 frame_size;
+	bool frame_size_set;
+	u16 tx_desc_size;
+	bool tx_desc_size_set;
+	u16 rx_desc_size;
+	bool rx_desc_size_set;
+	u8 boot_progress;
+	bool boot_progress_set;
+	u16 tx_desc_qdepth;
+	bool tx_desc_qdepth_set;
+	u16 rx_desc_qdepth;
+	bool rx_desc_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u32 cpcp_spcp_map;
+	bool cpcp_spcp_map_set;
+};
+
+union ecore_mfw_tlv_data {
+	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_fcoe fcoe;
+	struct ecore_mfw_tlv_iscsi iscsi;
+};
+
 /**
  * @brief - returns the link params of the hw function
  *
@@ -820,4 +1109,16 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Processes the TLV request from the MFW, i.e., gets the required
+ *          TLV info from the ecore client and sends it to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt);
+
 #endif
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 26/62] net/qede/base: code refactoring of SP queues
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (26 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 25/62] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 27/62] net/qede/base: make L2 queues handle based Rasesh Mody
                             ` (36 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Maintain the slowpath event queue and consumer queue within the HW
function structure, and update the corresponding alloc and free APIs
accordingly. Clean up unused code under the CONFIG_ECORE_LL2 ifdef.

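For reference, the resulting call pattern looks roughly as follows. This
is a sketch distilled from the hunks below, not a literal excerpt; the
wrapper function and its arguments are placeholders:

#include "ecore.h"
#include "ecore_spq.h"

static enum _ecore_status_t example_sp_queues(struct ecore_hwfn *p_hwfn,
					      u16 n_eqes)
{
	enum _ecore_status_t rc;

	/* Allocation now stores the queue inside the hwfn and returns a
	 * status code instead of a pointer.
	 */
	rc = ecore_eq_alloc(p_hwfn, n_eqes);	/* sets p_hwfn->p_eq */
	if (rc != ECORE_SUCCESS)
		goto err;

	rc = ecore_consq_alloc(p_hwfn);		/* sets p_hwfn->p_consq */
	if (rc != ECORE_SUCCESS)
		goto err;

	/* Setup and free likewise take only the hwfn */
	ecore_eq_setup(p_hwfn);
	ecore_consq_setup(p_hwfn);

	return ECORE_SUCCESS;

err:
	/* Both free routines tolerate queues that were never allocated,
	 * so unconditional cleanup is safe here.
	 */
	ecore_eq_free(p_hwfn);
	ecore_consq_free(p_hwfn);
	return rc;
}
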
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   43 +++++++----------------------
 drivers/net/qede/base/ecore_spq.c |   54 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_spq.h |   35 +++++++++---------------
 3 files changed, 52 insertions(+), 80 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7fce4fd..1ce7d8e 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -165,12 +165,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
-		ecore_eq_free(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_free(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_free(p_hwfn);
+		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
-#ifdef CONFIG_ECORE_LL2
-		ecore_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -836,11 +833,6 @@ alloc_err:
 
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
-	struct ecore_consq *p_consq;
-	struct ecore_eq *p_eq;
-#ifdef	CONFIG_ECORE_LL2
-	struct ecore_ll2_info *p_ll2_info;
-#endif
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
@@ -988,24 +980,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			goto alloc_no_mem;
 		}
 
-		p_eq = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
-		if (!p_eq)
-			goto alloc_no_mem;
-		p_hwfn->p_eq = p_eq;
+		rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
+		if (rc)
+			goto alloc_err;
 
-		p_consq = ecore_consq_alloc(p_hwfn);
-		if (!p_consq)
-			goto alloc_no_mem;
-		p_hwfn->p_consq = p_consq;
-
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2) {
-			p_ll2_info = ecore_ll2_alloc(p_hwfn);
-			if (!p_ll2_info)
-				goto alloc_no_mem;
-			p_hwfn->p_ll2_info = p_ll2_info;
-		}
-#endif
+		rc = ecore_consq_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
 
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
@@ -1053,8 +1034,8 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_cxt_mngr_setup(p_hwfn);
 		ecore_spq_setup(p_hwfn);
-		ecore_eq_setup(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_setup(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_setup(p_hwfn);
+		ecore_consq_setup(p_hwfn);
 
 		/* Read shadow of current MFW mailbox */
 		ecore_mcp_read_mb(p_hwfn, p_hwfn->p_main_ptt);
@@ -1065,10 +1046,6 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2)
-			ecore_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index ba26d45..016de74 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -355,7 +355,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 {
 	struct ecore_eq *p_eq;
 
@@ -364,7 +364,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	if (!p_eq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_eq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain*/
@@ -374,7 +374,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      num_elem,
 			      sizeof(union event_ring_element),
-			      &p_eq->chain, OSAL_NULL)) {
+			      &p_eq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain\n");
 		goto eq_allocate_fail;
 	}
@@ -383,24 +383,28 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	ecore_int_register_cb(p_hwfn, ecore_eq_completion,
 			      p_eq, &p_eq->eq_sb_index, &p_eq->p_fw_cons);
 
-	return p_eq;
+	p_hwfn->p_eq = p_eq;
+	return ECORE_SUCCESS;
 
 eq_allocate_fail:
-	ecore_eq_free(p_hwfn, p_eq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_eq);
+	return ECORE_NOMEM;
 }
 
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_eq->chain);
+	ecore_chain_reset(&p_hwfn->p_eq->chain);
 }
 
-void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_eq)
+	if (!p_hwfn->p_eq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_eq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_eq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_eq->chain);
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_eq);
+	p_hwfn->p_eq = OSAL_NULL;
 }
 
 /***************************************************************************
@@ -943,7 +947,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_consq *p_consq;
 
@@ -953,7 +957,7 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_consq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_consq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain */
@@ -963,27 +967,29 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      ECORE_CHAIN_PAGE_SIZE / 0x80,
 			      0x80,
-			      &p_consq->chain, OSAL_NULL)) {
+			      &p_consq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate consq chain");
 		goto consq_allocate_fail;
 	}
 
-	return p_consq;
+	p_hwfn->p_consq = p_consq;
+	return ECORE_SUCCESS;
 
 consq_allocate_fail:
-	ecore_consq_free(p_hwfn, p_consq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_consq);
+	return ECORE_NOMEM;
 }
 
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_consq->chain);
+	ecore_chain_reset(&p_hwfn->p_consq->chain);
 }
 
-void ecore_consq_free(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_consq)
+	if (!p_hwfn->p_consq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_consq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_consq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain);
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq);
 }
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 717ede3..e2468b7 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -194,28 +194,23 @@ void ecore_spq_return_entry(struct ecore_hwfn		*p_hwfn,
  * @param p_hwfn
  * @param num_elem number of elements in the eq
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn	*p_hwfn,
-				 u16			num_elem);
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn	*p_hwfn, u16 num_elem);
 
 /**
- * @brief ecore_eq_setup - Reset the SPQ to its start state.
+ * @brief ecore_eq_setup - Reset the EQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_eq   *p_eq);
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_eq_deallocate - deallocates the given EQ struct.
+ * @brief ecore_eq_free - deallocates the given EQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_eq   *p_eq);
+void ecore_eq_free(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_eq_prod_update - update the FW with default EQ producer
@@ -261,32 +256,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn	*p_hwfn,
 u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_alloc - Allocates & initializes an ConsQ
- *        struct
+ * @brief ecore_consq_alloc - Allocates & initializes a ConsQ struct
  *
  * @param p_hwfn
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn	*p_hwfn);
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_setup - Reset the ConsQ to its start
- *        state.
+ * @brief ecore_consq_setup - Reset the ConsQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_consq   *p_consq);
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_consq_free - deallocates the given ConsQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_consq   *p_consq);
+void ecore_consq_free(struct ecore_hwfn *p_hwfn);
 
 #endif /* __ECORE_SPQ_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 27/62] net/qede/base: make L2 queues handle based
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (27 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 26/62] net/qede/base: code refactoring of SP queues Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 28/62] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
                             ` (35 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

L2 handler changes:

This change removes the queue-id/qzone difference for Tx queues.

It does so mainly by:

a. No longer deriving VF queues from the SBs they're using.
Instead, the ecore-client needs to maintain those and choose the values
to be used by the VF when initializing it.

b. Eliminating the HW-cid array in the hw-function.
To do that, all the rx/tx functionality becomes handle based - when a
queue is started, the caller gets back a (void *) handle, which it later
uses with ecore for the various queue-related operations [update, stop],
as sketched below.

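The resulting start/stop flow for an Rx queue is sketched below, based on
the APIs introduced in this patch; the wrapper function and its queue
parameters are placeholders:

#include "ecore.h"
#include "ecore_l2_api.h"

static enum _ecore_status_t
example_rxq_lifecycle(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
		      struct ecore_queue_start_common_params *p_params,
		      u16 bd_max_bytes, dma_addr_t bd_chain_phys_addr,
		      dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size)
{
	struct ecore_rxq_start_ret_params ret_params;
	enum _ecore_status_t rc;

	rc = ecore_eth_rx_queue_start(p_hwfn, opaque_fid, p_params,
				      bd_max_bytes, bd_chain_phys_addr,
				      cqe_pbl_addr, cqe_pbl_size,
				      &ret_params);
	if (rc != ECORE_SUCCESS)
		return rc;

	/* ret_params.p_prod is the Rx producer address, and
	 * ret_params.p_handle is the opaque handle the caller keeps for
	 * every later operation on this queue.
	 */

	/* ... queue is used here ... */

	return ecore_eth_rx_queue_stop(p_hwfn, ret_params.p_handle,
				       false /* eq_completion_only */,
				       false /* cqe_completion */);
}
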
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 -
 drivers/net/qede/base/ecore_dev.c     |   37 ---
 drivers/net/qede/base/ecore_int.c     |   24 --
 drivers/net/qede/base/ecore_int.h     |   10 -
 drivers/net/qede/base/ecore_iov_api.h |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  526 ++++++++++++++++++---------------
 drivers/net/qede/base/ecore_l2.h      |   84 +++---
 drivers/net/qede/base/ecore_l2_api.h  |  108 ++++---
 drivers/net/qede/base/ecore_sriov.c   |  262 ++++++++++------
 drivers/net/qede/base/ecore_sriov.h   |    4 +-
 drivers/net/qede/base/ecore_vf.c      |  119 +++++---
 drivers/net/qede/base/ecore_vf.h      |   55 ++--
 drivers/net/qede/qede_eth_if.c        |   50 ++--
 drivers/net/qede/qede_eth_if.h        |   22 +-
 drivers/net/qede/qede_rxtx.c          |   42 +--
 drivers/net/qede/qede_rxtx.h          |    2 +
 16 files changed, 723 insertions(+), 659 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b8c8bfd..de0f49a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -394,16 +394,6 @@ struct ecore_hw_info {
 	u16 mtu;
 };
 
-struct ecore_hw_cid_data {
-	u32	cid;
-	bool	b_cid_allocated;
-	u8	vfid; /* 1-based; 0 signals this is for a PF */
-
-	/* Additional identifiers */
-	u16	opaque_fid;
-	u8	vport_id;
-};
-
 /* maximum size of read/write commands (HW limit) */
 #define DMAE_MAX_RW_SIZE	0x2000
 
@@ -566,9 +556,6 @@ struct ecore_hwfn {
 	struct ecore_mcp_info		*mcp_info;
 	struct ecore_dcbx_info		*p_dcbx_info;
 
-	struct ecore_hw_cid_data	*p_tx_cids;
-	struct ecore_hw_cid_data	*p_rx_cids;
-
 	struct ecore_dmae_info		dmae_info;
 
 	/* QM init */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1ce7d8e..c895656 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -155,13 +155,6 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
-		OSAL_FREE(p_dev, p_hwfn->p_tx_cids);
-		OSAL_FREE(p_dev, p_hwfn->p_rx_cids);
-	}
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
@@ -844,36 +837,6 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	if (!p_dev->fw_data)
 		return ECORE_NOMEM;
 
-	/* Allocate Memory for the Queue->CID mapping */
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		u32 num_tx_conns = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-		int tx_size, rx_size;
-
-		/* @@@TMP - resc management, change to actual required size */
-		if (p_hwfn->pf_params.eth_pf_params.num_cons > num_tx_conns)
-			num_tx_conns = p_hwfn->pf_params.eth_pf_params.num_cons;
-		tx_size = sizeof(struct ecore_hw_cid_data) * num_tx_conns;
-		rx_size = sizeof(struct ecore_hw_cid_data) *
-		    RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-
-		p_hwfn->p_tx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						tx_size);
-		if (!p_hwfn->p_tx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Tx Cids\n");
-			goto alloc_no_mem;
-		}
-
-		p_hwfn->p_rx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						rx_size);
-		if (!p_hwfn->p_rx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Rx Cids\n");
-			goto alloc_no_mem;
-		}
-	}
-
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		u32 n_eqes, num_cons;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e5a4359..8dc4d15 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2182,30 +2182,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 	p_sb_cnt_info->sb_free_blk = info->free_blks;
 }
 
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
-{
-	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
-
-	/* Determine origin of SB id */
-	if ((sb_id >= p_info->igu_base_sb) &&
-	    (sb_id < p_info->igu_base_sb + p_info->igu_sb_cnt)) {
-		return sb_id - p_info->igu_base_sb;
-	} else if ((sb_id >= p_info->igu_base_sb_iov) &&
-		   (sb_id < p_info->igu_base_sb_iov +
-			    p_info->igu_sb_cnt_iov)) {
-		/* We want the first VF queue to be adjacent to the
-		 * last PF queue. Since L2 queues can be partial to
-		 * SBs, we'll use the feature instead.
-		 */
-		return sb_id - p_info->igu_base_sb_iov +
-		       FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
-	} else {
-		DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
-			  sb_id);
-		return 0;
-	}
-}
-
 void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
 {
 	int i;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 45358b9..0c8929e 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -172,16 +172,6 @@ void ecore_int_free(struct ecore_hwfn *p_hwfn);
 void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
 /**
- * @brief - Returns an Rx queue index appropriate for usage with given SB.
- *
- * @param p_hwfn
- * @param sb_id - absolute index of SB
- *
- * @return index of Rx queue
- */
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
-
-/**
  * @brief - Enable Interrupt & Attention for hw function
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 9775360..b8dc47b 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -88,6 +88,23 @@ struct ecore_public_vf_info {
 	u16 forced_vlan;
 };
 
+struct ecore_iov_vf_init_params {
+	u16 rel_vf_id;
+
+	/* Number of requested Queues; Currently, we don't support a
+	 * different number of Rx and Tx queues.
+	 */
+	/* TODO - remove this limitation */
+	u16 num_queues;
+
+	/* Allow the client to choose which qzones to use for Rx/Tx,
+	 * and which queue_base to use for Tx queues on a per-queue basis.
+	 * Notice values should be relative to the PF resources.
+	 */
+	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+};
+
 #ifdef CONFIG_ECORE_SW_CHANNEL
 /* This is SW channel related only... */
 enum mbx_state {
@@ -175,15 +192,14 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
  *
  * @param p_hwfn
  * @param p_ptt
- * @param rel_vf_id
- * @param num_rx_queues
+ * @param p_params
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id,
-					      u16 num_rx_queues);
+					      struct ecore_iov_vf_init_params
+						     *p_params);
 
 /**
  * @brief ecore_iov_process_mbx_req - process a request received
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 0220d19..352620a 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,6 +29,120 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid)
+{
+	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
+	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+}
+
+/* This internal version is only meant to be called directly by PFs
+ * initializing CIDs for their VFs.
+ */
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params)
+{
+	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
+	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	if (p_cid == OSAL_NULL)
+		return OSAL_NULL;
+	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
+
+	p_cid->opaque_fid = opaque_fid;
+	p_cid->cid = cid;
+	p_cid->vf_qid = vf_qid;
+	p_cid->rel = *p_params;
+
+	/* Don't try calculating the absolute indices for VFs */
+	if (IS_VF(p_hwfn->p_dev)) {
+		p_cid->abs = p_cid->rel;
+		goto out;
+	}
+
+	/* Calculate the engine-absolute indices of the resources.
+	 * This would guarantee they're valid later on.
+	 * In some cases [SBs] we already have the right values.
+	 */
+	rc = ecore_fw_vport(p_hwfn, p_cid->rel.vport_id, &p_cid->abs.vport_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	rc = ecore_fw_l2_queue(p_hwfn, p_cid->rel.queue_id,
+			       &p_cid->abs.queue_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	/* In case of a PF configuring its VF's queues, the stats-id is already
+	 * absolute [since there's a single index that's suitable per-VF].
+	 */
+	if (b_is_same) {
+		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
+				    &p_cid->abs.stats_id);
+		if (rc != ECORE_SUCCESS)
+			goto fail;
+	} else {
+		p_cid->abs.stats_id = p_cid->rel.stats_id;
+	}
+
+	/* SBs relevant information was already provided as absolute */
+	p_cid->abs.sb = p_cid->rel.sb;
+	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
+
+	/* This is tricky - we're actually interested in whether this is a PF
+	 * entry meant for the VF.
+	 */
+	if (!b_is_same)
+		p_cid->is_vf = true;
+out:
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   p_cid->opaque_fid, p_cid->cid,
+		   p_cid->rel.vport_id, p_cid->abs.vport_id,
+		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.stats_id, p_cid->abs.stats_id,
+		   p_cid->abs.sb, p_cid->abs.sb_idx);
+
+	return p_cid;
+
+fail:
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+	return OSAL_NULL;
+}
+
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+		       u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params)
+{
+	struct ecore_queue_cid *p_cid;
+	u32 cid = 0;
+
+	/* Get a unique firmware CID for this queue, in case it's a PF.
+	 * VF's don't need a CID as the queue configuration will be done
+	 * by PF.
+	 */
+	if (IS_PF(p_hwfn->p_dev)) {
+		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					  &cid) != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+			return OSAL_NULL;
+		}
+	}
+
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, cid);
+
+	return p_cid;
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -558,57 +672,28 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 	return 0;
 }
 
-static void ecore_sp_release_queue_cid(struct ecore_hwfn *p_hwfn,
-				       struct ecore_hw_cid_data *p_cid_data)
-{
-	if (!p_cid_data->b_cid_allocated)
-		return;
-
-	ecore_cxt_release_cid(p_hwfn, p_cid_data->cid);
-	p_cid_data->b_cid_allocated = false;
-}
-
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod)
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size)
 {
 	struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 abs_rx_q_id = 0;
-	u8 abs_vport_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
-	/* Store information for the stop */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	p_rx_cid->cid = cid;
-	p_rx_cid->opaque_fid = opaque_fid;
-	p_rx_cid->vport_id = p_params->vport_id;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_rx_q_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		   opaque_fid, cid, p_params->queue_id,
-		   p_params->vport_id, p_params->sb);
+		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
+		   p_cid->abs.vport_id, p_cid->abs.sb);
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -619,11 +704,11 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 	p_ramrod = &p_ent->ramrod.rx_queue_start;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->vport_id = abs_vport_id;
-	p_ramrod->stats_counter_id = p_params->stats_id;
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 	p_ramrod->complete_cqe_flg = 0;
 	p_ramrod->complete_event_flg = 1;
 
@@ -633,92 +718,88 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_params->vf_qid || b_use_zone_a_prod) {
-		p_ramrod->vf_rx_prod_index = (u8)p_params->vf_qid;
+	if (p_cid->is_vf) {
+		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
-			   b_use_zone_a_prod ? " [legacy]" : "",
-			   p_params->vf_qid);
-		p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
+			   !!p_cid->b_legacy_vf ? " [legacy]" : "",
+			   p_cid->vf_qid);
+		p_ramrod->vf_rx_prod_use_zone_a = !!p_cid->b_legacy_vf;
 	}
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u16 bd_max_bytes,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod)
+			    void OSAL_IOMEM * *pp_producer)
 {
-	struct ecore_hw_cid_data *p_rx_cid;
 	u32 init_prod_val = 0;
-	u16 abs_l2_queue = 0;
-	u8 abs_stats_id = 0;
-	enum _ecore_status_t rc;
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_rxq_start(p_hwfn,
-					     (u8)p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     bd_max_bytes,
-					     bd_chain_phys_addr,
-					     cqe_pbl_addr,
-					     cqe_pbl_size, pp_prod);
-	}
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_l2_queue);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	    GTT_BAR0_MAP_REG_MSDM_RAM +
-	    MSTORM_ETH_PF_PRODS_OFFSET(abs_l2_queue);
+	*pp_producer = (u8 OSAL_IOMEM *)
+		       p_hwfn->regview +
+		       GTT_BAR0_MAP_REG_MSDM_RAM +
+		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
+	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
+					  bd_max_bytes,
+					  bd_chain_phys_addr,
+					  cqe_pbl_addr, cqe_pbl_size);
+}
+
+enum _ecore_status_t
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
 	/* Allocate a CID for the queue */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-				   &p_rx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_rx_cid->b_cid_allocated = true;
-	p_params->stats_id = abs_stats_id;
-	p_params->vf_qid = 0;
-
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_rx_cid->cid,
-					   p_params,
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_start(p_hwfn, p_cid,
+						 bd_max_bytes,
+						 bd_chain_phys_addr,
+						 cqe_pbl_addr, cqe_pbl_size,
+						 &p_ret_params->p_prod);
+	else
+		rc = ecore_vf_pf_rxq_start(p_hwfn, p_cid,
 					   bd_max_bytes,
 					   bd_chain_phys_addr,
 					   cqe_pbl_addr,
 					   cqe_pbl_size,
-					   false);
+					   &p_ret_params->p_prod);
 
+	/* On success, provide the caller with a reference to use as a handle */
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handles,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
@@ -728,14 +809,14 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 qid, abs_rx_q_id = 0;
+	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_rxqs_update(p_hwfn,
-					       rx_queue_id,
+					       (struct ecore_queue_cid **)
+					       pp_rxq_handles,
 					       num_rxqs,
 					       complete_cqe_flg,
 					       complete_event_flg);
@@ -745,12 +826,11 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	init_data.p_comp_data = p_comp_data;
 
 	for (i = 0; i < num_rxqs; i++) {
-		qid = rx_queue_id + i;
-		p_rx_cid = &p_hwfn->p_rx_cids[qid];
+		p_cid = ((struct ecore_queue_cid **)pp_rxq_handles)[i];
 
 		/* Get SPQ entry */
-		init_data.cid = p_rx_cid->cid;
-		init_data.opaque_fid = p_rx_cid->opaque_fid;
+		init_data.cid = p_cid->cid;
+		init_data.opaque_fid = p_cid->opaque_fid;
 
 		rc = ecore_sp_init_request(p_hwfn, &p_ent,
 					   ETH_RAMROD_RX_QUEUE_UPDATE,
@@ -759,41 +839,34 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 			return rc;
 
 		p_ramrod = &p_ent->ramrod.rx_queue_update;
+		p_ramrod->vport_id = p_cid->abs.vport_id;
 
-		ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-		ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
-		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 		p_ramrod->complete_cqe_flg = complete_cqe_flg;
 		p_ramrod->complete_event_flg = complete_event_flg;
 
 		rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-		if (rc)
+		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
 
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only, bool cqe_completion)
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   bool b_eq_completion_only,
+			   bool b_cqe_completion)
 {
-	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
 	struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	u16 abs_rx_q_id = 0;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
-					    cqe_completion);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_rx_cid->cid;
-	init_data.opaque_fid = p_rx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -803,64 +876,54 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.rx_queue_stop;
-
-	ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-	ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) &&
-				      !eq_completion_only) || cqe_completion;
-	p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) ||
-	    eq_completion_only;
+	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+				     b_cqe_completion;
+	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
 
-	ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+enum _ecore_status_t ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_rxq,
+					     bool eq_completion_only,
+					     bool cqe_completion)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_rxq;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_stop(p_hwfn, p_cid,
+						eq_completion_only,
+						cqe_completion);
+	else
+		rc = ecore_vf_pf_rxq_stop(p_hwfn, p_cid, cqe_completion);
 
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id)
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_tx_cid;
-	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
-	u8 abs_vport_id;
-
-	/* Store information for the stop */
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	p_tx_cid->cid = cid;
-	p_tx_cid->opaque_fid = opaque_fid;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->qzone_id, &abs_tx_qzone_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -870,14 +933,14 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.tx_queue_start;
-	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->stats_counter_id = p_params->stats_id;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 
-	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
-	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
+	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
+	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
@@ -887,90 +950,72 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
+			    dma_addr_t pbl_addr, u16 pbl_size,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
-	struct ecore_hw_cid_data *p_tx_cid;
-	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_txq_start(p_hwfn,
-					     p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     pbl_addr,
-					     pbl_size,
-					     pp_doorbell);
-	}
-
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
+	/* TODO - set tc in the pq_params for multi-cos */
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
+					pbl_addr, pbl_size,
+					ecore_get_cm_pq_idx_mcos(p_hwfn, tc));
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
+	/* Provide the caller with the necessary return values */
+	*pp_doorbell = (u8 OSAL_IOMEM *)
+		       p_hwfn->doorbells +
+		       DB_ADDR(p_cid->cid, DQ_DEMS_LEGACY);
 
-	/* Allocate a CID for the queue */
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_tx_cid->b_cid_allocated = true;
+	return ECORE_SUCCESS;
+}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		    opaque_fid, p_tx_cid->cid, p_params->queue_id,
-		    p_params->vport_id, p_params->sb);
+enum _ecore_status_t
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr, u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
 
-	p_params->stats_id = abs_stats_id;
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_INVAL;
 
-	/* TODO - set tc in the pq_params for multi-cos */
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_tx_cid->cid,
-					   p_params,
-					   pbl_addr,
-					   pbl_size,
-					   ecore_get_cm_pq_idx_mcos(p_hwfn,
-								    tc));
-
-	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_start(p_hwfn, p_cid, tc,
+						 pbl_addr, pbl_size,
+						 &p_ret_params->p_doorbell);
+	else
+		rc = ecore_vf_pf_txq_start(p_hwfn, p_cid,
+					   pbl_addr, pbl_size,
+					   &p_ret_params->p_doorbell);
 
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn)
-{
-	return ECORE_NOTIMPL;
-}
-
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id)
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid)
 {
-	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_tx_cid->cid;
-	init_data.opaque_fid = p_tx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -979,11 +1024,22 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_stop(p_hwfn, p_cid);
+	else
+		rc = ecore_vf_pf_txq_stop(p_hwfn, p_cid);
 
-	ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index b598eda..c136389 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,59 +15,66 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
-/**
- * @brief ecore_sp_eth_tx_queue_update -
- *
- * This ramrod updates a TX queue. It is used for setting the active
- * state of the queue.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn);
+struct ecore_queue_cid {
+	/* 'Relative' is a relative term ;-). Usually the indices [not counting
+	 * SBs] would be PF-relative, but there are some cases where that isn't
+	 * the case - specifically for a PF configuring its VF indices it's
+	 * possible some fields [E.g., stats-id] in 'rel' would already be abs.
+	 */
+	struct ecore_queue_start_common_params rel;
+	struct ecore_queue_start_common_params abs;
+	u32 cid;
+	u16 opaque_fid;
+
+	/* VFs queues are mapped differently, so we need to know the
+	 * relative queue associated with them [0-based].
+	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
+	 * and not on the VF itself.
+	 */
+	bool is_vf;
+	u8 vf_qid;
+
+	/* Legacy VFs might have Rx producer located elsewhere */
+	bool b_legacy_vf;
+};
+
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid);
+
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params);
 
 /**
- * @brief - Starts an Rx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts an Rx queue, when queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id, stats_id, sb, sb_idx, vf_qid]
-	  stats_id is absolute packed in p_params.
+ * @param p_cid
  * @param bd_max_bytes
  * @param bd_chain_phys_addr
  * @param cqe_pbl_addr
  * @param cqe_pbl_size
- * @param b_use_zone_a_prod - support legacy VF producers
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod);
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size);
 
 /**
- * @brief - Starts a Tx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts a Tx queue, where queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id,stats_id, sb, sb_idx, vf_qid]
+ * @param p_cid
  * @param pbl_addr
  * @param pbl_size
  * @param p_pq_params - parameters for choosing the PQ for this Tx queue
@@ -75,13 +82,10 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id);
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 8f7b614..af316d3 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -28,22 +28,26 @@ enum ecore_rss_caps {
 #endif
 
 struct ecore_queue_start_common_params {
-	/* Rx/Tx queue relative id to keep obtained cid in corresponding array
-	 * RX - upper-bounded by number of FW-queues
-	 */
-	u16 queue_id;
+	/* Should always be relative to entity sending this. */
 	u8 vport_id;
+	u16 queue_id;
 
-	/* q_zone_id is relative, may be different from queue id
-	 * currently used by Tx-only, upper-bounded by number of FW-queues
-	 */
-	u16 qzone_id;
-
-	/* stats_id is relative or absolute depends on function */
+	/* Relative, but relevant only for PFs */
 	u8 stats_id;
+
+	/* These are always absolute */
 	u16 sb;
-	u16 sb_idx;
-	u16 vf_qid;
+	u8 sb_idx;
+};
+
+struct ecore_rxq_start_ret_params {
+	void OSAL_IOMEM *p_prod;
+	void *p_handle;
+};
+
+struct ecore_txq_start_ret_params {
+	void OSAL_IOMEM *p_doorbell;
+	void *p_handle;
 };
 
 struct ecore_rss_params {
@@ -167,42 +171,37 @@ ecore_filter_accept_cmd(
 	struct ecore_spq_comp_cb	 *p_comp_data);
 
 /**
- * @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
+ * @brief ecore_eth_rx_queue_start - RX Queue Start Ramrod
  *
  * This ramrod initializes an RX Queue for a VPort. An Assert is generated if
  * the VPort ID is not currently initialized.
  *
  * @param p_hwfn
  * @param opaque_fid
- * @p_params			[stats_id is relative, packed in p_params]
+ * @p_params			Inputs; Relative for PF [SB being an exception]
  * @param bd_max_bytes		Maximum bytes that can be placed on a BD
  * @param bd_chain_phys_addr	Physical address of BDs for receive.
  * @param cqe_pbl_addr		Physical address of the CQE PBL Table.
  * @param cqe_pbl_size		Size of the CQE PBL Table
- * @param pp_prod		Pointer to place producer's
- *                              address for the Rx Q (May be
- *				NULL).
+ * @param p_ret_params		Pointed struct to be filled with outputs.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u16 bd_max_bytes,
-			    dma_addr_t bd_chain_phys_addr,
-			    dma_addr_t cqe_pbl_addr,
-			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod);
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_rx_queue_stop -
- *
- * This ramrod closes an RX queue. It sends RX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_rx_queue_stop - This ramrod closes an Rx queue
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
+ * @param p_rxq			Handle of the queue to close
  * @param eq_completion_only	If True completion will be on
  *				EQe, if False completion will be
  *				on EQe if p_hwfn opaque
@@ -213,13 +212,13 @@ ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only,
-			   bool cqe_completion);
+ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			void *p_rxq,
+			bool eq_completion_only,
+			bool cqe_completion);
 
 /**
- * @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
+ * @brief - TX Queue Start Ramrod
  *
  * This ramrod initializes a TX Queue for a VPort. An Assert is generated if
  * the VPort is not currently initialized.
@@ -230,34 +229,29 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
  * @param tc			traffic class to use with this L2 txq
  * @param pbl_addr		address of the pbl array
  * @param pbl_size		number of entries in pbl
- * @param pp_doorbell		Pointer to place doorbell pointer (May be NULL).
- *				This address should be used with the
- *				DIRECT_REG_WR macro.
+ * @param p_ret_params		Pointer to fill the return parameters in.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
-			    void OSAL_IOMEM * *pp_doorbell);
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr,
+			 u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_tx_queue_stop -
- *
- * This ramrod closes a TX queue. It sends TX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_tx_queue_stop - closes a Tx queue
  *
  * @param p_hwfn
- * @param tx_queue_id		TX Queue ID
+ * @param p_txq - handle of the Tx queue to be closed
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id);
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_txq);
 
 enum ecore_tpa_mode	{
 	ECORE_TPA_MODE_NONE,
@@ -389,19 +383,19 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
  * @note Final phase API.
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
- * @param num_rxqs              Allow to update multiple rx
- *				queues, from rx_queue_id to
- *				(rx_queue_id + num_rxqs)
+ * @param pp_rxq_handlers	An array of queue handlers to be updated.
+ * @param num_rxqs              number of queues to update.
  * @param complete_cqe_flg	Post completion to the CQE Ring if set
  * @param complete_event_flg	Post completion to the Event Ring if set
+ * @param comp_mode
+ * @param p_comp_data
  *
  * @return enum _ecore_status_t
  */
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handlers,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 73c4015..7378420 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -238,7 +238,7 @@ static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].rxq_active)
+		if (p_vf->vf_queues[i].p_rx_cid)
 			return true;
 
 	return false;
@@ -250,7 +250,7 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].txq_active)
+		if (p_vf->vf_queues[i].p_tx_cid)
 			return true;
 
 	return false;
@@ -953,17 +953,19 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
-enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id, u16 num_rx_queues)
+enum _ecore_status_t
+ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 struct ecore_iov_vf_init_params *p_params)
 {
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
+	u16 qid, num_irqs;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cids;
 	u8 i;
 
-	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false);
 	if (!vf) {
 		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
 		return ECORE_UNKNOWN_ERROR;
@@ -971,22 +973,52 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	if (vf->b_init) {
 		DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
-			  rel_vf_id);
+			  p_params->rel_vf_id);
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested queue_id */
+	for (i = 0; i < p_params->num_queues; i++) {
+		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
+		u16 max_vf_qzone = min_vf_qzone +
+				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;
+
+		qid = p_params->req_rx_queue[i];
+		if (qid < min_vf_qzone || qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0x%04x,...,0x%04x] available\n",
+				  qid, p_params->rel_vf_id,
+				  min_vf_qzone, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		qid = p_params->req_tx_queue[i];
+		if (qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Tx qid [%04x] for VF[%d]: max qid 0x%04x\n",
+				  qid, p_params->rel_vf_id, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		/* If client *really* wants, Tx qid can be shared with PF */
+		if (qid < min_vf_qzone)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] is using PF qid [0x%04x] for Txq[0x%02x]\n",
+				   p_params->rel_vf_id, qid, i);
+	}
+
 	/* Limit number of queues according to number of CIDs */
 	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d] - requesting to initialize for 0x%04x queues"
 		   " [0x%04x CIDs available]\n",
-		   vf->relative_vf_id, num_rx_queues, (u16)cids);
-	num_rx_queues = OSAL_MIN_T(u16, num_rx_queues, ((u16)cids));
+		   vf->relative_vf_id, p_params->num_queues, (u16)cids);
+	num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids));
 
 	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
 							       p_ptt,
 							       vf,
-							       num_rx_queues);
+							       num_irqs);
 	if (num_of_vf_available_chains == 0) {
 		DP_ERR(p_hwfn, "no available igu sbs\n");
 		return ECORE_NOMEM;
@@ -997,26 +1029,19 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		u16 queue_id = ecore_int_queue_id_from_sb_id(p_hwfn,
-							     vf->igu_sbs[i]);
+		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
 
-		if (queue_id > RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF[%d] will require utilizing of"
-				  " out-of-bounds queues - %04x\n",
-				  vf->relative_vf_id, queue_id);
-			/* TODO - cleanup the already allocate SBs */
-			return ECORE_INVAL;
-		}
+		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
+		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
 		/* CIDs are per-VF, so no problem having them 0-based. */
-		vf->vf_queues[i].fw_rx_qid = queue_id;
-		vf->vf_queues[i].fw_tx_qid = queue_id;
-		vf->vf_queues[i].fw_cid = i;
+		p_queue->fw_cid = i;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
-			   vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   vf->relative_vf_id, i, vf->igu_sbs[i],
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
+			   p_queue->fw_cid);
 	}
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
@@ -1390,8 +1415,19 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		p_vf->vf_queues[i].rxq_active = 0;
-		p_vf->vf_queues[i].txq_active = 0;
+		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+
+		if (p_queue->p_rx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_rx_cid);
+			p_queue->p_rx_cid = OSAL_NULL;
+		}
+
+		if (p_queue->p_tx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_tx_cid);
+			p_queue->p_tx_cid = OSAL_NULL;
+		}
 	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
@@ -1829,14 +1865,14 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			u16 qid;
+			struct ecore_queue_cid *p_cid;
 
-			if (!p_vf->vf_queues[i].rxq_active)
+			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			if (p_cid == OSAL_NULL)
 				continue;
 
-			qid = p_vf->vf_queues[i].fw_rx_qid;
-
-			rc = ecore_sp_eth_rx_queues_update(p_hwfn, qid,
+			rc = ecore_sp_eth_rx_queues_update(p_hwfn,
+							   (void **)&p_cid,
 						   1, 0, 1,
 						   ECORE_SPQ_MODE_EBLOCK,
 						   OSAL_NULL);
@@ -1844,7 +1880,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 				DP_NOTICE(p_hwfn, true,
 					  "Failed to send Rx update"
 					  " fo queue[0x%04x]\n",
-					  qid);
+					  p_cid->rel.queue_id);
 				return rc;
 			}
 		}
@@ -2038,6 +2074,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	bool b_legacy_vf = false;
 	enum _ecore_status_t rc;
@@ -2048,14 +2085,24 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->rx_qid];
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.queue_id = (u8)vf->vf_queues[req->rx_qid].fw_rx_qid;
-	params.vf_qid = req->rx_qid;
+	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
+	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->rx_qid,
+						    &params);
+	if (p_queue->p_rx_cid == OSAL_NULL)
+		goto out;
+
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
@@ -2067,27 +2114,27 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
+	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
-					   vf->vf_queues[req->rx_qid].fw_cid,
-					   &params,
-					   req->bd_max_bytes,
-					   req->rxq_addr,
-					   req->cqe_pbl_addr,
-					   req->cqe_pbl_size,
-					   b_legacy_vf);
 
-	if (rc) {
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
+					p_queue->p_rx_cid,
+					req->bd_max_bytes,
+					req->rxq_addr,
+					req->cqe_pbl_addr,
+					req->cqe_pbl_size);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
+		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
+		p_queue->p_rx_cid = OSAL_NULL;
 	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->rx_qid].rxq_active = true;
 		vf->num_active_rxqs++;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf,
-					status, b_legacy_vf);
+	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status,
+					b_legacy_vf);
 }
 
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
@@ -2138,8 +2185,10 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
+	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
@@ -2148,27 +2197,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->tx_qid];
+
+	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   vf->opaque_fid,
-					   vf->vf_queues[req->tx_qid].fw_cid,
-					   &params,
-					   req->pbl_addr,
-					   req->pbl_size,
-					   ecore_get_cm_pq_idx_vf(p_hwfn,
-							vf->relative_vf_id));
+	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->tx_qid,
+						    &params);
+	if (p_queue->p_tx_cid == OSAL_NULL)
+		goto out;
 
-	if (rc)
+	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
+				    vf->relative_vf_id);
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+					req->pbl_addr, req->pbl_size, pq);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-	else {
+		ecore_eth_queue_cid_release(p_hwfn,
+					    p_queue->p_tx_cid);
+		p_queue->p_tx_cid = OSAL_NULL;
+	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->tx_qid].txq_active = true;
 	}
 
 out:
@@ -2181,6 +2237,7 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
+	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int qid;
 
@@ -2188,16 +2245,18 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		if (vf->vf_queues[qid].rxq_active) {
-			rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_rx_qid, false,
-							cqe_completion);
+		p_queue = &vf->vf_queues[qid];
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].rxq_active = false;
+		if (!p_queue->p_rx_cid)
+			continue;
+
+		rc = ecore_eth_rx_queue_stop(p_hwfn,
+					     p_queue->p_rx_cid,
+					     false, cqe_completion);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2209,21 +2268,23 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_q_info *p_queue;
 	int qid;
 
 	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
-		if (vf->vf_queues[qid].txq_active) {
-			rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_tx_qid);
+		p_queue = &vf->vf_queues[qid];
+		if (!p_queue->p_tx_cid)
+			continue;
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].txq_active = false;
+		rc = ecore_eth_tx_queue_stop(p_hwfn,
+					     p_queue->p_tx_cid);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_queue->p_tx_cid = OSAL_NULL;
 	}
 	return rc;
 }
@@ -2279,10 +2340,11 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
 {
+	struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct vfpf_update_rxq_tlv *req;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
 	u16 qid;
@@ -2293,30 +2355,38 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
+	/* Validate inputs */
+	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
+	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
+		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
+		goto out;
+	}
+
 	for (i = 0; i < req->num_rxqs; i++) {
 		qid = req->rx_qid + i;
 
-		if (!vf->vf_queues[qid].rxq_active) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF rx_qid = %d isn`t active!\n", qid);
-			status = PFVF_STATUS_FAILURE;
-			break;
+		if (!vf->vf_queues[qid].p_rx_cid) {
+			DP_INFO(p_hwfn,
+				"VF[%d] rx_qid = %d isn't active!\n",
+				vf->relative_vf_id, qid);
+			goto out;
 		}
 
-		rc = ecore_sp_eth_rx_queues_update(p_hwfn,
-						   vf->vf_queues[qid].fw_rx_qid,
-						   1,
-						   complete_cqe_flg,
-						   complete_event_flg,
-						   ECORE_SPQ_MODE_EBLOCK,
-						   OSAL_NULL);
-
-		if (rc) {
-			status = PFVF_STATUS_FAILURE;
-			break;
-		}
+		handlers[i] = vf->vf_queues[qid].p_rx_cid;
 	}
 
+	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
+					   req->num_rxqs,
+					   complete_cqe_flg,
+					   complete_event_flg,
+					   ECORE_SPQ_MODE_EBLOCK,
+					   OSAL_NULL);
+	if (rc)
+		goto out;
+
+	status = PFVF_STATUS_SUCCESS;
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
 			       length, status);
 }
@@ -2545,7 +2615,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				  "rss_ind_table[%d] = %d,"
 				  " rxq is out of range\n",
 				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].rxq_active)
+		else if (!vf->vf_queues[q_idx].p_rx_cid)
 			DP_NOTICE(p_hwfn, true,
 				  "rss_ind_table[%d] = %d, rxq is not active\n",
 				  i, q_idx);
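
The sanity checking added in ecore_iov_init_hw_for_vf() reduces to a
simple range rule: Rx qids must fall inside the VF L2 queue-zone range
that begins right after the PF's zones, while Tx qids are only bounded
from above and may deliberately dip into the PF range. A standalone
sketch of the Rx rule, with a hypothetical helper taking plain queue
counts in place of FEAT_NUM():

static bool vf_rx_qid_in_range(u16 qid, u16 pf_l2_queues, u16 vf_l2_queues)
{
	u16 min_vf_qzone = pf_l2_queues;                    /* first VF zone */
	u16 max_vf_qzone = min_vf_qzone + vf_l2_queues - 1; /* last VF zone */

	return qid >= min_vf_qzone && qid <= max_vf_qzone;
}
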
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e9ccc79..d32f931 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -64,10 +64,10 @@ struct ecore_iov_vf_mbx {
 
 struct ecore_vf_q_info {
 	u16 fw_rx_qid;
+	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
+	struct ecore_queue_cid *p_tx_cid;
 	u8 fw_cid;
-	u8 rxq_active;
-	u8 txq_active;
 };
 
 enum vf_state {
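
With the rxq_active/txq_active flags gone, a queue's liveness is inferred
from its cid pointer, as in this sketch (hypothetical helper name):

static bool ecore_vf_rxq_is_active(struct ecore_vf_q_info *p_queue)
{
	return p_queue->p_rx_cid != OSAL_NULL;
}
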
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 05ceefd..60ecd16 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,19 +451,19 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
-enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_qid,
-					   u16 sb,
-					   u8 sb_index,
-					   u16 bd_max_bytes,
-					   dma_addr_t bd_chain_phys_addr,
-					   dma_addr_t cqe_pbl_addr,
-					   u16 cqe_pbl_size,
-					   void OSAL_IOMEM **pp_prod)
+enum _ecore_status_t
+ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      u16 bd_max_bytes,
+		      dma_addr_t bd_chain_phys_addr,
+		      dma_addr_t cqe_pbl_addr,
+		      u16 cqe_pbl_size,
+		      void OSAL_IOMEM **pp_prod)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_rxq_tlv *req;
+	u16 rx_qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -473,19 +473,20 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	req->cqe_pbl_addr = cqe_pbl_addr;
 	req->cqe_pbl_size = cqe_pbl_size;
 	req->rxq_addr = bd_chain_phys_addr;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 	req->bd_max_bytes = bd_max_bytes;
 	req->stat_id = -1; /* Keep initialized, for future compatibility */
 
 	/* If PF is legacy, we'll need to calculate producers ourselves
 	 * as well as clean them.
 	 */
-	if (pp_prod && p_iov->b_pre_fp_hsi) {
+	if (p_iov->b_pre_fp_hsi) {
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)
+			   p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
@@ -510,7 +511,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Learn the address of the producer from the response */
-	if (pp_prod && !p_iov->b_pre_fp_hsi) {
+	if (!p_iov->b_pre_fp_hsi) {
 		u32 init_prod_val = 0;
 
 		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview + resp->offset;
@@ -534,7 +535,8 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
-					  u16 rx_qid, bool cqe_completion)
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_rxqs_tlv *req;
@@ -544,7 +546,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
 
-	req->rx_qid = rx_qid;
+	req->rx_qid = p_cid->rel.queue_id;
 	req->num_rxqs = 1;
 	req->cqe_completion = cqe_completion;
 
@@ -569,29 +571,28 @@ exit:
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell)
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_txq_tlv *req;
+	u16 qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
 
-	req->tx_qid = tx_queue_id;
+	req->tx_qid = qid;
 
 	/* Tx */
 	req->pbl_addr = pbl_addr;
 	req->pbl_size = pbl_size;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -608,32 +609,30 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
-	if (pp_doorbell) {
-		/* Modern PFs provide the actual offsets, while legacy
-		 * provided only the queue id.
-		 */
-		if (!p_iov->b_pre_fp_hsi) {
-			*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						       resp->offset;
-		} else {
-			u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
-
+	/* Modern PFs provide the actual offsets, while legacy
+	 * provided only the queue id.
+	 */
+	if (!p_iov->b_pre_fp_hsi) {
 		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-				DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
-		}
+						resp->offset;
+	} else {
+		u8 cid = p_iov->acquire_resp.resc.cid[qid];
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
-			   tx_queue_id, *pp_doorbell, resp->offset);
+		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
+						DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 	}
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
+		   qid, *pp_doorbell, resp->offset);
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_txqs_tlv *req;
@@ -643,7 +642,7 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
 
-	req->tx_qid = tx_qid;
+	req->tx_qid = p_cid->rel.queue_id;
 	req->num_txqs = 1;
 
 	/* add list termination tlv */
@@ -668,20 +667,36 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
-					     u16 rx_queue_id,
+					     struct ecore_queue_cid **pp_cid,
 					     u8 num_rxqs,
-					     u8 comp_cqe_flg, u8 comp_event_flg)
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
 	struct vfpf_update_rxq_tlv *req;
 	enum _ecore_status_t rc;
 
+	/* TODO - API is limited to assuming contiguous regions of queues,
+	 * but VF queues might not fulfill this requirement.
+	 * Need to consider whether we need new TLVs for this, or whether
+	 * simply doing it iteratively is good enough.
+	 */
+	if (!num_rxqs)
+		return ECORE_INVAL;
+
+again:
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
 
-	req->rx_qid = rx_queue_id;
-	req->num_rxqs = num_rxqs;
+	/* Find the length of the current contiguous range of queues beginning
+	 * at the first queue's index.
+	 */
+	req->rx_qid = (*pp_cid)->rel.queue_id;
+	for (req->num_rxqs = 1; req->num_rxqs < num_rxqs; req->num_rxqs++)
+		if (pp_cid[req->num_rxqs]->rel.queue_id !=
+		    req->rx_qid + req->num_rxqs)
+			break;
 
 	if (comp_cqe_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
@@ -702,9 +717,17 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
+	/* Make sure we're done with all the queues */
+	if (req->num_rxqs < num_rxqs) {
+		num_rxqs -= req->num_rxqs;
+		pp_cid += req->num_rxqs;
+		/* TODO - should we give a non-locked variant instead? */
+		ecore_vf_pf_req_end(p_hwfn, rc);
+		goto again;
+	}
+
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
-
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 6077d60..1afd667 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -53,10 +53,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param cid			- zero based within the VF
- * @param rx_queue_id		- zero based within the VF
- * @param sb			- VF status block for this queue
- * @param sb_index		- Index within the status block
+ * @param p_cid			- Only relative fields are relevant
  * @param bd_max_bytes		- maximum number of bytes per bd
  * @param bd_chain_phys_addr	- physical address of bd chain
  * @param cqe_pbl_addr		- physical address of pbl
@@ -67,9 +64,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
+					   struct ecore_queue_cid *p_cid,
 					   u16 bd_max_bytes,
 					   dma_addr_t bd_chain_phys_addr,
 					   dma_addr_t cqe_pbl_addr,
@@ -81,46 +76,44 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
  *        PF.
  *
  * @param p_hwfn
- * @param tx_queue_id		- zero based within the VF
- * @param sb			- status block for this queue
- * @param sb_index		- index within the status block
+ * @param p_cid
  * @param bd_chain_phys_addr	- physical address of tx chain
  * @param pp_doorbell		- pointer to address to which to
 *				write the doorbell.
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell);
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell);
 
 /**
  * @brief VF - stop the RX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param rx_qid
+ * @param p_cid
  * @param cqe_completion
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			rx_qid,
-					  bool			cqe_completion);
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion);
 
 /**
  * @brief VF - stop the TX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param tx_qid
+ * @param p_cid
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			tx_qid);
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid);
+
+/* TODO - fix all the !SRIOV prototypes */
 
 #ifndef LINUX_REMOVE
 /**
@@ -128,20 +121,18 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
  *        PF
  *
  * @param p_hwfn
- * @param rx_queue_id
+ * @param pp_cid - list of queue-cids which we want to update
  * @param num_rxqs
- * @param init_sge_ring
  * @param comp_cqe_flg
  * @param comp_event_flg
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxqs_update(
-			struct ecore_hwfn	*p_hwfn,
-			u16			rx_queue_id,
-			u8			num_rxqs,
-			u8			comp_cqe_flg,
-			u8			comp_event_flg);
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     struct ecore_queue_cid **pp_cid,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg);
 #endif
 
 /**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index d0f6e87..8e4290c 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -148,7 +148,8 @@ qed_start_rxq(struct ecore_dev *edev,
 	      uint16_t bd_max_bytes,
 	      dma_addr_t bd_chain_phys_addr,
 	      dma_addr_t cqe_pbl_addr,
-	      uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod)
+	      uint16_t cqe_pbl_size,
+	      struct ecore_rxq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -159,12 +160,14 @@ qed_start_rxq(struct ecore_dev *edev,
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_rx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 bd_max_bytes,
-					 bd_chain_phys_addr,
-					 cqe_pbl_addr, cqe_pbl_size, pp_prod);
+	rc = ecore_eth_rx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params,
+				      bd_max_bytes,
+				      bd_chain_phys_addr,
+				      cqe_pbl_addr,
+				      cqe_pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start RXQ#%d\n", p_params->queue_id);
@@ -180,19 +183,17 @@ qed_start_rxq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
+qed_stop_rxq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	int rc, hwfn_index;
 	struct ecore_hwfn *p_hwfn;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-					params->rx_queue_id / edev->num_hwfns,
-					params->eq_completion_only, false);
+	rc = ecore_eth_rx_queue_stop(p_hwfn, handle, true, false);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop RXQ#%d\n", params->rx_queue_id);
+		DP_ERR(edev, "Failed to stop RXQ#%02x\n", rss_id);
 		return rc;
 	}
 
@@ -204,7 +205,8 @@ qed_start_txq(struct ecore_dev *edev,
 	      uint8_t rss_num,
 	      struct ecore_queue_start_common_params *p_params,
 	      dma_addr_t pbl_addr,
-	      uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell)
+	      uint16_t pbl_size,
+	      struct ecore_txq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -213,14 +215,13 @@ qed_start_txq(struct ecore_dev *edev,
 	p_hwfn = &edev->hwfns[hwfn_index];
 
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
-	p_params->qzone_id = p_params->queue_id;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_tx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 0 /* tc */,
-					 pbl_addr, pbl_size, pp_doorbell);
+	rc = ecore_eth_tx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params, 0 /* tc */,
+				      pbl_addr, pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start TXQ#%d\n", p_params->queue_id);
@@ -236,18 +237,17 @@ qed_start_txq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_txq(struct ecore_dev *edev, struct qed_stop_txq_params *params)
+qed_stop_txq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-					params->tx_queue_id / edev->num_hwfns);
+	rc = ecore_eth_tx_queue_stop(p_hwfn, handle);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop TXQ#%d\n", params->tx_queue_id);
+		DP_ERR(edev, "Failed to stop TXQ#%02x\n", rss_id);
 		return rc;
 	}
 
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 37b1b74..12dd828 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -47,13 +47,6 @@ struct qed_dev_eth_info {
 	bool is_legacy;
 };
 
-struct qed_stop_rxq_params {
-	uint8_t rss_id;
-	uint8_t rx_queue_id;
-	uint8_t vport_id;
-	bool eq_completion_only;
-};
-
 struct qed_update_vport_params {
 	uint8_t vport_id;
 	uint8_t update_vport_active_flg;
@@ -78,11 +71,6 @@ struct qed_start_vport_params {
 	bool clear_stats;
 };
 
-struct qed_stop_txq_params {
-	uint8_t rss_id;
-	uint8_t tx_queue_id;
-};
-
 struct qed_eth_ops {
 	const struct qed_common_ops *common;
 
@@ -103,19 +91,21 @@ struct qed_eth_ops {
 			  uint16_t bd_max_bytes,
 			  dma_addr_t bd_chain_phys_addr,
 			  dma_addr_t cqe_pbl_addr,
-			  uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod);
+			  uint16_t cqe_pbl_size,
+			  struct ecore_rxq_start_ret_params *ret_params);
 
 	int (*q_rx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_rxq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*q_tx_start)(struct ecore_dev *edev,
 			  uint8_t rss_num,
 			  struct ecore_queue_start_common_params *p_params,
 			  dma_addr_t pbl_addr,
-			  uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell);
+			  uint16_t pbl_size,
+			  struct ecore_txq_start_ret_params *ret_params);
 
 	int (*q_tx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_txq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*eth_cqe_completion)(struct ecore_dev *edev,
 				  uint8_t rss_id,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01ea9b4..85134fb 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -527,11 +527,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	for_each_queue(i) {
 		fp = &qdev->fp_array[i];
 		if (fp->type & QEDE_FASTPATH_RX) {
+			struct ecore_rxq_start_ret_params ret_params;
+
 			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
 								rx_comp_ring);
 			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
 								rx_comp_ring);
 
+			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
 			q_params.queue_id = i;
 			q_params.vport_id = 0;
@@ -545,13 +548,17 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 					   fp->rxq->rx_bd_ring.p_phys_addr,
 					   p_phys_table,
 					   page_cnt,
-					   &fp->rxq->hw_rxq_prod_addr);
+					   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start rxq #%d failed %d\n",
 				       fp->rxq->queue_id, rc);
 				return rc;
 			}
 
+			/* Use the return parameters */
+			fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
+			fp->rxq->handle = ret_params.p_handle;
+
 			fp->rxq->hw_cons_ptr =
 					&fp->sb_info->sb_virt->pi_array[RX_PI];
 
@@ -561,6 +568,8 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (!(fp->type & QEDE_FASTPATH_TX))
 			continue;
 		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct ecore_txq_start_ret_params ret_params;
+
 			txq = fp->txqs[tc];
 			txq_index = tc * QEDE_RSS_COUNT(qdev) + i;
 
@@ -568,6 +577,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
 
 			memset(&q_params, 0, sizeof(q_params));
+			memset(&ret_params, 0, sizeof(ret_params));
 			q_params.queue_id = txq->queue_id;
 			q_params.vport_id = 0;
 			q_params.sb = fp->sb_info->igu_sb_id;
@@ -576,13 +586,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			rc = qdev->ops->q_tx_start(edev, i, &q_params,
 						   p_phys_table,
 						   page_cnt, /* **pp_doorbell */
-						   &txq->doorbell_addr);
+						   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start txq %u failed %d\n",
 				       txq_index, rc);
 				return rc;
 			}
 
+			txq->doorbell_addr = ret_params.p_doorbell;
+			txq->handle = ret_params.p_handle;
+
 			txq->hw_cons_ptr =
 			    &fp->sb_info->sb_virt->pi_array[TX_PI(tc)];
 			SET_FIELD(txq->tx_db.data.params,
@@ -1399,6 +1412,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp;
 	int rc, tc, i;
 
 	/* Disable the vport */
@@ -1420,7 +1434,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Flush Tx queues. If needed, request drain from MCP */
 	for_each_queue(i) {
-		struct qede_fastpath *fp = &qdev->fp_array[i];
+		fp = &qdev->fp_array[i];
 
 		if (fp->type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
@@ -1435,23 +1449,17 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Stop all Queues in reverse order */
 	for (i = QEDE_QUEUE_CNT(qdev) - 1; i >= 0; i--) {
-		struct qed_stop_rxq_params rx_params;
+		fp = &qdev->fp_array[i];
 
 		/* Stop the Tx Queue(s) */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
-				struct qed_stop_txq_params tx_params;
-				u8 val;
-
-				tx_params.rss_id = i;
-				val = qdev->fp_array[i].txqs[tc]->queue_id;
-				tx_params.tx_queue_id = val;
-
+				struct qede_tx_queue *txq = fp->txqs[tc];
 				DP_INFO(edev, "Stopping tx queues\n");
-				rc = qdev->ops->q_tx_stop(edev, &tx_params);
+				rc = qdev->ops->q_tx_stop(edev, i, txq->handle);
 				if (rc) {
 					DP_ERR(edev, "Failed to stop TXQ #%d\n",
-					       tx_params.tx_queue_id);
+					       i);
 					return rc;
 				}
 			}
@@ -1459,14 +1467,8 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 		/* Stop the Rx Queue */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_RX) {
-			memset(&rx_params, 0, sizeof(rx_params));
-			rx_params.rss_id = i;
-			rx_params.rx_queue_id = qdev->fp_array[i].rxq->queue_id;
-			rx_params.eq_completion_only = 1;
-
 			DP_INFO(edev, "Stopping rx queues\n");
-
-			rc = qdev->ops->q_rx_stop(edev, &rx_params);
+			rc = qdev->ops->q_rx_stop(edev, i, fp->rxq->handle);
 			if (rc) {
 				DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
 				return rc;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 9a393e9..17a2f0c 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -156,6 +156,7 @@ struct qede_rx_queue {
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 /*
@@ -187,6 +188,7 @@ struct qede_tx_queue {
 	uint64_t xmit_pkts;
 	bool is_legacy;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 struct qede_fastpath {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 28/62] net/qede/base: add support for handling TLV request from MFW
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (28 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 27/62] net/qede/base: make L2 queues handle based Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:51           ` [PATCH v4 29/62] net/qede/base: optimize cache-line access Rasesh Mody
                             ` (34 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add support for handling the TLV request from Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    3 +
 drivers/net/qede/base/ecore_mcp.c     |    6 -
 drivers/net/qede/base/ecore_mcp.h     |    8 +
 drivers/net/qede/base/ecore_mcp_api.h |   44 +-
 drivers/net/qede/base/ecore_mng_tlv.c | 1536 +++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_if.h            |   21 +
 6 files changed, 1591 insertions(+), 27 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 63ee6d5..82e3ebd 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -419,5 +419,8 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 	qede_get_mcp_proto_stats(dev, type, stats)
 
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
+#define OSAL_MFW_TLV_REQ(p_hwfn) (0)
+#define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
+
 
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 79a907b..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,9 +2502,3 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
-
-enum _ecore_status_t
-ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	return ECORE_SUCCESS;
-}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index d77b5df..0708923 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -70,6 +70,14 @@ struct ecore_mcp_mb_params {
 	u32 mcp_param;
 };
 
+struct ecore_drv_tlv_hdr {
+	u8 tlv_type;	/* According to the enum below */
+	u8 tlv_length;	/* In dwords - not including this header */
+	u8 tlv_reserved;
+#define ECORE_DRV_TLV_FLAGS_CHANGED 0x01
+	u8 tlv_flags;
+};
+
 /**
  * @brief Initialize the interface with the MCP
  *
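
Note the length convention: tlv_length counts value dwords and excludes
the one-dword header, so a request buffer is walked as in this sketch
(hypothetical helper; the TLV_TYPE()/TLV_LENGTH() byte accessors are the
ones added in ecore_mng_tlv.c below):

static void ecore_walk_tlv_req(u32 *p_buf, u32 size_dwords)
{
	u32 offset = 0;

	while (offset < size_dwords) {
		u8 *p_hdr = (u8 *)&p_buf[offset];
		u8 len = TLV_LENGTH(p_hdr); /* value length, in dwords */

		/* ... look up TLV_TYPE(p_hdr) and fill its value here ... */
		offset += 1 + len; /* skip header dword plus value dwords */
	}
}
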
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 8cad43d..190c135 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -233,9 +233,11 @@ struct ecore_mba_vers {
 };
 
 enum ecore_mfw_tlv_type {
-	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
-	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
-	ECORE_MFW_TLV_ISCSI = 0x4,	/* SCSI protocol TLVs */
+	ECORE_MFW_TLV_GENERIC = 0x1, /* Core driver TLVs */
+	ECORE_MFW_TLV_ETH = 0x2, /* L2 driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x4, /* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x8, /* SCSI protocol TLVs */
+	ECORE_MFW_TLV_MAX = 0x16,
 };
 
 struct ecore_mfw_tlv_generic {
@@ -247,6 +249,21 @@ struct ecore_mfw_tlv_generic {
 	bool additional_mac1_set;
 	u64 additional_mac2;
 	bool additional_mac2_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+};
+
+struct ecore_mfw_tlv_eth {
 	u16 lso_maxoff_size;
 	bool lso_maxoff_size_set;
 	u16 lso_minseg_size;
@@ -259,12 +276,6 @@ struct ecore_mfw_tlv_generic {
 	bool rx_descr_size_set;
 	u16 netq_count;
 	bool netq_count_set;
-	u16 flex_vlan;
-	bool flex_vlan_set;
-	u8 drv_state;
-	bool drv_state_set;
-	u8 pxe_progress;
-	bool pxe_progress_set;
 	u32 tcp4_offloads;
 	bool tcp4_offloads_set;
 	u32 tcp6_offloads;
@@ -273,14 +284,6 @@ struct ecore_mfw_tlv_generic {
 	bool tx_descr_qdepth_set;
 	u16 rx_descr_qdepth;
 	bool rx_descr_qdepth_set;
-	u64 rx_frames;
-	bool rx_frames_set;
-	u64 rx_bytes;
-	bool rx_bytes_set;
-	u64 tx_frames;
-	bool tx_frames_set;
-	u64 tx_bytes;
-	bool tx_bytes_set;
 	u8 iov_offload;
 	bool iov_offload_set;
 	u8 txqs_empty;
@@ -446,8 +449,8 @@ struct ecore_mfw_tlv_fcoe {
 	bool ols_set;
 	u8 lr;
 	bool lr_set;
-	u8 llr;
-	bool llrt;
+	u8 lrr;
+	bool lrr_set;
 	u8 tx_lip;
 	bool tx_lip_set;
 	u8 rx_lip;
@@ -511,12 +514,11 @@ struct ecore_mfw_tlv_iscsi {
 	bool tx_frames_set;
 	u64 tx_bytes;
 	bool tx_bytes_set;
-	u32 cpcp_spcp_map;
-	bool cpcp_spcp_map_set;
 };
 
 union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_eth eth;
 	struct ecore_mfw_tlv_fcoe fcoe;
 	struct ecore_mfw_tlv_iscsi iscsi;
 };
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
new file mode 100644
index 0000000..0065d12
--- /dev/null
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -0,0 +1,1536 @@
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_status.h"
+#include "ecore_mcp.h"
+#include "ecore_hw.h"
+#include "reg_addr.h"
+
+#define TLV_TYPE(p)	(p[0])
+#define TLV_LENGTH(p)	(p[1])
+#define TLV_FLAGS(p)	(p[3])
+
+static enum _ecore_status_t
+ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
+{
+	switch (tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+	case DRV_TLV_OS_DRIVER_STATES:
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+	case DRV_TLV_RX_BYTES_RECEIVED:
+	case DRV_TLV_TX_FRAMES_SENT:
+	case DRV_TLV_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_GENERIC;
+		break;
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+	case DRV_TLV_PROMISCUOUS_MODE:
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_IOV_OFFLOAD:
+	case DRV_TLV_TX_QUEUES_EMPTY:
+	case DRV_TLV_RX_QUEUES_EMPTY:
+	case DRV_TLV_TX_QUEUES_FULL:
+	case DRV_TLV_RX_QUEUES_FULL:
+		*tlv_group |= ECORE_MFW_TLV_ETH;
+		break;
+	case DRV_TLV_SCSI_TO:
+	case DRV_TLV_R_T_TOV:
+	case DRV_TLV_R_A_TOV:
+	case DRV_TLV_E_D_TOV:
+	case DRV_TLV_CR_TOV:
+	case DRV_TLV_BOOT_TYPE:
+	case DRV_TLV_NPIV_STATE:
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+	case DRV_TLV_SWITCH_NAME:
+	case DRV_TLV_SWITCH_PORT_NUM:
+	case DRV_TLV_SWITCH_PORT_ID:
+	case DRV_TLV_VENDOR_NAME:
+	case DRV_TLV_SWITCH_MODEL:
+	case DRV_TLV_SWITCH_FW_VER:
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+	case DRV_TLV_PORT_ALIAS:
+	case DRV_TLV_PORT_STATE:
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_LINK_FAILURE_COUNT:
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+	case DRV_TLV_CRC_ERROR_COUNT:
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_RJT:
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+	case DRV_TLV_FDISCS_SENT_COUNT:
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_SENT_COUNT:
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+	case DRV_TLV_LOGOS_ISSUED:
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+	case DRV_TLV_LOGOS_RECEIVED:
+	case DRV_TLV_ACCS_ISSUED:
+	case DRV_TLV_PRLIS_ISSUED:
+	case DRV_TLV_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_SENT_COUNT:
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+	case DRV_TLV_RSCNS_RECEIVED:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+	case DRV_TLV_LUN_RESETS_ISSUED:
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+	case DRV_TLV_TPRLOS_SENT:
+	case DRV_TLV_NOS_SENT_COUNT:
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+	case DRV_TLV_OLS_COUNT:
+	case DRV_TLV_LR_COUNT:
+	case DRV_TLV_LRR_COUNT:
+	case DRV_TLV_LIP_SENT_COUNT:
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+	case DRV_TLV_EOFA_COUNT:
+	case DRV_TLV_EOFNI_COUNT:
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		*tlv_group = ECORE_MFW_TLV_FCOE;
+		break;
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_AUTHENTICATION_METHOD:
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+	case DRV_TLV_MAX_FRAME_SIZE:
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_ISCSI;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static int
+ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_generic *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+		if (p_drv_buf->feat_flags_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->feat_flags;
+			return sizeof(p_drv_buf->feat_flags);
+		}
+		break;
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+		if (p_drv_buf->local_mac_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->local_mac;
+			return sizeof(p_drv_buf->local_mac);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+		if (p_drv_buf->additional_mac1_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1;
+			return sizeof(p_drv_buf->additional_mac1);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+		if (p_drv_buf->additional_mac2_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2;
+			return sizeof(p_drv_buf->additional_mac2);
+		}
+		break;
+	case DRV_TLV_OS_DRIVER_STATES:
+		if (p_drv_buf->drv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->drv_state;
+			return sizeof(p_drv_buf->drv_state);
+		}
+		break;
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+		if (p_drv_buf->pxe_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress;
+			return sizeof(p_drv_buf->pxe_progress);
+		}
+		break;
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
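+
+/* Usage sketch (illustrative only, not part of this file): a get-value
+ * helper such as ecore_mfw_get_gen_tlv_value() is consumed by the TLV
+ * processor roughly as follows - only fields the OS layer marked *_set
+ * are copied into the reply, and the changed flag from ecore_mcp.h is
+ * raised for them:
+ *
+ *	len = ecore_mfw_get_gen_tlv_value(p_tlv, p_drv_buf, &p_value);
+ *	if (len > 0) {
+ *		OSAL_MEMCPY(p_reply_buf, p_value, len);
+ *		p_tlv->tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
+ *	}
+ */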
+
+static int
+ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_eth *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+		if (p_drv_buf->lso_maxoff_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size;
+			return sizeof(p_drv_buf->lso_maxoff_size);
+		}
+		break;
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+		if (p_drv_buf->lso_minseg_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size;
+			return sizeof(p_drv_buf->lso_minseg_size);
+		}
+		break;
+	case DRV_TLV_PROMISCUOUS_MODE:
+		if (p_drv_buf->prom_mode_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->prom_mode;
+			return sizeof(p_drv_buf->prom_mode);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size;
+			return sizeof(p_drv_buf->tx_descr_size);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size;
+			return sizeof(p_drv_buf->rx_descr_size);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+		if (p_drv_buf->netq_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->netq_count;
+			return sizeof(p_drv_buf->netq_count);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+		if (p_drv_buf->tcp4_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads;
+			return sizeof(p_drv_buf->tcp4_offloads);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+		if (p_drv_buf->tcp6_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads;
+			return sizeof(p_drv_buf->tcp6_offloads);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth;
+			return sizeof(p_drv_buf->tx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth;
+			return sizeof(p_drv_buf->rx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_IOV_OFFLOAD:
+		if (p_drv_buf->iov_offload_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->iov_offload;
+			return sizeof(p_drv_buf->iov_offload);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_EMPTY:
+		if (p_drv_buf->txqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty;
+			return sizeof(p_drv_buf->txqs_empty);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_EMPTY:
+		if (p_drv_buf->rxqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty;
+			return sizeof(p_drv_buf->rxqs_empty);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_FULL:
+		if (p_drv_buf->num_txqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full;
+			return sizeof(p_drv_buf->num_txqs_full);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_FULL:
+		if (p_drv_buf->num_rxqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full;
+			return sizeof(p_drv_buf->num_rxqs_full);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			     struct ecore_mfw_tlv_fcoe *p_drv_buf,
+			     u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_SCSI_TO:
+		if (p_drv_buf->scsi_timeout_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout;
+			return sizeof(p_drv_buf->scsi_timeout);
+		}
+		break;
+	case DRV_TLV_R_T_TOV:
+		if (p_drv_buf->rt_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rt_tov;
+			return sizeof(p_drv_buf->rt_tov);
+		}
+		break;
+	case DRV_TLV_R_A_TOV:
+		if (p_drv_buf->ra_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ra_tov;
+			return sizeof(p_drv_buf->ra_tov);
+		}
+		break;
+	case DRV_TLV_E_D_TOV:
+		if (p_drv_buf->ed_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ed_tov;
+			return sizeof(p_drv_buf->ed_tov);
+		}
+		break;
+	case DRV_TLV_CR_TOV:
+		if (p_drv_buf->cr_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->cr_tov;
+			return sizeof(p_drv_buf->cr_tov);
+		}
+		break;
+	case DRV_TLV_BOOT_TYPE:
+		if (p_drv_buf->boot_type_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_type;
+			return sizeof(p_drv_buf->boot_type);
+		}
+		break;
+	case DRV_TLV_NPIV_STATE:
+		if (p_drv_buf->npiv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->npiv_state;
+			return sizeof(p_drv_buf->npiv_state);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+		if (p_drv_buf->num_npiv_ids_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids;
+			return sizeof(p_drv_buf->num_npiv_ids);
+		}
+		break;
+	case DRV_TLV_SWITCH_NAME:
+		if (p_drv_buf->switch_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_name;
+			return sizeof(p_drv_buf->switch_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_NUM:
+		if (p_drv_buf->switch_portnum_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum;
+			return sizeof(p_drv_buf->switch_portnum);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_ID:
+		if (p_drv_buf->switch_portid_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portid;
+			return sizeof(p_drv_buf->switch_portid);
+		}
+		break;
+	case DRV_TLV_VENDOR_NAME:
+		if (p_drv_buf->vendor_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->vendor_name;
+			return sizeof(p_drv_buf->vendor_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_MODEL:
+		if (p_drv_buf->switch_model_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_model;
+			return sizeof(p_drv_buf->switch_model);
+		}
+		break;
+	case DRV_TLV_SWITCH_FW_VER:
+		if (p_drv_buf->switch_fw_version_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version;
+			return sizeof(p_drv_buf->switch_fw_version);
+		}
+		break;
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+		if (p_drv_buf->qos_pri_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->qos_pri;
+			return sizeof(p_drv_buf->qos_pri);
+		}
+		break;
+	case DRV_TLV_PORT_ALIAS:
+		if (p_drv_buf->port_alias_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_alias;
+			return sizeof(p_drv_buf->port_alias);
+		}
+		break;
+	case DRV_TLV_PORT_STATE:
+		if (p_drv_buf->port_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_state;
+			return sizeof(p_drv_buf->port_state);
+		}
+		break;
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size;
+			return sizeof(p_drv_buf->fip_tx_descr_size);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size;
+			return sizeof(p_drv_buf->fip_rx_descr_size);
+		}
+		break;
+	case DRV_TLV_LINK_FAILURE_COUNT:
+		if (p_drv_buf->link_failures_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->link_failures;
+			return sizeof(p_drv_buf->link_failures);
+		}
+		break;
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+		if (p_drv_buf->fcoe_boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress;
+			return sizeof(p_drv_buf->fcoe_boot_progress);
+		}
+		break;
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+		if (p_drv_buf->rx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast;
+			return sizeof(p_drv_buf->rx_bcast);
+		}
+		break;
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+		if (p_drv_buf->tx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast;
+			return sizeof(p_drv_buf->tx_bcast);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_txq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth;
+			return sizeof(p_drv_buf->fcoe_txq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_rxq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth;
+			return sizeof(p_drv_buf->fcoe_rxq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames;
+			return sizeof(p_drv_buf->fcoe_rx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes;
+			return sizeof(p_drv_buf->fcoe_rx_bytes);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+		if (p_drv_buf->fcoe_tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames;
+			return sizeof(p_drv_buf->fcoe_tx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+		if (p_drv_buf->fcoe_tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes;
+			return sizeof(p_drv_buf->fcoe_tx_bytes);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_COUNT:
+		if (p_drv_buf->crc_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_count;
+			return sizeof(p_drv_buf->crc_count);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0];
+			return sizeof(p_drv_buf->crc_err_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1];
+			return sizeof(p_drv_buf->crc_err_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2];
+			return sizeof(p_drv_buf->crc_err_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3];
+			return sizeof(p_drv_buf->crc_err_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4];
+			return sizeof(p_drv_buf->crc_err_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0];
+			return sizeof(p_drv_buf->crc_err_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1];
+			return sizeof(p_drv_buf->crc_err_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2];
+			return sizeof(p_drv_buf->crc_err_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3];
+			return sizeof(p_drv_buf->crc_err_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4];
+			return sizeof(p_drv_buf->crc_err_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+		if (p_drv_buf->losync_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losync_err;
+			return sizeof(p_drv_buf->losync_err);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+		if (p_drv_buf->losig_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losig_err;
+			return sizeof(p_drv_buf->losig_err);
+		}
+		break;
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+		if (p_drv_buf->primtive_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->primtive_err;
+			return sizeof(p_drv_buf->primtive_err);
+		}
+		break;
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+		if (p_drv_buf->disparity_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->disparity_err;
+			return sizeof(p_drv_buf->disparity_err);
+		}
+		break;
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+		if (p_drv_buf->code_violation_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err;
+			return sizeof(p_drv_buf->code_violation_err);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0];
+			return sizeof(p_drv_buf->flogi_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1];
+			return sizeof(p_drv_buf->flogi_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2];
+			return sizeof(p_drv_buf->flogi_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3];
+			return sizeof(p_drv_buf->flogi_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+		if (p_drv_buf->flogi_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp;
+			return sizeof(p_drv_buf->flogi_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_acc_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0];
+			return sizeof(p_drv_buf->flogi_acc_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_acc_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1];
+			return sizeof(p_drv_buf->flogi_acc_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_acc_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2];
+			return sizeof(p_drv_buf->flogi_acc_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_acc_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3];
+			return sizeof(p_drv_buf->flogi_acc_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+		if (p_drv_buf->flogi_acc_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp;
+			return sizeof(p_drv_buf->flogi_acc_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT:
+		if (p_drv_buf->flogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt;
+			return sizeof(p_drv_buf->flogi_rjt);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+		if (p_drv_buf->flogi_rjt_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp;
+			return sizeof(p_drv_buf->flogi_rjt_tstamp);
+		}
+		break;
+	case DRV_TLV_FDISCS_SENT_COUNT:
+		if (p_drv_buf->fdiscs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdiscs;
+			return sizeof(p_drv_buf->fdiscs);
+		}
+		break;
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+		if (p_drv_buf->fdisc_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc;
+			return sizeof(p_drv_buf->fdisc_acc);
+		}
+		break;
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+		if (p_drv_buf->fdisc_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt;
+			return sizeof(p_drv_buf->fdisc_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_SENT_COUNT:
+		if (p_drv_buf->plogi_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi;
+			return sizeof(p_drv_buf->plogi);
+		}
+		break;
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+		if (p_drv_buf->plogi_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc;
+			return sizeof(p_drv_buf->plogi_acc);
+		}
+		break;
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+		if (p_drv_buf->plogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt;
+			return sizeof(p_drv_buf->plogi_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0];
+			return sizeof(p_drv_buf->plogi_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1];
+			return sizeof(p_drv_buf->plogi_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2];
+			return sizeof(p_drv_buf->plogi_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3];
+			return sizeof(p_drv_buf->plogi_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4];
+			return sizeof(p_drv_buf->plogi_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0];
+			return sizeof(p_drv_buf->plogi_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1];
+			return sizeof(p_drv_buf->plogi_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2];
+			return sizeof(p_drv_buf->plogi_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3];
+			return sizeof(p_drv_buf->plogi_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4];
+			return sizeof(p_drv_buf->plogi_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_ISSUED:
+		if (p_drv_buf->tx_plogos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos;
+			return sizeof(p_drv_buf->tx_plogos);
+		}
+		break;
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+		if (p_drv_buf->plogo_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc;
+			return sizeof(p_drv_buf->plogo_acc);
+		}
+		break;
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+		if (p_drv_buf->plogo_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt;
+			return sizeof(p_drv_buf->plogo_rjt);
+		}
+		break;
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0];
+			return sizeof(p_drv_buf->plogo_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1];
+			return sizeof(p_drv_buf->plogo_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2];
+			return sizeof(p_drv_buf->plogo_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3];
+			return sizeof(p_drv_buf->plogo_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4];
+			return sizeof(p_drv_buf->plogo_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0];
+			return sizeof(p_drv_buf->plogo_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1];
+			return sizeof(p_drv_buf->plogo_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2];
+			return sizeof(p_drv_buf->plogo_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3];
+			return sizeof(p_drv_buf->plogo_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4];
+			return sizeof(p_drv_buf->plogo_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_RECEIVED:
+		if (p_drv_buf->rx_logos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_logos;
+			return sizeof(p_drv_buf->rx_logos);
+		}
+		break;
+	case DRV_TLV_ACCS_ISSUED:
+		if (p_drv_buf->tx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_accs;
+			return sizeof(p_drv_buf->tx_accs);
+		}
+		break;
+	case DRV_TLV_PRLIS_ISSUED:
+		if (p_drv_buf->tx_prlis_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis;
+			return sizeof(p_drv_buf->tx_prlis);
+		}
+		break;
+	case DRV_TLV_ACCS_RECEIVED:
+		if (p_drv_buf->rx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_accs;
+			return sizeof(p_drv_buf->rx_accs);
+		}
+		break;
+	case DRV_TLV_ABTS_SENT_COUNT:
+		if (p_drv_buf->tx_abts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_abts;
+			return sizeof(p_drv_buf->tx_abts);
+		}
+		break;
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+		if (p_drv_buf->rx_abts_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc;
+			return sizeof(p_drv_buf->rx_abts_acc);
+		}
+		break;
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+		if (p_drv_buf->rx_abts_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt;
+			return sizeof(p_drv_buf->rx_abts_rjt);
+		}
+		break;
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0];
+			return sizeof(p_drv_buf->abts_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1];
+			return sizeof(p_drv_buf->abts_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2];
+			return sizeof(p_drv_buf->abts_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3];
+			return sizeof(p_drv_buf->abts_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4];
+			return sizeof(p_drv_buf->abts_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0];
+			return sizeof(p_drv_buf->abts_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1];
+			return sizeof(p_drv_buf->abts_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2];
+			return sizeof(p_drv_buf->abts_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3];
+			return sizeof(p_drv_buf->abts_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4];
+			return sizeof(p_drv_buf->abts_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_RSCNS_RECEIVED:
+		if (p_drv_buf->rx_rscn_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn;
+			return sizeof(p_drv_buf->rx_rscn);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+		if (p_drv_buf->rx_rscn_nport_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0];
+			return sizeof(p_drv_buf->rx_rscn_nport[0]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+		if (p_drv_buf->rx_rscn_nport_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1];
+			return sizeof(p_drv_buf->rx_rscn_nport[1]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+		if (p_drv_buf->rx_rscn_nport_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2];
+			return sizeof(p_drv_buf->rx_rscn_nport[2]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+		if (p_drv_buf->rx_rscn_nport_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3];
+			return sizeof(p_drv_buf->rx_rscn_nport[3]);
+		}
+		break;
+	case DRV_TLV_LUN_RESETS_ISSUED:
+		if (p_drv_buf->tx_lun_rst_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst;
+			return sizeof(p_drv_buf->tx_lun_rst);
+		}
+		break;
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+		if (p_drv_buf->abort_task_sets_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets;
+			return sizeof(p_drv_buf->abort_task_sets);
+		}
+		break;
+	case DRV_TLV_TPRLOS_SENT:
+		if (p_drv_buf->tx_tprlos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos;
+			return sizeof(p_drv_buf->tx_tprlos);
+		}
+		break;
+	case DRV_TLV_NOS_SENT_COUNT:
+		if (p_drv_buf->tx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_nos;
+			return sizeof(p_drv_buf->tx_nos);
+		}
+		break;
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+		if (p_drv_buf->rx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_nos;
+			return sizeof(p_drv_buf->rx_nos);
+		}
+		break;
+	case DRV_TLV_OLS_COUNT:
+		if (p_drv_buf->ols_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ols;
+			return sizeof(p_drv_buf->ols);
+		}
+		break;
+	case DRV_TLV_LR_COUNT:
+		if (p_drv_buf->lr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lr;
+			return sizeof(p_drv_buf->lr);
+		}
+		break;
+	case DRV_TLV_LRR_COUNT:
+		if (p_drv_buf->lrr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lrr;
+			return sizeof(p_drv_buf->lrr);
+		}
+		break;
+	case DRV_TLV_LIP_SENT_COUNT:
+		if (p_drv_buf->tx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lip;
+			return sizeof(p_drv_buf->tx_lip);
+		}
+		break;
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+		if (p_drv_buf->rx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_lip;
+			return sizeof(p_drv_buf->rx_lip);
+		}
+		break;
+	case DRV_TLV_EOFA_COUNT:
+		if (p_drv_buf->eofa_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofa;
+			return sizeof(p_drv_buf->eofa);
+		}
+		break;
+	case DRV_TLV_EOFNI_COUNT:
+		if (p_drv_buf->eofni_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofni;
+			return sizeof(p_drv_buf->eofni);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+		if (p_drv_buf->scsi_chks_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks;
+			return sizeof(p_drv_buf->scsi_chks);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met;
+			return sizeof(p_drv_buf->scsi_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+		if (p_drv_buf->scsi_busy_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy;
+			return sizeof(p_drv_buf->scsi_busy);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+		if (p_drv_buf->scsi_inter_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter;
+			return sizeof(p_drv_buf->scsi_inter);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_inter_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met;
+			return sizeof(p_drv_buf->scsi_inter_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+		if (p_drv_buf->scsi_rsv_conflicts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
+			return sizeof(p_drv_buf->scsi_rsv_conflicts);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+		if (p_drv_buf->scsi_tsk_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full;
+			return sizeof(p_drv_buf->scsi_tsk_full);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+		if (p_drv_buf->scsi_aca_active_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active;
+			return sizeof(p_drv_buf->scsi_aca_active);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+		if (p_drv_buf->scsi_tsk_abort_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort;
+			return sizeof(p_drv_buf->scsi_tsk_abort);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0];
+			return sizeof(p_drv_buf->scsi_rx_chk[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1];
+			return sizeof(p_drv_buf->scsi_rx_chk[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2];
+			return sizeof(p_drv_buf->scsi_rx_chk[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3];
+			return sizeof(p_drv_buf->scsi_rx_chk[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4];
+			return sizeof(p_drv_buf->scsi_rx_chk[4]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[4]);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
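+/* Same convention as the FCoE helper above: return the value size in
+ * bytes, or -1 when the driver did not populate this iSCSI TLV.
+ */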
+static int
+ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			      struct ecore_mfw_tlv_iscsi *p_drv_buf,
+			      u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+		if (p_drv_buf->target_llmnr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr;
+			return sizeof(p_drv_buf->target_llmnr);
+		}
+		break;
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->header_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->header_digest;
+			return sizeof(p_drv_buf->header_digest);
+		}
+		break;
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->data_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->data_digest;
+			return sizeof(p_drv_buf->data_digest);
+		}
+		break;
+	case DRV_TLV_AUTHENTICATION_METHOD:
+		if (p_drv_buf->auth_method_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->auth_method;
+			return sizeof(p_drv_buf->auth_method);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+		if (p_drv_buf->boot_taget_portal_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal;
+			return sizeof(p_drv_buf->boot_taget_portal);
+		}
+		break;
+	case DRV_TLV_MAX_FRAME_SIZE:
+		if (p_drv_buf->frame_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->frame_size;
+			return sizeof(p_drv_buf->frame_size);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size;
+			return sizeof(p_drv_buf->tx_desc_size);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size;
+			return sizeof(p_drv_buf->rx_desc_size);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+		if (p_drv_buf->boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_progress;
+			return sizeof(p_drv_buf->boot_progress);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth;
+			return sizeof(p_drv_buf->tx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth;
+			return sizeof(p_drv_buf->rx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static enum _ecore_status_t
+ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt, u8 *p_mfw_buf, u32 size)
+{
+	union ecore_mfw_tlv_data *p_tlv_data;
+	struct ecore_drv_tlv_hdr tlv;
+	u8 *p_tlv_ptr = OSAL_NULL, *p_temp;
+	u32 offset;
+	int len;
+
+	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	if (!p_tlv_data)
+		return ECORE_NOMEM;
+
+	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
+	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
+		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+		return ECORE_INVAL;
+	}
+
+	offset = 0;
+	OSAL_MEMSET(&tlv, 0, sizeof(tlv));
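+	/* The MFW buffer is a sequence of TLVs: each entry starts with a
+	 * header (decoded by the TLV_TYPE/TLV_LENGTH/TLV_FLAGS accessors)
+	 * followed by tlv_length dwords of value data - hence the
+	 * 4 * tlv.tlv_length byte bound enforced below.
+	 */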
+	while (offset < size) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		tlv.tlv_flags = TLV_FLAGS(p_temp);
+		DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n",
+			tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags);
+
+		offset += sizeof(tlv);
+		if (tlv_group == ECORE_MFW_TLV_GENERIC)
+			len = ecore_mfw_get_gen_tlv_value(&tlv,
+					&p_tlv_data->generic, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_ETH)
+			len = ecore_mfw_get_eth_tlv_value(&tlv,
+					&p_tlv_data->eth, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_FCOE)
+			len = ecore_mfw_get_fcoe_tlv_value(&tlv,
+					&p_tlv_data->fcoe, &p_tlv_ptr);
+		else
+			len = ecore_mfw_get_iscsi_tlv_value(&tlv,
+					&p_tlv_data->iscsi, &p_tlv_ptr);
+
+		if (len > 0) {
+			OSAL_WARN(len > 4 * tlv.tlv_length,
+				  "Incorrect MFW TLV length");
+			len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length);
+			tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
+			/* TODO: Endianness handling? */
+			/* Write the updated header back in place */
+			OSAL_MEMCPY(p_temp, &tlv, sizeof(tlv));
+			OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len);
+		}
+
+		offset += sizeof(u32) * tlv.tlv_length;
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+
+	return ECORE_SUCCESS;
+}
+
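+/* Handle a TLV request from the MFW: read the request from the "global"
+ * section of shared memory, fill in the values the driver tracks, write
+ * the updated buffer back and ack with DRV_MSG_CODE_GET_TLV_DONE.
+ */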
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	u32 addr, size, offset, resp, param, val;
+	u8 tlv_group = 0, id, *p_mfw_buf = OSAL_NULL, *p_temp;
+	u32 global_offsize, global_addr;
+	enum _ecore_status_t rc;
+	struct ecore_drv_tlv_hdr tlv;
+
+	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	addr = global_addr + OFFSETOF(struct public_global, data_ptr);
+	size = ecore_rd(p_hwfn, p_ptt, global_addr +
+			OFFSETOF(struct public_global, data_size));
+
+	if (!size) {
+		DP_NOTICE(p_hwfn, false, "Invalid TLV req size = %d\n", size);
+		goto drv_done;
+	}
+
+	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	if (!p_mfw_buf) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate memory for p_mfw_buf\n");
+		goto drv_done;
+	}
+
+	/* Read the TLV request to local buffer */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		val = ecore_rd(p_hwfn, p_ptt, addr + offset);
+		OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32));
+	}
+
+	/* Parse the headers to enumerate the requested TLV groups */
+	for (offset = 0; offset < size;
+	     offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group))
+			goto drv_done;
+	}
+
+	/* Update the TLV values in the local buffer */
+	for (id = ECORE_MFW_TLV_GENERIC; id < ECORE_MFW_TLV_MAX; id <<= 1) {
+		if (tlv_group & id) {
+			if (ecore_mfw_update_tlvs(id, p_hwfn, p_ptt, p_mfw_buf,
+						  size))
+				goto drv_done;
+		}
+	}
+
+	/* Write the TLV data to shared memory */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32));
+		ecore_wr(p_hwfn, p_ptt, addr + offset, val);
+	}
+
+drv_done:
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_TLV_DONE, 0, &resp,
+			   &param);
+
+	OSAL_VFREE(p_hwfn->p_dev, p_mfw_buf);
+
+	return rc;
+}
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 0a1f7db..bfd96d6 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -96,8 +96,29 @@ struct qed_slowpath_params {
 
 #define ILT_PAGE_SIZE_TCFC 0x8000	/* 32KB */
 
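+/* Ethernet TLV values a PMD can report back to the MFW; filled in via
+ * the get_tlv_data() callback added to qed_common_cb_ops below.
+ */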
+struct qed_eth_tlvs {
+	u16 feat_flags;
+	u8 mac[3][ETH_ALEN];
+	u16 lso_maxoff;
+	u16 lso_minseg;
+	bool prom_mode;
+	u16 num_txqs;
+	u16 num_rxqs;
+	u16 num_netqs;
+	u16 flex_vlan;
+	u32 tcp4_offloads;
+	u32 tcp6_offloads;
+	u16 tx_avg_qdepth;
+	u16 rx_avg_qdepth;
+	u8 txqs_empty;
+	u8 rxqs_empty;
+	u8 num_txqs_full;
+	u8 num_rxqs_full;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
+	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
 };
 
 struct qed_selftest_ops {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 29/62] net/qede/base: optimize cache-line access
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (29 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 28/62] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
@ 2017-03-28  6:51           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 30/62] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
                             ` (33 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:51 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Optimize cache-line access in ecore_chain: re-arrange the fields so
that those needed on the fastpath [mostly the produce/consume indices
and their derivatives] sit in the first cache line, and the rest in
the second.

This holds for both PBL and NEXT_PTR kinds of chains. Advancing a page
in a SINGLE_PAGE chain would still touch the second cache line as
well, but as far as we know only the SPQ uses that mode, so it is not
considered fastpath.
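
A minimal sketch of how the intended split can be verified at build
time, assuming 64-byte cache lines and C11 static_assert (illustrative
only, not part of the patch):

	#include <assert.h>
	#include <stddef.h>
	#include "ecore_chain.h"

	/* Hot producer/consumer state should stay within the first
	 * 64 bytes of struct ecore_chain after this re-arrangement.
	 */
	static_assert(offsetof(struct ecore_chain, p_prod_elem) < 64,
		      "producer pointer must sit in the first cache line");
	static_assert(offsetof(struct ecore_chain, u) < 64,
		      "prod/cons indices must sit in the first cache line");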

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_chain.h       |  143 ++++++++++++++++-------------
 drivers/net/qede/base/ecore_dev.c         |   14 +--
 drivers/net/qede/base/ecore_sp_commands.c |    4 +-
 3 files changed, 89 insertions(+), 72 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 61e39b5..ba272a9 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -59,25 +59,6 @@ struct ecore_chain_ext_pbl {
 	void *p_pbl_virt;
 };
 
-struct ecore_chain_pbl {
-	/* Base address of a pre-allocated buffer for pbl */
-	dma_addr_t p_phys_table;
-	void *p_virt_table;
-
-	/* Table for keeping the virtual addresses of the chain pages,
-	 * respectively to the physical addresses in the pbl table.
-	 */
-	void **pp_virt_addr_tbl;
-
-	/* Index to current used page by producer/consumer */
-	union {
-		struct ecore_chain_pbl_u16 pbl16;
-		struct ecore_chain_pbl_u32 pbl32;
-	} u;
-
-	bool external;
-};
-
 struct ecore_chain_u16 {
 	/* Cyclic index of next element to produce/consume */
 	u16 prod_idx;
@@ -91,40 +72,75 @@ struct ecore_chain_u32 {
 };
 
 struct ecore_chain {
-	/* Address of first page of the chain */
-	void *p_virt_addr;
-	dma_addr_t p_phys_addr;
-
+	/* fastpath portion of the chain - required for commands such
+	 * as produce / consume.
+	 */
 	/* Point to next element to produce/consume */
 	void *p_prod_elem;
 	void *p_cons_elem;
 
-	enum ecore_chain_mode mode;
-	enum ecore_chain_use_mode intended_use;
+	/* Fastpath portions of the PBL [if exists] */
+
+	struct {
+		/* Table for keeping the virtual addresses of the chain pages,
+		 * respectively to the physical addresses in the pbl table.
+		 */
+		void		**pp_virt_addr_tbl;
+
+		union {
+			struct ecore_chain_pbl_u16	u16;
+			struct ecore_chain_pbl_u32	u32;
+		} c;
+	} pbl;
 
-	enum ecore_chain_cnt_type cnt_type;
 	union {
 		struct ecore_chain_u16 chain16;
 		struct ecore_chain_u32 chain32;
 	} u;
 
-	u32 page_cnt;
+	/* Capacity counts only usable elements */
+	u32				capacity;
+	u32				page_cnt;
 
-	/* Number of elements - capacity is for usable elements only,
-	 * while size will contain total number of elements [for entire chain].
+	/* A u8 would suffice for mode, but keeping the enum saves us a lot
+	 * of headaches on castings & defaults.
 	 */
-	u32 capacity;
-	u32 size;
+	enum ecore_chain_mode		mode;
 
 	/* Elements information for fast calculations */
 	u16 elem_per_page;
 	u16 elem_per_page_mask;
-	u16 elem_unusable;
-	u16 usable_per_page;
 	u16 elem_size;
 	u16 next_page_mask;
+	u16 usable_per_page;
+	u8 elem_unusable;
 
-	struct ecore_chain_pbl pbl;
+	u8				cnt_type;
+
+	/* Slowpath of the chain - required for initialization and destruction,
+	 * but isn't involved in regular functionality.
+	 */
+
+	/* Base address of a pre-allocated buffer for pbl */
+	struct {
+		dma_addr_t		p_phys_table;
+		void			*p_virt_table;
+	} pbl_sp;
+
+	/* Address of first page of the chain  - the address is required
+	 * for fastpath operation [consume/produce] but only for the the SINGLE
+	 * flavour which isn't considered fastpath [== SPQ].
+	 */
+	void				*p_virt_addr;
+	dma_addr_t			p_phys_addr;
+
+	/* Total number of elements [for entire chain] */
+	u32				size;
+
+	u8				intended_use;
+
+	/* TBD - do we really need this? Couldn't find usage for it */
+	bool				b_external_pbl;
 
 	void *dp_ctx;
 };
@@ -135,8 +151,8 @@ struct ecore_chain {
 
 #define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	  ((mode == ECORE_CHAIN_MODE_NEXT_PTR) ?		\
-	   (1 + ((sizeof(struct ecore_chain_next) - 1) /		\
-	   (elem_size))) : 0)
+	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /	\
+		     (elem_size))) : 0)
 
 #define USABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	((u32)(ELEMS_PER_PAGE(elem_size) -			\
@@ -245,7 +261,7 @@ u16 ecore_chain_get_usable_per_page(struct ecore_chain *p_chain)
 }
 
 static OSAL_INLINE
-u16 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
+u8 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
 {
 	return p_chain->elem_unusable;
 }
@@ -263,7 +279,7 @@ static OSAL_INLINE u32 ecore_chain_get_page_cnt(struct ecore_chain *p_chain)
 static OSAL_INLINE
 dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
 {
-	return p_chain->pbl.p_phys_table;
+	return p_chain->pbl_sp.p_phys_table;
 }
 
 /**
@@ -288,9 +304,9 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
 		p_next = (struct ecore_chain_next *)(*p_next_elem);
 		*p_next_elem = p_next->next_virt;
 		if (is_chain_u16(p_chain))
-			*(u16 *)idx_to_inc += p_chain->elem_unusable;
+			*(u16 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		else
-			*(u32 *)idx_to_inc += p_chain->elem_unusable;
+			*(u32 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		break;
 	case ECORE_CHAIN_MODE_SINGLE:
 		*p_next_elem = p_chain->p_virt_addr;
@@ -391,7 +407,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -400,7 +416,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -465,7 +481,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -474,7 +490,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -518,25 +534,26 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.u.pbl16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.u.pbl16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.u.pbl32.prod_page_idx = reset_val;
-			p_chain->pbl.u.pbl32.cons_page_idx = reset_val;
+			p_chain->pbl.c.u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.u32.cons_page_idx = reset_val;
 		}
 	}
 
 	switch (p_chain->intended_use) {
-	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
-	case ECORE_CHAIN_USE_TO_PRODUCE:
-			/* Do nothing */
-			break;
-
 	case ECORE_CHAIN_USE_TO_CONSUME:
-			/* produce empty elements */
-			for (i = 0; i < p_chain->capacity; i++)
+		/* produce empty elements */
+		for (i = 0; i < p_chain->capacity; i++)
 			ecore_chain_recycle_consumed(p_chain);
-			break;
+		break;
+
+	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
+	case ECORE_CHAIN_USE_TO_PRODUCE:
+	default:
+		/* Do nothing */
+		break;
 	}
 }
 
@@ -563,9 +580,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->p_virt_addr = OSAL_NULL;
 	p_chain->p_phys_addr = 0;
 	p_chain->elem_size = elem_size;
-	p_chain->intended_use = intended_use;
+	p_chain->intended_use = (u8)intended_use;
 	p_chain->mode = mode;
-	p_chain->cnt_type = cnt_type;
+	p_chain->cnt_type = (u8)cnt_type;
 
 	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
 	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
@@ -577,9 +594,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->page_cnt = page_cnt;
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;
-	p_chain->pbl.external = false;
-	p_chain->pbl.p_phys_table = 0;
-	p_chain->pbl.p_virt_table = OSAL_NULL;
+	p_chain->b_external_pbl = false;
+	p_chain->pbl_sp.p_phys_table = 0;
+	p_chain->pbl_sp.p_virt_table = OSAL_NULL;
 	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 
 	p_chain->dp_ctx = dp_ctx;
@@ -623,8 +640,8 @@ static OSAL_INLINE void ecore_chain_init_pbl_mem(struct ecore_chain *p_chain,
 						 dma_addr_t p_phys_pbl,
 						 void **pp_virt_addr_tbl)
 {
-	p_chain->pbl.p_phys_table = p_phys_pbl;
-	p_chain->pbl.p_virt_table = p_virt_pbl;
+	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
+	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
 	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c895656..1c08d4a 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3559,13 +3559,13 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				 struct ecore_chain *p_chain)
 {
 	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
-	u8 *p_pbl_virt = (u8 *)p_chain->pbl.p_virt_table;
+	u8 *p_pbl_virt = (u8 *)p_chain->pbl_sp.p_virt_table;
 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
 
 	if (!pp_virt_addr_tbl)
 		return;
 
-	if (!p_chain->pbl.p_virt_table)
+	if (!p_pbl_virt)
 		goto out;
 
 	for (i = 0; i < page_cnt; i++) {
@@ -3581,10 +3581,10 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 
 	pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
 
-	if (!p_chain->pbl.external)
-		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
-				       p_chain->pbl.p_phys_table, pbl_size);
-out:
+	if (!p_chain->b_external_pbl)
+		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl_sp.p_virt_table,
+				       p_chain->pbl_sp.p_phys_table, pbl_size);
+ out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3716,7 +3716,7 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	} else {
 		p_pbl_virt = ext_pbl->p_pbl_virt;
 		p_pbl_phys = ext_pbl->p_pbl_phys;
-		p_chain->pbl.external = true;
+		p_chain->b_external_pbl = true;
 	}
 
 	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 23ebab7..b831970 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -379,11 +379,11 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	/* Place EQ address in RAMROD */
 	DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
-		       p_hwfn->p_eq->chain.pbl.p_phys_table);
+		       p_hwfn->p_eq->chain.pbl_sp.p_phys_table);
 	page_cnt = (u8)ecore_chain_get_page_cnt(&p_hwfn->p_eq->chain);
 	p_ramrod->event_ring_num_pages = page_cnt;
 	DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
-		       p_hwfn->p_consq->chain.pbl.p_phys_table);
+		       p_hwfn->p_consq->chain.pbl_sp.p_phys_table);
 
 	ecore_tunn_set_pf_start_params(p_hwfn, p_tunn,
 				       &p_ramrod->tunnel_config);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 30/62] net/qede/base: infrastructure changes for VF tunnelling
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (30 preceding siblings ...)
  2017-03-28  6:51           ` [PATCH v4 29/62] net/qede/base: optimize cache-line access Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs Rasesh Mody
                             ` (32 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Infrastructure changes for VF tunnelling: track the tunnel
configuration per device in the new struct ecore_tunnel_info, and
report the resulting VXLAN/GRE/GENEVE enablement to the PMD through
qed_dev_info.
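
For context, a minimal sketch of how a PMD-side caller might consume
the new qed_dev_info tunnel flags once qed_fill_dev_info() has run
(the helper name is hypothetical, not part of the patch):

	#include <stdio.h>
	#include "qede_if.h"

	/* Each flag is true only when the tunnel mode is enabled and its
	 * Rx classification is MAC/VLAN based, per qed_fill_dev_info().
	 */
	static void qede_log_tunn_support(const struct qed_dev_info *info)
	{
		printf("VXLAN:%d GRE:%d GENEVE:%d\n",
		       info->vxlan_enable, info->gre_enable,
		       info->geneve_enable);
	}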

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore.h             |   14 ++++-
 drivers/net/qede/base/ecore_sp_commands.c |   87 +++++++++++++++++++----------
 drivers/net/qede/qede_if.h                |    5 ++
 drivers/net/qede/qede_main.c              |   18 ++++++
 5 files changed, 93 insertions(+), 34 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 82e3ebd..513d542 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -292,7 +292,8 @@ typedef struct osal_list_t {
 #define OSAL_WMB(dev)			rte_wmb()
 #define OSAL_DMA_SYNC(dev, addr, length, is_post) nothing
 
-#define OSAL_BITS_PER_BYTE		(8)
+#define OSAL_BIT(nr)            (1UL << (nr))
+#define OSAL_BITS_PER_BYTE	(8)
 #define OSAL_BITS_PER_UL	(sizeof(unsigned long) * OSAL_BITS_PER_BYTE)
 #define OSAL_BITS_PER_UL_MASK		(OSAL_BITS_PER_UL - 1)
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index de0f49a..5c12c1e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -470,6 +470,17 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
+struct ecore_tunnel_info {
+	u8		tunn_clss_vxlan;
+	u8		tunn_clss_l2geneve;
+	u8		tunn_clss_ipgeneve;
+	u8		tunn_clss_l2gre;
+	u8		tunn_clss_ipgre;
+	unsigned long	tunn_mode;
+	u16		port_vxlan_udp_port;
+	u16		port_geneve_udp_port;
+};
+
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
@@ -724,8 +735,7 @@ struct ecore_dev {
 	/* SRIOV */
 	struct ecore_hw_sriov_info	*p_iov_info;
 #define IS_ECORE_SRIOV(p_dev)		(!!(p_dev)->p_iov_info)
-	unsigned long			tunn_mode;
-
+	struct ecore_tunnel_info	tunnel;
 	bool				b_is_vf;
 
 	u32				drv_type;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index b831970..f5860a0 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -111,8 +111,9 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long cached_tunn_mode = p_hwfn->p_dev->tunn_mode;
 	unsigned long update_mask = p_src->tunn_mode_update_mask;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	unsigned long cached_tunn_mode = p_tun->tunn_mode;
 	unsigned long tunn_mode = p_src->tunn_mode;
 	unsigned long new_tunn_mode = 0;
 
@@ -149,9 +150,10 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
 	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
@@ -178,33 +180,39 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode = p_src->tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
+	p_tun->tunn_mode = p_src->tunn_mode;
+
 	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
 	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -215,21 +223,24 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
@@ -269,33 +280,37 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 			       struct ecore_tunn_start_params *p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	if (!p_src)
 		return;
 
-	tunn_mode = p_src->tunn_mode;
+	p_tun->tunn_mode = p_src->tunn_mode;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -306,21 +321,24 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
@@ -420,9 +438,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
 	if (p_tunn) {
+		if (p_tunn->update_vxlan_udp_port)
+			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						  p_tunn->vxlan_udp_port);
+
+		if (p_tunn->update_geneve_udp_port)
+			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						   p_tunn->geneve_udp_port);
+
 		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
 				       p_tunn->tunn_mode);
-		p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 	}
 
 	return rc;
@@ -529,12 +554,12 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (p_tunn->update_vxlan_udp_port)
 		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					  p_tunn->vxlan_udp_port);
+
 	if (p_tunn->update_geneve_udp_port)
 		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					   p_tunn->geneve_udp_port);
 
 	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
-	p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 
 	return rc;
 }
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index bfd96d6..baa8476 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -43,6 +43,11 @@ struct qed_dev_info {
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
+
+	/* Out params for qede */
+	bool vxlan_enable;
+	bool gre_enable;
+	bool geneve_enable;
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a932c5f..e7195b4 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -325,8 +325,26 @@ static int
 qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 {
 	struct ecore_ptt *ptt = NULL;
+	struct ecore_tunnel_info *tun = &edev->tunnel;
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
+	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->vxlan_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
+	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->gre_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
+	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->geneve_enable = true;
+
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
 	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (31 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 30/62] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28 11:22             ` Ferruh Yigit
  2017-03-28  6:52           ` [PATCH v4 32/62] net/qede/base: add tunnelling support for VFs Rasesh Mody
                             ` (31 subsequent siblings)
  64 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Revise the tunnel APIs/structs:
 - Unite the tunnel start and update params in a single struct,
   "ecore_tunnel_info".
 - Remove A0 chip tunnelling support.
 - Track per-tunnel info instead of bitmasks (see the sketch below).

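For reference, a minimal sketch (not part of this patch) of a PF-side
caller driving the revised API. The wrapper function is hypothetical;
the struct fields and ecore_sp_pf_update_tunn_cfg() are the ones
introduced below:

static enum _ecore_status_t example_enable_vxlan(struct ecore_hwfn *p_hwfn)
{
	struct ecore_tunnel_info tunn;

	OSAL_MEM_ZERO(&tunn, sizeof(tunn));

	/* One ecore_tunn_update_type per tunnel replaces the old
	 * tunn_mode/tunn_mode_update_mask bitmasks: b_update_mode marks
	 * the entry as valid, b_mode_enabled carries the new state and
	 * tun_cls the classification scheme.
	 */
	tunn.vxlan.b_update_mode = true;
	tunn.vxlan.b_mode_enabled = true;
	tunn.vxlan.tun_cls = ECORE_TUNN_CLSS_MAC_VLAN;

	/* A UDP destination port update rides in the same struct */
	tunn.vxlan_port.b_update_port = true;
	tunn.vxlan_port.port = 4789;	/* example value (IANA VXLAN port) */

	tunn.b_update_rx_cls = true;
	tunn.b_update_tx_cls = true;

	return ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
					   ECORE_SPQ_MODE_CB, OSAL_NULL);
}
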
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h             |   57 ++---
 drivers/net/qede/base/ecore_dev.c         |    2 +-
 drivers/net/qede/base/ecore_dev_api.h     |    2 +-
 drivers/net/qede/base/ecore_sp_api.h      |   19 ++
 drivers/net/qede/base/ecore_sp_commands.c |  384 +++++++++++++----------------
 drivers/net/qede/base/ecore_sp_commands.h |   23 +-
 drivers/net/qede/qede_ethdev.c            |   20 +-
 drivers/net/qede/qede_if.h                |   16 ++
 drivers/net/qede/qede_main.c              |   18 +-
 9 files changed, 248 insertions(+), 293 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 5c12c1e..f86f7ca 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -204,33 +204,29 @@ enum ecore_tunn_clss {
 	MAX_ECORE_TUNN_CLSS,
 };
 
-struct ecore_tunn_start_params {
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_type {
+	bool b_update_mode;
+	bool b_mode_enabled;
+	enum ecore_tunn_clss tun_cls;
 };
 
-struct ecore_tunn_update_params {
-	unsigned long tunn_mode_update_mask;
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_rx_pf_clss;
-	u8	update_tx_pf_clss;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_udp_port {
+	bool b_update_port;
+	u16 port;
+};
+
+struct ecore_tunnel_info {
+	struct ecore_tunn_update_type vxlan;
+	struct ecore_tunn_update_type l2_geneve;
+	struct ecore_tunn_update_type ip_geneve;
+	struct ecore_tunn_update_type l2_gre;
+	struct ecore_tunn_update_type ip_gre;
+
+	struct ecore_tunn_update_udp_port vxlan_port;
+	struct ecore_tunn_update_udp_port geneve_port;
+
+	bool b_update_rx_cls;
+	bool b_update_tx_cls;
 };
 
 /* The PCI personality is not quite synonymous to protocol ID:
@@ -470,17 +466,6 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
-struct ecore_tunnel_info {
-	u8		tunn_clss_vxlan;
-	u8		tunn_clss_l2geneve;
-	u8		tunn_clss_ipgeneve;
-	u8		tunn_clss_l2gre;
-	u8		tunn_clss_ipgre;
-	unsigned long	tunn_mode;
-	u16		port_vxlan_udp_port;
-	u16		port_geneve_udp_port;
-};
-
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1c08d4a..0d3971c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1696,7 +1696,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
-		 struct ecore_tunn_start_params *p_tunn,
+		 struct ecore_tunnel_info *p_tunn,
 		 int hw_mode,
 		 bool b_hw_start,
 		 enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 74a15ef..356c5e4 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -59,7 +59,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
 	/* tunnelling parameters */
-	struct ecore_tunn_start_params *p_tunn;
+	struct ecore_tunnel_info *p_tunn;
 	bool b_hw_start;
 	/* interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index a4cb507..c8e564f 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -41,5 +41,24 @@ struct ecore_spq_comp_cb {
  */
 enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
 					      struct eth_slow_path_rx_cqe *cqe);
+/**
+ * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
+ *					update Ramrod
+ *
+ * This ramrod is sent to update a tunneling configuration
+ * for a physical function (PF).
+ *
+ * @param p_hwfn
+ * @param p_tunn - pf update tunneling parameters
+ * @param comp_mode - completion mode
+ * @param p_comp_data - callback function
+ *
+ * @return enum _ecore_status_t
+ */
 
+enum _ecore_status_t
+ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_tunnel_info *p_tunn,
+			    enum spq_mode comp_mode,
+			    struct ecore_spq_comp_cb *p_comp_data);
 #endif
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index f5860a0..4cacce8 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -88,7 +88,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
+static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type)
 {
 	switch (type) {
 	case ECORE_TUNN_CLSS_MAC_VLAN:
@@ -107,242 +107,207 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 }
 
 static void
-ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun,
+			      struct ecore_tunnel_info *p_src,
+			      bool b_pf_start)
 {
-	unsigned long update_mask = p_src->tunn_mode_update_mask;
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	unsigned long cached_tunn_mode = p_tun->tunn_mode;
-	unsigned long tunn_mode = p_src->tunn_mode;
-	unsigned long new_tunn_mode = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	}
-
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		p_src->tunn_mode = new_tunn_mode;
-		return;
-	}
+	if (p_src->vxlan.b_update_mode || b_pf_start)
+		p_tun->vxlan.b_mode_enabled = p_src->vxlan.b_mode_enabled;
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
+	if (p_src->l2_gre.b_update_mode || b_pf_start)
+		p_tun->l2_gre.b_mode_enabled = p_src->l2_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->ip_gre.b_update_mode || b_pf_start)
+		p_tun->ip_gre.b_mode_enabled = p_src->ip_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->l2_geneve.b_update_mode || b_pf_start)
+		p_tun->l2_geneve.b_mode_enabled =
+				p_src->l2_geneve.b_mode_enabled;
 
-	p_src->tunn_mode = new_tunn_mode;
+	if (p_src->ip_geneve.b_update_mode || b_pf_start)
+		p_tun->ip_geneve.b_mode_enabled =
+				p_src->ip_geneve.b_mode_enabled;
 }
 
-static void
-ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
+				    struct ecore_tunnel_info *p_src)
 {
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
-	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
-	p_tun->tunn_mode = p_src->tunn_mode;
-
-	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
-	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
-
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
+	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+
+	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
+	p_tun->vxlan.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
+	p_tun->l2_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
+	p_tun->ip_gre.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
+	p_tun->l2_geneve.tun_cls = type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
+	p_tun->ip_geneve.tun_cls = type;
+}
+
+static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
+				 struct ecore_tunnel_info *p_src)
+{
+	p_tun->geneve_port.b_update_port = p_src->geneve_port.b_update_port;
+	p_tun->vxlan_port.b_update_port = p_src->vxlan_port.b_update_port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
+	if (p_src->geneve_port.b_update_port)
+		p_tun->geneve_port.port = p_src->geneve_port.port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
+	if (p_src->vxlan_port.b_update_port)
+		p_tun->vxlan_port.port = p_src->vxlan_port.port;
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
+static void
+__ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+				struct ecore_tunn_update_type *tun_type)
+{
+	*p_tunn_cls = tun_type->tun_cls;
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		return;
-	}
+	if (tun_type->b_mode_enabled)
+		*p_enable_tx_clas = 1;
+}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
+static void
+ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+			      struct ecore_tunn_update_type *tun_type,
+			      u8 *p_update_port, __le16 *p_port,
+			      struct ecore_tunn_update_udp_port *p_udp_port)
+{
+	__ecore_set_ramrod_tunnel_param(p_tunn_cls, p_enable_tx_clas,
+					tun_type);
+	if (p_udp_port->b_update_port) {
+		*p_update_port = 1;
+		*p_port = OSAL_CPU_TO_LE16(p_udp_port->port);
 	}
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+static void
+ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_src,
+				struct pf_update_tunnel_config *p_tunn_cfg)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, false);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
+
+	p_tunn_cfg->update_rx_pf_clss = p_tun->b_update_rx_cls;
+	p_tunn_cfg->update_tx_pf_clss = p_tun->b_update_tx_cls;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   unsigned long tunn_mode)
+				   struct ecore_tunnel_info *p_tun)
 {
-	u8 l2gre_enable = 0, ipgre_enable = 0, vxlan_enable = 0;
-	u8 l2geneve_enable = 0, ipgeneve_enable = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-		l2gre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-		ipgre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-		vxlan_enable = 1;
+	ecore_set_gre_enable(p_hwfn, p_ptt, p_tun->l2_gre.b_mode_enabled,
+			     p_tun->ip_gre.b_mode_enabled);
+	ecore_set_vxlan_enable(p_hwfn, p_ptt, p_tun->vxlan.b_mode_enabled);
 
-	ecore_set_gre_enable(p_hwfn, p_ptt, l2gre_enable, ipgre_enable);
-	ecore_set_vxlan_enable(p_hwfn, p_ptt, vxlan_enable);
+	ecore_set_geneve_enable(p_hwfn, p_ptt, p_tun->l2_geneve.b_mode_enabled,
+				p_tun->ip_geneve.b_mode_enabled);
+}
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev))
+static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
+					struct ecore_tunnel_info *p_tunn)
+{
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel hw config is not supported\n");
 		return;
+	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-		l2geneve_enable = 1;
+	if (p_tunn->vxlan_port.b_update_port)
+		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					  p_tunn->vxlan_port.port);
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-		ipgeneve_enable = 1;
+	if (p_tunn->geneve_port.b_update_port)
+		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					   p_tunn->geneve_port.port);
 
-	ecore_set_geneve_enable(p_hwfn, p_ptt, l2geneve_enable,
-				ipgeneve_enable);
+	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn);
 }
 
 static void
 ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
-			       struct ecore_tunn_start_params *p_src,
+			       struct ecore_tunnel_info *p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
 	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	enum tunnel_clss type;
-
-	if (!p_src)
-		return;
-
-	p_tun->tunn_mode = p_src->tunn_mode;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf start config is not supported\n");
 		return;
 	}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+	if (!p_src)
+		return;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, true);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch)
 {
@@ -437,18 +402,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
-	if (p_tunn) {
-		if (p_tunn->update_vxlan_udp_port)
-			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						  p_tunn->vxlan_udp_port);
-
-		if (p_tunn->update_geneve_udp_port)
-			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						   p_tunn->geneve_udp_port);
-
-		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
-				       p_tunn->tunn_mode);
-	}
+	if (p_tunn)
+		ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
@@ -523,7 +478,7 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
+			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data)
 {
@@ -531,6 +486,15 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf update config is not supported\n");
+		return rc;
+	}
+
+	if (!p_tunn)
+		return ECORE_INVAL;
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = ecore_spq_get_cid(p_hwfn);
@@ -551,15 +515,7 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_tunn->update_vxlan_udp_port)
-		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					  p_tunn->vxlan_udp_port);
-
-	if (p_tunn->update_geneve_udp_port)
-		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					   p_tunn->geneve_udp_port);
-
-	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+	ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 66c9a69..33e31e4 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -68,32 +68,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  */
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch);
 
 /**
- * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
- *					update  Ramrod
- *
- * This ramrod is sent to update a tunneling configuration
- * for a physical function (PF).
- *
- * @param p_hwfn
- * @param p_tunn - pf update tunneling parameters
- * @param comp_mode - completion mode
- * @param p_comp_data - callback function
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t
-ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
-			    enum spq_mode comp_mode,
-			    struct ecore_spq_comp_cb *p_comp_data);
-
-/**
  * @brief ecore_sp_pf_update - PF Function Update Ramrod
  *
  * This ramrod updates function-related parameters. Every parameter can be
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index d52e1be..4ef93d4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,10 +335,10 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct ecore_tunn_update_params *params,
-				     uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
+				    uint8_t clss, uint64_t mode, uint64_t mask)
 {
-	memset(params, 0, sizeof(struct ecore_tunn_update_params));
+	memset(params, 0, sizeof(struct qed_tunn_update_params));
 	params->tunn_mode = mode;
 	params->tunn_mode_update_mask = mask;
 	params->update_tx_pf_clss = 1;
@@ -1707,7 +1707,8 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
@@ -1720,7 +1721,7 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 					QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &params,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
@@ -1817,7 +1818,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info *p_tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1872,7 +1874,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				&params, ECORE_SPQ_MODE_CB, NULL);
+				p_tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
 					params.tunn_clss_vxlan);
@@ -1906,8 +1908,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 						(1 << ECORE_MODE_VXLAN_TUNN));
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-					&params, ECORE_SPQ_MODE_CB, NULL);
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index baa8476..09b6912 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -121,6 +121,22 @@ struct qed_eth_tlvs {
 	u8 num_rxqs_full;
 };
 
+struct qed_tunn_update_params {
+	unsigned long   tunn_mode_update_mask;
+	unsigned long   tunn_mode;
+	u16             vxlan_udp_port;
+	u16             geneve_udp_port;
+	u8              update_rx_pf_clss;
+	u8              update_tx_pf_clss;
+	u8              update_vxlan_udp_port;
+	u8              update_geneve_udp_port;
+	u8              tunn_clss_vxlan;
+	u8              tunn_clss_l2geneve;
+	u8              tunn_clss_ipgeneve;
+	u8              tunn_clss_l2gre;
+	u8              tunn_clss_ipgre;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
 	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e7195b4..5c79055 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -329,20 +329,18 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
-	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->vxlan.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->vxlan.b_mode_enabled)
 		dev_info->vxlan_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
-	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_gre.b_mode_enabled && tun->ip_gre.b_mode_enabled &&
+	    tun->l2_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->gre_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
-	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_geneve.b_mode_enabled && tun->ip_geneve.b_mode_enabled &&
+	    tun->l2_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->geneve_enable = true;
 
 	dev_info->num_hwfns = edev->num_hwfns;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 32/62] net/qede/base: add tunnelling support for VFs
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (32 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 33/62] net/qede/base: formatting changes Rasesh Mody
                             ` (30 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add tunnelling support for VFs. A VF now negotiates its tunnel
configuration (modes, classification types and UDP ports) with the PF
over the VF-PF channel via the new CHANNEL_TLV_UPDATE_TUNN_PARAM
message; a sketch of the resulting flow follows.

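As a reference only (not part of this patch), a sketch of the resulting
VF-side flow; the wrapper is hypothetical, while the entry point and
field names come from the diff below:

static enum _ecore_status_t
example_vf_update_vxlan_port(struct ecore_hwfn *p_hwfn, u16 port)
{
	struct ecore_tunnel_info tunn;

	OSAL_MEM_ZERO(&tunn, sizeof(tunn));
	tunn.vxlan_port.b_update_port = true;
	tunn.vxlan_port.port = port;

	/* On a VF the common entry point no longer posts a ramrod;
	 * ecore_sp_pf_update_tunn_cfg() forwards the request to
	 * ecore_vf_pf_tunnel_param_update(), which builds a
	 * CHANNEL_TLV_UPDATE_TUNN_PARAM message, sends it over the
	 * VF-PF channel and caches the PF's reply (modes, classes and
	 * UDP ports) in p_hwfn->p_dev->tunnel.
	 */
	return ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
					   ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);
}
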
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore_dev.c         |   15 ++-
 drivers/net/qede/base/ecore_sp_commands.c |   15 ++-
 drivers/net/qede/base/ecore_sriov.c       |  144 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c          |  154 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.h          |    5 +
 drivers/net/qede/base/ecore_vfpf_if.h     |   40 ++++++++
 drivers/net/qede/qede_ethdev.c            |   49 +++++----
 8 files changed, 390 insertions(+), 35 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 513d542..4c91dc0 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -422,6 +422,5 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
 #define OSAL_MFW_TLV_REQ(p_hwfn) (0)
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
-
-
+#define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0d3971c..21fec58 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1876,6 +1876,19 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 		    p_hwfn->mcp_info->mfw_mb_length);
 }
 
+enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
+				    struct ecore_hw_init_params *p_params)
+{
+	if (p_params->p_tunn) {
+		ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn);
+		ecore_vf_pf_tunnel_param_update(p_hwfn, p_params->p_tunn);
+	}
+
+	p_hwfn->b_int_enabled = 1;
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
@@ -1908,7 +1921,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		}
 
 		if (IS_VF(p_dev)) {
-			p_hwfn->b_int_enabled = 1;
+			ecore_vf_start(p_hwfn, p_params);
 			continue;
 		}
 
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 4cacce8..8fd64d7 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -22,6 +22,7 @@
 #include "ecore_hw.h"
 #include "ecore_dcbx.h"
 #include "ecore_sriov.h"
+#include "ecore_vf.h"
 
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
@@ -137,16 +138,17 @@ static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
 
+	/* @DPDK - typecast the tunnel class */
 	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
-	p_tun->vxlan.tun_cls = type;
+	p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
-	p_tun->l2_gre.tun_cls = type;
+	p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
-	p_tun->ip_gre.tun_cls = type;
+	p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
-	p_tun->l2_geneve.tun_cls = type;
+	p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type;
 	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
-	p_tun->ip_geneve.tun_cls = type;
+	p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type;
 }
 
 static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
@@ -486,6 +488,9 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_tunnel_param_update(p_hwfn, p_tunn);
+
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
 		DP_NOTICE(p_hwfn, true,
 			  "A0 chip: tunnel pf update config is not supported\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7378420..6cec7b2 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -51,6 +51,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_RSS",
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
+	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -2137,6 +2138,146 @@ out:
 					b_legacy_vf);
 }
 
+static void
+ecore_iov_pf_update_tun_response(struct pfvf_update_tunn_param_tlv *p_resp,
+				 struct ecore_tunnel_info *p_tun,
+				 u16 tunn_feature_mask)
+{
+	p_resp->tunn_feature_mask = tunn_feature_mask;
+	p_resp->vxlan_mode = p_tun->vxlan.b_mode_enabled;
+	p_resp->l2geneve_mode = p_tun->l2_geneve.b_mode_enabled;
+	p_resp->ipgeneve_mode = p_tun->ip_geneve.b_mode_enabled;
+	p_resp->l2gre_mode = p_tun->l2_gre.b_mode_enabled;
+	p_resp->ipgre_mode = p_tun->ip_gre.b_mode_enabled;
+	p_resp->vxlan_clss = p_tun->vxlan.tun_cls;
+	p_resp->l2gre_clss = p_tun->l2_gre.tun_cls;
+	p_resp->ipgre_clss = p_tun->ip_gre.tun_cls;
+	p_resp->l2geneve_clss = p_tun->l2_geneve.tun_cls;
+	p_resp->ipgeneve_clss = p_tun->ip_geneve.tun_cls;
+	p_resp->geneve_udp_port = p_tun->geneve_port.port;
+	p_resp->vxlan_udp_port = p_tun->vxlan_port.port;
+}
+
+static void
+__ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+				struct ecore_tunn_update_type *p_tun,
+				enum ecore_tunn_mode mask, u8 tun_cls)
+{
+	if (p_req->tun_mode_update_mask & (1 << mask)) {
+		p_tun->b_update_mode = true;
+
+		if (p_req->tunn_mode & (1 << mask))
+			p_tun->b_mode_enabled = true;
+	}
+
+	p_tun->tun_cls = tun_cls;
+}
+
+static void
+ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+			      struct ecore_tunn_update_type *p_tun,
+			      struct ecore_tunn_update_udp_port *p_port,
+			      enum ecore_tunn_mode mask,
+			      u8 tun_cls, u8 update_port, u16 port)
+{
+	if (update_port) {
+		p_port->b_update_port = true;
+		p_port->port = port;
+	}
+
+	__ecore_iov_pf_update_tun_param(p_req, p_tun, mask, tun_cls);
+}
+
+static bool
+ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req)
+{
+	bool b_update_requested = false;
+
+	if (p_req->tun_mode_update_mask || p_req->update_tun_cls ||
+	    p_req->update_geneve_port || p_req->update_vxlan_port)
+		b_update_requested = true;
+
+	return b_update_requested;
+}
+
+static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       struct ecore_vf_info *p_vf)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u8 status = PFVF_STATUS_SUCCESS;
+	bool b_update_required = false;
+	struct ecore_tunnel_info tunn;
+	u16 tunn_feature_mask = 0;
+
+	mbx->offset = (u8 *)mbx->reply_virt;
+
+	OSAL_MEM_ZERO(&tunn, sizeof(tunn));
+	p_req = &mbx->req_virt->tunn_param_update;
+
+	if (!ecore_iov_pf_validate_tunn_param(p_req)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No tunnel update requested by VF\n");
+		status = PFVF_STATUS_FAILURE;
+		goto send_resp;
+	}
+
+	tunn.b_update_rx_cls = p_req->update_tun_cls;
+	tunn.b_update_tx_cls = p_req->update_tun_cls;
+
+	ecore_iov_pf_update_tun_param(p_req, &tunn.vxlan, &tunn.vxlan_port,
+				      ECORE_MODE_VXLAN_TUNN, p_req->vxlan_clss,
+				      p_req->update_vxlan_port,
+				      p_req->vxlan_port);
+	ecore_iov_pf_update_tun_param(p_req, &tunn.l2_geneve, &tunn.geneve_port,
+				      ECORE_MODE_L2GENEVE_TUNN,
+				      p_req->l2geneve_clss,
+				      p_req->update_geneve_port,
+				      p_req->geneve_port);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_geneve,
+					ECORE_MODE_IPGENEVE_TUNN,
+					p_req->ipgeneve_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.l2_gre,
+					ECORE_MODE_L2GRE_TUNN,
+					p_req->l2gre_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_gre,
+					ECORE_MODE_IPGRE_TUNN,
+					p_req->ipgre_clss);
+
+	/* If the PF modifies the VF's request, it should still return
+	 * an error in case of a partial or modified configuration, as
+	 * opposed to the requested one.
+	 */
+	rc = OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, &tunn_feature_mask,
+						 &b_update_required, &tunn);
+
+	if (rc != ECORE_SUCCESS)
+		status = PFVF_STATUS_FAILURE;
+
+	/* Does the ECORE client want to update anything? */
+	if (b_update_required) {
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+						 ECORE_SPQ_MODE_EBLOCK,
+						 OSAL_NULL);
+		if (rc != ECORE_SUCCESS)
+			status = PFVF_STATUS_FAILURE;
+	}
+
+send_resp:
+	p_resp = ecore_add_tlv(p_hwfn, &mbx->offset,
+			       CHANNEL_TLV_UPDATE_TUNN_PARAM, sizeof(*p_resp));
+
+	ecore_iov_pf_update_tun_response(p_resp, p_tun, tunn_feature_mask);
+	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
+}
+
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
@@ -3405,6 +3546,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_RELEASE:
 			ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
+			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 60ecd16..3182621 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,6 +451,160 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+__ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			     struct ecore_tunn_update_type *p_src,
+			     enum ecore_tunn_mode mask, u8 *p_cls)
+{
+	if (p_src->b_update_mode) {
+		p_req->tun_mode_update_mask |= (1 << mask);
+
+		if (p_src->b_mode_enabled)
+			p_req->tunn_mode |= (1 << mask);
+	}
+
+	*p_cls = p_src->tun_cls;
+}
+
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			   struct ecore_tunn_update_type *p_src,
+			   enum ecore_tunn_mode mask, u8 *p_cls,
+			   struct ecore_tunn_update_udp_port *p_port,
+			   u8 *p_update_port, u16 *p_udp_port)
+{
+	if (p_port->b_update_port) {
+		*p_update_port = 1;
+		*p_udp_port = p_port->port;
+	}
+
+	__ecore_vf_prep_tunn_req_tlv(p_req, p_src, mask, p_cls);
+}
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun)
+{
+	if (p_tun->vxlan.b_mode_enabled)
+		p_tun->vxlan.b_update_mode = true;
+	if (p_tun->l2_geneve.b_mode_enabled)
+		p_tun->l2_geneve.b_update_mode = true;
+	if (p_tun->ip_geneve.b_mode_enabled)
+		p_tun->ip_geneve.b_update_mode = true;
+	if (p_tun->l2_gre.b_mode_enabled)
+		p_tun->l2_gre.b_update_mode = true;
+	if (p_tun->ip_gre.b_mode_enabled)
+		p_tun->ip_gre.b_update_mode = true;
+
+	p_tun->b_update_rx_cls = true;
+	p_tun->b_update_tx_cls = true;
+}
+
+static void
+__ecore_vf_update_tunn_param(struct ecore_tunn_update_type *p_tun,
+			     u16 feature_mask, u8 tunn_mode, u8 tunn_cls,
+			     enum ecore_tunn_mode val)
+{
+	if (feature_mask & (1 << val)) {
+		p_tun->b_mode_enabled = tunn_mode;
+		p_tun->tun_cls = tunn_cls;
+	} else {
+		p_tun->b_mode_enabled = false;
+	}
+}
+
+static void
+ecore_vf_update_tunn_param(struct ecore_hwfn *p_hwfn,
+			   struct ecore_tunnel_info *p_tun,
+			   struct pfvf_update_tunn_param_tlv *p_resp)
+{
+	/* Update mode and classes provided by PF */
+	u16 feat_mask = p_resp->tunn_feature_mask;
+
+	__ecore_vf_update_tunn_param(&p_tun->vxlan, feat_mask,
+				     p_resp->vxlan_mode, p_resp->vxlan_clss,
+				     ECORE_MODE_VXLAN_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_geneve, feat_mask,
+				     p_resp->l2geneve_mode,
+				     p_resp->l2geneve_clss,
+				     ECORE_MODE_L2GENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_geneve, feat_mask,
+				     p_resp->ipgeneve_mode,
+				     p_resp->ipgeneve_clss,
+				     ECORE_MODE_IPGENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_gre, feat_mask,
+				     p_resp->l2gre_mode, p_resp->l2gre_clss,
+				     ECORE_MODE_L2GRE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_gre, feat_mask,
+				     p_resp->ipgre_mode, p_resp->ipgre_clss,
+				     ECORE_MODE_IPGRE_TUNN);
+	p_tun->geneve_port.port = p_resp->geneve_udp_port;
+	p_tun->vxlan_port.port = p_resp->vxlan_udp_port;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "tunn mode: vxlan=0x%x, l2geneve=0x%x, ipgeneve=0x%x, l2gre=0x%x, ipgre=0x%x\n",
+		   p_tun->vxlan.b_mode_enabled, p_tun->l2_geneve.b_mode_enabled,
+		   p_tun->ip_geneve.b_mode_enabled,
+		   p_tun->l2_gre.b_mode_enabled,
+		   p_tun->ip_gre.b_mode_enabled);
+}
+
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_src)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc;
+
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_TUNN_PARAM,
+				 sizeof(*p_req));
+
+	if (p_src->b_update_rx_cls && p_src->b_update_tx_cls)
+		p_req->update_tun_cls = 1;
+
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->vxlan, ECORE_MODE_VXLAN_TUNN,
+				   &p_req->vxlan_clss, &p_src->vxlan_port,
+				   &p_req->update_vxlan_port,
+				   &p_req->vxlan_port);
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve,
+				   ECORE_MODE_L2GENEVE_TUNN,
+				   &p_req->l2geneve_clss, &p_src->geneve_port,
+				   &p_req->update_geneve_port,
+				   &p_req->geneve_port);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_geneve,
+				     ECORE_MODE_IPGENEVE_TUNN,
+				     &p_req->ipgeneve_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_gre,
+				     ECORE_MODE_L2GRE_TUNN, &p_req->l2gre_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_gre,
+				     ECORE_MODE_IPGRE_TUNN, &p_req->ipgre_clss);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->tunn_param_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+
+	if (rc)
+		goto exit;
+
+	if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to update tunnel parameters\n");
+		rc = ECORE_INVAL;
+	}
+
+	ecore_vf_update_tunn_param(p_hwfn, p_tun, p_resp);
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 enum _ecore_status_t
 ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		      struct ecore_queue_cid *p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 1afd667..0d67054 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -258,5 +258,10 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			      struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
 
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_tunn);
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 149d092..82ed4f5 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -416,6 +416,43 @@ struct vfpf_ucast_filter_tlv {
 	u16			padding[3];
 };
 
+/* tunnel update param tlv */
+struct vfpf_update_tunn_param_tlv {
+	struct vfpf_first_tlv   first_tlv;
+
+	u8			tun_mode_update_mask;
+	u8			tunn_mode;
+	u8			update_tun_cls;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u8			update_geneve_port;
+	u8			update_vxlan_port;
+	u16			geneve_port;
+	u16			vxlan_port;
+	u8			padding[2];
+};
+
+struct pfvf_update_tunn_param_tlv {
+	struct pfvf_tlv hdr;
+
+	u16			tunn_feature_mask;
+	u8			vxlan_mode;
+	u8			l2geneve_mode;
+	u8			ipgeneve_mode;
+	u8			l2gre_mode;
+	u8			ipgre_mode;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u16			vxlan_udp_port;
+	u16			geneve_udp_port;
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
@@ -431,6 +468,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_start_tlv		start_vport;
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
+	struct vfpf_update_tunn_param_tlv	tunn_param_update;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -439,6 +477,7 @@ union pfvf_tlvs {
 	struct pfvf_acquire_resp_tlv		acquire_resp;
 	struct tlv_buffer_size			tlv_buf_size;
 	struct pfvf_start_queue_resp_tlv	queue_start;
+	struct pfvf_update_tunn_param_tlv	tunn_param_resp;
 };
 
 /* This is a structure which is allocated in the VF, which the PF may update
@@ -552,6 +591,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_RSS,
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+	CHANNEL_TLV_UPDATE_TUNN_PARAM,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4ef93d4..257e5b2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,15 +335,15 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
-				    uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct ecore_tunnel_info *p_tunn,
+				    uint8_t clss, bool mode, bool mask)
 {
-	memset(params, 0, sizeof(struct qed_tunn_update_params));
-	params->tunn_mode = mode;
-	params->tunn_mode_update_mask = mask;
-	params->update_tx_pf_clss = 1;
-	params->update_rx_pf_clss = 1;
-	params->tunn_clss_vxlan = clss;
+	memset(p_tunn, 0, sizeof(struct ecore_tunnel_info));
+	p_tunn->vxlan.b_update_mode = mode;
+	p_tunn->vxlan.b_mode_enabled = mask;
+	p_tunn->b_update_rx_cls = true;
+	p_tunn->b_update_tx_cls = true;
+	p_tunn->vxlan.tun_cls = clss;
 }
 
 static int
@@ -1707,25 +1707,24 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn; /* @DPDK */
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	memset(&params, 0, sizeof(params));
+	memset(&tunn, 0, sizeof(tunn));
 	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
-		params.update_vxlan_udp_port = 1;
-		params.vxlan_udp_port = (add) ? tunnel_udp->udp_port :
-					QEDE_VXLAN_DEF_PORT;
+		tunn.vxlan_port.b_update_port = true;
+		tunn.vxlan_port.port = (add) ? tunnel_udp->udp_port :
+						  QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
-					params.vxlan_udp_port);
+				       tunn.vxlan_port.port);
 				return rc;
 			}
 		}
@@ -1818,8 +1817,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
-	struct ecore_tunnel_info *p_tunn;
+	struct ecore_tunnel_info tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1868,16 +1866,14 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		qdev->vxlan_filter_type = filter_type;
 
 		DP_INFO(edev, "Enabling VXLAN tunneling\n");
-		qede_set_cmn_tunn_param(&params, clss,
-					(1 << ECORE_MODE_VXLAN_TUNN),
-					(1 << ECORE_MODE_VXLAN_TUNN));
+		qede_set_cmn_tunn_param(&tunn, clss, true, true);
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				p_tunn, ECORE_SPQ_MODE_CB, NULL);
+				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
-					params.tunn_clss_vxlan);
+				       tunn.vxlan.tun_cls);
 			}
 		}
 		qdev->num_tunn_filters++; /* Filter added successfully */
@@ -1904,16 +1900,15 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 			DP_INFO(edev, "Disabling VXLAN tunneling\n");
 
 			/* Use 0 as tunnel mode */
-			qede_set_cmn_tunn_param(&params, clss, 0,
-						(1 << ECORE_MODE_VXLAN_TUNN));
+			qede_set_cmn_tunn_param(&tunn, clss, false, true);
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
-						params.tunn_clss_vxlan);
+						tunn.vxlan.tun_cls);
 					break;
 				}
 			}
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 33/62] net/qede/base: formatting changes
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (33 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 32/62] net/qede/base: add tunnelling support for VFs Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 34/62] net/qede/base: prevent transmitter stuck condition Rasesh Mody
                             ` (29 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |   14 +--
 drivers/net/qede/base/mcp_public.h |  176 ++++++++++++++++++------------------
 2 files changed, 96 insertions(+), 94 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index f86f7ca..479a991 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -157,8 +157,8 @@ enum DP_MODULE {
 	ECORE_MSG_CXT		= 0x800000,
 	ECORE_MSG_LL2		= 0x1000000,
 	ECORE_MSG_ILT		= 0x2000000,
-	ECORE_MSG_RDMA          = 0x4000000,
-	ECORE_MSG_DEBUG         = 0x8000000,
+	ECORE_MSG_RDMA		= 0x4000000,
+	ECORE_MSG_DEBUG		= 0x8000000,
 	/* to be added...up to 0x8000000 */
 };
 #endif
@@ -480,7 +480,7 @@ struct ecore_hwfn {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	bool				first_on_engine;
 	bool				hw_init_done;
@@ -535,8 +535,8 @@ struct ecore_hwfn {
 	u32				rdma_prs_search_reg;
 
 	/* Array of sb_info of all status blocks */
-	struct ecore_sb_info            *sbs_info[MAX_SB_PER_PF_MIMD];
-	u16                             num_sbs;
+	struct ecore_sb_info		*sbs_info[MAX_SB_PER_PF_MIMD];
+	u16				num_sbs;
 
 	struct ecore_cxt_mngr		*p_cxt_mngr;
 
@@ -608,7 +608,7 @@ struct ecore_dev {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	u8				type;
 #define ECORE_DEV_TYPE_BB	(0 << 0)
@@ -816,7 +816,7 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 #define PQ_FLAGS_MCOS	(1 << 1)
 #define PQ_FLAGS_LB	(1 << 2)
 #define PQ_FLAGS_OOO	(1 << 3)
-#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_ACK	(1 << 4)
 #define PQ_FLAGS_OFLD	(1 << 5)
 #define PQ_FLAGS_VFS	(1 << 6)
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 969dd5a..28909fb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -586,14 +586,14 @@ struct public_port {
 	u32 link_status;
 #define LINK_STATUS_LINK_UP				0x00000001
 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001e
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD			(1 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD			(2 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_10G			(3 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_20G			(4 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_40G			(5 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_50G			(6 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_100G			(7 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_25G			(8 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(1 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(2 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_10G		(3 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_20G		(4 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_40G		(5 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_50G		(6 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_100G		(7 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_25G		(8 << 1)
 #define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
 #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
 #define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
@@ -607,10 +607,10 @@ struct public_port {
 #define LINK_STATUS_LINK_PARTNER_100G_CAPABLE		0x00008000
 #define LINK_STATUS_LINK_PARTNER_25G_CAPABLE		0x00010000
 #define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
-#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE		(0 << 18)
-#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE		(1 << 18)
-#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE		(2 << 18)
-#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE			(3 << 18)
+#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0 << 18)
+#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1 << 18)
+#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2 << 18)
+#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3 << 18)
 #define LINK_STATUS_SFP_TX_FAULT			0x00100000
 #define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00200000
 #define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00400000
@@ -619,9 +619,9 @@ struct public_port {
 #define LINK_STATUS_MAC_REMOTE_FAULT			0x02000000
 #define LINK_STATUS_UNSUPPORTED_SPD_REQ			0x04000000
 #define LINK_STATUS_FEC_MODE_MASK			0x38000000
-#define LINK_STATUS_FEC_MODE_NONE				(0 << 27)
-#define LINK_STATUS_FEC_MODE_FIRECODE_CL74			(1 << 27)
-#define LINK_STATUS_FEC_MODE_RS_CL91				(2 << 27)
+#define LINK_STATUS_FEC_MODE_NONE			(0 << 27)
+#define LINK_STATUS_FEC_MODE_FIRECODE_CL74		(1 << 27)
+#define LINK_STATUS_FEC_MODE_RS_CL91			(2 << 27)
 #define LINK_STATUS_EXT_PHY_LINK_UP			0x40000000
 
 	u32 link_status1;
@@ -762,23 +762,23 @@ struct public_port {
 	 *          When 1'b1 those bits contain a value times 16 microseconds.
 	 */
 	u32 eee_status;
-	#define EEE_TIMER_MASK		0x000fffff
-	#define EEE_ADV_STATUS_MASK	0x00f00000
-		#define EEE_1G_ADV	(1 << 1)
-		#define EEE_10G_ADV	(1 << 2)
-	#define EEE_ADV_STATUS_SHIFT	20
-	#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
-	#define EEE_LP_ADV_STATUS_SHIFT	24
-	#define EEE_REQUESTED_BIT	0x10000000
-	#define EEE_LPI_REQUESTED_BIT	0x20000000
-	#define EEE_ACTIVE_BIT		0x40000000
-	#define EEE_TIME_OUTPUT_BIT	0x80000000
+#define EEE_TIMER_MASK		0x000fffff
+#define EEE_ADV_STATUS_MASK	0x00f00000
+#define EEE_1G_ADV	(1 << 1)
+#define EEE_10G_ADV	(1 << 2)
+#define EEE_ADV_STATUS_SHIFT	20
+#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
+#define EEE_LP_ADV_STATUS_SHIFT	24
+#define EEE_REQUESTED_BIT	0x10000000
+#define EEE_LPI_REQUESTED_BIT	0x20000000
+#define EEE_ACTIVE_BIT		0x40000000
+#define EEE_TIME_OUTPUT_BIT	0x80000000
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
-	#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
-	#define EEE_REMOTE_TW_TX_SHIFT	0
-	#define EEE_REMOTE_TW_RX_MASK	0xffff0000
-	#define EEE_REMOTE_TW_RX_SHIFT	16
+#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
+#define EEE_REMOTE_TW_TX_SHIFT	0
+#define EEE_REMOTE_TW_RX_MASK	0xffff0000
+#define EEE_REMOTE_TW_RX_SHIFT	16
 };
 
 /**************************************/
@@ -1157,15 +1157,17 @@ struct public_drv_mb {
  * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
  */
 #define DRV_MSG_CODE_GET_VMAC                   0x00120000
-	#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
-	#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
-	#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
+#define DRV_MSG_CODE_VMAC_TYPE_SHIFT            4
+#define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
+#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
+#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
+#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
 /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
 #define DRV_MSG_CODE_GET_STATS                  0x00130000
-	#define DRV_MSG_CODE_STATS_TYPE_LAN             1
-	#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
-	#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
-	#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
+#define DRV_MSG_CODE_STATS_TYPE_LAN             1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
 /* Host shall provide buffer and size for MFW  */
 #define DRV_MSG_CODE_PMD_DIAG_DUMP		0x00140000
 /* Host shall provide buffer and size for MFW  */
@@ -1193,8 +1195,8 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_MASK_PARITIES		0x001a0000
 /* param[0] - Simulate fan failure,  param[1] - simulate over temp. */
 #define DRV_MSG_CODE_INDUCE_FAILURE		0x001b0000
-	#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
-	#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
+#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
+#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_READ			0x001c0000
 /* Param: [0:15] - gpio number, [16:31] - gpio value */
@@ -1215,50 +1217,50 @@ struct public_drv_mb {
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
-	/* request resource ownership with default aging */
-	#define RESOURCE_OPCODE_REQ			1
-	/* request resource ownership without aging */
-	#define RESOURCE_OPCODE_REQ_WO_AGING		2
-	/* request resource ownership with specific aging timer (in seconds) */
-	#define RESOURCE_OPCODE_REQ_W_AGING		3
-	#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
-	/* force resource release */
-	#define RESOURCE_OPCODE_FORCE_RELEASE		5
-	/* resource is free and granted to requester */
-	#define RESOURCE_OPCODE_GNT			1
-	/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
-	 * 16 = MFW, 17 = diag over serial
-	 */
-	#define RESOURCE_OPCODE_BUSY			2
-	/* indicate release request was acknowledged */
-	#define RESOURCE_OPCODE_RELEASED		3
-	/* indicate release request was previously received by other owner */
-	#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
-	/* indicate wrong owner during release */
-	#define RESOURCE_OPCODE_WRONG_OWNER		5
-	#define RESOURCE_OPCODE_UNKNOWN_CMD		255
-	/* dedicate resource 0 for dump */
-	#define RESOURCE_DUMP				0
+/* request resource ownership with default aging */
+#define RESOURCE_OPCODE_REQ			1
+/* request resource ownership without aging */
+#define RESOURCE_OPCODE_REQ_WO_AGING		2
+/* request resource ownership with specific aging timer (in seconds) */
+#define RESOURCE_OPCODE_REQ_W_AGING		3
+#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
+/* force resource release */
+#define RESOURCE_OPCODE_FORCE_RELEASE		5
+/* resource is free and granted to requester */
+#define RESOURCE_OPCODE_GNT			1
+/* resource is busy, param[7:0] indicates owner as follows: 0-15 = PF0-15,
+ * 16 = MFW, 17 = diag over serial
+ */
+#define RESOURCE_OPCODE_BUSY			2
+/* indicate release request was acknowledged */
+#define RESOURCE_OPCODE_RELEASED		3
+/* indicate release request was previously received by other owner */
+#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
+/* indicate wrong owner during release */
+#define RESOURCE_OPCODE_WRONG_OWNER		5
+#define RESOURCE_OPCODE_UNKNOWN_CMD		255
+/* dedicate resource 0 for dump */
+#define RESOURCE_DUMP				0
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-	#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
-	/* acknowledge reception of error indication */
-	#define DRV_MSG_CODE_MDUMP_ACK			0x01
-	/* set epoc and personality as follow: drv_data[3:0] - epoch,
-	 * drv_data[7:4] - personality
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
-	/* trigger crash dump procedure */
-	#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
-	/* Request valid logs and config words */
-	#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
-	/* Set triggers mask. drv_mb_param should indicate (bitwise) which
-	 * trigger enabled
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
-	/* Clear all logs */
-	#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
+/* acknowledge reception of error indication */
+#define DRV_MSG_CODE_MDUMP_ACK			0x01
+/* set epoch and personality as follows: drv_data[3:0] - epoch,
+ * drv_data[7:4] - personality
+ */
+#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
+/* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
+/* Request valid logs and config words */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
+/* Set triggers mask. drv_mb_param should indicate (bitwise) which
+ * triggers are enabled
+ */
+#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
+/* Clear all logs */
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
 #define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_INFO			0x00270000
@@ -1266,12 +1268,12 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
 /* Value should be placed in union */
 #define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-	#define DRV_MB_PARAM_ADDR_SHIFT			0
-	#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
-	#define DRV_MB_PARAM_DEVAD_SHIFT		16
-	#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-	#define DRV_MB_PARAM_PORT_SHIFT			21
-	#define DRV_MB_PARAM_PORT_MASK			0x00600000
+#define DRV_MB_PARAM_ADDR_SHIFT			0
+#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
+#define DRV_MB_PARAM_DEVAD_SHIFT		16
+#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
+#define DRV_MB_PARAM_PORT_SHIFT			21
+#define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
@@ -1510,7 +1512,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
 
-/* mdump related response codes */
+	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
 #define FW_MSG_CODE_MDUMP_ALLOC_FAILED		0x00020000
 #define FW_MSG_CODE_MDUMP_INVALID_CMD		0x00030000
-- 
1.7.10.3

* [PATCH v4 34/62] net/qede/base: prevent transmitter stuck condition
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (34 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 33/62] net/qede/base: formatting changes Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 35/62] net/qede/base: add mask/shift defines for resource command Rasesh Mody
                             ` (28 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Configure the OOO (out-of-order) TC properly to prevent a transmitter
stuck condition due to credit underruns.

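The selection logic introduced below boils down to this sketch
(illustrative only; resolve_ooo_tc() is a hypothetical helper that
condenses the ecore_dcbx.c and ecore_dev.c hunks):

    /* Prefer the TC the MFW reports via DCBX (the DCBX_OOO_TC field of
     * the ETS flags); when it is zero, fall back to the per-chip
     * default - TC 3 on an AH 4-port device, TC 4 otherwise.
     */
    static u8 resolve_ooo_tc(u8 mfw_ooo_tc, bool four_port)
    {
            if (mfw_ooo_tc)
                    return mfw_ooo_tc;

            return four_port ? DCBX_TCP_OOO_K2_4PORT_TC : DCBX_TCP_OOO_TC;
    }
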
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    4 +---
 drivers/net/qede/base/ecore_dcbx.c |    6 ++----
 drivers/net/qede/base/ecore_dev.c  |   19 ++++++++++++++-----
 drivers/net/qede/base/mcp_public.h |   12 ++++++++----
 4 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 479a991..c9b1b5a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -358,9 +358,6 @@ struct ecore_hw_info {
 
 	u8 num_active_tc;
 
-	/* Traffic class used for tcp out of order traffic */
-	u8 ooo_tc;
-
 	/* The traffic class used by PF for it's offloaded protocol */
 	u8 offload_tc;
 
@@ -441,6 +438,7 @@ struct ecore_qm_info {
 	u16			num_vf_pqs;
 	u8			num_vports;
 	u8			max_phys_tcs_per_port;
+	u8			ooo_tc;
 	bool			pf_rl_en;
 	bool			pf_wfq_en;
 	bool			vport_rl_en;
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 102774d..0e11927 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -129,11 +129,8 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality) {
+	if (p_hwfn->hw_info.personality == personality)
 		p_hwfn->hw_info.offload_tc = tc;
-		if (personality == ECORE_PCI_ISCSI)
-			p_hwfn->hw_info.ooo_tc = DCBX_ISCSI_OOO_TC;
-	}
 }
 
 /* Update app protocol data and hw_info fields with the TLV info */
@@ -317,6 +314,7 @@ ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
 
 	p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
 						    DCBX_ETS_MAX_TCS);
+	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
 	data.pf_id = p_hwfn->rel_pf_id;
 	data.dcbx_enabled = !!dcbx_version;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 21fec58..0840d49 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -291,6 +291,7 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
 static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	bool four_port;
 
 	/* pq and vport bases for this PF */
 	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
@@ -300,10 +301,19 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 	qm_info->vport_rl_en = 1;
 	qm_info->vport_wfq_en = 1;
 
+	/* TC config is different for AH 4 port */
+	four_port = p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2;
+
 	/* in AH 4 port we have fewer TCs per port */
-	qm_info->max_phys_tcs_per_port =
-		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
-			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 :
+						     NUM_OF_PHYS_TCS;
+
+	/* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and
+	 * 4 otherwise
+	 */
+	if (!qm_info->ooo_tc)
+		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
+					      DCBX_TCP_OOO_TC;
 }
 
 /* initialize qm vport params */
@@ -532,8 +542,7 @@ static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
-			 PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_pq(p_hwfn, qm_info, qm_info->ooo_tc, PQ_INIT_SHARE_VPORT);
 }
 
 static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 28909fb..bd34557 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -294,16 +294,20 @@ struct dcbx_ets_feature {
 #define DCBX_ETS_CBS_SHIFT                      3
 #define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
 #define DCBX_ETS_MAX_TCS_SHIFT                  4
-#define DCBX_ISCSI_OOO_TC_MASK			0x00000f00
-#define DCBX_ISCSI_OOO_TC_SHIFT                 8
+#define DCBX_OOO_TC_MASK                        0x00000f00
+#define DCBX_OOO_TC_SHIFT                       8
 /* Entries in tc table are organized such that the left most is pri 0, right most is
  * prio 7
  */
 
 	u32  pri_tc_tbl[1];
-#define DCBX_ISCSI_OOO_TC			(4)
+/* Fixed TCP OOO TC usage is deprecated and used only for driver backward
+ * compatibility
+ */
+#define DCBX_TCP_OOO_TC				(4)
+#define DCBX_TCP_OOO_K2_4PORT_TC		(3)
 
-#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_ISCSI_OOO_TC + 1)
+#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_TCP_OOO_TC + 1)
 #define DCBX_CEE_STRICT_PRIORITY		0xf
 /* Entries in tc table are organized such that the left most is pri 0, right most is
  * prio 7
-- 
1.7.10.3

* [PATCH v4 35/62] net/qede/base: add mask/shift defines for resource command
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (35 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 34/62] net/qede/base: prevent transmitter stuck condition Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 36/62] net/qede/base: add API for using MFW resource lock Rasesh Mody
                             ` (27 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add several mask/shift defines for the resource command

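As a hedged usage sketch - abridged from how ecore_mcp_resc_lock() in
the following patch consumes these defines; resource_num, timeout and
mcp_param stand for the caller's context, and the SET variant of the
ECORE_MFW_*_FIELD accessors is introduced in the next patch:

    u32 param = 0, mcp_param = 0;
    u8 owner, opcode;

    /* Compose the request: resource number, opcode, aging timeout */
    ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
    ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, RESOURCE_OPCODE_REQ);
    ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);

    /* ... DRV_MSG_CODE_RESOURCE_CMD is sent; reply lands in mcp_param ... */

    /* Parse the response: current owner and result opcode */
    owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
    opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
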
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index bd34557..1b1ecd2 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1217,10 +1217,16 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_TIMESTAMP                  0x00210000
 /* This is an empty mailbox just return OK*/
 #define DRV_MSG_CODE_EMPTY_MB			0x00220000
+
 /* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
+
+#define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
+#define RESOURCE_CMD_REQ_RESC_SHIFT		0
+#define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
+#define RESOURCE_CMD_REQ_OPCODE_SHIFT		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
 /* request resource ownership without aging */
@@ -1230,6 +1236,13 @@ struct public_drv_mb {
 #define RESOURCE_OPCODE_RELEASE			4 /* release resource */
 /* force resource release */
 #define RESOURCE_OPCODE_FORCE_RELEASE		5
+#define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
+#define RESOURCE_CMD_REQ_AGE_SHIFT		8
+
+#define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
+#define RESOURCE_CMD_RSP_OWNER_SHIFT		0
+#define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
+#define RESOURCE_CMD_RSP_OPCODE_SHIFT		8
 /* resource is free and granted to requester */
 #define RESOURCE_OPCODE_GNT			1
 /* resource is busy, param[7:0] indicates owner as follows: 0-15 = PF0-15,
@@ -1243,8 +1256,10 @@ struct public_drv_mb {
 /* indicate wrong owner during release */
 #define RESOURCE_OPCODE_WRONG_OWNER		5
 #define RESOURCE_OPCODE_UNKNOWN_CMD		255
+
 /* dedicate resource 0 for dump */
 #define RESOURCE_DUMP				0
+
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-- 
1.7.10.3

* [PATCH v4 36/62] net/qede/base: add API for using MFW resource lock
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (36 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 35/62] net/qede/base: add mask/shift defines for resource command Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 37/62] net/qede/base: remove clock slowdown option Rasesh Mody
                             ` (26 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add base driver API for using the Management FW resource lock

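A minimal usage sketch, assuming the caller already holds valid
p_hwfn/p_ptt handles (the flow follows the API comments in the
ecore_mcp.h hunk below):

    enum _ecore_status_t rc;
    bool granted = false, released = false;
    u8 owner;

    /* Try to take MFW resource 0 (the dump resource) with the default
     * aging timeout; on refusal, 'owner' reports who holds it.
     */
    rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, RESOURCE_DUMP,
                             ECORE_MCP_RESC_LOCK_TO_DEFAULT,
                             &granted, &owner);
    if (rc == ECORE_SUCCESS && granted) {
            /* ... critical section ... */
            rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, RESOURCE_DUMP,
                                       false /* no force */, &released);
    }
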
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    9 +++
 drivers/net/qede/base/ecore_dcbx.h |    3 -
 drivers/net/qede/base/ecore_mcp.c  |  143 ++++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_mcp.h  |   41 +++++++++++
 4 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index c9b1b5a..acf2244 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -86,6 +86,15 @@ do {									\
 	(((value) >> (name##_SHIFT)) & name##_MASK)
 #endif
 
+#define ECORE_MFW_GET_FIELD(name, field)				\
+	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
+#define ECORE_MFW_SET_FIELD(name, field, value)				\
+do {									\
+	(name) &= ~((field ## _MASK) << (field ## _SHIFT));		\
+	(name) |= (((value) << (field ## _SHIFT)) & (field ## _MASK));	\
+} while (0)
+
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 {
 	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 2ce4465..0830014 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -17,9 +17,6 @@
 #include "ecore_hsi_common.h"
 #include "ecore_dcbx_api.h"
 
-#define ECORE_MFW_GET_FIELD(name, field) \
-	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
-
 struct ecore_dcbx_info {
 	struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..30cb76e 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,146 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+			   p_mcp_resp, p_mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* A zero response implies that the resource command is not supported */
+	if (!*p_mcp_resp)
+		return ECORE_NOTIMPL;
+
+	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
+		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
+
+		DP_NOTICE(p_hwfn, false,
+			  "The resource command is unknown to the MFW [param 0x%08x, opcode %d]\n",
+			  param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	switch (timeout) {
+	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
+		opcode = RESOURCE_OPCODE_REQ;
+		timeout = 0;
+		break;
+	case ECORE_MCP_RESC_LOCK_TO_NONE:
+		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
+		timeout = 0;
+		break;
+	default:
+		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		break;
+	}
+
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
+		   param, timeout, opcode, resource_num);
+
+	/* Attempt to acquire the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
+		   mcp_param, opcode, *p_owner);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_GNT:
+		*p_granted = true;
+		break;
+	case RESOURCE_OPCODE_BUSY:
+		*p_granted = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource lock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
+		       : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
+		   param, opcode, resource_num);
+
+	/* Attempt to release the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock response: mcp_param 0x%08x [opcode %d]\n",
+		   mcp_param, opcode);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
+		DP_INFO(p_hwfn,
+			"Resource unlock request for an already released resource [resc_num %d]\n",
+			resource_num);
+		/* Fallthrough */
+	case RESOURCE_OPCODE_RELEASED:
+		*p_released = true;
+		break;
+	case RESOURCE_OPCODE_WRONG_OWNER:
+		*p_released = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource unlock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 0708923..7a81516 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -361,4 +361,45 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
+#define ECORE_MCP_RESC_LOCK_TO_NONE	255
+
+/**
+ * @brief Acquires MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num - valid values are 0..31
+ *  @param timeout - lock timeout value in seconds
+ *                   (1..254, '0' - default value, '255' - no timeout).
+ *  @param p_granted - will be filled as true if the resource is free and
+ *                     granted, or false if it is busy.
+ *  @param p_owner - A pointer to a variable to be filled with the resource
+ *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner);
+
+/**
+ * @brief Releases MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num
+ *  @param force - allows releasing a resource even if it belongs to another PF
+ *  @param p_released - will be filled as true if the resource is released (or
+ *			has already been released), and false if the resource is
+ *			acquired by another PF and the `force' flag was not set.
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released);
+
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

* [PATCH v4 37/62] net/qede/base: remove clock slowdown option
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (37 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 36/62] net/qede/base: add API for using MFW resource lock Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 38/62] net/qede/base: add new image types Rasesh Mody
                             ` (25 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Remove the clock slowdown NVM config option, as it is not supported
by current chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 4202337..4e58835 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -72,10 +72,12 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
 		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
 		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \
+								0x80000000
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \
+								0x0
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1
 	u32 engineering_change[3]; /* 0x4 */
 	u32 manufacturing_id; /* 0x10 */
 	u32 serial_number[4]; /* 0x14 */
-- 
1.7.10.3

* [PATCH v4 38/62] net/qede/base: add new image types
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (38 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 37/62] net/qede/base: remove clock slowdown option Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 39/62] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
                             ` (24 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add new image types - RECOVERY and PK (Public Key) - towards the
second phase of NVRAM security support.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 1b1ecd2..d3cbc96 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1502,6 +1502,10 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK	0x00400000
 /* MFW reject "mcp reset" command if one of the drivers is up */
 #define FW_MSG_CODE_MCP_RESET_REJECT		0x00600000
+#define FW_MSG_CODE_NVM_FAILED_CALC_HASH	0x00310000
+#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING	0x00320000
+#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY	0x00330000
+
 #define FW_MSG_CODE_PHY_OK			0x00110000
 #define FW_MSG_CODE_PHY_ERROR			0x00120000
 #define FW_MSG_CODE_SET_SECURE_MODE_ERROR	0x00130000
@@ -1530,6 +1534,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE	0x00710000
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
+#define FW_MSG_CODE_RECOVERY_MODE		0x00740000
 
 	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
-- 
1.7.10.3

* [PATCH v4 39/62] net/qede/base: use L2-handles for RSS configuration
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (39 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 38/62] net/qede/base: add new image types Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 40/62] net/qede/base: change valloc to vzalloc Rasesh Mody
                             ` (23 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Move the RSS configuration to using L2 handles instead of queue IDs.

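On the PMD side the change boils down to the loop below (abridged from
the qede_ethdev.c hunk in this patch) - the indirection table now
carries the rx queue handles obtained at queue-start time instead of
raw queue ids:

    struct ecore_rss_params rss_params;
    uint8_t idx;
    int i;

    /* Pass the L2 handles instead of qids */
    for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
            idx = qdev->rss_ind_table[i];
            rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
    }
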
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c     |   48 ++++++++++++++++++-------
 drivers/net/qede/base/ecore_l2.h     |    2 ++
 drivers/net/qede/base/ecore_l2_api.h |    4 ++-
 drivers/net/qede/base/ecore_sriov.c  |   66 +++++++++++++++++++++-------------
 drivers/net/qede/base/ecore_vf.c     |   13 +++++--
 drivers/net/qede/qede_ethdev.c       |   19 ++++++----
 6 files changed, 105 insertions(+), 47 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 352620a..2635213 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -59,6 +59,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->cid = cid;
 	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
+	p_cid->p_owner = p_hwfn;
 
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
@@ -267,10 +268,9 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 			  struct vport_update_ramrod_data *p_ramrod,
 			  struct ecore_rss_params *p_rss)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct eth_vport_rss_config *p_config;
-	u16 abs_l2_queue = 0;
-	int i;
+	int i, table_size;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (!p_rss) {
 		p_ramrod->common.update_rss_flg = 0;
@@ -324,16 +324,40 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 		   p_config->capabilities,
 		   p_config->update_rss_ind_table, p_config->update_rss_key);
 
-	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		rc = ecore_fw_l2_queue(p_hwfn,
-				       p_rss->rss_ind_table[i],
-				       &abs_l2_queue);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+	table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE,
+				1 << p_config->tbl_size);
+	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_queue = p_rss->rss_ind_table[i];
 
-		p_config->indirection_table[i] = OSAL_CPU_TO_LE16(abs_l2_queue);
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "i= %d, queue = %d\n",
-			   i, p_config->indirection_table[i]);
+		if (!p_queue)
+			return ECORE_INVAL;
+
+		p_config->indirection_table[i] =
+				OSAL_CPU_TO_LE16(p_queue->abs.queue_id);
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "Configured RSS indirection table [%d entries]:\n",
+		   table_size);
+	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i += 0x10) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "%04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x\n",
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 1]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 2]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 3]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 4]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 5]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 6]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
+			 OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
 	}
 
 	for (i = 0; i < 10; i++)
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index c136389..4b0ccb4 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -36,6 +36,8 @@ struct ecore_queue_cid {
 
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
+
+	struct ecore_hwfn *p_owner;
 };
 
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index af316d3..5a7db76 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -59,7 +59,9 @@ struct ecore_rss_params {
 	u8 update_rss_key;
 	u8 rss_caps;
 	u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
-	u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
+
+	/* Indirection table consists of rx queue handles */
+	void *rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	u32 rss_key[ECORE_RSS_KEY_SIZE];
 };
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6cec7b2..280c992 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2704,12 +2704,14 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			      struct ecore_vf_info *vf,
 			      struct ecore_sp_vport_update_params *p_data,
 			      struct ecore_rss_params *p_rss,
-			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+			      struct ecore_iov_vf_mbx *p_mbx,
+			      u16 *tlvs_mask, u16 *tlvs_accepted)
 {
 	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
-	u16 i, q_idx, max_q_idx;
+	bool b_reject = false;
 	u16 table_size;
+	u16 i, q_idx;
 
 	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
 	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
@@ -2737,36 +2739,38 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 	p_rss->rss_eng_id = vf->relative_vf_id + 1;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
-	OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
-		    sizeof(p_rss->rss_ind_table));
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
 		    sizeof(p_rss->rss_key));
 
 	table_size = OSAL_MIN_T(u16, OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
 				(1 << p_rss_tlv->rss_table_size_log));
 
-	max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
-
 	for (i = 0; i < table_size; i++) {
-		u16 index = vf->vf_queues[0].fw_rx_qid;
+		q_idx = p_rss_tlv->rss_ind_table[i];
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
 
-		q_idx = p_rss->rss_ind_table[i];
-		if (q_idx >= max_q_idx)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d,"
-				  " rxq is out of range\n",
-				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].p_rx_cid)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d, rxq is not active\n",
-				  i, q_idx);
-		else
-			index = vf->vf_queues[q_idx].fw_rx_qid;
-		p_rss->rss_ind_table[i] = index;
+		if (!vf->vf_queues[q_idx].p_rx_cid) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
+
+		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
 	}
 
 	p_data->rss_params = p_rss;
+out:
 	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+	if (!b_reject)
+		*tlvs_accepted |= 1 << ECORE_IOV_VP_UPDATE_RSS;
 }
 
 static void
@@ -2822,11 +2826,11 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  struct ecore_vf_info *vf)
 {
+	struct ecore_rss_params *p_rss_params = OSAL_NULL;
 	struct ecore_sp_vport_update_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct ecore_sge_tpa_params sge_tpa_params;
 	u16 tlvs_mask = 0, tlvs_accepted = 0;
-	struct ecore_rss_params rss_params;
 	u8 status = PFVF_STATUS_SUCCESS;
 	u16 length;
 	enum _ecore_status_t rc;
@@ -2841,6 +2845,12 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
+	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	if (p_rss_params == OSAL_NULL) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.opaque_fid = vf->opaque_fid;
 	params.vport_id = vf->vport_id;
@@ -2854,19 +2864,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
-	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
-				      mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
 					  &sge_tpa_params, mbx, &tlvs_mask);
 
+	tlvs_accepted = tlvs_mask;
+
+	/* Some of the extended TLVs need to be validated first; in that case,
+	 * they can update the mask without updating the accepted [so that the
+	 * PF can communicate to the VF that it has rejected the request].
+	 */
+	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, p_rss_params,
+				      mbx, &tlvs_mask, &tlvs_accepted);
+
 	/* Just log a message if there is no single extended tlv in buffer.
 	 * When all features of vport update ramrod would be requested by VF
 	 * as extended TLVs in buffer then an error can be returned in response
 	 * if there is no extended TLV present in buffer.
 	 */
-	tlvs_accepted = tlvs_mask;
-
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2894,6 +2909,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_FAILURE;
 
 out:
+	OSAL_VFREE(p_hwfn->p_dev, p_rss_params);
 	length = ecore_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
 						    tlvs_mask, tlvs_accepted);
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 3182621..a072a81 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1132,6 +1132,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 	if (p_params->rss_params) {
 		struct ecore_rss_params *rss_params = p_params->rss_params;
 		struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+		int i, table_size;
 
 		size = sizeof(struct vfpf_vport_update_rss_tlv);
 		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -1153,8 +1154,16 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		p_rss_tlv->rss_enable = rss_params->rss_enable;
 		p_rss_tlv->rss_caps = rss_params->rss_caps;
 		p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
-		OSAL_MEMCPY(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
-			    sizeof(rss_params->rss_ind_table));
+
+		table_size = OSAL_MIN_T(int, T_ETH_INDIRECTION_TABLE_SIZE,
+					1 << p_rss_tlv->rss_table_size_log);
+		for (i = 0; i < table_size; i++) {
+			struct ecore_queue_cid *p_queue;
+
+			p_queue = rss_params->rss_ind_table[i];
+			p_rss_tlv->rss_ind_table[i] = p_queue->rel.queue_id;
+		}
+
 		OSAL_MEMCPY(p_rss_tlv->rss_key, rss_params->rss_key,
 			    sizeof(rss_params->rss_key));
 	}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 257e5b2..bd190d0 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1487,11 +1487,11 @@ static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct ecore_sp_vport_update_params vport_update_params;
 	struct ecore_rss_params rss_params;
-	struct ecore_rss_params params;
 	struct ecore_hwfn *p_hwfn;
 	uint32_t *key = (uint32_t *)rss_conf->rss_key;
 	uint64_t hf = rss_conf->rss_hf;
 	uint8_t len = rss_conf->rss_key_len;
+	uint8_t idx;
 	uint8_t i;
 	int rc;
 
@@ -1526,6 +1526,11 @@ static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
 	/* tbl_size has to be set with capabilities */
 	rss_params.rss_table_size_log = 7;
 	vport_update_params.vport_id = 0;
+	/* pass the L2 handles instead of qids */
+	for (i = 0 ; i < ECORE_RSS_IND_TABLE_SIZE ; i++) {
+		idx = qdev->rss_ind_table[i];
+		rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
+	}
 	vport_update_params.rss_params = &rss_params;
 
 	for_each_hwfn(edev, i) {
@@ -1607,14 +1612,18 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = reta_conf[idx].reta[shift];
-			params.rss_ind_table[i] = entry;
+			/* Pass rxq handles to ecore */
+			params.rss_ind_table[i] =
+					qdev->fp_array[entry].rxq->handle;
+			/* Update the local copy for RETA query command */
+			qdev->rss_ind_table[i] = entry;
 		}
 	}
 
 	/* Fix up RETA for CMT mode device */
 	if (edev->num_hwfns > 1)
 		qdev->rss_enable = qed_update_rss_parm_cmt(edev,
-					&params.rss_ind_table[0]);
+					params.rss_ind_table[0]);
 	params.update_rss_ind_table = 1;
 	params.rss_table_size_log = 7;
 	params.update_rss_config = 1;
@@ -1634,10 +1643,6 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		}
 	}
 
-	/* Update the local copy for RETA query command */
-	memcpy(qdev->rss_ind_table, params.rss_ind_table,
-	       sizeof(params.rss_ind_table));
-
 	return 0;
 }
 
-- 
1.7.10.3

* [PATCH v4 40/62] net/qede/base: change valloc to vzalloc
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (40 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 39/62] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 41/62] net/qede/base: add support for previous driver unload Rasesh Mody
                             ` (22 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Change OSAL_VALLOC() into OSAL_VZALLOC(), which also zeroes the
allocated memory.

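The conversion is mechanical; the ecore_l2.c hunk below is a
representative example:

    /* Before: allocate, then zero explicitly */
    p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
    OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));

    /* After: a single call, backed by rte_zmalloc(), returns zeroed
     * memory
     */
    p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
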
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    2 +-
 drivers/net/qede/base/ecore_dev.c     |    3 +--
 drivers/net/qede/base/ecore_l2.c      |    3 +--
 drivers/net/qede/base/ecore_mng_tlv.c |    5 ++---
 drivers/net/qede/base/ecore_sriov.c   |    2 +-
 5 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 4c91dc0..052a0cf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -89,7 +89,7 @@ typedef int bool;
 #define OSAL_ALLOC(dev, GFP, size) rte_malloc("qede", size, 0)
 #define OSAL_ZALLOC(dev, GFP, size) rte_zmalloc("qede", size, 0)
 #define OSAL_CALLOC(dev, GFP, num, size) rte_calloc("qede", num, size, 0)
-#define OSAL_VALLOC(dev, size) rte_malloc("qede", size, 0)
+#define OSAL_VZALLOC(dev, size) rte_zmalloc("qede", size, 0)
 #define OSAL_FREE(dev, memory)		  \
 	do {				  \
 		rte_free((void *)memory); \
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0840d49..6d75e60 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3717,13 +3717,12 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	u32 page_cnt = p_chain->page_cnt, size, i;
 
 	size = page_cnt * sizeof(*pp_virt_addr_tbl);
-	pp_virt_addr_tbl = (void **)OSAL_VALLOC(p_dev, size);
+	pp_virt_addr_tbl = (void **)OSAL_VZALLOC(p_dev, size);
 	if (!pp_virt_addr_tbl) {
 		DP_NOTICE(p_dev, true,
 			  "Failed to allocate memory for the chain virtual addresses table\n");
 		return ECORE_NOMEM;
 	}
-	OSAL_MEM_ZERO(pp_virt_addr_tbl, size);
 
 	/* The allocation of the PBL table is done with its full size, since it
 	 * is expected to be successive.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 2635213..4d26e19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -50,10 +50,9 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
 	if (p_cid == OSAL_NULL)
 		return OSAL_NULL;
-	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index 0065d12..0bf1be8 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1413,11 +1413,10 @@ ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
 	u32 offset;
 	int len;
 
-	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
 	if (!p_tlv_data)
 		return ECORE_NOMEM;
 
-	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
 	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
 		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
 		return ECORE_INVAL;
@@ -1487,7 +1486,7 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		goto drv_done;
 	}
 
-	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size);
 	if (!p_mfw_buf) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed allocate memory for p_mfw_buf\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 280c992..aab9925 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2845,7 +2845,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	p_rss_params = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
 	if (p_rss_params == OSAL_NULL) {
 		status = PFVF_STATUS_FAILURE;
 		goto out;
-- 
1.7.10.3

* [PATCH v4 41/62] net/qede/base: add support for previous driver unload
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (41 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 40/62] net/qede/base: change valloc to vzalloc Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 42/62] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
                             ` (21 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a new driver/management FW load request sequence that handles the
unloading of a previous driver.

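From the caller's perspective the new sequence looks like the sketch
below (abridged from the ecore_hw_init() hunk; error handling trimmed,
and is_crash_kernel stands for the new p_params->is_crash_kernel flag):

    struct ecore_load_req_params load_req_params;

    OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
    /* State our role and how long the MFW may hold the engine lock */
    load_req_params.drv_role = is_crash_kernel ? ECORE_DRV_ROLE_KDUMP
                                               : ECORE_DRV_ROLE_OS;
    load_req_params.timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
    load_req_params.avoid_eng_reset = false;

    rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_req_params);
    if (rc == ECORE_SUCCESS)
            load_code = load_req_params.load_code;
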
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 ++
 drivers/net/qede/base/ecore_dev.c     |   43 ++--
 drivers/net/qede/base/ecore_dev_api.h |   30 ++-
 drivers/net/qede/base/ecore_mcp.c     |  369 ++++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.h     |   40 ++--
 drivers/net/qede/base/mcp_public.h    |   56 ++++-
 drivers/net/qede/qede_main.c          |    2 +
 7 files changed, 482 insertions(+), 71 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index acf2244..60a8a6b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,6 +28,19 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
+#define ECORE_MAJOR_VERSION		8
+#define ECORE_MINOR_VERSION		18
+#define ECORE_REVISION_VERSION		7
+#define ECORE_ENGINEERING_VERSION	0
+
+#define ECORE_VERSION							\
+	((ECORE_MAJOR_VERSION << 24) | (ECORE_MINOR_VERSION << 16) |	\
+	 (ECORE_REVISION_VERSION << 8) | ECORE_ENGINEERING_VERSION)
+
+#define STORM_FW_VERSION						\
+	((FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) |	\
+	 (FW_REVISION_VERSION << 8) | FW_ENGINEERING_VERSION)
+
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define ECORE_WFQ_UNIT	100
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 6d75e60..29dd292 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1901,10 +1901,11 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
+	bool b_default_mtu = true;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1943,17 +1944,25 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		/* @@@TBD need to add here:
-		 * Check for fan failure
-		 * Prev_unload
-		 */
-		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_code);
-		if (rc) {
+		OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
+		load_req_params.drv_role = p_params->is_crash_kernel ?
+					   ECORE_DRV_ROLE_KDUMP :
+					   ECORE_DRV_ROLE_OS;
+		load_req_params.timeout_val = p_params->mfw_timeout_val;
+		load_req_params.avoid_eng_reset = p_params->avoid_eng_reset;
+		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt,
+					&load_req_params);
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_REQ command\n");
+				  "Failed sending a LOAD_REQ command\n");
 			return rc;
 		}
 
+		load_code = load_req_params.load_code;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load request was sent. Load code: 0x%x\n",
+			   load_code);
+
 		/* CQ75580:
 		 * When coming back from hibernate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
@@ -1966,10 +1975,6 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 */
 		ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "Load request was sent. Resp:0x%x, Load code: 0x%x\n",
-			   rc, load_code);
-
 		/* Only relevant for recovery:
 		 * Clear the indication after the LOAD_REQ command is responded
 		 * by the MFW.
@@ -1988,13 +1993,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		case FW_MSG_CODE_DRV_LOAD_ENGINE:
 			rc = ecore_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
 						  p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_PORT:
 			rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt,
 						p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_FUNCTION:
@@ -2006,6 +2011,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 					      p_params->allow_npar_tx_switch);
 			break;
 		default:
+			DP_NOTICE(p_hwfn, false,
+				  "Unexpected load code [0x%08x]", load_code);
 			rc = ECORE_NOTIMPL;
 			break;
 		}
@@ -2021,6 +2028,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				       0, &load_code, &param);
 		if (rc != ECORE_SUCCESS)
 			return rc;
+
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
 				  "Failed sending LOAD_DONE command\n");
@@ -2045,10 +2053,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 	if (IS_PF(p_dev)) {
 		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-		drv_mb_param = (FW_MAJOR_VERSION << 24) |
-			       (FW_MINOR_VERSION << 16) |
-			       (FW_REVISION_VERSION << 8) |
-			       (FW_ENGINEERING_VERSION);
+		drv_mb_param = STORM_FW_VERSION;
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 356c5e4..7e90778 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -58,16 +58,38 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev);
 void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
-	/* tunnelling parameters */
+	/* Tunnelling parameters */
 	struct ecore_tunnel_info *p_tunn;
+
 	bool b_hw_start;
-	/* interrupt mode [msix, inta, etc.] to use */
+
+	/* Interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
-/* npar tx switching to be used for vports configured for tx-switching */
 
+	/* NPAR tx switching to be used for vports configured for tx-switching
+	 */
 	bool allow_npar_tx_switch;
-	/* binary fw data pointer in binary fw file */
+
+	/* Binary fw data pointer in binary fw file */
 	const u8 *bin_fw_data;
+
+	/* Indicates whether the driver is running over a crash kernel.
+	 * As part of the load request, this will be used for providing the
+	 * driver role to the MFW.
+	 * In case of a crash kernel over PDA - this should be set to false.
+	 */
+	bool is_crash_kernel;
+
+	/* The timeout value that the MFW should use when locking the engine for
+	 * the driver load process.
+	 * A value of '0' means the default value, and '255' means no timeout.
+	 */
+	u8 mfw_timeout_val;
+#define ECORE_LOAD_REQ_LOCK_TO_DEFAULT	0
+#define ECORE_LOAD_REQ_LOCK_TO_NONE	255
+
+	/* Avoid engine reset when first PF loads on it */
+	bool avoid_eng_reset;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 30cb76e..6c5b5db 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -518,51 +518,368 @@ static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static bool ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role)
+{
+	return (drv_role == DRV_ROLE_OS &&
+		exist_drv_role == DRV_ROLE_PREBOOT) ||
+	       (drv_role == DRV_ROLE_KDUMP && exist_drv_role == DRV_ROLE_OS);
+}
+
+static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0,
+			   &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send cancel load request, rc = %d\n", rc);
+
+	return rc;
+}
+
+#define CONFIG_ECORE_L2_BITMAP_IDX	(0x1 << 0)
+#define CONFIG_ECORE_SRIOV_BITMAP_IDX	(0x1 << 1)
+#define CONFIG_ECORE_ROCE_BITMAP_IDX	(0x1 << 2)
+#define CONFIG_ECORE_IWARP_BITMAP_IDX	(0x1 << 3)
+#define CONFIG_ECORE_FCOE_BITMAP_IDX	(0x1 << 4)
+#define CONFIG_ECORE_ISCSI_BITMAP_IDX	(0x1 << 5)
+#define CONFIG_ECORE_LL2_BITMAP_IDX	(0x1 << 6)
+
+static u32 ecore_get_config_bitmap(void)
+{
+	u32 config_bitmap = 0x0;
+
+#ifdef CONFIG_ECORE_L2
+	config_bitmap |= CONFIG_ECORE_L2_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_SRIOV
+	config_bitmap |= CONFIG_ECORE_SRIOV_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ROCE
+	config_bitmap |= CONFIG_ECORE_ROCE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_IWARP
+	config_bitmap |= CONFIG_ECORE_IWARP_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_FCOE
+	config_bitmap |= CONFIG_ECORE_FCOE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ISCSI
+	config_bitmap |= CONFIG_ECORE_ISCSI_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_LL2
+	config_bitmap |= CONFIG_ECORE_LL2_BITMAP_IDX;
+#endif
+
+	return config_bitmap;
+}
+
+struct ecore_load_req_in_params {
+	u8 hsi_ver;
+#define ECORE_LOAD_REQ_HSI_VER_DEFAULT	0
+#define ECORE_LOAD_REQ_HSI_VER_1	1
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u8 drv_role;
+	u8 timeout_val;
+	u8 force_cmd;
+	bool avoid_eng_reset;
+};
+
+struct ecore_load_req_out_params {
+	u32 load_code;
+	u32 exist_drv_ver_0;
+	u32 exist_drv_ver_1;
+	u32 exist_fw_ver;
+	u8 exist_drv_role;
+	u8 mfw_hsi_ver;
+	bool drv_exists;
+};
+
+static enum _ecore_status_t
+__ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_load_req_in_params *p_in_params,
+		     struct ecore_load_req_out_params *p_out_params)
+{
+	union drv_union_data union_data_src, union_data_dst;
+	struct ecore_mcp_mb_params mb_params;
+	struct load_req_stc *p_load_req;
+	struct load_rsp_stc *p_load_rsp;
+	u32 hsi_ver;
+	enum _ecore_status_t rc;
+
+	p_load_req = &union_data_src.load_req;
+	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
+	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
+	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
+	p_load_req->fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+			    p_in_params->drv_role);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+			    p_in_params->timeout_val);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+			    p_in_params->force_cmd);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+			    p_in_params->avoid_eng_reset);
+
+	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
+		  DRV_ID_MCP_HSI_VER_CURRENT :
+		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_SHIFT);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
+	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
+	mb_params.p_data_src = &union_data_src;
+	mb_params.p_data_dst = &union_data_dst;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+		   mb_params.param,
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
+			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
+			   p_load_req->fw_ver, p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_LOCK_TO),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FLAGS0));
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send load request, rc = %d\n", rc);
+		return rc;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Response: resp 0x%08x\n", mb_params.mcp_resp);
+	p_out_params->load_code = mb_params.mcp_resp;
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		p_load_rsp = &union_data_dst.load_rsp;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
+			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
+			   p_load_rsp->fw_ver, p_load_rsp->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_FLAGS0));
+
+		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
+		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
+		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_role =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+		p_out_params->mfw_hsi_ver =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+		p_out_params->drv_exists =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					    LOAD_RSP_FLAGS0) &
+			LOAD_RSP_FLAGS0_DRV_EXISTS;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
+						   enum ecore_drv_role drv_role,
+						   u8 *p_mfw_drv_role)
+{
+	switch (drv_role) {
+	case ECORE_DRV_ROLE_OS:
+		*p_mfw_drv_role = DRV_ROLE_OS;
+		break;
+	case ECORE_DRV_ROLE_KDUMP:
+		*p_mfw_drv_role = DRV_ROLE_KDUMP;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected driver role %d\n", drv_role);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum ecore_load_req_force {
+	ECORE_LOAD_REQ_FORCE_NONE,
+	ECORE_LOAD_REQ_FORCE_PF,
+	ECORE_LOAD_REQ_FORCE_ALL,
+};
+
+static enum _ecore_status_t
+ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
+			enum ecore_load_req_force force_cmd,
+			u8 *p_mfw_force_cmd)
+{
+	switch (force_cmd) {
+	case ECORE_LOAD_REQ_FORCE_NONE:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_NONE;
+		break;
+	case ECORE_LOAD_REQ_FORCE_PF:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_PF;
+		break;
+	case ECORE_LOAD_REQ_FORCE_ALL:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_ALL;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected force value %d\n", force_cmd);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code)
+					struct ecore_load_req_params *p_params)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	struct ecore_mcp_mb_params mb_params;
+	struct ecore_load_req_out_params out_params;
+	struct ecore_load_req_in_params in_params;
+	u8 mfw_drv_role, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, p_load_code);
+		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
 		return ECORE_SUCCESS;
 	}
 #endif
 
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
-	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
-			  p_dev->drv_type;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
+	in_params.drv_ver_0 = ECORE_VERSION;
+	in_params.drv_ver_1 = ecore_get_config_bitmap();
+	in_params.fw_ver = STORM_FW_VERSION;
+	rc = ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	/* if mcp fails to respond we must abort */
-	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+	in_params.drv_role = mfw_drv_role;
+	in_params.timeout_val = p_params->timeout_val;
+	rc = ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
+				     &mfw_force_cmd);
+	if (rc != ECORE_SUCCESS)
 		return rc;
-	}
 
-	*p_load_code = mb_params.mcp_resp;
+	in_params.force_cmd = mfw_force_cmd;
+	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
-	/* If MFW refused (e.g. other port is in diagnostic mode) we
-	 * must abort. This can happen in the following cases:
-	 * - Other port is in diagnostic mode
-	 * - Previously loaded function on the engine is not compliant with
-	 *   the requester.
-	 * - MFW cannot cope with the requester's DRV_MFW_HSI_VERSION.
-	 *      -
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params, &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* First handle cases where another load request should/might be sent:
+	 * - MFW expects the old interface [HSI version = 1]
+	 * - MFW responds that a force load request is required
 	 */
-	if (!(*p_load_code) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_PDA) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG)) {
-		DP_ERR(p_hwfn, "MCP refused load request, aborting\n");
+	if (out_params.load_code == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		DP_INFO(p_hwfn,
+			"MFW refused a load request due to HSI > 1. Resending with HSI = 1.\n");
+
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_1;
+		OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+		rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+					  &out_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	} else if (out_params.load_code ==
+		   FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE) {
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		if (ecore_mcp_can_force_load(in_params.drv_role,
+					     out_params.exist_drv_role)) {
+			DP_INFO(p_hwfn,
+				"A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Sending a force load request.\n",
+				out_params.exist_drv_role,
+				out_params.exist_fw_ver,
+				out_params.exist_drv_ver_0,
+				out_params.exist_drv_ver_1);
+
+			rc = ecore_get_mfw_force_cmd(p_hwfn,
+						     ECORE_LOAD_REQ_FORCE_ALL,
+						     &mfw_force_cmd);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+
+			in_params.force_cmd = mfw_force_cmd;
+			OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+			rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+						  &out_params);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		} else {
+			DP_NOTICE(p_hwfn, false,
+				  "A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Avoiding to prevent disruption of active PFs.\n",
+				  out_params.exist_drv_role,
+				  out_params.exist_fw_ver,
+				  out_params.exist_drv_ver_0,
+				  out_params.exist_drv_ver_1);
+
+			ecore_mcp_cancel_load_req(p_hwfn, p_ptt);
+			return ECORE_BUSY;
+		}
+	}
+
+	/* Now handle the other types of responses.
+	 * The "REFUSED_HSI_1" and "REFUSED_REQUIRES_FORCE" responses are not
+	 * expected here after the additional revised load requests were sent.
+	 */
+	switch (out_params.load_code) {
+	case FW_MSG_CODE_DRV_LOAD_ENGINE:
+	case FW_MSG_CODE_DRV_LOAD_PORT:
+	case FW_MSG_CODE_DRV_LOAD_FUNCTION:
+		if (out_params.mfw_hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+		    out_params.drv_exists) {
+			/* The role and fw/driver version match, but the PF is
+			 * already loaded and has not been unloaded gracefully.
+			 * This is unexpected since a quasi-FLR request was
+			 * previously sent as part of ecore_hw_prepare().
+			 */
+			DP_NOTICE(p_hwfn, false,
+				  "PF is already loaded - shouldn't have got here since a quasi-FLR request was previously sent!\n");
+			return ECORE_INVAL;
+		}
+		break;
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_PDA:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_HSI:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT:
+		DP_NOTICE(p_hwfn, false,
+			  "MFW refused a load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
 		return ECORE_BUSY;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected response to load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
+		break;
 	}
 
+	p_params->load_code = out_params.load_code;
+
 	return ECORE_SUCCESS;
 }
 
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 7a81516..4138a12 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -136,32 +136,36 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn - hw function
  * @param p_ptt - PTT required for register access
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation
- * was successul.
+ * was successful.
  */
 enum _ecore_status_t ecore_issue_pulse(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt);
 
+enum ecore_drv_role {
+	ECORE_DRV_ROLE_OS,
+	ECORE_DRV_ROLE_KDUMP,
+};
+
+struct ecore_load_req_params {
+	enum ecore_drv_role drv_role;
+	u8 timeout_val; /* 1..254, '0' - default value, '255' - no timeout */
+	bool avoid_eng_reset;
+	u32 load_code;
+};
+
 /**
- * @brief Sends a LOAD_REQ to the MFW, and in case operation
- *        succeed, returns whether this PF is the first on the
- *        chip/engine/port or function. This function should be
- *        called when driver is ready to accept MFW events after
- *        Storms initializations are done.
- *
- * @param p_hwfn       - hw function
- * @param p_ptt        - PTT required for register access
- * @param p_load_code  - The MCP response param containing one
- *      of the following:
- *      FW_MSG_CODE_DRV_LOAD_ENGINE
- *      FW_MSG_CODE_DRV_LOAD_PORT
- *      FW_MSG_CODE_DRV_LOAD_FUNCTION
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successul.
- *      ECORE_BUSY - Operation failed
+ * @brief Sends a LOAD_REQ to the MFW, and in case the operation succeeds,
+ *        returns whether this PF is the first on the engine/port or function.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_params
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
  */
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code);
+					struct ecore_load_req_params *p_params);
 
 /**
  * @brief Read the MFW mailbox into Current buffer.
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index d3cbc96..145f5ca 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -878,9 +878,11 @@ struct public_func {
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
 #define DRV_ID_PDA_COMP_VER_SHIFT	0
 
+#define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
 #define DRV_ID_MCP_HSI_VER_SHIFT	16
-#define DRV_ID_MCP_HSI_VER_CURRENT	(1 << DRV_ID_MCP_HSI_VER_SHIFT)
+#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
+					 DRV_ID_MCP_HSI_VER_SHIFT)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
 #define DRV_ID_DRV_TYPE_SHIFT		24
@@ -1040,8 +1042,47 @@ struct resource_info {
 #define RESOURCE_ELEMENT_STRICT (1 << 0)
 };
 
+#define DRV_ROLE_NONE		0
+#define DRV_ROLE_PREBOOT	1
+#define DRV_ROLE_OS		2
+#define DRV_ROLE_KDUMP		3
+
+struct load_req_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_REQ_ROLE_MASK		0x000000FF
+#define LOAD_REQ_ROLE_SHIFT		0
+#define LOAD_REQ_LOCK_TO_MASK		0x0000FF00
+#define LOAD_REQ_LOCK_TO_SHIFT		0 /* @DPDK */
+#define LOAD_REQ_LOCK_TO_DEFAULT	0
+#define LOAD_REQ_LOCK_TO_NONE		255
+#define LOAD_REQ_FORCE_MASK		0x000F0000
+#define LOAD_REQ_FORCE_SHIFT		0 /* @DPDK */
+#define LOAD_REQ_FORCE_NONE		0
+#define LOAD_REQ_FORCE_PF		1
+#define LOAD_REQ_FORCE_ALL		2
+#define LOAD_REQ_FLAGS0_MASK		0x00F00000
+#define LOAD_REQ_FLAGS0_SHIFT		0 /* @DPDK */
+#define LOAD_REQ_FLAGS0_AVOID_RESET	(0x1 << 0)
+};
+
+struct load_rsp_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_RSP_ROLE_MASK		0x000000FF
+#define LOAD_RSP_ROLE_SHIFT		0
+#define LOAD_RSP_HSI_MASK		0x0000FF00
+#define LOAD_RSP_HSI_SHIFT		8
+#define LOAD_RSP_FLAGS0_MASK		0x000F0000
+#define LOAD_RSP_FLAGS0_SHIFT		16
+#define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
+};
+
 union drv_union_data {
-	u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD];    /* LOAD_REQ */
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
 /* This configuration should be set by the driver for the LINK_SET command. */
@@ -1068,6 +1109,9 @@ union drv_union_data {
 	struct bist_nvm_image_att nvm_image_att;
 	struct mdump_config_stc mdump_config;
 	u32 dword;
+
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	/* ... */
 };
 
@@ -1077,6 +1121,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_LOAD_REQ                   0x10000000
 #define DRV_MSG_CODE_LOAD_DONE                  0x11000000
 #define DRV_MSG_CODE_INIT_HW                    0x12000000
+#define DRV_MSG_CODE_CANCEL_LOAD_REQ            0x13000000
 #define DRV_MSG_CODE_UNLOAD_REQ		        0x20000000
 #define DRV_MSG_CODE_UNLOAD_DONE                0x21000000
 #define DRV_MSG_CODE_INIT_PHY			0x22000000
@@ -1448,8 +1493,11 @@ struct public_drv_mb {
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA        0x10200000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10210000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1      0x10210000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG       0x10220000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10230000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT     0x10310000
 #define FW_MSG_CODE_DRV_LOAD_DONE               0x11100000
 #define FW_MSG_CODE_DRV_UNLOAD_ENGINE           0x20110000
 #define FW_MSG_CODE_DRV_UNLOAD_PORT             0x20120000
@@ -1547,7 +1595,7 @@ struct public_drv_mb {
 
 
 	u32 fw_mb_param;
-	/* Resource Allocation params - MFW  version support*/
+/* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 5c79055..326e56f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -276,6 +276,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
 	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
 	hw_init_params.bin_fw_data = data;
+	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
+	hw_init_params.avoid_eng_reset = false;
 	rc = ecore_hw_init(edev, &hw_init_params);
 	if (rc) {
 		DP_ERR(edev, "ecore_hw_init failed\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
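
The misc0 word in the new load_req_stc/load_rsp_stc structures is packed and
unpacked with the LOAD_REQ_*/LOAD_RSP_* mask/shift pairs via the driver's
ECORE_MFW_SET_FIELD/ECORE_MFW_GET_FIELD helpers, which the patch uses but does
not show. A minimal stand-alone sketch of that field packing; the helper
macros below are illustrative equivalents, not the driver's own:

#include <stdint.h>
#include <stdio.h>

/* Field layout from the load_req_stc definition in this patch */
#define LOAD_REQ_ROLE_MASK	0x000000FF
#define LOAD_REQ_ROLE_SHIFT	0

/* Illustrative equivalents of ECORE_MFW_SET_FIELD/ECORE_MFW_GET_FIELD */
#define MFW_SET_FIELD(reg, name, val)					\
	((reg) = ((reg) & ~(uint32_t)name##_MASK) |			\
		 (((uint32_t)(val) << name##_SHIFT) & name##_MASK))
#define MFW_GET_FIELD(reg, name)					\
	(((reg) & name##_MASK) >> name##_SHIFT)

int main(void)
{
	uint32_t misc0 = 0;

	MFW_SET_FIELD(misc0, LOAD_REQ_ROLE, 2);	/* DRV_ROLE_OS */
	printf("misc0=0x%08x role=%u\n", misc0,
	       (unsigned int)MFW_GET_FIELD(misc0, LOAD_REQ_ROLE));
	return 0;	/* prints misc0=0x00000002 role=2 */
}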

* [PATCH v4 42/62] net/qede/base: add non-L2 dcbx tlv application support
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (42 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 41/62] net/qede/base: add support for previous driver unload Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 43/62] net/qede/base: update bulletin board during VF init Rasesh Mody
                             ` (20 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add non-L2 DCBX TLV application support.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dcbx.c     |   30 ++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_dcbx.h     |    1 +
 drivers/net/qede/base/ecore_dcbx_api.h |    4 +++-
 drivers/net/qede/base/ecore_proto_if.h |    3 +++
 4 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 0e11927..5ecc6b0 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -72,6 +72,23 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
+static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
+				 u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (!p_hwfn->p_dcbx_info->iwarp_port)
+		return false;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_TCP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
+}
+
 static void
 ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
@@ -896,17 +913,18 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
 	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					  sizeof(*p_hwfn->p_dcbx_info));
 	if (!p_hwfn->p_dcbx_info) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_dcbx_info'");
-		rc = ECORE_NOMEM;
+		return ECORE_NOMEM;
 	}
 
-	return rc;
+	p_hwfn->p_dcbx_info->iwarp_port =
+		p_hwfn->pf_params.rdma_pf_params.iwarp_port;
+
+	return ECORE_SUCCESS;
 }
 
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn,
@@ -937,9 +955,13 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
 	p_dest->update_eth_dcb_data_mode = update_flag;
+	update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update;
+	p_dest->update_iwarp_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+	p_dcb_data = &p_dest->iwarp_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP);
 }
 
 enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 0830014..eba2d91 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -29,6 +29,7 @@ struct ecore_dcbx_info {
 	struct ecore_dcbx_set set;
 	struct ecore_dcbx_get get;
 	u8 dcbx_cap;
+	u16 iwarp_port;
 };
 
 struct ecore_dcbx_mib_meta_data {
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 3a1712f..2dc7679 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -37,6 +37,7 @@ enum dcbx_protocol_type {
 	DCBX_PROTOCOL_ROCE,
 	DCBX_PROTOCOL_ROCE_V2,
 	DCBX_PROTOCOL_ETH,
+	DCBX_PROTOCOL_IWARP,
 	DCBX_MAX_PROTOCOL_TYPE
 };
 
@@ -191,7 +192,8 @@ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
 	{DCBX_PROTOCOL_ROCE, "ROCE", ECORE_PCI_ETH_ROCE},
 	{DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", ECORE_PCI_ETH_ROCE},
-	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
+	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH},
+	{DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP}
 };
 
 #endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index e252d52..ed24019 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -76,6 +76,9 @@ struct ecore_rdma_pf_params {
 
 	/* Will allocate rate limiters to be used with QPs */
 	u8		enable_dcqcn;
+
+	/* TCP port number used for the iwarp traffic */
+	u16		iwarp_port;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
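
The new ecore_dcbx_iwarp_tlv() matches an application-priority entry to iwarp
only when the entry is port-based and its protocol ID equals the TCP port
configured in rdma_pf_params.iwarp_port. A stand-alone model of that decision;
the flag bit below is a made-up stand-in for the real selection-field
encoding (DCBX_APP_SF_IEEE_TCP_PORT and friends):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up flag standing in for "selection field = TCP port" */
#define APP_ENTRY_SF_PORT	0x1

static bool iwarp_tlv_matches(uint32_t app_bitmap, uint16_t proto_id,
			      uint16_t iwarp_port)
{
	if (!iwarp_port)			/* iwarp port not configured */
		return false;
	if (!(app_bitmap & APP_ENTRY_SF_PORT))	/* not a port-based entry */
		return false;
	return proto_id == iwarp_port;
}

int main(void)
{
	printf("%d\n", iwarp_tlv_matches(APP_ENTRY_SF_PORT, 4500, 4500)); /* 1 */
	printf("%d\n", iwarp_tlv_matches(0, 4500, 4500));		  /* 0 */
	return 0;
}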

* [PATCH v4 43/62] net/qede/base: update bulletin board during VF init
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (43 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 42/62] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 44/62] net/qede/base: add coalescing support for VFs Rasesh Mody
                             ` (19 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Update the bulletin board with the link state during VF initialization.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   88 ++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index aab9925..703c1e8 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -954,11 +954,51 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->req_autoneg = params->speed.autoneg;
+	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+	p_bulletin->req_forced_speed = params->speed.forced_speed;
+	p_bulletin->req_autoneg_pause = params->pause.autoneg;
+	p_bulletin->req_forced_rx = params->pause.forced_rx;
+	p_bulletin->req_forced_tx = params->pause.forced_tx;
+	p_bulletin->req_loopback = params->loopback_mode;
+
+	p_bulletin->link_up = link->link_up;
+	p_bulletin->speed = link->speed;
+	p_bulletin->full_duplex = link->full_duplex;
+	p_bulletin->autoneg = link->an;
+	p_bulletin->autoneg_complete = link->an_complete;
+	p_bulletin->parallel_detection = link->parallel_detection;
+	p_bulletin->pfc_enabled = link->pfc_enabled;
+	p_bulletin->partner_adv_speed = link->partner_adv_speed;
+	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+	p_bulletin->partner_adv_pause = link->partner_adv_pause;
+	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+	p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 struct ecore_iov_vf_init_params *p_params)
 {
+	struct ecore_mcp_link_capabilities link_caps;
+	struct ecore_mcp_link_params link_params;
+	struct ecore_mcp_link_state link_state;
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
 	u16 qid, num_irqs;
@@ -1045,6 +1085,17 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			   p_queue->fw_cid);
 	}
 
+	/* Update the link configuration in bulletin.
+	 */
+	OSAL_MEMCPY(&link_params, ecore_mcp_get_link_params(p_hwfn),
+		    sizeof(link_params));
+	OSAL_MEMCPY(&link_state, ecore_mcp_get_link_state(p_hwfn),
+		    sizeof(link_state));
+	OSAL_MEMCPY(&link_caps, ecore_mcp_get_link_capabilities(p_hwfn),
+		    sizeof(link_caps));
+	ecore_iov_set_link(p_hwfn, p_params->rel_vf_id,
+			   &link_params, &link_state, &link_caps);
+
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
 
 	if (rc == ECORE_SUCCESS) {
@@ -1059,43 +1110,6 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps)
-{
-	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
-	struct ecore_bulletin_content *p_bulletin;
-
-	if (!p_vf)
-		return;
-
-	p_bulletin = p_vf->bulletin.p_virt;
-	p_bulletin->req_autoneg = params->speed.autoneg;
-	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
-	p_bulletin->req_forced_speed = params->speed.forced_speed;
-	p_bulletin->req_autoneg_pause = params->pause.autoneg;
-	p_bulletin->req_forced_rx = params->pause.forced_rx;
-	p_bulletin->req_forced_tx = params->pause.forced_tx;
-	p_bulletin->req_loopback = params->loopback_mode;
-
-	p_bulletin->link_up = link->link_up;
-	p_bulletin->speed = link->speed;
-	p_bulletin->full_duplex = link->full_duplex;
-	p_bulletin->autoneg = link->an;
-	p_bulletin->autoneg_complete = link->an_complete;
-	p_bulletin->parallel_detection = link->parallel_detection;
-	p_bulletin->pfc_enabled = link->pfc_enabled;
-	p_bulletin->partner_adv_speed = link->partner_adv_speed;
-	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
-	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
-	p_bulletin->partner_adv_pause = link->partner_adv_pause;
-	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
-
-	p_bulletin->capability_speed = p_caps->speed_capabilities;
-}
-
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 rel_vf_id)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
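
The point of moving ecore_iov_set_link() is ordering: the link snapshot is
written into the VF's bulletin before ecore_iov_enable_vf_access() runs, so a
VF can never observe an uninitialized link state. A trimmed-down sketch of
that publish-then-enable pattern; the structures are stand-ins, not the
driver's:

#include <stdio.h>
#include <string.h>

/* Trimmed stand-ins for the driver's link and bulletin structures */
struct link_state { int link_up; unsigned int speed; };
struct bulletin   { int link_up; unsigned int speed; int vf_access; };

static void set_link(struct bulletin *b, const struct link_state *l)
{
	b->link_up = l->link_up;
	b->speed = l->speed;
}

int main(void)
{
	struct link_state pf_link = { 1, 25000 };
	struct bulletin b;

	memset(&b, 0, sizeof(b));
	set_link(&b, &pf_link);	/* 1) publish the link state first...   */
	b.vf_access = 1;	/* 2) ...then let the VF read the board */
	printf("bulletin: up=%d speed=%u\n", b.link_up, b.speed);
	return 0;
}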

* [PATCH v4 44/62] net/qede/base: add coalescing support for VFs
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (44 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 43/62] net/qede/base: update bulletin board during VF init Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 45/62] net/qede/base: add macro got resource value message Rasesh Mody
                             ` (18 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add coalescing support for VFs.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   83 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_dev_api.h |   43 ++++++-----------
 drivers/net/qede/base/ecore_sriov.c   |   66 +++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_vf.c      |   42 +++++++++++++++++
 drivers/net/qede/base/ecore_vf.h      |   24 ++++++++++
 drivers/net/qede/base/ecore_vfpf_if.h |   10 ++++
 6 files changed, 209 insertions(+), 59 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 29dd292..7a876bc 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -30,6 +30,7 @@
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
 #include "ecore_dcbx.h"
+#include "ecore_l2.h"
 
 /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
  * registers involved are not split and thus configuration is a race where
@@ -4198,11 +4199,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 {
 	struct coalescing_timeset *p_coal_timeset;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
-		return ECORE_INVAL;
-	}
-
 	if (p_hwfn->p_dev->int_coalescing_mode != ECORE_COAL_MODE_ENABLE) {
 		DP_NOTICE(p_hwfn, true,
 			  "Coalescing configuration not enabled\n");
@@ -4218,13 +4214,53 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_ptt *p_ptt;
+
+	/* TODO - Configuring a single queue's coalescing, but
+	 * claiming all queues abide by the same configuration
+	 * for both PF and VF.
+	 */
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
+						tx_coal, p_cid);
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	}
+
+	if (tx_coal) {
+		rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+	}
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4241,33 +4277,30 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, false);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, false);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_USDM_RAM + USTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_USDM_RAM +
+		  USTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct ustorm_eth_queue_zone), timeset);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4285,23 +4318,17 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, true);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, true);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_XSDM_RAM + XSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_XSDM_RAM +
+		  XSTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct xstorm_eth_queue_zone), timeset);
-	if (rc != ECORE_SUCCESS)
-		goto out;
-
-	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 7e90778..ce764d2 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -570,41 +570,24 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
 					 struct ecore_ptt	*p_ptt,
 					 u16			id,
 					 bool			is_vf);
-
-/**
- * @brief ecore_set_rxq_coalesce - Configure coalesce parameters for an Rx queue
- *    The fact that we can configure coalescing to up to 511, but on varying
- *    accuracy [the bigger the value the less accurate] up to a mistake of 3usec
- *    for the highest values.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
-
 /**
- * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
- *    While the API allows setting coalescing per-qid, all tx queues sharing a
- *    SB should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
+ * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
+ *    Tx queue. Coalescing can be configured up to 511 usec, though with
+ *    decreasing accuracy [the bigger the value the less accurate], up to an
+ *    error of 3 usec for the highest values.
+ *    While the API allows setting coalescing per-qid, all queues sharing a SB
+ *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
  *    otherwise configuration would break.
  *
  * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
+ * @param rx_coal - Rx coalesce value in microseconds.
+ * @param tx_coal - Tx coalesce value in microseconds.
+ * @param p_handle
  *
  * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
+ **/
+enum _ecore_status_t
+ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
+			 u16 tx_coal, void *p_handle);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 703c1e8..4ffa8d0 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -52,6 +52,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
 	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
+	"CHANNEL_TLV_COALESCE_UPDATE",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -1939,6 +1940,8 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	vf->state = VF_ENABLED;
 	start = &mbx->req_virt->start_vport;
 
+	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
 	/* Initialize Status block in CAU */
 	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
 		if (!start->sb_addr[sb_id]) {
@@ -1953,7 +1956,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 				      vf->igu_sbs[sb_id],
 				      vf->abs_vf_id, 1);
 	}
-	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
 
 	vf->mtu = start->mtu;
 	vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
@@ -3226,6 +3228,65 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
 			       length, status);
 }
 
+static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct vfpf_update_coalesce *req;
+	u8 status = PFVF_STATUS_FAILURE;
+	struct ecore_queue_cid *p_cid;
+	u16 rx_coal, tx_coal;
+	u16  qid;
+
+	req = &mbx->req_virt->update_coalesce;
+
+	rx_coal = req->rx_coal;
+	tx_coal = req->tx_coal;
+	qid = req->qid;
+	p_cid = vf->vf_queues[qid].p_rx_cid;
+
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
+			goto out;
+		}
+	}
+	if (tx_coal) {
+		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
+			goto out;
+		}
+	}
+
+	status = PFVF_STATUS_SUCCESS;
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -3579,6 +3640,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
 			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_COALESCE_UPDATE:
+			ecore_iov_vf_pf_set_coalesce(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index a072a81..bf516cc 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1424,6 +1424,48 @@ exit:
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal,
+			 struct ecore_queue_cid     *p_cid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_update_coalesce *req;
+	struct pfvf_def_resp_tlv *resp;
+	enum _ecore_status_t rc;
+
+	/* clear mailbox and prep header tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(*req));
+
+	req->rx_coal = rx_coal;
+	req->tx_coal = tx_coal;
+	req->qid = p_cid->rel.queue_id;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Setting coalesce rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   rx_coal, tx_coal, req->qid);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	resp = &p_iov->pf2vf_reply->default_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+	if (rc != ECORE_SUCCESS)
+		goto exit;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		goto exit;
+
+	p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id)
 {
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 0d67054..228bbf0 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -50,6 +50,20 @@ struct ecore_vf_iov {
 enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
 /**
+ * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
+ *	Coalesce value '0' will omit the configuration.
+ *
+ *	@param p_hwfn
+ *	@param rx_coal - coalesce value in microseconds for the Rx queue
+ *	@param tx_coal - coalesce value in microseconds for the Tx queue
+ *	@param p_cid
+ *
+ **/
+enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      struct ecore_queue_cid *p_cid);
+
+/**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
@@ -263,5 +277,15 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
+
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 82ed4f5..e0b63bf 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -457,6 +457,14 @@ struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
 
+struct vfpf_update_coalesce {
+	struct vfpf_first_tlv first_tlv;
+	u16 rx_coal;
+	u16 tx_coal;
+	u16 qid;
+	u8 padding[2];
+};
+
 union vfpf_tlvs {
 	struct vfpf_first_tlv			first_tlv;
 	struct vfpf_acquire_tlv			acquire;
@@ -469,6 +477,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
 	struct vfpf_update_tunn_param_tlv	tunn_param_update;
+	struct vfpf_update_coalesce		update_coalesce;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -592,6 +601,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 	CHANNEL_TLV_UPDATE_TUNN_PARAM,
+	CHANNEL_TLV_COALESCE_UPDATE,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
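
The coalesce value is quantized before it reaches hardware: an 8-bit timeset
plus a timer resolution, computed as timeset = coalesce >> timer_res, which is
why the API comment warns of an error of up to 3 usec for the largest values.
A worked sketch of that quantization; the resolution selection below is
inferred from the 0-0x7f / 0x80-0xff / 0x100-0x1ff ranges in the comment, not
taken from the driver code:

#include <stdio.h>
#include <stdint.h>

static int pick_timer_res(uint16_t coalesce, uint8_t *timer_res)
{
	if (coalesce <= 0x7F)
		*timer_res = 0;		/* full 1 usec granularity */
	else if (coalesce <= 0xFF)
		*timer_res = 1;		/* 2 usec granularity */
	else if (coalesce <= 0x1FF)
		*timer_res = 2;		/* 4 usec granularity */
	else
		return -1;		/* out of range */
	return 0;
}

int main(void)
{
	uint16_t coalesce = 511;	/* requested value, in usec */
	uint8_t timer_res, timeset;

	if (pick_timer_res(coalesce, &timer_res))
		return 1;
	timeset = (uint8_t)(coalesce >> timer_res);
	printf("res=%u timeset=%u effective=%u usec\n", timer_res, timeset,
	       (unsigned int)timeset << timer_res);
	/* prints: res=2 timeset=127 effective=508 usec (3 usec short) */
	return 0;
}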

* [PATCH v4 45/62] net/qede/base: add macro got resource value message
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (45 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 44/62] net/qede/base: add coalescing support for VFs Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 46/62] net/qede/base: add mailbox for resource allocation Rasesh Mody
                             ` (17 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a macro for the resource value message.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 145f5ca..24acfcb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1137,16 +1137,15 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
 #define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	0x28000000
 #define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	0x29000000
+#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	0x31000000
 #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
 #define DRV_MSG_CODE_OV_UPDATE_MTU		0x33000000
-
-#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
-
 /* DRV_MB Param: driver version supp, FW_MB param: MFW version supp,
  * data: struct resource_info
  */
 #define DRV_MSG_GET_RESOURCE_ALLOC_MSG		0x34000000
+#define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
 
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 46/62] net/qede/base: add mailbox for resource allocation
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (46 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 45/62] net/qede/base: add macro got resource value message Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 47/62] net/qede/base: add macro for unsupported command Rasesh Mody
                             ` (16 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add the Management FW mailbox for getting non-L2 resource allocation
information.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    1 +
 drivers/net/qede/base/ecore_dev.c  |   60 ++++++++++++++++++++++++------------
 drivers/net/qede/base/mcp_public.h |    1 +
 3 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 60a8a6b..25b6c4e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -291,6 +291,7 @@ enum ecore_resources {
 	ECORE_LL2_QUEUE,
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
+	ECORE_BDQ,
 	ECORE_MAX_RESC,			/* must be last */
 };
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a876bc..d5a8a90 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2463,6 +2463,9 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_RDMA_STATS_QUEUE:
 		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
 		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
 	default:
 		break;
 	}
@@ -2470,67 +2473,84 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	return mfw_res_id;
 }
 
-static u32 ecore_hw_get_dflt_resc_num(struct ecore_hwfn *p_hwfn,
-				      enum ecore_resources res_id)
+static
+enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
+					    enum ecore_resources res_id,
+					    u32 *p_resc_num,
+					    u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	struct ecore_sb_cnt_info sb_cnt_info;
-	u32 dflt_resc_num = 0;
 
 	switch (res_id) {
 	case ECORE_SB:
 		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
 		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-		dflt_resc_num = sb_cnt_info.sb_cnt;
+		*p_resc_num = sb_cnt_info.sb_cnt;
 		break;
 	case ECORE_L2_QUEUE:
-		dflt_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
 				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		dflt_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
+		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
 				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
 		break;
 	case ECORE_PQ:
-		dflt_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
 				 MAX_QM_TX_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_RL:
-		dflt_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
 		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		dflt_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
+		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		dflt_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
+		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
 				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		dflt_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
 		/* @DPDK */
-		dflt_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
 		/* @DPDK */
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
+	case ECORE_BDQ:
+		/* @DPDK */
+		*p_resc_num = 0;
+		break;
+	default:
+		break;
+	}
+
+
+	switch (res_id) {
+	case ECORE_BDQ:
+		if (!*p_resc_num)
+			*p_resc_start = 0;
+		break;
 	default:
+		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
 		break;
 	}
 
-	return dflt_resc_num;
+	return ECORE_SUCCESS;
 }
 
 static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
@@ -2562,6 +2582,8 @@ static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
 		return "RDMA_STATS_QUEUE";
+	case ECORE_BDQ:
+		return "BDQ";
 	default:
 		return "UNKNOWN_RESOURCE";
 	}
@@ -2579,14 +2601,14 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	dflt_resc_num = ecore_hw_get_dflt_resc_num(p_hwfn, res_id);
-	if (!dflt_resc_num) {
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
+				    &dflt_resc_num, &dflt_resc_start);
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
 			res_id, ecore_hw_get_resc_name(res_id));
-		return ECORE_INVAL;
+		return rc;
 	}
-	dflt_resc_start = dflt_resc_num * p_hwfn->enabled_func_idx;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 24acfcb..17971a4 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1025,6 +1025,7 @@ enum resource_id_enum {
 	RESOURCE_NUM_RSS_ENGINES_E	=	14,
 	RESOURCE_LL2_QUEUE_E		=	15,
 	RESOURCE_RDMA_STATS_QUEUE_E	=	16,
+	RESOURCE_BDQ_E			=	17,
 	RESOURCE_MAX_NUM,
 	RESOURCE_NUM_INVALID		=	0xFFFFFFFF
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
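
When the MFW provides no value, the defaults computed above slice each global
resource pool evenly across the functions on the engine, and each function's
slice starts at resc_num * enabled_func_idx. A stand-alone illustration of
that arithmetic; the pool size below is an example, not the hardware value:

#include <stdio.h>

#define MAX_NUM_L2_QUEUES	128	/* example pool size only */

int main(void)
{
	unsigned int num_funcs = 4, enabled_func_idx = 2;
	unsigned int resc_num = MAX_NUM_L2_QUEUES / num_funcs;
	unsigned int resc_start = resc_num * enabled_func_idx;

	/* function 2 of 4 gets 32 queues starting at queue 64 */
	printf("num=%u start=%u\n", resc_num, resc_start);
	return 0;
}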

* [PATCH v4 47/62] net/qede/base: add macro for unsupported command
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (47 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 46/62] net/qede/base: add mailbox for resource allocation Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 48/62] net/qede/base: set max values for soft resources Rasesh Mody
                             ` (15 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a macro for the "unsupported" management FW command response.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c  |    6 ++----
 drivers/net/qede/base/mcp_public.h |    1 +
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6c5b5db..15f3ea0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1424,8 +1424,7 @@ ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the mdump command is not supported */
-	if (!mcp_resp)
+	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (mcp_resp != FW_MSG_CODE_OK) {
@@ -2832,8 +2831,7 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the resource command is not supported */
-	if (!*p_mcp_resp)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 17971a4..8d65390 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1489,6 +1489,7 @@ struct public_drv_mb {
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
+#define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 48/62] net/qede/base: set max values for soft resources
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (48 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 47/62] net/qede/base: add macro for unsupported command Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 49/62] net/qede/base: add return code check Rasesh Mody
                             ` (14 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add support for the new interface with the Management FW for setting
max values of "soft" resources.
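
The heart of this patch is the new lock -> set-max-values -> query ->
unlock sequence in ecore_hw_get_resc(). As a rough, self-contained sketch
of the retry loop that ecore_mcp_resc_lock() gains (all demo_* names are
illustrative stand-ins, not the driver's API):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct demo_lock_params {
	unsigned char retry_num;	/* number of times to retry locking */
	unsigned short retry_interval;	/* interval in usec between retries */
	bool sleep_b4_retry;		/* sleep vs. delay between retries */
	bool b_granted;			/* set once the MFW grants the lock */
};

/* Pretend MFW that grants the lock on the third attempt. */
static int demo_try_lock(struct demo_lock_params *p_params)
{
	static int calls;

	p_params->b_granted = (++calls >= 3);
	return 0;
}

static int demo_resc_lock(struct demo_lock_params *p_params)
{
	unsigned int retry_cnt = 0;

	do {
		/* No need for an interval before the first iteration */
		if (retry_cnt) {
			if (p_params->sleep_b4_retry) {
				/* round usec up to whole msec, as the patch
				 * does with DIV_ROUND_UP()
				 */
				unsigned int msec =
					(p_params->retry_interval + 999) / 1000;

				usleep(msec * 1000);
			} else {
				/* the driver busy-waits here (OSAL_UDELAY);
				 * usleep merely stands in
				 */
				usleep(p_params->retry_interval);
			}
		}

		if (demo_try_lock(p_params))
			return -1;

		if (p_params->b_granted)
			break;
	} while (retry_cnt++ < p_params->retry_num);

	return 0;
}

int main(void)
{
	struct demo_lock_params params = {
		.retry_num = 10,	 /* ECORE_RESC_ALLOC_LOCK_RETRY_CNT */
		.retry_interval = 10000, /* 10 msec, as in the patch */
		.sleep_b4_retry = true,
	};

	demo_resc_lock(&params);
	printf("lock granted: %s\n", params.b_granted ? "yes" : "no");
	return params.b_granted ? 0 : 1;
}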

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    2 +
 drivers/net/qede/base/ecore_dev.c |  282 ++++++++++++++++++++++--------------
 drivers/net/qede/base/ecore_mcp.c |  287 +++++++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_mcp.h |  104 ++++++++++----
 4 files changed, 498 insertions(+), 177 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25b6c4e..7379b3f 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -856,4 +856,6 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
 
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d5a8a90..3191ee4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2420,64 +2420,109 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
-static enum resource_id_enum
-ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 {
-	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
-
 	switch (res_id) {
 	case ECORE_SB:
-		mfw_res_id = RESOURCE_NUM_SB_E;
-		break;
+		return "SB";
 	case ECORE_L2_QUEUE:
-		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
-		break;
+		return "L2_QUEUE";
 	case ECORE_VPORT:
-		mfw_res_id = RESOURCE_NUM_VPORT_E;
-		break;
+		return "VPORT";
 	case ECORE_RSS_ENG:
-		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
-		break;
+		return "RSS_ENG";
 	case ECORE_PQ:
-		mfw_res_id = RESOURCE_NUM_PQ_E;
-		break;
+		return "PQ";
 	case ECORE_RL:
-		mfw_res_id = RESOURCE_NUM_RL_E;
-		break;
+		return "RL";
 	case ECORE_MAC:
+		return "MAC";
 	case ECORE_VLAN:
-		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		mfw_res_id = RESOURCE_VFC_FILTER_E;
-		break;
+		return "VLAN";
+	case ECORE_RDMA_CNQ_RAM:
+		return "RDMA_CNQ_RAM";
 	case ECORE_ILT:
-		mfw_res_id = RESOURCE_ILT_E;
-		break;
+		return "ILT";
 	case ECORE_LL2_QUEUE:
-		mfw_res_id = RESOURCE_LL2_QUEUE_E;
-		break;
-	case ECORE_RDMA_CNQ_RAM:
+		return "LL2_QUEUE";
 	case ECORE_CMDQS_CQS:
-		/* CNQ/CMDQS are the same resource */
-		mfw_res_id = RESOURCE_CQS_E;
-		break;
+		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
-		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
-		break;
+		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
-		mfw_res_id = RESOURCE_BDQ_E;
-		break;
+		return "BDQ";
 	default:
-		break;
+		return "UNKNOWN_RESOURCE";
 	}
+}
 
-	return mfw_res_id;
+static enum _ecore_status_t
+__ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
+			      enum ecore_resources res_id, u32 resc_max_val,
+			      u32 *p_mcp_resp)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_hwfn->p_main_ptt, res_id,
+					resc_max_val, p_mcp_resp);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true,
+			  "MFW response failure for a max value setting of resource %d [%s]\n",
+			  res_id, ecore_hw_get_resc_name(res_id));
+		return rc;
+	}
+
+	if (*p_mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK)
+		DP_INFO(p_hwfn,
+			"Failed to set the max value of resource %d [%s]. mcp_resp = 0x%08x.\n",
+			res_id, ecore_hw_get_resc_name(res_id), *p_mcp_resp);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn)
+{
+	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	u32 resc_max_val, mcp_resp;
+	u8 res_id;
+	enum _ecore_status_t rc;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		/* @DPDK */
+		switch (res_id) {
+		case ECORE_LL2_QUEUE:
+		case ECORE_RDMA_CNQ_RAM:
+		case ECORE_RDMA_STATS_QUEUE:
+		case ECORE_BDQ:
+			resc_max_val = 0;
+			break;
+		default:
+			continue;
+		}
+
+		rc = __ecore_hw_set_soft_resc_size(p_hwfn, res_id,
+						   resc_max_val, &mcp_resp);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* There's no point in continuing to the next resource if the
+		 * command is not supported by the MFW.
+		 * We do continue if the command is supported but the resource
+		 * is unknown to the MFW. Such a resource will be later
+		 * configured with the default allocation values.
+		 */
+		if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+			return ECORE_NOTIMPL;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static
 enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    enum ecore_resources res_id,
-					    u32 *p_resc_num,
-					    u32 *p_resc_start)
+					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
@@ -2553,56 +2598,19 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
-{
-	switch (res_id) {
-	case ECORE_SB:
-		return "SB";
-	case ECORE_L2_QUEUE:
-		return "L2_QUEUE";
-	case ECORE_VPORT:
-		return "VPORT";
-	case ECORE_RSS_ENG:
-		return "RSS_ENG";
-	case ECORE_PQ:
-		return "PQ";
-	case ECORE_RL:
-		return "RL";
-	case ECORE_MAC:
-		return "MAC";
-	case ECORE_VLAN:
-		return "VLAN";
-	case ECORE_RDMA_CNQ_RAM:
-		return "RDMA_CNQ_RAM";
-	case ECORE_ILT:
-		return "ILT";
-	case ECORE_LL2_QUEUE:
-		return "LL2_QUEUE";
-	case ECORE_CMDQS_CQS:
-		return "CMDQS_CQS";
-	case ECORE_RDMA_STATS_QUEUE:
-		return "RDMA_STATS_QUEUE";
-	case ECORE_BDQ:
-		return "BDQ";
-	default:
-		return "UNKNOWN_RESOURCE";
-	}
-}
-
-static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
-						   enum ecore_resources res_id,
-						   bool drv_resc_alloc)
+static enum _ecore_status_t
+__ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
+			 bool drv_resc_alloc)
 {
-	u32 dflt_resc_num = 0, dflt_resc_start = 0, mcp_resp, mcp_param;
-	u32 *p_resc_num, *p_resc_start;
-	struct resource_info resc_info;
+	u32 dflt_resc_num = 0, dflt_resc_start = 0;
+	u32 mcp_resp, *p_resc_num, *p_resc_start;
 	enum _ecore_status_t rc;
 
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
-				    &dflt_resc_num, &dflt_resc_start);
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id, &dflt_resc_num,
+				    &dflt_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
@@ -2618,17 +2626,8 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-	OSAL_MEM_ZERO(&resc_info, sizeof(resc_info));
-	resc_info.res_id = ecore_hw_get_mfw_res_id(res_id);
-	if (resc_info.res_id == RESOURCE_NUM_INVALID) {
-		DP_ERR(p_hwfn,
-		       "Failed to match resource %d with MFW resources\n",
-		       res_id);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, &resc_info,
-				     &mcp_resp, &mcp_param);
+	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id,
+				     &mcp_resp, p_resc_num, p_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true,
 			  "MFW response failure for an allocation request for"
@@ -2642,13 +2641,11 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	 * - There is an internal error in the MFW while processing the request
 	 * - The resource ID is unknown to the MFW
 	 */
-	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK &&
-	    mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED) {
-		/* @DPDK */
+	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: No allocation info was received"
-			" [mcp_resp 0x%x]. Applying default values"
-			" [num %d, start %d].\n",
+			"Failed to receive allocation info for resource %d [%s]."
+			" mcp_resp = 0x%x. Applying default values"
+			" [%d,%d].\n",
 			res_id, ecore_hw_get_resc_name(res_id), mcp_resp,
 			dflt_resc_num, dflt_resc_start);
 
@@ -2660,16 +2657,13 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	/* TBD - remove this when revising the handling of the SB resource */
 	if (res_id == ECORE_SB) {
 		/* Excluding the slowpath SB */
-		resc_info.size -= 1;
-		resc_info.offset -= p_hwfn->enabled_func_idx;
+		*p_resc_num -= 1;
+		*p_resc_start -= p_hwfn->enabled_func_idx;
 	}
 
-	*p_resc_num = resc_info.size;
-	*p_resc_start = resc_info.offset;
-
 	if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: MFW allocation [num %d, start %d] differs from default values [num %d, start %d]%s\n",
+			"MFW allocation for resource %d [%s] differs from default values [%d,%d vs. %d,%d]%s\n",
 			res_id, ecore_hw_get_resc_name(res_id), *p_resc_num,
 			*p_resc_start, dflt_resc_num, dflt_resc_start,
 			drv_resc_alloc ? " - Applying default values" : "");
@@ -2682,12 +2676,32 @@ out:
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
+						   bool drv_resc_alloc)
+{
+	enum _ecore_status_t rc;
+	u8 res_id;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		rc = __ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_RESC_ALLOC_LOCK_RETRY_CNT		10
+#define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
+
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 					      bool drv_resc_alloc)
 {
+	struct ecore_resc_unlock_params resc_unlock_params;
+	struct ecore_resc_lock_params resc_lock_params;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
-	enum _ecore_status_t rc;
 	u8 res_id;
+	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
 	u32 *resc_start = p_hwfn->hw_info.resc_start;
 	u32 *resc_num = p_hwfn->hw_info.resc_num;
@@ -2700,10 +2714,62 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	u32 roce_min_ilt_lines = PXP_NUM_ILT_RECORDS_BB / MAX_NUM_PFS_BB;
 #endif
 
-	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
-		rc = ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+	/* Setting the max values of the soft resources and the following
+	 * resources allocation queries should be atomic. Since several PFs can
+	 * run in parallel - a resource lock is needed.
+	 * If either the resource lock or resource set value commands are not
+	 * supported - skip the the max values setting, release the lock if
+	 * needed, and proceed to the queries. Other failures, including a
+	 * failure to acquire the lock, will cause this function to fail.
+	 * Old drivers that don't acquire the lock can run in parallel, and
+	 * their allocation values won't be affected by the updated max values.
+	 */
+	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
+	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
+	resc_lock_params.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
+	resc_lock_params.sleep_b4_retry = true;
+	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
+	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+
+	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt, &resc_lock_params);
+	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+		return rc;
+	} else if (rc == ECORE_NOTIMPL) {
+		DP_INFO(p_hwfn,
+			"Skip the max values setting of the soft resources since the resource lock is not supported by the MFW\n");
+	} else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to acquire the resource lock for the resource allocation commands\n");
+		rc = ECORE_BUSY;
+		goto unlock_and_exit;
+	} else {
+		rc = ecore_hw_set_soft_resc_size(p_hwfn);
+		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to set the max values of the soft resources\n");
+			goto unlock_and_exit;
+		} else if (rc == ECORE_NOTIMPL) {
+			DP_INFO(p_hwfn,
+				"Skip the max values setting of the soft resources since it is not supported by the MFW\n");
+			rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+						   &resc_unlock_params);
+			if (rc != ECORE_SUCCESS)
+				DP_INFO(p_hwfn,
+					"Failed to release the resource lock for the resource allocation commands\n");
+		}
+	}
+
+	rc = ecore_hw_set_resc_info(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS)
+		goto unlock_and_exit;
+
+	if (resc_lock_params.b_granted && !resc_unlock_params.b_released) {
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+					   &resc_unlock_params);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			DP_INFO(p_hwfn,
+				"Failed to release the resource lock for the resource allocation commands\n");
 	}
 
 #ifndef ASIC_ONLY
@@ -2756,6 +2822,10 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			   RESC_START(p_hwfn, res_id));
 
 	return ECORE_SUCCESS;
+
+unlock_and_exit:
+	ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt, &resc_unlock_params);
+	return rc;
 }
 
 static enum _ecore_status_t
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 15f3ea0..3efe0a0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2768,7 +2768,60 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 			     0, &rsp, (u32 *)num_events);
 }
 
-#define ECORE_RESC_ALLOC_VERSION_MAJOR	1
+static enum resource_id_enum
+ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
+{
+	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
+
+	switch (res_id) {
+	case ECORE_SB:
+		mfw_res_id = RESOURCE_NUM_SB_E;
+		break;
+	case ECORE_L2_QUEUE:
+		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
+		break;
+	case ECORE_VPORT:
+		mfw_res_id = RESOURCE_NUM_VPORT_E;
+		break;
+	case ECORE_RSS_ENG:
+		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
+		break;
+	case ECORE_PQ:
+		mfw_res_id = RESOURCE_NUM_PQ_E;
+		break;
+	case ECORE_RL:
+		mfw_res_id = RESOURCE_NUM_RL_E;
+		break;
+	case ECORE_MAC:
+	case ECORE_VLAN:
+		/* Each VFC resource can accommodate both a MAC and a VLAN */
+		mfw_res_id = RESOURCE_VFC_FILTER_E;
+		break;
+	case ECORE_ILT:
+		mfw_res_id = RESOURCE_ILT_E;
+		break;
+	case ECORE_LL2_QUEUE:
+		mfw_res_id = RESOURCE_LL2_QUEUE_E;
+		break;
+	case ECORE_RDMA_CNQ_RAM:
+	case ECORE_CMDQS_CQS:
+		/* CNQ/CMDQS are the same resource */
+		mfw_res_id = RESOURCE_CQS_E;
+		break;
+	case ECORE_RDMA_STATS_QUEUE:
+		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
+		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
+	default:
+		break;
+	}
+
+	return mfw_res_id;
+}
+
+#define ECORE_RESC_ALLOC_VERSION_MAJOR	2
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
 #define ECORE_RESC_ALLOC_VERSION				\
 	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
@@ -2776,36 +2829,146 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
 	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
 
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param)
+struct ecore_resc_alloc_in_params {
+	u32 cmd;
+	enum ecore_resources res_id;
+	u32 resc_max_val;
+};
+
+struct ecore_resc_alloc_out_params {
+	u32 mcp_resp;
+	u32 mcp_param;
+	u32 resc_num;
+	u32 resc_start;
+	u32 vf_resc_num;
+	u32 vf_resc_start;
+	u32 flags;
+};
+
+static enum _ecore_status_t
+ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      struct ecore_resc_alloc_in_params *p_in_params,
+			      struct ecore_resc_alloc_out_params *p_out_params)
 {
+	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
+	p_mfw_resc_info = &union_data.resource;
+	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+
+	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+		DP_ERR(p_hwfn,
+		       "Failed to match resource %d [%s] with the MFW resources\n",
+		       p_in_params->res_id,
+		       ecore_hw_get_resc_name(p_in_params->res_id));
+		return ECORE_INVAL;
+	}
+
+	switch (p_in_params->cmd) {
+	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
+		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		/* Fallthrough */
+	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected resource alloc command [0x%08x]\n",
+		       p_in_params->cmd);
+		return ECORE_INVAL;
+	}
+
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	OSAL_MEMCPY(&union_data.resource, p_resc_info, sizeof(*p_resc_info));
 	mb_params.p_data_src = &union_data;
 	mb_params.p_data_dst = &union_data;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
+		   p_in_params->cmd, p_in_params->res_id,
+		   ecore_hw_get_resc_name(p_in_params->res_id),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_in_params->resc_max_val);
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	*p_mcp_param = mb_params.mcp_param;
-
-	OSAL_MEMCPY(p_resc_info, &union_data.resource, sizeof(*p_resc_info));
+	p_out_params->mcp_resp = mb_params.mcp_resp;
+	p_out_params->mcp_param = mb_params.mcp_param;
+	p_out_params->resc_num = p_mfw_resc_info->size;
+	p_out_params->resc_start = p_mfw_resc_info->offset;
+	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
+	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
+	p_out_params->flags = p_mfw_resc_info->flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x,"
-		   " offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
-		   *p_mcp_param, p_resc_info->res_id, p_resc_info->size,
-		   p_resc_info->offset, p_resc_info->vf_size,
-		   p_resc_info->vf_offset, p_resc_info->flags);
+		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_out_params->resc_num, p_out_params->resc_start,
+		   p_out_params->vf_resc_num, p_out_params->vf_resc_start,
+		   p_out_params->flags);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_SET_RESOURCE_VALUE_MSG;
+	in_params.res_id = res_id;
+	in_params.resc_max_val = resc_max_val;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	in_params.res_id = res_id;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	if (*p_mcp_resp == FW_MSG_CODE_RESOURCE_ALLOC_OK) {
+		*p_resc_num = out_params.resc_num;
+		*p_resc_start = out_params.resc_start;
+	}
 
 	return ECORE_SUCCESS;
 }
@@ -2831,8 +2994,11 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The resource command is unsupported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
 		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
@@ -2846,36 +3012,35 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner)
+enum _ecore_status_t
+__ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_lock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	switch (timeout) {
+	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
 		break;
 	}
 
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
-		   param, timeout, opcode, resource_num);
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
+		   param, p_params->timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2884,19 +3049,20 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Analyze the response */
-	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	p_params->owner = ECORE_MFW_GET_FIELD(mcp_param,
+					     RESOURCE_CMD_RSP_OWNER);
 	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
-		   mcp_param, opcode, *p_owner);
+		   mcp_param, opcode, p_params->owner);
 
 	switch (opcode) {
 	case RESOURCE_OPCODE_GNT:
-		*p_granted = true;
+		p_params->b_granted = true;
 		break;
 	case RESOURCE_OPCODE_BUSY:
-		*p_granted = false;
+		p_params->b_granted = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
@@ -2908,23 +3074,54 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released)
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params)
+{
+	u32 retry_cnt = 0;
+	enum _ecore_status_t rc;
+
+	do {
+		/* No need for an interval before the first iteration */
+		if (retry_cnt) {
+			if (p_params->sleep_b4_retry) {
+				u16 retry_interval_in_ms =
+					DIV_ROUND_UP(p_params->retry_interval,
+						     1000);
+
+				OSAL_MSLEEP(retry_interval_in_ms);
+			} else {
+				OSAL_UDELAY(p_params->retry_interval);
+			}
+		}
+
+		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		if (p_params->b_granted)
+			break;
+	} while (retry_cnt++ < p_params->retry_num);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
-		       : RESOURCE_OPCODE_RELEASE;
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	opcode = p_params->b_force ? RESOURCE_OPCODE_FORCE_RELEASE
+				   : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
-		   param, opcode, resource_num);
+		   "Resource unlock request: param 0x%08x [opcode %d, resource %d]\n",
+		   param, opcode, p_params->resource);
 
 	/* Attempt to release the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2942,14 +3139,14 @@ enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
 	switch (opcode) {
 	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
 		DP_INFO(p_hwfn,
-			"Resource unlock request for an already released resource [resc_num %d]\n",
-			resource_num);
+			"Resource unlock request for an already released resource [%d]\n",
+			p_params->resource);
 		/* Fallthrough */
 	case RESOURCE_OPCODE_RELEASED:
-		*p_released = true;
+		p_params->b_released = true;
 		break;
 	case RESOURCE_OPCODE_WRONG_OWNER:
-		*p_released = false;
+		p_params->b_released = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 4138a12..f5dac9d 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -11,6 +11,7 @@
 
 #include "bcm_osal.h"
 #include "mcp_public.h"
+#include "ecore.h"
 #include "ecore_mcp_api.h"
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
@@ -339,20 +340,37 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt);
 
 /**
+ * @brief - Sets the MFW's max value for the given resource
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param res_id
+ *  @param resc_max_val
+ *  @param p_mcp_resp
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp);
+
+/**
  * @brief - Gets the MFW allocation info for the given resource
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param p_resc_info
+ *  @param res_id
  *  @param p_mcp_resp
- *  @param p_mcp_param
+ *  @param p_resc_num
+ *  @param p_resc_start
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param);
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start);
 
 /**
  * @brief - Initiates PF FLR
@@ -365,45 +383,79 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_MIN_VAL	RESOURCE_DUMP /* 0 */
+#define ECORE_MCP_RESC_LOCK_MAX_VAL	31
+
+enum ecore_resc_lock {
+	ECORE_RESC_LOCK_DBG_DUMP = ECORE_MCP_RESC_LOCK_MIN_VAL,
+	/* Locks that the MFW is aware of should be added here downwards */
+
+	/* Ecore only locks should be added here upwards */
+	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL
+};
+
+struct ecore_resc_lock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Lock timeout value in seconds [default, none or 1..254] */
+	u8 timeout;
 #define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
 #define ECORE_MCP_RESC_LOCK_TO_NONE	255
 
+	/* Number of times to retry locking */
+	u8 retry_num;
+
+	/* The interval in usec between retries */
+	u16 retry_interval;
+
+	/* Use sleep or delay between retries */
+	bool sleep_b4_retry;
+
+	/* Will be set as true if the resource is free and granted */
+	bool b_granted;
+
+	/* Will be filled with the resource owner.
+	 * [0..15 = PF0-15, 16 = MFW, 17 = diag over serial]
+	 */
+	u8 owner;
+};
+
 /**
  * @brief Acquires MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num - valid values are 0..31
- *  @param timeout - lock timeout value in seconds
- *                   (1..254, '0' - default value, '255' - no timeout).
- *  @param p_granted - will be filled as true if the resource is free and
- *                     granted, or false if it is busy.
- *  @param p_owner - A pointer to a variable to be filled with the resource
- *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner);
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params);
+
+struct ecore_resc_unlock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Allow releasing a resource even if it belongs to another PF */
+	bool b_force;
+
+	/* Will be set as true if the resource is released */
+	bool b_released;
+};
 
 /**
  * @brief Releases MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num
- *  @param force -  allows to release a reeource even if belongs to another PF
- *  @param p_released - will be filled as true if the resource is released (or
- *			has been already released), and false if the resource is
- *			acquired by another PF and the `force' flag was not set.
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released);
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params);
 
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 49/62] net/qede/base: add return code check
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (49 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 48/62] net/qede/base: set max values for soft resources Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 50/62] net/qede/base: zero out MFW mailbox data Rasesh Mody
                             ` (13 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a check of the return code of ecore_mcp_cmd_and_union() in
ecore_mcp_send_protocol_stats()
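
The change itself is one hunk, but the pattern deserves a word: a mailbox
command can fail (e.g. a busy MFW), and silently dropping its status hides
that from the trace. A tiny compilable illustration, with invented demo_*
names:

#include <stdio.h>

enum demo_status { DEMO_SUCCESS = 0, DEMO_BUSY = -2 };

/* Pretend mailbox call that fails, e.g. because the MFW is busy. */
static enum demo_status demo_cmd_and_union(void)
{
	return DEMO_BUSY;
}

int main(void)
{
	enum demo_status rc;

	/* Before the patch the status was discarded; now a failed stats
	 * request is at least visible in the log.
	 */
	rc = demo_cmd_and_union();
	if (rc != DEMO_SUCCESS)
		fprintf(stderr, "Failed to send protocol stats, rc = %d\n",
			rc);

	return rc == DEMO_SUCCESS ? 0 : 1;
}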

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 3efe0a0..0ebb5cd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1237,6 +1237,7 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	u32 hsi_param;
+	enum _ecore_status_t rc;
 
 	switch (type) {
 	case MFW_DRV_MSG_GET_LAN_STATS:
@@ -1255,7 +1256,9 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	mb_params.param = hsi_param;
 	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
 	mb_params.p_data_src = &union_data;
-	ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
 }
 
 static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 50/62] net/qede/base: zero out MFW mailbox data
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (50 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 49/62] net/qede/base: add return code check Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 51/62] net/qede/base: move code bits Rasesh Mody
                             ` (12 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Zero the whole union data of the Management FW mailbox before copying
the actual union member
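
The hazard being closed: the driver copies one union member into the
mailbox window, and any bytes of the union beyond that member would
otherwise carry whatever was left on the stack out to the MFW. A minimal
sketch of the zero-then-bounded-copy pattern the patch adopts (the demo
types are invented; the real code operates on drv_union_data and checks
data_src_size/data_dst_size the same way):

#include <stdio.h>
#include <string.h>

/* Stand-in for drv_union_data, the window shared with the MFW. */
union demo_union_data {
	unsigned int raw_data[8];
	struct { unsigned int speed, pause; } phy_cfg;
};

/* Zero the whole union first, then copy only the member being sent, so
 * stale bytes from a previous (larger) member never reach the MFW.
 */
static int demo_fill_mailbox(union demo_union_data *p_shmem,
			     const void *p_src, size_t src_size)
{
	if (src_size > sizeof(*p_shmem))
		return -1;	/* reject payloads larger than the union */

	memset(p_shmem, 0, sizeof(*p_shmem));
	if (p_src != NULL && src_size)
		memcpy(p_shmem, p_src, src_size);

	return 0;
}

int main(void)
{
	union demo_union_data shmem;
	struct { unsigned int speed, pause; } phy_cfg = { 10000, 1 };

	if (demo_fill_mailbox(&shmem, &phy_cfg, sizeof(phy_cfg)))
		return 1;

	/* raw_data[2] shows the zeroed tail beyond the copied member */
	printf("speed %u, pause %u, tail %u\n",
	       shmem.phy_cfg.speed, shmem.phy_cfg.pause, shmem.raw_data[2]);
	return 0;
}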

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    4 +-
 drivers/net/qede/base/ecore_mcp.c |  294 +++++++++++++++++++++----------------
 drivers/net/qede/base/ecore_mcp.h |   19 ++-
 3 files changed, 181 insertions(+), 136 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 3191ee4..e584058 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2311,9 +2311,7 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
 			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
 		}
 
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_DONE,
-				   0, &unload_resp, &unload_param);
+		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn,
 				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0ebb5cd..a3a6ca1 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -364,6 +364,7 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct ecore_mcp_mb_params *p_mb_params)
 {
+	union drv_union_data union_data;
 	u32 union_data_addr;
 	enum _ecore_status_t rc;
 
@@ -373,6 +374,15 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
+	if (p_mb_params->data_src_size > sizeof(union_data) ||
+	    p_mb_params->data_dst_size > sizeof(union_data)) {
+		DP_ERR(p_hwfn,
+		       "The provided size is larger than the union data size [src_size %u, dst_size %u, union_data_size %zu]\n",
+		       p_mb_params->data_src_size, p_mb_params->data_dst_size,
+		       sizeof(union_data));
+		return ECORE_INVAL;
+	}
+
 	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
 			  OFFSETOF(struct public_drv_mb, union_data);
 
@@ -383,19 +393,21 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_mb_params->p_data_src != OSAL_NULL)
-		ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
-				p_mb_params->p_data_src,
-				sizeof(*p_mb_params->p_data_src));
+	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
+	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
+		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
+			    p_mb_params->data_src_size);
+	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
+			sizeof(union_data));
 
 	rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
 			      p_mb_params->param, &p_mb_params->mcp_resp,
 			      &p_mb_params->mcp_param);
 
-	if (p_mb_params->p_data_dst != OSAL_NULL)
+	if (p_mb_params->p_data_dst != OSAL_NULL &&
+	    p_mb_params->data_dst_size)
 		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
-				  union_data_addr,
-				  sizeof(*p_mb_params->p_data_dst));
+				  union_data_addr, p_mb_params->data_dst_size);
 
 	ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
 
@@ -443,14 +455,13 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 i_txn_size, u32 *i_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = i_buf;
+	mb_params.data_src_size = (u8)i_txn_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -470,13 +481,17 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_txn_size, u32 *o_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = raw_data;
+
+	/* Use the maximal value since the actual one is part of the response */
+	mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN;
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -485,7 +500,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_param = mb_params.mcp_param;
 
 	*o_txn_size = *o_mcp_param;
-	OSAL_MEMCPY(o_buf, (u32 *)&union_data.raw_data, *o_txn_size);
+	/* @DPDK */
+	OSAL_MEMCPY(o_buf, raw_data, RTE_MIN(*o_txn_size, MCP_DRV_NVM_BUF_LEN));
 
 	return ECORE_SUCCESS;
 }
@@ -605,25 +621,23 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		     struct ecore_load_req_in_params *p_in_params,
 		     struct ecore_load_req_out_params *p_out_params)
 {
-	union drv_union_data union_data_src, union_data_dst;
 	struct ecore_mcp_mb_params mb_params;
-	struct load_req_stc *p_load_req;
-	struct load_rsp_stc *p_load_rsp;
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	u32 hsi_ver;
 	enum _ecore_status_t rc;
 
-	p_load_req = &union_data_src.load_req;
-	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
-	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
-	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
-	p_load_req->fw_ver = p_in_params->fw_ver;
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+	OSAL_MEM_ZERO(&load_req, sizeof(load_req));
+	load_req.drv_ver_0 = p_in_params->drv_ver_0;
+	load_req.drv_ver_1 = p_in_params->drv_ver_1;
+	load_req.fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_ROLE,
 			    p_in_params->drv_role);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
 			    p_in_params->timeout_val);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_FORCE,
 			    p_in_params->force_cmd);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_FLAGS0,
 			    p_in_params->avoid_eng_reset);
 
 	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
@@ -633,8 +647,10 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
-	mb_params.p_data_src = &union_data_src;
-	mb_params.p_data_dst = &union_data_dst;
+	mb_params.p_data_src = &load_req;
+	mb_params.data_src_size = sizeof(load_req);
+	mb_params.p_data_dst = &load_rsp;
+	mb_params.data_dst_size = sizeof(load_rsp);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
@@ -647,15 +663,13 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
-			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
-			   p_load_req->fw_ver, p_load_req->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   load_req.drv_ver_0, load_req.drv_ver_1,
+			   load_req.fw_ver, load_req.misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_LOCK_TO),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_FORCE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_FLAGS0));
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
@@ -671,28 +685,24 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
 	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
-		p_load_rsp = &union_data_dst.load_rsp;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
-			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
-			   p_load_rsp->fw_ver, p_load_rsp->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_HSI),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+			   load_rsp.drv_ver_0, load_rsp.drv_ver_1,
+			   load_rsp.fw_ver, load_rsp.misc0,
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0,
 					       LOAD_RSP_FLAGS0));
 
-		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
-		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
-		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_ver_0 = load_rsp.drv_ver_0;
+		p_out_params->exist_drv_ver_1 = load_rsp.drv_ver_1;
+		p_out_params->exist_fw_ver = load_rsp.fw_ver;
 		p_out_params->exist_drv_role =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
 		p_out_params->mfw_hsi_ver =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
 		p_out_params->drv_exists =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					    LOAD_RSP_FLAGS0) &
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
 			LOAD_RSP_FLAGS0_DRV_EXISTS;
 	}
 
@@ -883,6 +893,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac wol_mac;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE;
+
+	return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+}
+
 static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
@@ -924,7 +946,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
 				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 	int i;
 
@@ -935,8 +956,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
-	OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = vfs_to_ack;
+	mb_params.data_src_size = VF_MAX_STATIC / 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1122,8 +1143,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
-	struct eth_phy_cfg *p_phy_cfg;
+	struct eth_phy_cfg phy_cfg;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cmd;
 
@@ -1133,30 +1153,30 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Set the shmem configuration according to params */
-	p_phy_cfg = &union_data.drv_phy_cfg;
-	OSAL_MEMSET(p_phy_cfg, 0, sizeof(*p_phy_cfg));
+	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
 	cmd = b_up ? DRV_MSG_CODE_INIT_PHY : DRV_MSG_CODE_LINK_RESET;
 	if (!params->speed.autoneg)
-		p_phy_cfg->speed = params->speed.forced_speed;
-	p_phy_cfg->pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
-	p_phy_cfg->pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
-	p_phy_cfg->pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
-	p_phy_cfg->adv_speed = params->speed.advertised_speeds;
-	p_phy_cfg->loopback_mode = params->loopback_mode;
+		phy_cfg.speed = params->speed.forced_speed;
+	phy_cfg.pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
+	phy_cfg.pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
+	phy_cfg.pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
+	phy_cfg.adv_speed = params->speed.advertised_speeds;
+	phy_cfg.loopback_mode = params->loopback_mode;
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Configuring Link: Speed 0x%08x, Pause 0x%08x,"
 			   " adv_speed 0x%08x, loopback 0x%08x\n",
-			   p_phy_cfg->speed, p_phy_cfg->pause,
-			   p_phy_cfg->adv_speed, p_phy_cfg->loopback_mode);
+			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
+			   phy_cfg.loopback_mode);
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &phy_cfg;
+	mb_params.data_src_size = sizeof(phy_cfg);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
@@ -1235,7 +1255,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	enum ecore_mcp_protocol_type stats_type;
 	union ecore_mcp_protocol_stats stats;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 hsi_param;
 	enum _ecore_status_t rc;
 
@@ -1254,8 +1273,8 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_STATS;
 	mb_params.param = hsi_param;
-	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &stats;
+	mb_params.data_src_size = sizeof(stats);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
@@ -1353,28 +1372,38 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
 	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
 }
 
+struct ecore_mdump_cmd_params {
+	u32 cmd;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
+	u32 mcp_resp;
+};
+
 static enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		    u32 mdump_cmd, union drv_union_data *p_data_src,
-		    union drv_union_data *p_data_dst, u32 *p_mcp_resp)
+		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_MDUMP_CMD;
-	mb_params.param = mdump_cmd;
-	mb_params.p_data_src = p_data_src;
-	mb_params.p_data_dst = p_data_dst;
+	mb_params.param = p_mdump_cmd_params->cmd;
+	mb_params.p_data_src = p_mdump_cmd_params->p_data_src;
+	mb_params.data_src_size = p_mdump_cmd_params->data_src_size;
+	mb_params.p_data_dst = p_mdump_cmd_params->p_data_dst;
+	mb_params.data_dst_size = p_mdump_cmd_params->data_dst_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	if (*p_mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
+	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_NOTICE(p_hwfn, false,
 			  "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
-			  mdump_cmd);
+			  p_mdump_cmd_params->cmd);
 		rc = ECORE_INVAL;
 	}
 
@@ -1384,62 +1413,68 @@ ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 static enum _ecore_status_t ecore_mcp_mdump_ack(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_ACK;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_ACK,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 epoch)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	OSAL_MEMCPY(&union_data.raw_data, &epoch, sizeof(epoch));
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_SET_VALUES;
+	mdump_cmd_params.p_data_src = &epoch;
+	mdump_cmd_params.data_src_size = sizeof(epoch);
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_SET_VALUES,
-				   &union_data, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	p_hwfn->p_dev->mdump_en = true;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_TRIGGER,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static enum _ecore_status_t
 ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct mdump_config_stc *p_mdump_config)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_GET_CONFIG,
-				 OSAL_NULL, &union_data, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GET_CONFIG;
+	mdump_cmd_params.p_data_dst = p_mdump_config;
+	mdump_cmd_params.data_dst_size = sizeof(*p_mdump_config);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The mdump command is not supported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
-	if (mcp_resp != FW_MSG_CODE_OK) {
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
-			  mcp_resp);
+			  mdump_cmd_params.mcp_resp);
 		rc = ECORE_UNKNOWN_ERROR;
 	}
 
-	OSAL_MEMCPY(p_mdump_config, &union_data.mdump_config,
-		    sizeof(*p_mdump_config));
-
 	return rc;
 }
 
@@ -1489,10 +1524,12 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_CLEAR_LOGS,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
@@ -2001,9 +2038,8 @@ enum _ecore_status_t
 ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mcp_drv_version *p_ver)
 {
-	struct drv_version_stc *p_drv_version;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct drv_version_stc drv_version;
 	u32 num_words, i;
 	void *p_name;
 	OSAL_BE32 val;
@@ -2014,19 +2050,20 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return ECORE_SUCCESS;
 #endif
 
-	p_drv_version = &union_data.drv_version;
-	p_drv_version->version = p_ver->version;
+	OSAL_MEM_ZERO(&drv_version, sizeof(drv_version));
+	drv_version.version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
 		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
-		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+		*(u32 *)&drv_version.name[i * sizeof(u32)] = val;
 	}
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_SET_VERSION;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &drv_version;
+	mb_params.data_src_size = sizeof(drv_version);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
@@ -2695,28 +2732,25 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
 			       struct ecore_temperature_info *p_temp_info)
 {
 	struct ecore_temperature_sensor *p_temp_sensor;
-	struct temperature_status_stc *p_mfw_temp_info;
+	struct temperature_status_stc mfw_temp_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 val;
 	enum _ecore_status_t rc;
 	u8 i;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = &mfw_temp_info;
+	mb_params.data_dst_size = sizeof(mfw_temp_info);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_mfw_temp_info = &union_data.temp_info;
-
 	OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
-	p_temp_info->num_sensors = OSAL_MIN_T(u32,
-					      p_mfw_temp_info->num_of_sensors,
+	p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
 					      ECORE_MAX_NUM_OF_SENSORS);
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
-		val = p_mfw_temp_info->sensor[i];
+		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
 		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
 						 SENSOR_LOCATION_SHIFT;
@@ -2854,16 +2888,14 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_resc_alloc_in_params *p_in_params,
 			      struct ecore_resc_alloc_out_params *p_out_params)
 {
-	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct resource_info mfw_resc_info;
 	enum _ecore_status_t rc;
 
-	p_mfw_resc_info = &union_data.resource;
-	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+	OSAL_MEM_ZERO(&mfw_resc_info, sizeof(mfw_resc_info));
 
-	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
-	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+	mfw_resc_info.res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (mfw_resc_info.res_id == RESOURCE_NUM_INVALID) {
 		DP_ERR(p_hwfn,
 		       "Failed to match resource %d [%s] with the MFW resources\n",
 		       p_in_params->res_id,
@@ -2873,7 +2905,7 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	switch (p_in_params->cmd) {
 	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
-		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		mfw_resc_info.size = p_in_params->resc_max_val;
 		/* Fallthrough */
 	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
 		break;
@@ -2886,8 +2918,10 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	mb_params.p_data_src = &union_data;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_src = &mfw_resc_info;
+	mb_params.data_src_size = sizeof(mfw_resc_info);
+	mb_params.p_data_dst = mb_params.p_data_src;
+	mb_params.data_dst_size = mb_params.data_src_size;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
@@ -2905,11 +2939,11 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	p_out_params->mcp_resp = mb_params.mcp_resp;
 	p_out_params->mcp_param = mb_params.mcp_param;
-	p_out_params->resc_num = p_mfw_resc_info->size;
-	p_out_params->resc_start = p_mfw_resc_info->offset;
-	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
-	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
-	p_out_params->flags = p_mfw_resc_info->flags;
+	p_out_params->resc_num = mfw_resc_info.size;
+	p_out_params->resc_start = mfw_resc_info.offset;
+	p_out_params->vf_resc_num = mfw_resc_info.vf_size;
+	p_out_params->vf_resc_start = mfw_resc_info.vf_offset;
+	p_out_params->flags = mfw_resc_info.flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index f5dac9d..350d8a2 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -65,8 +65,10 @@ struct ecore_mcp_info {
 struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
-	union drv_union_data *p_data_src;
-	union drv_union_data *p_data_dst;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
 };
@@ -159,7 +161,7 @@ struct ecore_load_req_params {
  *        returns whether this PF is the first on the engine/port or function.
  *
  * @param p_hwfn
- * @param p_pt
+ * @param p_ptt
  * @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
@@ -169,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_DONE message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt);
+
+/**
  * @brief Read the MFW mailbox into Current buffer.
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 51/62] net/qede/base: move code bits
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (51 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 50/62] net/qede/base: zero out MFW mailbox data Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 52/62] net/qede/base: add PF parameter Rasesh Mody
                             ` (11 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

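Move the ecore_set_rxq_coalesce()/ecore_set_txq_coalesce() prototypes,
along with ecore_vf_pf_set_coalesce(), out of the CONFIG_ECORE_SRIOV-only
section of ecore_vf.h so they are declared unconditionally. No functional
change.
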
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_vf.h |   41 +++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 228bbf0..f471388 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -38,17 +38,15 @@ struct ecore_vf_iov {
 	bool b_pre_fp_hsi;
 };
 
-#ifdef CONFIG_ECORE_SRIOV
-/**
- * @brief hw preparation for VF
- * sends ACQUIRE message
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 /**
  * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
  *	Coalesce value '0' will omit the configuration.
@@ -56,13 +54,24 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  *	@param p_hwfn
  *	@param rx_coal - coalesce value in micro second for rx queue
  *	@param tx_coal - coalesce value in micro second for tx queue
- *	@param qid
+ *	@param queue_cid
  *
  **/
 enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 rx_coal, u16 tx_coal,
 					      struct ecore_queue_cid *p_cid);
 
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief hw preparation for VF
+ *	sends ACQUIRE message
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
@@ -277,15 +286,5 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
-
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
-
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 52/62] net/qede/base: add PF parameter
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (52 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 51/62] net/qede/base: move code bits Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 53/62] net/qede/base: allow PMD to control vport and RSS engine ids Rasesh Mody
                             ` (10 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add a common enum to pf_params for selecting the RDMA protocol
(default, RoCE or iWARP).

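As an illustrative sketch (hypothetical caller code, not part of this
patch; it assumes struct ecore_pf_params embeds these parameters as
'rdma_pf_params'), a protocol driver could request RoCE explicitly
instead of relying on the default:

    struct ecore_pf_params pf_params;

    OSAL_MEM_ZERO(&pf_params, sizeof(pf_params));
    /* ECORE_RDMA_PROTOCOL_DEFAULT leaves the choice to the driver */
    pf_params.rdma_pf_params.rdma_protocol = ECORE_RDMA_PROTOCOL_ROCE;
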
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c      |    1 +
 drivers/net/qede/base/ecore_proto_if.h |    7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index aeeabf1..691d638 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -19,6 +19,7 @@
 #include "ecore_hw.h"
 #include "ecore_dev_api.h"
 #include "ecore_sriov.h"
+#include "ecore_mcp.h"
 
 /* Max number of connection types in HW (DQ/CDU etc.) */
 #define MAX_CONN_TYPES		PROTOCOLID_COMMON
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index ed24019..0ac153f 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -63,6 +63,12 @@ struct ecore_iscsi_pf_params {
 	u8		bdq_pbl_num_entries[2];
 };
 
+enum ecore_rdma_protocol {
+	ECORE_RDMA_PROTOCOL_DEFAULT,
+	ECORE_RDMA_PROTOCOL_ROCE,
+	ECORE_RDMA_PROTOCOL_IWARP,
+};
+
 struct ecore_rdma_pf_params {
 	/* Supplied to ECORE during resource allocation (may affect the ILT and
 	 * the doorbell BAR).
@@ -79,6 +85,7 @@ struct ecore_rdma_pf_params {
 
 	/* TCP port number used for the iwarp traffic */
 	u16		iwarp_port;
+	enum ecore_rdma_protocol rdma_protocol;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 53/62] net/qede/base: allow PMD to control vport and RSS engine ids
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (53 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 52/62] net/qede/base: add PF parameter Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 54/62] net/qede/base: add udp ports in bulletin board message Rasesh Mody
                             ` (9 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Let the PMD control the vport-id and rss-eng-id of a given VF
during initialization.

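A minimal sketch of the new parameters (hypothetical PF-side code;
'vf_idx' and the surrounding flow are assumed) before calling
ecore_iov_init_hw_for_vf():

    struct ecore_iov_vf_init_params init_params;

    OSAL_MEM_ZERO(&init_params, sizeof(init_params));
    init_params.rel_vf_id = vf_idx;
    init_params.num_queues = 2;
    /* Non-zero ids, as vport0/rss-eng0 are normally kept by the PF */
    init_params.vport_id = vf_idx + 1;
    init_params.rss_eng_id = vf_idx + 1;
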
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |   15 ++++-------
 drivers/net/qede/base/ecore_sriov.c   |   46 +++++++++++++++++++++------------
 drivers/net/qede/base/ecore_sriov.h   |    2 +-
 3 files changed, 35 insertions(+), 28 deletions(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index b8dc47b..6a0fc5a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -103,6 +103,11 @@ struct ecore_iov_vf_init_params {
 	 */
 	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+
+	u8 vport_id;
+
+	/* Should be set in case RSS is going to be used for VF */
+	u8 rss_eng_id;
 };
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
@@ -417,16 +422,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 				  u16 *opaque_fid);
 
 /**
- * @brief Get VFs VPORT id.
- *
- * @param p_hwfn
- * @param vfid
- * @param vport id
- */
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vport_id);
-
-/**
  * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
  *        and configures FW/HW to support the configuration.
  *        Setting of pvid 0 would clear the feature.
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 4ffa8d0..20b51c4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -426,8 +426,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		return;
 	}
 
-	p_iov_info->base_vport_id = 1;	/* @@@TBD resource allocation */
-
 	for (idx = 0; idx < p_iov->total_vfs; idx++) {
 		struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
 		u32 concrete;
@@ -456,8 +454,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
 		    (vf->abs_vf_id << 8);
-		/* @@TBD MichalK - add base vport_id of VFs to equation */
-		vf->vport_id = p_iov_info->base_vport_id + idx;
 
 		vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 		vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -1019,6 +1015,34 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested vport/rss */
+	if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n",
+			  p_params->rel_vf_id, p_params->vport_id);
+		return ECORE_INVAL;
+	}
+
+	if ((p_params->num_queues > 1) &&
+	    (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n",
+			  p_params->rel_vf_id, p_params->rss_eng_id);
+		return ECORE_INVAL;
+	}
+
+	/* TODO - remove this once we get confidence of change */
+	if (!p_params->vport_id) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses vport0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	if ((!p_params->rss_eng_id) && (p_params->num_queues > 1)) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses RSS_eng0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	vf->vport_id = p_params->vport_id;
+	vf->rss_eng_id = p_params->rss_eng_id;
+
 	/* Perform sanity checking on the requested queue_id */
 	for (i = 0; i < p_params->num_queues; i++) {
 		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
@@ -2752,7 +2776,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 		VFPF_UPDATE_RSS_KEY_FLAG);
 
 	p_rss->rss_enable = p_rss_tlv->rss_enable;
-	p_rss->rss_eng_id = vf->relative_vf_id + 1;
+	p_rss->rss_eng_id = vf->rss_eng_id;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
@@ -3974,18 +3998,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 	*opaque_fid = vf_info->opaque_fid;
 }
 
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vort_id)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	*p_vort_id = vf_info->vport_id;
-}
-
 void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 					u16 pvid, int vfid)
 {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index d32f931..66e9271 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -111,6 +111,7 @@ struct ecore_vf_info {
 	u16			mtu;
 
 	u8			vport_id;
+	u8			rss_eng_id;
 	u8			relative_vf_id;
 	u8			abs_vf_id;
 #define ECORE_VF_ABS_ID(p_hwfn, p_vf)	(ECORE_PATH_ID(p_hwfn) ? \
@@ -155,7 +156,6 @@ struct ecore_pf_iov {
 	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
-	u16			base_vport_id;
 
 #ifndef REMOVE_DBG
 	/* This doesn't serve anything functionally, but it makes windows
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 54/62] net/qede/base: add udp ports in bulletin board message
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (54 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 53/62] net/qede/base: allow PMD to control vport and RSS engine ids Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 55/62] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
                             ` (8 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Add the VXLAN and GENEVE UDP ports to the bulletin board message so
that the PF can publish the configured tunnel ports to its VFs.

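As a minimal sketch (hypothetical VF-side caller), a VF can read the
ports published by the PF from its bulletin shadow via the new API:

    u16 vxlan_port = 0, geneve_port = 0;

    /* Values reflect the last bulletin update from the PF */
    ecore_vf_bulletin_get_udp_ports(p_hwfn, &vxlan_port, &geneve_port);
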
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |    2 ++
 drivers/net/qede/base/ecore_sriov.c   |   33 +++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c      |   12 ++++++++++++
 drivers/net/qede/base/ecore_vf_api.h  |    2 ++
 drivers/net/qede/base/ecore_vfpf_if.h |    5 ++++-
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 6a0fc5a..870c57e 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -716,6 +716,8 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
+				      u16 vxlan_port, u16 geneve_port);
 #endif /* CONFIG_ECORE_SRIOV */
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 20b51c4..532c492 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2253,6 +2253,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	bool b_update_required = false;
 	struct ecore_tunnel_info tunn;
 	u16 tunn_feature_mask = 0;
+	int i;
 
 	mbx->offset = (u8 *)mbx->reply_virt;
 
@@ -2300,11 +2301,20 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 
 	/* If ECORE client is willing to update anything ? */
 	if (b_update_required) {
+		u16 geneve_port;
+
 		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						 ECORE_SPQ_MODE_EBLOCK,
 						 OSAL_NULL);
 		if (rc != ECORE_SUCCESS)
 			status = PFVF_STATUS_FAILURE;
+
+		geneve_port = p_tun->geneve_port.port;
+		ecore_for_each_vf(p_hwfn, i) {
+			ecore_iov_bulletin_set_udp_ports(p_hwfn, i,
+							 p_tun->vxlan_port.port,
+							 geneve_port);
+		}
 	}
 
 send_resp:
@@ -4028,6 +4038,29 @@ void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
 }
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
+				      int vfid, u16 vxlan_port, u16 geneve_port)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set udp ports, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	if (vf_info->b_malicious) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can not set udp ports to malicious VF [%d]\n",
+			   vfid);
+		return;
+	}
+
+	vf_info->bulletin.p_virt->vxlan_udp_port = vxlan_port;
+	vf_info->bulletin.p_virt->geneve_udp_port = geneve_port;
+}
+
 bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index bf516cc..8ce9340 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1652,6 +1652,18 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 	return true;
 }
 
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port,
+				     u16 *p_geneve_port)
+{
+	struct ecore_bulletin_content *p_bulletin;
+
+	p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+
+	*p_vxlan_port = p_bulletin->vxlan_udp_port;
+	*p_geneve_port = p_bulletin->geneve_udp_port;
+}
+
 bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
 {
 	struct ecore_bulletin_content *bulletin;
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 77b93ff..a6e5f32 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -152,5 +152,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 			     u16 *fw_minor,
 			     u16 *fw_rev,
 			     u16 *fw_eng);
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port, u16 *p_geneve_port);
 #endif
 #endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e0b63bf..6618442 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -554,9 +554,12 @@ struct ecore_bulletin_content {
 	u8 pfc_enabled;
 	u8 partner_tx_flow_ctrl_en;
 	u8 partner_rx_flow_ctrl_en;
+
 	u8 partner_adv_pause;
 	u8 sfp_tx_fault;
-	u8 padding4[6];
+	u16 vxlan_udp_port;
+	u16 geneve_udp_port;
+	u8 padding4[2];
 
 	u32 speed;
 	u32 partner_adv_speed;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 55/62] net/qede/base: prevent DMAE transactions during recovery
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (55 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 54/62] net/qede/base: add udp ports in bulletin board message Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 56/62] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
                             ` (7 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Prevent DMA engine (DMAE) transactions while a recovery is in
progress. Such requests are completed as successful so that flows
using DMAE can still finish cleanly during recovery.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_hw.c |   12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 396edc2..2bcc32d 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -773,6 +773,18 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t ecore_status = ECORE_SUCCESS;
 	u32 offset = 0;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
+			   (unsigned long)src_addr, src_type,
+			   (unsigned long)dst_addr, dst_type,
+			   size_in_dwords);
+		/* Return success to let the flow to be completed successfully
+		 * w/o any error handling.
+		 */
+		return ECORE_SUCCESS;
+	}
+
 	ecore_dmae_opcode(p_hwfn,
 			  (src_type == ECORE_DMAE_ADDRESS_GRC),
 			  (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v4 56/62] net/qede/base: multi-Txq support on same queue-zone for VFs
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (56 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 55/62] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 57/62] net/qede/base: prevent race condition during unload Rasesh Mody
                             ` (6 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

A step toward having multi-Txq support on the same queue-zone for VFs.

This change takes care of:

 - VFs assume a single CID per queue, where queue X receives CID X.
   Switch to a model similar to that of the PF - i.e., use different
   CIDs for Rx/Tx and a mapping to acquire/release those (see the
   sketch after this list). Each VF currently has 32 CIDs available
   [for its possible 16 Rx & 16 Tx queues].

 - To retain the same interface for PFs/VFs when initializing queues,
   the base driver would have to retain a unique number for each queue
   that would be communicated in some extended TLV [the current TLV
   interface allows the PF to send only the queue-id]. The new TLV isn't
   part of the current change, but the base driver now starts adding
   such unique keys internally to queue_cids. This also forces us to
   start having alloc/setup/free for L2 [we've refrained from doing so
   until now].
   The limit would be no more than 64 queues per qzone [this could be
   changed if needed, but hopefully no one needs so many queues].

 - In IOV, add infrastructure for up to 64 qids per-qzone, although
   at the moment hard-code '0' for Rx and '1' for Tx [Since VF still
   isn't communicating via new TLV which index to associate with a
   given queue in its queue-zone].

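For illustration, a sketch of the per-vfid CID API this introduces
(PF acquiring/releasing a CID on behalf of a VF; error handling
trimmed, 'vfid' assumed valid):

    u32 cid;

    if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
                               &cid, vfid) != ECORE_SUCCESS)
        return ECORE_NORESOURCES;

    /* ... configure the VF queue using 'cid' ... */

    _ecore_cxt_release_cid(p_hwfn, cid, vfid);
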
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    4 +
 drivers/net/qede/base/ecore_cxt.c     |  230 +++++++++++++++-----
 drivers/net/qede/base/ecore_cxt.h     |   53 ++++-
 drivers/net/qede/base/ecore_cxt_api.h |   13 --
 drivers/net/qede/base/ecore_dev.c     |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  248 ++++++++++++++++++---
 drivers/net/qede/base/ecore_l2.h      |   46 +++-
 drivers/net/qede/base/ecore_sriov.c   |  387 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_sriov.h   |   17 +-
 drivers/net/qede/base/ecore_vf.c      |    6 +
 drivers/net/qede/base/ecore_vf_api.h  |    9 +
 11 files changed, 794 insertions(+), 243 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 7379b3f..fab8193 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -200,6 +200,7 @@ struct ecore_cxt_mngr;
 struct ecore_dma_mem;
 struct ecore_sb_sp_info;
 struct ecore_ll2_info;
+struct ecore_l2_info;
 struct ecore_igu_info;
 struct ecore_mcp_info;
 struct ecore_dcbx_info;
@@ -598,6 +599,9 @@ struct ecore_hwfn {
 	/* If one of the following is set then EDPM shouldn't be used */
 	u8				dcbx_no_edpm;
 	u8				db_bar_no_edpm;
+
+	/* L2-related */
+	struct ecore_l2_info		*p_l2_info;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 691d638..f7b5672 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -8,6 +8,7 @@
 
 #include "bcm_osal.h"
 #include "reg_addr.h"
+#include "common_hsi.h"
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
 #include "ecore_rt_defs.h"
@@ -101,7 +102,6 @@ struct ecore_tid_seg {
 
 struct ecore_conn_type_cfg {
 	u32 cid_count;
-	u32 cid_start;
 	u32 cids_per_vf;
 	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
 };
@@ -197,6 +197,9 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
+	/* TBD - do we want this allocated to reserve space? */
+	struct ecore_cid_acquired_map
+		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1015,44 +1018,75 @@ ilt_shadow_fail:
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
+
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			OSAL_FREE(p_hwfn->p_dev,
+				  p_mngr->acquired_vf[type][vf].cid_map);
+			p_mngr->acquired_vf[type][vf].max_count = 0;
+			p_mngr->acquired_vf[type][vf].start_cid = 0;
+		}
 	}
 }
 
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+			   u32 cid_start, u32 cid_count,
+			   struct ecore_cid_acquired_map *p_map)
+{
+	u32 size;
+
+	if (!cid_count)
+		return ECORE_SUCCESS;
+
+	size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_count, BITS_PER_MAP_WORD);
+	p_map->cid_map = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
+	if (p_map->cid_map == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	p_map->max_count = cid_count;
+	p_map->start_cid = cid_start;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Type %08x start: %08x count %08x\n",
+		   type, p_map->start_cid, p_map->max_count);
+
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 start_cid = 0;
-	u32 type;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 size;
-
-		if (cid_cnt == 0)
-			continue;
+		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
+		struct ecore_cid_acquired_map *p_map;
 
-		size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD);
-		p_mngr->acquired[type].cid_map = OSAL_ZALLOC(p_hwfn->p_dev,
-							     GFP_KERNEL, size);
-		if (!p_mngr->acquired[type].cid_map)
+		/* Handle PF maps */
+		p_map = &p_mngr->acquired[type];
+		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					       p_cfg->cid_count, p_map))
 			goto cid_map_fail;
 
-		p_mngr->acquired[type].max_count = cid_cnt;
-		p_mngr->acquired[type].start_cid = start_cid;
-
-		p_hwfn->p_cxt_mngr->conn_cfg[type].cid_start = start_cid;
+		/* Handle VF maps */
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			if (ecore_cid_map_alloc_single(p_hwfn, type,
+						       vf_start_cid,
+						       p_cfg->cids_per_vf,
+						       p_map))
+				goto cid_map_fail;
+		}
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
-			   "Type %08x start: %08x count %08x\n",
-			   type, p_mngr->acquired[type].start_cid,
-			   p_mngr->acquired[type].max_count);
-		start_cid += cid_cnt;
+		start_cid += p_cfg->cid_count;
+		vf_start_cid += p_cfg->cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
@@ -1171,18 +1205,34 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
 	int type;
+	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 i;
+		u32 vf;
+
+		p_cfg = &p_mngr->conn_cfg[type];
+		if (p_cfg->cid_count) {
+			p_map = &p_mngr->acquired[type];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 
-		if (cid_cnt == 0)
+		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (i = 0; i < DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD); i++)
-			p_mngr->acquired[type].cid_map[i] = 0;
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 	}
 }
 
@@ -1723,93 +1773,150 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
 	ecore_prs_init_pf(p_hwfn);
 }
 
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
-					   enum protocol_type type, u32 *p_cid)
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
 	u32 rel_cid;
 
-	if (type >= MAX_CONN_TYPES || !p_mngr->acquired[type].cid_map) {
+	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_mngr->acquired[type].cid_map,
-					   p_mngr->acquired[type].max_count);
+	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Determine the right map to take this CID from */
+	if (vfid == ECORE_CXT_PF_CID)
+		p_map = &p_mngr->acquired[type];
+	else
+		p_map = &p_mngr->acquired_vf[type][vfid];
 
-	if (rel_cid >= p_mngr->acquired[type].max_count) {
+	if (p_map->cid_map == OSAL_NULL) {
+		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
+		return ECORE_INVAL;
+	}
+
+	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_map->cid_map,
+					   p_map->max_count);
+
+	if (rel_cid >= p_map->max_count) {
 		DP_NOTICE(p_hwfn, false, "no CID available for protocol %d\n",
 			  type);
 		return ECORE_NORESOURCES;
 	}
 
-	OSAL_SET_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	OSAL_SET_BIT(rel_cid, p_map->cid_map);
 
-	*p_cid = rel_cid + p_mngr->acquired[type].start_cid;
+	*p_cid = rel_cid + p_map->start_cid;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Acquired cid 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   *p_cid, rel_cid, vfid, type);
 
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid)
+{
+	return _ecore_cxt_acquire_cid(p_hwfn, type, p_cid, ECORE_CXT_PF_CID);
+}
+
 static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
-					u32 cid, enum protocol_type *p_type)
+					u32 cid, u8 vfid,
+					enum protocol_type *p_type,
+					struct ecore_cid_acquired_map **pp_map)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	struct ecore_cid_acquired_map *p_map;
-	enum protocol_type p;
 	u32 rel_cid;
 
 	/* Iterate over protocols and find matching cid range */
-	for (p = 0; p < MAX_CONN_TYPES; p++) {
-		p_map = &p_mngr->acquired[p];
+	for (*p_type = 0; *p_type < MAX_CONN_TYPES; (*p_type)++) {
+		if (vfid == ECORE_CXT_PF_CID)
+			*pp_map = &p_mngr->acquired[*p_type];
+		else
+			*pp_map = &p_mngr->acquired_vf[*p_type][vfid];
 
-		if (!p_map->cid_map)
+		if (!((*pp_map)->cid_map))
 			continue;
-		if (cid >= p_map->start_cid &&
-		    cid < p_map->start_cid + p_map->max_count) {
+		if (cid >= (*pp_map)->start_cid &&
+		    cid < (*pp_map)->start_cid + (*pp_map)->max_count) {
 			break;
 		}
 	}
-	*p_type = p;
-
-	if (p == MAX_CONN_TYPES) {
-		DP_NOTICE(p_hwfn, true, "Invalid CID %d", cid);
-		return false;
+	if (*p_type == MAX_CONN_TYPES) {
+		DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid);
+		goto fail;
 	}
-	rel_cid = cid - p_map->start_cid;
-	if (!OSAL_TEST_BIT(rel_cid, p_map->cid_map)) {
-		DP_NOTICE(p_hwfn, true, "CID %d not acquired", cid);
-		return false;
+
+	rel_cid = cid - (*pp_map)->start_cid;
+	if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) {
+		DP_NOTICE(p_hwfn, true,
+			  "CID %d [vfid %02x] not acquired", cid, vfid);
+		goto fail;
 	}
+
 	return true;
+fail:
+	*p_type = MAX_CONN_TYPES;
+	*pp_map = OSAL_NULL;
+	return false;
 }
 
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
 	u32 rel_cid;
 
+	if (vfid != ECORE_CXT_PF_CID && vfid >= COMMON_MAX_NUM_VFS) {
+		DP_NOTICE(p_hwfn, true,
+			  "Trying to return incorrect CID belonging to VF %02x\n",
+			  vfid);
+		return;
+	}
+
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, vfid,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return;
 
-	rel_cid = cid - p_mngr->acquired[type].start_cid;
-	OSAL_CLEAR_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	rel_cid = cid - p_map->start_cid;
+	OSAL_CLEAR_BIT(rel_cid, p_map->cid_map);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Released CID 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   cid, rel_cid, vfid, type);
+}
+
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+{
+	_ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
 }
 
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	u32 conn_cxt_size, hw_p_size, cxts_per_p, line;
 	enum protocol_type type;
 	bool b_acquired;
 
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid,
+						 ECORE_CXT_PF_CID,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return ECORE_INVAL;
@@ -1865,9 +1972,14 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			struct ecore_eth_pf_params *p_params =
 			    &p_hwfn->pf_params.eth_pf_params;
 
+			/* TODO - we probably want to add VF number to the PF
+			 * params;
+			 * As of now, this allocates 16 * 2 per-VF [to retain regular
+			 * functionality].
+			 */
 			ecore_cxt_set_proto_cid_count(p_hwfn,
 				PROTOCOLID_ETH,
-				p_params->num_cons, 1);	/* FIXME VF count... */
+				p_params->num_cons, 32);
 
 			break;
 		}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 5379d7b..1128051 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -130,14 +130,53 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn);
 enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
+#define ECORE_CXT_PF_CID (0xff)
+
+/**
+ * @brief ecore_cxt_release_cid - Release a cid
+ *
+ * @param p_hwfn
+ * @param cid
+ */
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
+
 /**
-* @brief ecore_cxt_release - Release a cid
-*
-* @param p_hwfn
-* @param cid
-*/
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
-			   u32 cid);
+ * @brief _ecore_cxt_release_cid - Release a cid belonging to a vf-queue
+ *
+ * @param p_hwfn
+ * @param cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ */
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
+			    u32 cid, u8 vfid);
+
+/**
+ * @brief ecore_cxt_acquire_cid - Acquire a new cid of a specific protocol type
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid);
+
+/**
+ * @brief _ecore_cxt_acquire_cid - Acquire a new cid of a specific protocol type
+ *                             for a vf-queue
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid);
 
 /**
  * @brief ecore_cxt_get_tid_mem_info - function checks if the
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 6a50412..f154e0d 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -26,19 +26,6 @@ struct ecore_tid_mem {
 };
 
 /**
-* @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
-*
-* @param p_hwfn
-* @param type
-* @param p_cid
-*
-* @return enum _ecore_status_t
-*/
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn  *p_hwfn,
-					   enum protocol_type type,
-					   u32 *p_cid);
-
-/**
 * @brief ecore_cxt_get_cid_info - Returns the context info for a specific cid
 *
 *
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e584058..2a621f7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -146,8 +146,11 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_free(&p_dev->hwfns[i]);
 		return;
+	}
 
 	OSAL_FREE(p_dev, p_dev->fw_data);
 
@@ -163,6 +166,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
 		ecore_iov_free(p_hwfn);
+		ecore_l2_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
@@ -839,8 +843,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i) {
+			rc = ecore_l2_alloc(&p_dev->hwfns[i]);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		}
 		return rc;
+	}
 
 	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 				     sizeof(*p_dev->fw_data));
@@ -961,6 +971,10 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_l2_alloc(p_hwfn);
+		if (rc != ECORE_SUCCESS)
+			goto alloc_err;
+
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
@@ -999,8 +1013,11 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_setup(&p_dev->hwfns[i]);
 		return;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1018,6 +1035,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
+		ecore_l2_setup(p_hwfn);
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
 	}
 }
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 4d26e19..adb5e47 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,24 +29,172 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+struct ecore_l2_info {
+	u32 queues;
+	unsigned long **pp_qid_usage;
+
+	/* The lock is meant to synchronize access to the qid usage */
+	osal_mutex_t lock;
+};
+
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_l2_info *p_l2_info;
+	unsigned long **pp_qids;
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return ECORE_SUCCESS;
+
+	p_l2_info = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_l2_info));
+	if (!p_l2_info)
+		return ECORE_NOMEM;
+	p_hwfn->p_l2_info = p_l2_info;
+
+	if (IS_PF(p_hwfn->p_dev)) {
+		p_l2_info->queues = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
+	} else {
+		u8 rx = 0, tx = 0;
+
+		ecore_vf_get_num_rxqs(p_hwfn, &rx);
+		ecore_vf_get_num_txqs(p_hwfn, &tx);
+
+		p_l2_info->queues = (u32)OSAL_MAX_T(u8, rx, tx);
+	}
+
+	pp_qids = OSAL_VZALLOC(p_hwfn->p_dev,
+			       sizeof(unsigned long *) *
+			       p_l2_info->queues);
+	if (pp_qids == OSAL_NULL)
+		return ECORE_NOMEM;
+	p_l2_info->pp_qid_usage = pp_qids;
+
+	for (i = 0; i < p_l2_info->queues; i++) {
+		pp_qids[i] = OSAL_VZALLOC(p_hwfn->p_dev,
+					  MAX_QUEUES_PER_QZONE / 8);
+		if (pp_qids[i] == OSAL_NULL)
+			return ECORE_NOMEM;
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_MUTEX_ALLOC(p_hwfn, &p_l2_info->lock);
+#endif
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn)
+{
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	OSAL_MUTEX_INIT(&p_hwfn->p_l2_info->lock);
+}
+
+void ecore_l2_free(struct ecore_hwfn *p_hwfn)
+{
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	if (p_hwfn->p_l2_info == OSAL_NULL)
+		return;
+
+	if (p_hwfn->p_l2_info->pp_qid_usage == OSAL_NULL)
+		goto out_l2_info;
+
+	/* Free until hit first uninitialized entry */
+	for (i = 0; i < p_hwfn->p_l2_info->queues; i++) {
+		if (p_hwfn->p_l2_info->pp_qid_usage[i] == OSAL_NULL)
+			break;
+		OSAL_VFREE(p_hwfn->p_dev,
+			   p_hwfn->p_l2_info->pp_qid_usage[i]);
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	/* Lock is last to initialize, if everything else was */
+	if (i == p_hwfn->p_l2_info->queues)
+		OSAL_MUTEX_DEALLOC(&p_hwfn->p_l2_info->lock);
+#endif
+
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info->pp_qid_usage);
+
+out_l2_info:
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info);
+	p_hwfn->p_l2_info = OSAL_NULL;
+}
+
+/* TODO - we'll need locking around these... */
+static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	struct ecore_l2_info *p_l2_info = p_hwfn->p_l2_info;
+	u16 queue_id = p_cid->rel.queue_id;
+	bool b_rc = true;
+	u8 first;
+
+	OSAL_MUTEX_ACQUIRE(&p_l2_info->lock);
+
+	if (queue_id >= p_l2_info->queues) {
+		DP_NOTICE(p_hwfn, true,
+			  "Requested to increase usage for qzone %04x out of %08x\n",
+			  queue_id, p_l2_info->queues);
+		b_rc = false;
+		goto out;
+	}
+
+	first = (u8)OSAL_FIND_FIRST_ZERO_BIT(p_l2_info->pp_qid_usage[queue_id],
+					     MAX_QUEUES_PER_QZONE);
+	if (first >= MAX_QUEUES_PER_QZONE) {
+		b_rc = false;
+		goto out;
+	}
+
+	OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
+	p_cid->qid_usage_idx = first;
+
+out:
+	OSAL_MUTEX_RELEASE(&p_l2_info->lock);
+	return b_rc;
+}
+
+static void ecore_eth_queue_qid_usage_del(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_l2_info->lock);
+
+	OSAL_CLEAR_BIT(p_cid->qid_usage_idx,
+		       p_hwfn->p_l2_info->pp_qid_usage[p_cid->rel.queue_id]);
+
+	OSAL_MUTEX_RELEASE(&p_hwfn->p_l2_info->lock);
+}
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid)
 {
+	/* For VF-queues, stuff is a bit complicated as:
+	 *  - They always maintain the qid_usage on their own.
+	 *  - In legacy mode, they also maintain their CIDs.
+	 */
+
 	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
-	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	if (IS_PF(p_hwfn->p_dev) && !p_cid->b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, p_cid->cid, p_cid->vfid);
+	if (!p_cid->b_legacy_vf)
+		ecore_eth_queue_qid_usage_del(p_hwfn, p_cid);
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
 /* The internal variant is only meant to be directly called by PFs
  * initializing CIDs for their VFs.
  */
-struct ecore_queue_cid *
+static struct ecore_queue_cid *
 _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params)
+			u16 opaque_fid, u32 cid,
+			struct ecore_queue_start_common_params *p_params,
+			struct ecore_queue_cid_vf_params *p_vf_params)
 {
-	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
@@ -56,13 +204,22 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
-	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
 	p_cid->p_owner = p_hwfn;
 
+	/* Fill-in bits related to VFs' queues if information was provided */
+	if (p_vf_params != OSAL_NULL) {
+		p_cid->vfid = p_vf_params->vfid;
+		p_cid->vf_qid = p_vf_params->vf_qid;
+		p_cid->b_legacy_vf = p_vf_params->b_legacy;
+	} else {
+		p_cid->vfid = ECORE_QUEUE_CID_PF;
+	}
+
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
 		p_cid->abs = p_cid->rel;
+
 		goto out;
 	}
 
@@ -82,7 +239,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	/* In case of a PF configuring its VF's queues, the stats-id is already
 	 * absolute [since there's a single index that's suitable per-VF].
 	 */
-	if (b_is_same) {
+	if (p_cid->vfid == ECORE_QUEUE_CID_PF) {
 		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
 				    &p_cid->abs.stats_id);
 		if (rc != ECORE_SUCCESS)
@@ -95,17 +252,23 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->abs.sb = p_cid->rel.sb;
 	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
 
-	/* This is tricky - we're actually interested in whehter this is a PF
-	 * entry meant for the VF.
-	 */
-	if (!b_is_same)
-		p_cid->is_vf = true;
 out:
+	/* VF-images have provided the qid_usage_idx on their own.
+	 * Otherwise, we need to allocate a unique one.
+	 */
+	if (!p_vf_params) {
+		if (!ecore_eth_queue_qid_usage_add(p_hwfn, p_cid))
+			goto fail;
+	} else {
+		p_cid->qid_usage_idx = p_vf_params->qid_usage_idx;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x.%02x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
 		   p_cid->opaque_fid, p_cid->cid,
 		   p_cid->rel.vport_id, p_cid->abs.vport_id,
-		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.queue_id,	p_cid->qid_usage_idx,
+		   p_cid->abs.queue_id,
 		   p_cid->rel.stats_id, p_cid->abs.stats_id,
 		   p_cid->abs.sb, p_cid->abs.sb_idx);
 
@@ -116,33 +279,56 @@ fail:
 	return OSAL_NULL;
 }
 
-static struct ecore_queue_cid *
-ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-		       u16 opaque_fid,
-		       struct ecore_queue_start_common_params *p_params)
+struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params)
 {
 	struct ecore_queue_cid *p_cid;
+	u8 vfid = ECORE_CXT_PF_CID;
+	bool b_legacy_vf = false;
 	u32 cid = 0;
 
+	/* In case of legacy VFs, the CID can be derived from the additional
+	 * VF parameters - the VF assumes queue X uses CID X, so we can simply
+	 * use the vf_qid for this purpose as well.
+	 */
+	if (p_vf_params) {
+		vfid = p_vf_params->vfid;
+
+		if (p_vf_params->b_legacy) {
+			b_legacy_vf = true;
+			cid = p_vf_params->vf_qid;
+		}
+	}
+
 	/* Get a unique firmware CID for this queue, in case it's a PF.
 	 * VF's don't need a CID as the queue configuration will be done
 	 * by PF.
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
-		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-					  &cid) != ECORE_SUCCESS) {
+	if (IS_PF(p_hwfn->p_dev) && !b_legacy_vf) {
+		if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					   &cid, vfid) != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
 			return OSAL_NULL;
 		}
 	}
 
-	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
-	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, cid);
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid,
+					p_params, p_vf_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev) && !b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, cid, vfid);
 
 	return p_cid;
 }
 
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			  struct ecore_queue_start_common_params *p_params)
+{
+	return ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params, OSAL_NULL);
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -741,7 +927,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_cid->is_vf) {
+	if (p_cid->vfid != ECORE_QUEUE_CID_PF) {
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
@@ -793,7 +979,7 @@ ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 
 	/* Allocate a CID for the queue */
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_NOMEM;
 
@@ -905,9 +1091,11 @@ ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+	p_ramrod->complete_cqe_flg = ((p_cid->vfid == ECORE_QUEUE_CID_PF) &&
+				      !b_eq_completion_only) ||
 				     b_cqe_completion;
-	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
+	p_ramrod->complete_event_flg = (p_cid->vfid != ECORE_QUEUE_CID_PF) ||
+				       b_eq_completion_only;
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
@@ -1007,7 +1195,7 @@ ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_INVAL;
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 4b0ccb4..3f86eac 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,6 +15,34 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
+#define MAX_QUEUES_PER_QZONE	(sizeof(unsigned long) * 8)
+#define ECORE_QUEUE_CID_PF	(0xff)
+
+/* Additional parameters required for initialization of the queue_cid
+ * and are relevant only for a PF initializing one for its VFs.
+ */
+struct ecore_queue_cid_vf_params {
+	/* Should match the VF's relative index */
+	u8 vfid;
+
+	/* 0-based queue index. Should reflect the relative qzone the
+	 * VF thinks is associated with it [in its range].
+	 */
+	u8 vf_qid;
+
+	/* Indicates a VF is legacy, making it differ in several things:
+	 *  - Producers would be placed in a different place.
+	 *  - Makes assumptions regarding the CIDs.
+	 */
+	bool b_legacy;
+
+	/* For VFs, this index arrives via TLV to differentiate between
+	 * different queues opened on the same qzone, and is passed
+	 * [where the PF would have allocated it internally for its own].
+	 */
+	u8 qid_usage_idx;
+};
+
 struct ecore_queue_cid {
 	/* 'Relative' is a relative term ;-). Usually the indices [not counting
 	 * SBs] would be PF-relative, but there are some cases where that isn't
@@ -31,22 +59,32 @@ struct ecore_queue_cid {
 	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
 	 * and not on the VF itself.
 	 */
-	bool is_vf;
+	u8 vfid;
 	u8 vf_qid;
 
+	/* We need an additional index to differentiate between queues opened
+	 * for the same queue-zone, as VFs would have to communicate the info
+	 * to the PF [otherwise the PF has no way to differentiate].
+	 */
+	u8 qid_usage_idx;
+
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
 
 	struct ecore_hwfn *p_owner;
 };
 
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn);
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn);
+void ecore_l2_free(struct ecore_hwfn *p_hwfn);
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid);
 
 struct ecore_queue_cid *
-_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params);
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 532c492..39d3e88 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -192,28 +192,90 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 	return vf;
 }
 
+static struct ecore_queue_cid *
+ecore_iov_get_vf_rx_queue_cid(struct ecore_hwfn *p_hwfn,
+			      struct ecore_vf_info *p_vf,
+			      struct ecore_vf_queue *p_queue)
+{
+	int i;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		if (p_queue->cids[i].p_cid &&
+		    !p_queue->cids[i].b_is_tx)
+			return p_queue->cids[i].p_cid;
+	}
+
+	return OSAL_NULL;
+}
+
+enum ecore_iov_validate_q_mode {
+	ECORE_IOV_VALIDATE_Q_NA,
+	ECORE_IOV_VALIDATE_Q_ENABLE,
+	ECORE_IOV_VALIDATE_Q_DISABLE,
+};
+
+static bool ecore_iov_validate_queue_mode(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf,
+					  u16 qid,
+					  enum ecore_iov_validate_q_mode mode,
+					  bool b_is_tx)
+{
+	int i;
+
+	if (mode == ECORE_IOV_VALIDATE_Q_NA)
+		return true;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		struct ecore_vf_queue_cid *p_qcid;
+
+		p_qcid = &p_vf->vf_queues[qid].cids[i];
+
+		if (p_qcid->p_cid == OSAL_NULL)
+			continue;
+
+		if (p_qcid->b_is_tx != b_is_tx)
+			continue;
+
+		/* Found. It's enabled. */
+		return (mode == ECORE_IOV_VALIDATE_Q_ENABLE);
+	}
+
+	/* If we haven't found any valid cid, then it's disabled */
+	return (mode == ECORE_IOV_VALIDATE_Q_DISABLE);
+}
+
 static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 rx_qid)
+				   u16 rx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (rx_qid >= p_vf->num_rxqs)
+	if (rx_qid >= p_vf->num_rxqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Rx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
-	return rx_qid < p_vf->num_rxqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, rx_qid,
+					     mode, false);
 }
 
 static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 tx_qid)
+				   u16 tx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (tx_qid >= p_vf->num_txqs)
+	if (tx_qid >= p_vf->num_txqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Tx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
-	return tx_qid < p_vf->num_txqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, tx_qid,
+					     mode, true);
 }
 
 static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
@@ -234,13 +296,16 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+/* Is there at least 1 queue open? */
 static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_rx_cid)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  false))
 			return true;
 
 	return false;
@@ -251,8 +316,10 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 {
 	u8 i;
 
-	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_tx_cid)
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  true))
 			return true;
 
 	return false;
@@ -1095,19 +1162,15 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[i];
 
 		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
 		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
-		/* CIDs are per-VF, so no problem having them 0-based. */
-		p_queue->fw_cid = i;
-
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]\n",
 			   vf->relative_vf_id, i, vf->igu_sbs[i],
-			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
-			   p_queue->fw_cid);
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid);
 	}
 
 	/* Update the link configuration in bulletin.
@@ -1443,7 +1506,7 @@ struct ecore_public_vf_info
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
-	u32 i;
+	u32 i, j;
 	p_vf->vf_bulletin = 0;
 	p_vf->vport_instance = 0;
 	p_vf->configured_features = 0;
@@ -1455,18 +1518,15 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 
-		if (p_queue->p_rx_cid) {
-			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_rx_cid);
-			p_queue->p_rx_cid = OSAL_NULL;
-		}
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (!p_queue->cids[j].p_cid)
+				continue;
 
-		if (p_queue->p_tx_cid) {
 			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_tx_cid);
-			p_queue->p_tx_cid = OSAL_NULL;
+						    p_queue->cids[j].p_cid);
+			p_queue->cids[j].p_cid = OSAL_NULL;
 		}
 	}
 
@@ -1481,7 +1541,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 					struct vf_pf_resc_request *p_req,
 					struct pf_vf_resc *p_resp)
 {
-	int i;
+	u8 i;
 
 	/* Queue related information */
 	p_resp->num_rxqs = p_vf->num_rxqs;
@@ -1502,7 +1562,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_resp->num_rxqs; i++) {
 		ecore_fw_l2_queue(p_hwfn, p_vf->vf_queues[i].fw_rx_qid,
 				  (u16 *)&p_resp->hw_qid[i]);
-		p_resp->cid[i] = p_vf->vf_queues[i].fw_cid;
+		p_resp->cid[i] = i;
 	}
 
 	/* Filter related information */
@@ -1905,9 +1965,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			struct ecore_queue_cid *p_cid;
+			struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
+			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
-			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			/* There can be at most 1 Rx queue per qzone. Find it */
+			p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, p_vf,
+							      p_queue);
 			if (p_cid == OSAL_NULL)
 				continue;
 
@@ -2113,19 +2176,32 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
+	struct ecore_queue_cid *p_cid;
 	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid,
+				    ECORE_IOV_VALIDATE_Q_DISABLE) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Legacy VFs made assumptions on the CID their queues connected to,
+	 * assuming queue X used CID X.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->rx_qid];
 
@@ -2136,39 +2212,42 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->rx_qid,
-						    &params);
-	if (p_queue->p_rx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '0' for Rx.
+	 */
+	qid_usage_idx = 0;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->rx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
-	else
+	if (!b_legacy_vf)
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
-	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-
-	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
-					p_queue->p_rx_cid,
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
 					req->bd_max_bytes,
 					req->rxq_addr,
 					req->cqe_pbl_addr,
 					req->cqe_pbl_size);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
-		p_queue->p_rx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = false;
 		status = PFVF_STATUS_SUCCESS;
 		vf->num_active_rxqs++;
 	}
@@ -2331,6 +2410,7 @@ send_resp:
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
+					    u32 cid,
 					    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
@@ -2359,12 +2439,8 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
-	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
-		u16 qid = mbx->req_virt->start_txq.tx_qid;
-
-		p_tlv->offset = DB_ADDR_VF(p_vf->vf_queues[qid].fw_cid,
-					   DQ_DEMS_LEGACY);
-	}
+	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy)
+		p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
 }
@@ -2374,20 +2450,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
+	struct ecore_queue_cid *p_cid;
+	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
+	u32 cid = 0;
 	enum _ecore_status_t rc;
 	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
+	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* In case this is a legacy VF - need to know to use the right cids.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->tx_qid];
 
@@ -2397,29 +2487,42 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->tx_qid,
-						    &params);
-	if (p_queue->p_tx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '1' for Tx.
+	 */
+	qid_usage_idx = 1;
+
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->tx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
 				    vf->relative_vf_id);
-	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
 					req->pbl_addr, req->pbl_size, pq);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn,
-					    p_queue->p_tx_cid);
-		p_queue->p_tx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
 		status = PFVF_STATUS_SUCCESS;
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = true;
+		cid = p_cid->cid;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf, status);
+	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf,
+					cid, status);
 }
 
 static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -2428,26 +2531,38 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
-	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int qid;
+	int qid, i;
 
+	/* TODO - improve validation [wrap around] */
 	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		p_queue = &vf->vf_queues[qid];
-
-		if (!p_queue->p_rx_cid)
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+		struct ecore_queue_cid **pp_cid = OSAL_NULL;
+
+		/* There can be at most a single Rx per qzone. Find it */
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid &&
+			    !p_queue->cids[i].b_is_tx) {
+				pp_cid = &p_queue->cids[i].p_cid;
+				break;
+			}
+		}
+		if (pp_cid == OSAL_NULL) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Ignoring VF[%02x] request of closing Rx queue %04x - closed\n",
+				   vf->relative_vf_id, qid);
 			continue;
+		}
 
-		rc = ecore_eth_rx_queue_stop(p_hwfn,
-					     p_queue->p_rx_cid,
+		rc = ecore_eth_rx_queue_stop(p_hwfn, *pp_cid,
 					     false, cqe_completion);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
+		*pp_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2459,24 +2574,33 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_vf_q_info *p_queue;
-	int qid;
+	struct ecore_vf_queue *p_queue;
+	int qid, j;
 
-	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+	if (!ecore_iov_validate_txq(p_hwfn, vf, txq_id,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
+	    !ecore_iov_validate_txq(p_hwfn, vf, txq_id + num_txqs,
+				    ECORE_IOV_VALIDATE_Q_NA))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
 		p_queue = &vf->vf_queues[qid];
-		if (!p_queue->p_tx_cid)
-			continue;
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (p_queue->cids[j].p_cid == OSAL_NULL)
+				continue;
 
-		rc = ecore_eth_tx_queue_stop(p_hwfn,
-					     p_queue->p_tx_cid);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+			if (!p_queue->cids[j].b_is_tx)
+				continue;
+
+			rc = ecore_eth_tx_queue_stop(p_hwfn,
+						     p_queue->cids[j].p_cid);
+			if (rc != ECORE_SUCCESS)
+				return rc;
 
-		p_queue->p_tx_cid = OSAL_NULL;
+			p_queue->cids[j].p_cid = OSAL_NULL;
+		}
 	}
+
 	return rc;
 }
 
@@ -2538,33 +2662,32 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
-	u16 qid;
 	enum _ecore_status_t rc;
-	u8 i;
+	u16 i;
 
 	req = &mbx->req_virt->update_rxq;
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
-	/* Validaute inputs */
-	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
-	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
-		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
-			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
-		goto out;
+	/* Validate inputs */
+	for (i = req->rx_qid; i < req->rx_qid + req->num_rxqs; i++) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, i,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+				   vf->relative_vf_id, req->rx_qid,
+				   req->num_rxqs);
+			goto out;
+		}
 	}
 
 	for (i = 0; i < req->num_rxqs; i++) {
-		qid = req->rx_qid + i;
-
-		if (!vf->vf_queues[qid].p_rx_cid) {
-			DP_INFO(p_hwfn,
-				"VF[%d] rx_qid = %d isn`t active!\n",
-				vf->relative_vf_id, qid);
-			goto out;
-		}
+		struct ecore_vf_queue *p_queue;
+		u16 qid = req->rx_qid + i;
 
-		handlers[i] = vf->vf_queues[qid].p_rx_cid;
+		p_queue = &vf->vf_queues[qid];
+		handlers[i] = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+							    p_queue);
 	}
 
 	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
@@ -2796,8 +2919,11 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				(1 << p_rss_tlv->rss_table_size_log));
 
 	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_cid;
+
 		q_idx = p_rss_tlv->rss_ind_table[i];
-		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
 				   vf->relative_vf_id, q_idx);
@@ -2805,15 +2931,9 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 
-		if (!vf->vf_queues[q_idx].p_rx_cid) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
-				   vf->relative_vf_id, q_idx);
-			b_reject = true;
-			goto out;
-		}
-
-		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[q_idx]);
+		p_rss->rss_ind_table[i] = p_cid;
 	}
 
 	p_data->rss_params = p_rss;
@@ -3272,22 +3392,26 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	struct ecore_queue_cid *p_cid;
 	u16 rx_coal, tx_coal;
-	u16  qid;
+	u16 qid;
+	int i;
 
 	req = &mbx->req_virt->update_coalesce;
 
 	rx_coal = req->rx_coal;
 	tx_coal = req->tx_coal;
 	qid = req->qid;
-	p_cid = vf->vf_queues[qid].p_rx_cid;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    rx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
 	}
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    tx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
@@ -3296,7 +3420,11 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+
 	if (rx_coal) {
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[qid]);
+
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -3305,13 +3433,28 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 	}
+
+	/* TODO - in future, it might be possible to pass this in a per-cid
+	 * granularity. For now, do this for all Tx queues.
+	 */
 	if (tx_coal) {
-		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
-		if (rc != ECORE_SUCCESS) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
-				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
-			goto out;
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid == OSAL_NULL)
+				continue;
+
+			if (!p_queue->cids[i].b_is_tx)
+				continue;
+
+			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
+						    p_queue->cids[i].p_cid);
+			if (rc != ECORE_SUCCESS) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+					   "VF[%d]: Unable to set tx queue coalesce\n",
+					   vf->abs_vf_id);
+				goto out;
+			}
 		}
 	}
 
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 66e9271..3c2f58b 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -13,6 +13,7 @@
 #include "ecore_vfpf_if.h"
 #include "ecore_iov_api.h"
 #include "ecore_hsi_common.h"
+#include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
 	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
@@ -62,12 +63,18 @@ struct ecore_iov_vf_mbx {
 					 */
 };
 
-struct ecore_vf_q_info {
+struct ecore_vf_queue_cid {
+	bool b_is_tx;
+	struct ecore_queue_cid *p_cid;
+};
+
+/* Describes a qzone associated with the VF */
+struct ecore_vf_queue {
+	/* Input from upper-layer, mapping relative queue to queue-zone */
 	u16 fw_rx_qid;
-	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
-	struct ecore_queue_cid *p_tx_cid;
-	u8 fw_cid;
+
+	struct ecore_vf_queue_cid cids[MAX_QUEUES_PER_QZONE];
 };
 
 enum vf_state {
@@ -127,7 +134,7 @@ struct ecore_vf_info {
 	u8			num_mac_filters;
 	u8			num_vlan_filters;
 
-	struct ecore_vf_q_info	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_vf_queue	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
 	/* TODO - Only windows is using it - should be removed */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 8ce9340..ac72681 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1582,6 +1582,12 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
 	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
 }
 
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs)
+{
+	*num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs;
+}
+
 void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
 {
 	OSAL_MEMCPY(port_mac,
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index a6e5f32..be3a326 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -61,6 +61,15 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
 			   u8 *num_rxqs);
 
 /**
+ * @brief Get number of Tx queues allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_txqs - allocated Tx queues
+ */
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs);
+
+/**
  * @brief Get port mac address for VF
  *
  * @param p_hwfn
-- 
1.7.10.3


* [PATCH v4 57/62] net/qede/base: prevent race condition during unload
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (57 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 56/62] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 58/62] net/qede/base: semantic changes Rasesh Mody
                             ` (5 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Merge hw_stop and hw_reset into one function.
Prevent a race condition between MFW attentions and the pf stop command
during the unload flow, which causes an ASSERT.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
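
For reference, a condensed sketch of the per-hwfn ordering that
ecore_hw_stop() ends up with after this merge (PF path only; error
handling, the VF path and the CMT teardown are omitted):

	/* 1. Ask the MFW to begin the unload; after this, no new MFW
	 *    attentions are expected for this function.
	 */
	rc = ecore_mcp_unload_req(p_hwfn, p_ptt);

	/* 2. Flush any attention DPC still in flight, so that MFW work
	 *    cannot race with the pf-stop ramrod (the ASSERT this fixes).
	 */
	OSAL_DPC_SYNC(p_hwfn);

	/* 3. Close the PF against the FW and quiesce the datapath. */
	rc = ecore_sp_pf_stop(p_hwfn);

	/* 4. Verify no PF work is left pending in the QM, disable the PF
	 *    in the HW blocks, and only then complete the MFW handshake.
	 */
	ecore_verify_reg_val(p_hwfn, p_ptt, QM_REG_USG_CNT_PF_TX, 0);
	rc = ecore_mcp_unload_done(p_hwfn, p_ptt);
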
 drivers/net/qede/base/bcm_osal.h      |    1 +
 drivers/net/qede/base/ecore_dev.c     |  175 ++++++++++++++++-----------------
 drivers/net/qede/base/ecore_dev_api.h |    9 --
 drivers/net/qede/base/ecore_mcp.c     |   12 +++
 drivers/net/qede/base/ecore_mcp.h     |   11 +++
 drivers/net/qede/base/ecore_spq.c     |    3 +
 drivers/net/qede/qede_main.c          |   18 +---
 7 files changed, 116 insertions(+), 113 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 052a0cf..32c9b25 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -168,6 +168,7 @@ typedef pthread_mutex_t osal_mutex_t;
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
 #define OSAL_DPC_INIT(dpc, hwfn) nothing
 #define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2a621f7..d8e4ca2 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2050,7 +2050,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_DONE command\n");
+				  "Failed sending a LOAD_DONE command\n");
 			return mfw_rc;
 		}
 
@@ -2139,32 +2139,77 @@ void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
 	}
 }
 
+static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u32 addr, u32 expected_val)
+{
+	u32 val = ecore_rd(p_hwfn, p_ptt, addr);
+
+	if (val != expected_val) {
+		DP_NOTICE(p_hwfn, true,
+			  "Value at address 0x%08x is 0x%08x while the expected value is 0x%08x\n",
+			  addr, val, expected_val);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, t_rc;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	enum _ecore_status_t rc, rc2 = ECORE_SUCCESS;
 	int j;
 
 	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		p_hwfn = &p_dev->hwfns[j];
+		p_ptt = p_hwfn->p_main_ptt;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Stopping hw/fw\n");
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_int_cleanup(p_hwfn);
+			rc = ecore_vf_pf_reset(p_hwfn);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "ecore_vf_pf_reset failed. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
 			continue;
 		}
 
 		/* mark the hw as uninitialized... */
 		p_hwfn->hw_init_done = false;
 
+		/* Send unload command to MCP */
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_req(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_REQ command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+
+		OSAL_DPC_SYNC(p_hwfn);
+
+		/* After this point no MFW attentions are expected, which
+		 * prevents a race between pf stop and dcbx pf update.
+		 */
+
 		rc = ecore_sp_pf_stop(p_hwfn);
-		if (rc)
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to close PF against FW. Continue to stop HW to prevent illegal host access by the device\n");
+				  "Failed to close PF against FW [rc = %d]. Continue to stop HW to prevent illegal host access by the device.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 
 		/* perform debug action after PF stop was sent */
-		OSAL_AFTER_PF_STOP((void *)p_hwfn->p_dev, p_hwfn->my_id);
+		OSAL_AFTER_PF_STOP((void *)p_dev, p_hwfn->my_id);
 
 		/* close NIG to BRB gate */
 		ecore_wr(p_hwfn, p_ptt,
@@ -2191,20 +2236,48 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, true);
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
-	}
+
+		if (!p_dev->recov_in_prog) {
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_TX, 0);
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_OTHER, 0);
+			/* @@@TBD - assert on incorrect xCFC values (10.b) */
+		}
+
+		/* Disable PF in HW blocks */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
+		ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
+
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_done(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_DONE command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+	} /* hwfn loop */
 
 	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		p_ptt = ECORE_LEADING_HWFN(p_dev)->p_main_ptt;
+
 		/* Disable DMAE in PXP - in CMT, this should only be done for
 		 * first hw-function, and only after all transactions have
 		 * stopped for all active hw-functions.
 		 */
-		t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
-					     p_dev->hwfns[0].p_main_ptt, false);
-		if (t_rc != ECORE_SUCCESS)
-			rc = t_rc;
+		rc = ecore_change_pci_hwfn(p_hwfn, p_ptt, false);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "ecore_change_pci_hwfn failed. rc = %d.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 	}
 
-	return rc;
+	return rc2;
 }
 
 void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
@@ -2265,82 +2338,6 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
 }
 
-static enum _ecore_status_t ecore_reg_assert(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt, u32 reg,
-					     bool expected)
-{
-	u32 assert_val = ecore_rd(p_hwfn, p_ptt, reg);
-
-	if (assert_val != expected) {
-		DP_NOTICE(p_hwfn, true, "Value at address 0x%08x != 0x%08x\n",
-			  reg, expected);
-		return ECORE_UNKNOWN_ERROR;
-	}
-
-	return 0;
-}
-
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 unload_resp, unload_param;
-	int i;
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
-		if (IS_VF(p_dev)) {
-			rc = ecore_vf_pf_reset(p_hwfn);
-			if (rc)
-				return rc;
-			continue;
-		}
-
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Resetting hw/fw\n");
-
-		/* Check for incorrect states */
-		if (!p_dev->recov_in_prog) {
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_TX, 0);
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_OTHER, 0);
-			/* @@@TBD - assert on incorrect xCFC values (10.b) */
-		}
-
-		/* Disable PF in HW blocks */
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, DORQ_REG_PF_DB_ENABLE, 0);
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, QM_REG_PF_EN, 0);
-
-		if (p_dev->recov_in_prog) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-				   "Recovery is in progress -> skip sending unload_req/done\n");
-			break;
-		}
-
-		/* Send unload command to MCP */
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_REQ,
-				   DRV_MB_PARAM_UNLOAD_WOL_MCP,
-				   &unload_resp, &unload_param);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, true,
-				  "ecore_hw_reset: UNLOAD_REQ failed\n");
-			/* @@TBD - what to do? for now, assume ENG. */
-			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
-		}
-
-		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn,
-				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
-			/* @@@TBD - Should it really ASSERT here ? */
-			return rc;
-		}
-	}
-
-	return rc;
-}
-
 /* Free hwfn memory and resources acquired in hw_hwfn_prepare */
 static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index ce764d2..e64a768 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -151,15 +151,6 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev);
  */
 void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
 
-/**
- * @brief ecore_hw_reset -
- *
- * @param p_dev
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
-
 enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_SUCCESS,
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index a3a6ca1..a834ac7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -893,6 +893,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	u32 wol_param, mcp_resp, mcp_param;
+
+	/* @DPDK */
+	wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+
+	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+			     &mcp_resp, &mcp_param);
+}
+
 enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt)
 {
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 350d8a2..37d1835 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -171,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_REQ message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt);
+
+/**
  * @brief Sends a UNLOAD_DONE message to the MFW
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 016de74..3c1d05b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -190,6 +190,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
+	/* @@@TBD we zero the context until we have ilt_reset implemented. */
+	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
+
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 326e56f..74856c5 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -636,19 +636,6 @@ static int qed_nic_stop(struct ecore_dev *edev)
 	return rc;
 }
 
-static int qed_nic_reset(struct ecore_dev *edev)
-{
-	int rc;
-
-	rc = ecore_hw_reset(edev);
-	if (rc)
-		return rc;
-
-	ecore_resc_free(edev);
-
-	return 0;
-}
-
 static int qed_slowpath_stop(struct ecore_dev *edev)
 {
 #ifdef CONFIG_QED_SRIOV
@@ -667,10 +654,11 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
 		if (IS_QED_ETH_IF(edev))
 			qed_sriov_disable(edev, true);
 #endif
-		qed_nic_stop(edev);
 	}
 
-	qed_nic_reset(edev);
+	qed_nic_stop(edev);
+
+	ecore_resc_free(edev);
 	qed_stop_iov_task(edev);
 
 	return 0;
-- 
1.7.10.3


* [PATCH v4 58/62] net/qede/base: semantic changes
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (58 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 57/62] net/qede/base: prevent race condition during unload Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 59/62] net/qede/base: add support for arfs mode Rasesh Mody
                             ` (4 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Make APIs static and other semantic changes.
A step toward cleaning 'make C=1' with GCC 4.8.3.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
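
In short, the pattern applied throughout, shown here on
ecore_cxt_qm_iids() (one of the functions this patch converts):

	/* Before: external linkage, with a prototype exported from
	 * ecore_cxt.h although the only callers live in ecore_cxt.c.
	 */
	void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
			       struct ecore_qm_iids *iids);

	/* After: the header prototype is dropped and the definition
	 * gains internal linkage - exactly what sparse ('make C=1')
	 * suggests for symbols that can be made static.
	 */
	static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
				      struct ecore_qm_iids *iids);
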
 drivers/net/qede/base/ecore_cxt.c  |    5 +-
 drivers/net/qede/base/ecore_cxt.h  |   11 ----
 drivers/net/qede/base/ecore_dcbx.c |    2 +-
 drivers/net/qede/base/ecore_dev.c  |  109 ++++++++++++++++++------------------
 drivers/net/qede/base/ecore_l2.c   |   12 ++--
 drivers/net/qede/base/ecore_vf.c   |    2 +-
 6 files changed, 66 insertions(+), 75 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index f7b5672..1a2a701 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -327,7 +327,8 @@ static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
 	}
 }
 
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids)
+static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_qm_iids *iids)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_tid_seg *segs;
@@ -1945,7 +1946,7 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
+static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
 
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1128051..e678118 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -35,17 +35,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type);
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
 
-#ifndef LINUX_REMOVE
-/**
- * @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
- *
- * @param p_hwfn
- * @param iids [out], a structure holding all the counters
- */
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
-		       struct ecore_qm_iids *iids);
-#endif
-
 /**
  * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
  *
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 5ecc6b0..4f1b069 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -114,7 +114,7 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-void
+static void
 ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		      struct ecore_hwfn *p_hwfn,
 		      bool enable, u8 prio, u8 tc,
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d8e4ca2..865103c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -759,8 +759,8 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	enum _ecore_status_t rc;
 	bool b_rc;
+	enum _ecore_status_t rc;
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
@@ -1507,54 +1507,6 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
-					       struct ecore_ptt *p_ptt,
-					       int hw_mode)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
-			    hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
-
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (p_hwfn->p_dev->num_hwfns > 1) {
-			/* Activate OPTE in CMT */
-			u32 val;
-
-			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
-			val |= 0x10;
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
-				 0x55555555);
-		}
-
-		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
-	}
-#endif
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -1623,7 +1575,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
-	int rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
 	db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
@@ -1678,8 +1630,9 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
 	}
 
-	cond = ((rc) && (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
-	    (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
+	cond = ((rc != ECORE_SUCCESS) &&
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
 	if (cond || p_hwfn->dcbx_no_edpm) {
 		/* Either EDPM is disabled from user configuration, or it is
 		 * disabled via DCBx, or it is not mandatory and we failed to
@@ -1703,7 +1656,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		"disabled" : "enabled");
 
 	/* Check return codes from above calls */
-	if (rc) {
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to allocate enough DPIs\n");
 		return ECORE_NORESOURCES;
@@ -1721,6 +1674,54 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       int hw_mode)
+{
+	enum _ecore_status_t rc	= ECORE_SUCCESS;
+
+	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
+			    hw_mode);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		if (ECORE_IS_AH(p_hwfn->p_dev))
+			return ECORE_SUCCESS;
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
+	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (p_hwfn->p_dev->num_hwfns > 1) {
+			/* Activate OPTE in CMT */
+			u32 val;
+
+			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
+			val |= 0x10;
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
+				 0x55555555);
+		}
+
+		ecore_emul_link_init(p_hwfn, p_ptt);
+	} else {
+		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
+	}
+#endif
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
@@ -1922,8 +1923,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	struct ecore_hwfn *p_hwfn;
 	bool b_default_mtu = true;
+	struct ecore_hwfn *p_hwfn;
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index adb5e47..c4af895 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -946,17 +946,17 @@ ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_producer)
+			    void OSAL_IOMEM * *pp_prod)
 {
 	u32 init_prod_val = 0;
 
-	*pp_producer = (u8 OSAL_IOMEM *)
-		       p_hwfn->regview +
-		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
+	*pp_prod = (u8 OSAL_IOMEM *)
+		    p_hwfn->regview +
+		    GTT_BAR0_MAP_REG_MSDM_RAM +
+		    MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
 	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index ac72681..f4d331c 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1285,8 +1285,8 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
 	struct vfpf_first_tlv *req;
-	enum _ecore_status_t rc;
 	u32 size;
+	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
-- 
1.7.10.3


* [PATCH v4 59/62] net/qede/base: add support for arfs mode
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (59 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 58/62] net/qede/base: semantic changes Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 60/62] net/qede: add ntuple and flow director filter support Rasesh Mody
                             ` (3 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil

From: Harish Patil <harish.patil@qlogic.com>

Add base driver APIs to enable accelerated RFS [aRFS] mode and a ramrod
to configure RFS and ntuple filters.

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
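
Caller-side usage of the two new APIs would look roughly like the
sketch below; p_hwfn/p_ptt are assumed to be already acquired, and
p_addr/len/qid/vport_id stand in for the caller's own values:

	struct ecore_arfs_config_params cfg;
	enum _ecore_status_t rc;

	/* Enable aRFS for TCPv4 flows only */
	OSAL_MEMSET(&cfg, 0, sizeof(cfg));
	cfg.arfs_enable = true;
	cfg.tcp = true;
	cfg.ipv4 = true;
	ecore_arfs_mode_configure(p_hwfn, p_ptt, &cfg);

	/* Add one 4-tuple filter steering matches to Rx queue 'qid'.
	 * 'p_addr' is the DMA-mapped packet header and 'len' covers it
	 * up to past the transport header. Passing OSAL_NULL for p_cb
	 * selects ECORE_SPQ_MODE_EBLOCK completion.
	 */
	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_ptt, OSAL_NULL,
					       p_addr, len, qid, vport_id,
					       true /* b_is_add */);
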
 drivers/net/qede/base/ecore_cxt.c           |   49 +++++++++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.c |   31 ++++++++++
 drivers/net/qede/base/ecore_init_fw_funcs.h |   11 ++++
 drivers/net/qede/base/ecore_l2.c            |   84 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_l2.h            |   27 +++++++++
 drivers/net/qede/base/ecore_l2_api.h        |   22 +++++++
 drivers/net/qede/base/ecore_proto_if.h      |    6 ++
 drivers/net/qede/base/ecore_spq.h           |    1 +
 8 files changed, 218 insertions(+), 13 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 1a2a701..80ad102 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -192,9 +192,6 @@ struct ecore_cxt_mngr {
 	 */
 	u32 vf_count;
 
-	/* total number of SRQ's for this hwfn */
-	u32				srq_count;
-
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
 	/* TBD - do we want this allocated to reserve space? */
@@ -213,10 +210,29 @@ struct ecore_cxt_mngr {
 	u32 t2_num_pages;
 	u64 first_free;
 	u64 last_free;
+
+	/* The infrastructure originally was very generic and context/task
+	 * oriented - per connection-type we would set how many of those
+	 * are needed, and later when determining how much memory we're
+	 * needing for a given block we'd iterate over all the relevant
+	 * connection-types.
+	 * But since then we've had some additional resources, some of which
+	 * require memory which is independent of the general context/task
+	 * scheme. We add those here explicitly per-feature.
+	 */
+
+	/* total number of SRQ's for this hwfn */
+	u32				srq_count;
+
+	/* Maximal number of L2 steering filters */
+	u32				arfs_count;
+
+	/* TODO - VF arfs filters ? */
 };
 
 /* check if resources/configuration is required according to protocol type */
-static OSAL_INLINE bool src_proto(enum protocol_type type)
+static OSAL_INLINE bool src_proto(struct ecore_hwfn *p_hwfn,
+				  enum protocol_type type)
 {
 	return type == PROTOCOLID_TOE;
 }
@@ -254,18 +270,22 @@ struct ecore_src_iids {
 	u32 per_vf_cids;
 };
 
-static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
+static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_hwfn *p_hwfn,
+					   struct ecore_cxt_mngr *p_mngr,
 					   struct ecore_src_iids *iids)
 {
 	u32 i;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		if (!src_proto(i))
+		if (!src_proto(p_hwfn, i))
 			continue;
 
 		iids->pf_cids += p_mngr->conn_cfg[i].cid_count;
 		iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf;
 	}
+
+	/* Add L2 filtering filters in addition */
+	iids->pf_cids += p_mngr->arfs_count;
 }
 
 /* counts the iids for the Timers block configuration */
@@ -686,7 +706,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	/* SRC */
 	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 
 	/* Both the PF and VFs searcher connections are stored in the per PF
 	 * database. Thus sum the PF searcher cids and all the VFs searcher
@@ -800,7 +820,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_src->active)
 		return ECORE_SUCCESS;
 
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	total_size = conn_num * sizeof(struct src_ent);
 
@@ -1619,7 +1639,7 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 	struct ecore_src_iids src_iids;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	if (!conn_num)
 		return;
@@ -1635,6 +1655,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 			 p_hwfn->p_cxt_mngr->first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
 			 p_hwfn->p_cxt_mngr->last_free);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+		   "Configured SEARCHER for 0x%08x connections\n",
+		   conn_num);
 }
 
 /* Timers PF */
@@ -1978,10 +2001,10 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			 * As of now, allocates 16 * 2 per-VF [to retain regular
 			 * functionality].
 			 */
-			ecore_cxt_set_proto_cid_count(p_hwfn,
-				PROTOCOLID_ETH,
-				p_params->num_cons, 32);
-
+			ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+						      p_params->num_cons, 32);
+			p_hwfn->p_cxt_mngr->arfs_count =
+						p_params->num_arfs_filters;
 			break;
 		}
 	default:
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index af0deaa..004ab35 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1497,6 +1497,37 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+	struct ecore_ptt *p_ptt,
+	u16 pf_id)
+{
+	union gft_cam_line_union cam_line;
+	struct gft_ram_line ram_line;
+	u32 i, *ram_line_ptr;
+
+	ram_line_ptr = (u32 *)&ram_line;
+
+	/* Stop using gft logic, disable gft search */
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, 0x0);
+
+	/* Clean ram & cam for next rfs/gft session*/
+
+	/* Zero camline */
+	OSAL_MEMSET(&cam_line, 0, sizeof(cam_line));
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
+					cam_line.cam_line_mapped.camline);
+
+	/* Zero ramline */
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
+	/* Each iteration write to reg */
+	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+			 RAM_LINE_SIZE * pf_id +
+			 i * REG_SIZE, *(ram_line_ptr + i));
+}
+
 
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 2d1ab7c..4da3fc2 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -351,6 +351,17 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
+ * @brief ecore_set_rfs_mode_disable - Disable and configure HW for RFS
+ *
+ * @param p_hwfn -   HW device data
+ * @param p_ptt -   ptt window used for writing the registers.
+ * @param pf_id - pf on which to disable RFS.
+ */
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id);
+
+/**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
 * @param p_ptt	- ptt window used for writing the registers.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index c4af895..4ab8fd5 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2018,3 +2018,87 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 	else
 		_ecore_get_vport_stats(p_dev, p_dev->reset_stats);
 }
+
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params)
+{
+	if (p_cfg_params->arfs_enable) {
+		ecore_set_rfs_mode_enable(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+					  p_cfg_params->tcp,
+					  p_cfg_params->udp,
+					  p_cfg_params->ipv4,
+					  p_cfg_params->ipv6);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "tcp = %s, udp = %s, ipv4 = %s, ipv6 =%s\n",
+			   p_cfg_params->tcp ? "Enable" : "Disable",
+			   p_cfg_params->udp ? "Enable" : "Disable",
+			   p_cfg_params->ipv4 ? "Enable" : "Disable",
+			   p_cfg_params->ipv6 ? "Enable" : "Disable");
+	} else {
+		ecore_set_rfs_mode_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode : %s\n",
+		   p_cfg_params->arfs_enable ? "Enable" : "Disable");
+}
+
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add)
+{
+	struct rx_update_gft_filter_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+	u8 abs_vport_id = 0;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+	if (p_cb) {
+		init_data.comp_mode = ECORE_SPQ_MODE_CB;
+		init_data.p_comp_data = p_cb;
+	} else {
+		init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+	}
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_GFT_UPDATE_FILTER,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_update_gft;
+
+	DMA_REGPAIR_LE(p_ramrod->pkt_hdr_addr, p_addr);
+	p_ramrod->pkt_hdr_length = OSAL_CPU_TO_LE16(length);
+	p_ramrod->rx_qid_or_action_icid = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->filter_type = RFS_FILTER_TYPE;
+	p_ramrod->filter_action = b_is_add ? GFT_ADD_FILTER
+					   : GFT_DELETE_FILTER;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "V[%0x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n",
+		   abs_vport_id, abs_rx_q_id,
+		   b_is_add ? "Adding" : "Removing",
+		   (unsigned long)p_addr, length);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 3f86eac..7fe4cbc 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -129,4 +129,31 @@ ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
+/**
+ * @brief - ecore_configure_rfs_ntuple_filter
+ *
+ * This ramrod should be used to add or remove an aRFS HW filter.
+ *
+ * @params p_hwfn
+ * @params p_ptt
+ * @params p_cb		Used for ECORE_SPQ_MODE_CB, where the client would
+ *			initialize it with a cookie and callback function
+ *			address; if not using this mode, pass NULL.
+ * @params p_addr	p_addr is an actual packet header that needs to be
+ *			filtered. It has to be mapped with IO to be read prior
+ *			to calling this [contains 4 tuples - src ip, dest ip,
+ *			src port, dest port].
+ * @params length	length of p_addr header up to past the transport header.
+ * @params qid		receive packet will be directed to this queue.
+ * @params vport_id
+ * @params b_is_add	flag to add or remove filter.
+ *
+ */
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 5a7db76..d09f3c4 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -141,6 +141,14 @@ struct ecore_filter_accept_flags {
 #define ECORE_ACCEPT_BCAST		0x20
 };
 
+struct ecore_arfs_config_params {
+	bool tcp;
+	bool udp;
+	bool ipv4;
+	bool ipv6;
+	bool arfs_enable;	/* Enable or disable arfs mode */
+};
+
 /* Add / remove / move / remove-all unicast MAC-VLAN filters.
  * FW will assert in the following cases, so driver should take care...:
  * 1. Adding a filter to a full table.
@@ -414,4 +422,18 @@ void ecore_get_vport_stats(struct ecore_dev *p_dev,
 
 void ecore_reset_vport_stats(struct ecore_dev *p_dev);
 
+/**
+ *@brief ecore_arfs_mode_configure -
+ *
+ *Enable or disable RFS mode. At least one of tcp or udp, and at least one
+ *of ipv4 or ipv6, must be true to enable RFS mode.
+ *
+ *@param p_hwfn
+ *@param p_ptt
+ *@param p_cfg_params		arfs mode configuration parameters.
+ *
+ */
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params);
 #endif
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 0ac153f..226e3d2 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -21,6 +21,12 @@ struct ecore_eth_pf_params {
 	 * to update_pf_params routine invoked before slowpath start
 	 */
 	u16	num_cons;
+
+	/* To enable aRFS, a positive number needs to be set here prior
+	 * to HW-init [as filters require allocated searcher ILT memory].
+	 * This sets the maximal number of configured steering-filters.
+	 */
+	u32	num_arfs_filters;
 };
 
 /* Most of the parameters below are described in the FW iSCSI / TCP HSI */
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index e2468b7..e530f83 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -26,6 +26,7 @@ union ramrod_data {
 	struct tx_queue_stop_ramrod_data		tx_queue_stop;
 	struct vport_start_ramrod_data			vport_start;
 	struct vport_stop_ramrod_data			vport_stop;
+	struct rx_update_gft_filter_data		rx_update_gft;
 	struct vport_update_ramrod_data			vport_update;
 	struct core_rx_start_ramrod_data		core_rx_queue_start;
 	struct core_rx_stop_ramrod_data			core_rx_queue_stop;
-- 
1.7.10.3
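
As a condensed sketch (illustrative only, not part of the patch), a consumer
could chain the two base-driver entry points declared above as follows; all
handles are assumed to be valid, and p_addr must already be an IO-mapped
header template of pkt_len bytes:

	static enum _ecore_status_t
	sketch_add_arfs_filter(struct ecore_hwfn *p_hwfn,
			       struct ecore_ptt *p_ptt,
			       dma_addr_t p_addr, u16 pkt_len,
			       u16 rx_qid, u8 vport_id)
	{
		struct ecore_arfs_config_params cfg;

		/* Enable the aRFS searcher for TCP-over-IPv4 flows */
		OSAL_MEMSET(&cfg, 0, sizeof(cfg));
		cfg.tcp = true;		/* at least one of tcp/udp ... */
		cfg.ipv4 = true;	/* ... and one of ipv4/ipv6 */
		cfg.arfs_enable = true;
		ecore_arfs_mode_configure(p_hwfn, p_ptt, &cfg);

		/* NULL completion data selects ECORE_SPQ_MODE_EBLOCK */
		return ecore_configure_rfs_ntuple_filter(p_hwfn, p_ptt, NULL,
							 p_addr, pkt_len,
							 rx_qid, vport_id,
							 true /* b_is_add */);
	}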


* [PATCH v4 60/62] net/qede: add ntuple and flow director filter support
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (60 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 59/62] net/qede/base: add support for arfs mode Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 61/62] net/qede: add LRO/TSO offloads support Rasesh Mody
                             ` (2 subsequent siblings)
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil

From: Harish Patil <harish.patil@qlogic.com>

Add limited support for ntuple filter and flow director configuration.
The filtering is based on the 4-tuple: src-ip, dst-ip, src-port and
dst-port. The mask fields, tcp_flags, flex masks, priority fields,
Rx queue drop, etc. are not supported.
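
As a usage illustration (not part of this patch), an application could
program such a filter through the legacy filter-ctrl API roughly as
follows; the port id, addresses, ports and queue are made-up values, and
the mask fields are left zeroed since this PMD does not honor them:

	#include <string.h>
	#include <netinet/in.h>		/* IPPROTO_TCP */
	#include <rte_byteorder.h>
	#include <rte_ip.h>		/* IPv4() helper */
	#include <rte_ethdev.h>
	#include <rte_eth_ctrl.h>

	/* Steer TCP 192.0.2.1:1024 -> 192.0.2.2:80 flows to Rx queue 2 */
	static int add_tcp_ntuple_filter(uint8_t port_id)
	{
		struct rte_eth_ntuple_filter ntuple;

		memset(&ntuple, 0, sizeof(ntuple));
		ntuple.flags = RTE_5TUPLE_FLAGS;
		ntuple.proto = IPPROTO_TCP;
		ntuple.src_ip = rte_cpu_to_be_32(IPv4(192, 0, 2, 1));
		ntuple.dst_ip = rte_cpu_to_be_32(IPv4(192, 0, 2, 2));
		ntuple.src_port = rte_cpu_to_be_16(1024);
		ntuple.dst_port = rte_cpu_to_be_16(80);
		ntuple.queue = 2;	/* must be a valid Rx queue */

		/* qede converts this internally into an fdir entry */
		return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
					       RTE_ETH_FILTER_ADD, &ntuple);
	}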

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini |    2 +
 doc/guides/nics/qede.rst          |    1 +
 drivers/net/qede/Makefile         |    1 +
 drivers/net/qede/base/ecore.h     |    3 +
 drivers/net/qede/qede_ethdev.c    |   16 +-
 drivers/net/qede/qede_ethdev.h    |   39 +++
 drivers/net/qede/qede_fdir.c      |  487 +++++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_main.c      |   23 +-
 8 files changed, 563 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/qede/qede_fdir.c

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 8858e5d..b688914 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -34,3 +34,5 @@ Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
 Usage doc            = Y
+N-tuple filter       = Y
+Flow director        = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 36b26b3..df0aaec 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -60,6 +60,7 @@ Supported Features
 - Multiprocess aware
 - Scatter-Gather
 - VXLAN tunneling offload
+- N-tuple filter and flow director (limited support)
 
 Non-supported Features
 ----------------------
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 29b443d..aae6bd2 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -99,6 +99,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_fdir.c
 
 # dependent libs:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index fab8193..31470b6 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -602,6 +602,9 @@ struct ecore_hwfn {
 
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
+
+	/* @DPDK */
+	struct ecore_ptt		*p_arfs_ptt;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index bd190d0..22b528d 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -924,6 +924,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		return -EINVAL;
 	}
 
+	/* Flow director mode check */
+	rc = qede_check_fdir_support(eth_dev);
+	if (rc) {
+		qdev->ops->vport_stop(edev, 0);
+		qede_dealloc_fp_resc(eth_dev);
+		return -EINVAL;
+	}
+	SLIST_INIT(&qdev->fdir_info.fdir_list_head);
+
 	SLIST_INIT(&qdev->vlan_list_head);
 
 	/* Add primary mac for PF */
@@ -1124,6 +1133,8 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
+	qede_fdir_dealloc_resc(eth_dev);
+
 	/* dev_stop() shall cleanup fp resources in hw but without releasing
 	 * dma memories and sw structures so that dev_start() can be called
 	 * by the app without reconfiguration. However, in dev_close() we
@@ -1962,11 +1973,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
 		}
 		break;
 	case RTE_ETH_FILTER_FDIR:
+		return qede_fdir_filter_conf(eth_dev, filter_op, arg);
+	case RTE_ETH_FILTER_NTUPLE:
+		return qede_ntuple_filter_conf(eth_dev, filter_op, arg);
 	case RTE_ETH_FILTER_MACVLAN:
 	case RTE_ETH_FILTER_ETHERTYPE:
 	case RTE_ETH_FILTER_FLEXIBLE:
 	case RTE_ETH_FILTER_SYN:
-	case RTE_ETH_FILTER_NTUPLE:
 	case RTE_ETH_FILTER_HASH:
 	case RTE_ETH_FILTER_L2_TUNNEL:
 	case RTE_ETH_FILTER_MAX:
@@ -2057,6 +2070,7 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 
 	memset(&pf_params, 0, sizeof(struct ecore_pf_params));
 	pf_params.eth_pf_params.num_cons = QEDE_PF_NUM_CONNS;
+	pf_params.eth_pf_params.num_arfs_filters = QEDE_RFS_MAX_FLTR;
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index be54f31..8342b99 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -34,6 +34,8 @@
 #include "base/nvm_cfg.h"
 #include "base/ecore_iov_api.h"
 #include "base/ecore_sp_commands.h"
+#include "base/ecore_l2.h"
+#include "base/ecore_dev_api.h"
 
 #include "qede_logs.h"
 #include "qede_if.h"
@@ -131,6 +133,9 @@ extern char fw_file[];
 /* Number of PF connections - 32 RX + 32 TX */
 #define QEDE_PF_NUM_CONNS		(64)
 
+/* Maximum number of flowdir filters */
+#define QEDE_RFS_MAX_FLTR		(256)
+
 /* Port/function states */
 enum qede_dev_state {
 	QEDE_DEV_INIT, /* Init the chip and Slowpath */
@@ -156,6 +161,21 @@ struct qede_ucast_entry {
 	SLIST_ENTRY(qede_ucast_entry) list;
 };
 
+struct qede_fdir_entry {
+	uint32_t soft_id; /* unused for now */
+	uint16_t pkt_len; /* actual packet length to match */
+	uint16_t rx_queue; /* queue to be steered to */
+	const struct rte_memzone *mz; /* mz used to hold L2 frame */
+	SLIST_ENTRY(qede_fdir_entry) list;
+};
+
+struct qede_fdir_info {
+	struct ecore_arfs_config_params arfs;
+	uint16_t filter_count;
+	SLIST_HEAD(fdir_list_head, qede_fdir_entry)fdir_list_head;
+};
+
+
 /*
  *  Structure to store private data for each port.
  */
@@ -190,6 +210,7 @@ struct qede_dev {
 	bool handle_hw_err;
 	uint16_t num_tunn_filters;
 	uint16_t vxlan_filter_type;
+	struct qede_fdir_info fdir_info;
 	char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 };
 
@@ -208,6 +229,11 @@ static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
 static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags);
 
+static uint16_t qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+					struct rte_eth_fdir_filter *fdir,
+					void *buff,
+					struct ecore_arfs_config_params *param);
+
 /* Non-static functions */
 void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
@@ -215,4 +241,17 @@ int qed_fill_eth_dev_info(struct ecore_dev *edev,
 				 struct qed_dev_eth_info *info);
 int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
 
+int qede_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type type,
+			 enum rte_filter_op op, void *arg);
+
+int qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+			  enum rte_filter_op filter_op, void *arg);
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op, void *arg);
+
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev);
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev);
+
 #endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_fdir.c b/drivers/net/qede/qede_fdir.c
new file mode 100644
index 0000000..f0dc73a
--- /dev/null
+++ b/drivers/net/qede/qede_fdir.c
@@ -0,0 +1,487 @@
+/*
+ * Copyright (c) 2017 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include <rte_udp.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_errno.h>
+
+#include "qede_ethdev.h"
+
+#define IP_VERSION				(0x40)
+#define IP_HDRLEN				(0x5)
+#define QEDE_FDIR_IP_DEFAULT_VERSION_IHL	(IP_VERSION | IP_HDRLEN)
+#define QEDE_FDIR_TCP_DEFAULT_DATAOFF		(0x50)
+#define QEDE_FDIR_IPV4_DEF_TTL			(64)
+
+/* Sum of the header lengths for L2, L3 and L4:
+ * L2 : ether_hdr + vlan_hdr + vxlan_hdr
+ * L3 : ipv6_hdr
+ * L4 : tcp_hdr
+ */
+#define QEDE_MAX_FDIR_PKT_LEN			(86)
+
+#ifndef IPV6_ADDR_LEN
+#define IPV6_ADDR_LEN				(16)
+#endif
+
+#define QEDE_VALID_FLOW(flow_type) \
+	((flow_type) == RTE_ETH_FLOW_FRAG_IPV4		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP	|| \
+	(flow_type) == RTE_ETH_FLOW_FRAG_IPV6		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP)
+
+/* Note: Flowdir support is only partial.
+ * For example: drop_queue, FDIR masks and flex_conf are not supported.
+ * Parameters like pballoc/status fields are irrelevant here.
+ */
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+
+	/* check FDIR modes */
+	switch (fdir->mode) {
+	case RTE_FDIR_MODE_NONE:
+		qdev->fdir_info.arfs.arfs_enable = false;
+		DP_INFO(edev, "flowdir is disabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT:
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			qdev->fdir_info.arfs.arfs_enable = false;
+			return -ENOTSUP;
+		}
+		qdev->fdir_info.arfs.arfs_enable = true;
+		DP_INFO(edev, "flowdir is enabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT_TUNNEL:
+	case RTE_FDIR_MODE_SIGNATURE:
+	case RTE_FDIR_MODE_PERFECT_MAC_VLAN:
+		DP_ERR(edev, "Unsupported flowdir mode %d\n", fdir->mode);
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_fdir_entry *tmp = NULL;
+
+	/* Pop entries off the head so that freed nodes are never
+	 * dereferenced by the iteration itself.
+	 */
+	while (!SLIST_EMPTY(&qdev->fdir_info.fdir_list_head)) {
+		tmp = SLIST_FIRST(&qdev->fdir_info.fdir_list_head);
+		if (tmp->mz)
+			rte_memzone_free(tmp->mz);
+		SLIST_REMOVE_HEAD(&qdev->fdir_info.fdir_list_head, list);
+		rte_free(tmp);
+	}
+}
+
+static int
+qede_config_cmn_fdir_filter(struct rte_eth_dev *eth_dev,
+			    struct rte_eth_fdir_filter *fdir_filter,
+			    bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	char mz_name[RTE_MEMZONE_NAMESIZE] = {0};
+	struct qede_fdir_entry *tmp = NULL;
+	struct qede_fdir_entry *fdir;
+	const struct rte_memzone *mz;
+	struct ecore_hwfn *p_hwfn;
+	enum _ecore_status_t rc;
+	uint16_t pkt_len;
+	uint16_t len;
+	void *pkt;
+
+	if (add) {
+		if (qdev->fdir_info.filter_count == QEDE_RFS_MAX_FLTR - 1) {
+			DP_ERR(edev, "Reached max flowdir filter limit\n");
+			return -EINVAL;
+		}
+		fdir = rte_malloc(NULL, sizeof(struct qede_fdir_entry),
+				  RTE_CACHE_LINE_SIZE);
+		if (!fdir) {
+			DP_ERR(edev, "Failed to allocate memory for fdir entry\n");
+			return -ENOMEM;
+		}
+	}
+	/* soft_id could have been used as memzone string, but soft_id is
+	 * not currently used so it has no significance.
+	 */
+	snprintf(mz_name, sizeof(mz_name) - 1, "%lx",
+		 (unsigned long)rte_get_timer_cycles());
+	mz = rte_memzone_reserve_aligned(mz_name, QEDE_MAX_FDIR_PKT_LEN,
+					 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+	if (!mz) {
+		DP_ERR(edev, "Failed to allocate memzone for fdir, err = %s\n",
+		       rte_strerror(rte_errno));
+		rc = -rte_errno;
+		goto err1;
+	}
+
+	pkt = mz->addr;
+	memset(pkt, 0, QEDE_MAX_FDIR_PKT_LEN);
+	pkt_len = qede_fdir_construct_pkt(eth_dev, fdir_filter, pkt,
+					  &qdev->fdir_info.arfs);
+	if (pkt_len == 0) {
+		rc = -EINVAL;
+		goto err2;
+	}
+	DP_INFO(edev, "pkt_len = %u memzone = %s\n", pkt_len, mz_name);
+	if (add) {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0) {
+				DP_ERR(edev, "flowdir filter already exists\n");
+				rc = -EEXIST;
+				goto err2;
+			}
+		}
+	} else {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0)
+				break;
+		}
+		if (!tmp) {
+			DP_ERR(edev, "flowdir filter does not exist\n");
+			rc = -ENOENT;
+			goto err2;
+		}
+	}
+	p_hwfn = ECORE_LEADING_HWFN(edev);
+	if (add) {
+		if (!qdev->fdir_info.arfs.arfs_enable) {
+			/* Force update */
+			eth_dev->data->dev_conf.fdir_conf.mode =
+						RTE_FDIR_MODE_PERFECT;
+			qdev->fdir_info.arfs.arfs_enable = true;
+			DP_INFO(edev, "Force enable flowdir in perfect mode\n");
+		}
+		/* Enable ARFS searcher with updated flow_types */
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	/* configure filter with ECORE_SPQ_MODE_EBLOCK */
+	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_hwfn->p_arfs_ptt, NULL,
+					       (dma_addr_t)mz->phys_addr,
+					       pkt_len,
+					       fdir_filter->action.rx_queue,
+					       0, add);
+	if (rc == ECORE_SUCCESS) {
+		if (add) {
+			fdir->rx_queue = fdir_filter->action.rx_queue;
+			fdir->pkt_len = pkt_len;
+			fdir->mz = mz;
+			SLIST_INSERT_HEAD(&qdev->fdir_info.fdir_list_head,
+					  fdir, list);
+			qdev->fdir_info.filter_count++;
+			DP_INFO(edev, "flowdir filter added, count = %d\n",
+				qdev->fdir_info.filter_count);
+		} else {
+			rte_memzone_free(tmp->mz);
+			SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+				     qede_fdir_entry, list);
+			rte_free(tmp); /* free the deleted node */
+			rte_memzone_free(mz); /* free the temp lookup copy */
+			qdev->fdir_info.filter_count--;
+			DP_INFO(edev, "Fdir filter deleted, count = %d\n",
+				qdev->fdir_info.filter_count);
+		}
+	} else {
+		DP_ERR(edev, "flowdir filter failed, rc=%d filter_count=%d\n",
+		       rc, qdev->fdir_info.filter_count);
+	}
+
+	/* Disable ARFS searcher if there are no more filters */
+	if (qdev->fdir_info.filter_count == 0) {
+		memset(&qdev->fdir_info.arfs, 0,
+		       sizeof(struct ecore_arfs_config_params));
+		DP_INFO(edev, "Disabling flowdir\n");
+		qdev->fdir_info.arfs.arfs_enable = false;
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	return 0;
+
+err2:
+	rte_memzone_free(mz);
+err1:
+	if (add)
+		rte_free(fdir);
+	return rc;
+}
+
+static int
+qede_fdir_filter_add(struct rte_eth_dev *eth_dev,
+		     struct rte_eth_fdir_filter *fdir,
+		     bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	if (!QEDE_VALID_FLOW(fdir->input.flow_type)) {
+		DP_ERR(edev, "invalid flow_type input\n");
+		return -EINVAL;
+	}
+
+	if (fdir->action.rx_queue >= QEDE_RSS_COUNT(qdev)) {
+		DP_ERR(edev, "invalid queue number %u\n",
+		       fdir->action.rx_queue);
+		return -EINVAL;
+	}
+
+	if (fdir->input.flow_ext.is_vf) {
+		DP_ERR(edev, "flowdir is not supported over VF\n");
+		return -EINVAL;
+	}
+
+	return qede_config_cmn_fdir_filter(eth_dev, fdir, add);
+}
+
+/* Fill the L3/L4 headers and return the actual flowdir packet length */
+static uint16_t
+qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+			struct rte_eth_fdir_filter *fdir,
+			void *buff,
+			struct ecore_arfs_config_params *params)
+
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	uint16_t *ether_type;
+	uint8_t *raw_pkt;
+	struct rte_eth_fdir_input *input;
+	static uint8_t vlan_frame[] = {0x81, 0, 0, 0};
+	struct ipv4_hdr *ip;
+	struct ipv6_hdr *ip6;
+	struct udp_hdr *udp;
+	struct tcp_hdr *tcp;
+	struct sctp_hdr *sctp;
+	uint8_t size, dst = 0;
+	uint16_t len;
+	static const uint8_t next_proto[] = {
+		[RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP,
+		[RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP,
+	};
+	raw_pkt = (uint8_t *)buff;
+	input = &fdir->input;
+	DP_INFO(edev, "flow_type %d\n", input->flow_type);
+
+	len = 2 * sizeof(struct ether_addr);
+	raw_pkt += 2 * sizeof(struct ether_addr);
+	if (input->flow_ext.vlan_tci) {
+		DP_INFO(edev, "adding VLAN header\n");
+		rte_memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
+		rte_memcpy(raw_pkt + sizeof(uint16_t),
+			   &input->flow_ext.vlan_tci,
+			   sizeof(uint16_t));
+		raw_pkt += sizeof(vlan_frame);
+		len += sizeof(vlan_frame);
+	}
+	ether_type = (uint16_t *)raw_pkt;
+	raw_pkt += sizeof(uint16_t);
+	len += sizeof(uint16_t);
+
+	/* fill the common ip header */
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV4:
+		ip = (struct ipv4_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		ip->version_ihl = QEDE_FDIR_IP_DEFAULT_VERSION_IHL;
+		ip->total_length = sizeof(struct ipv4_hdr);
+		ip->next_proto_id = input->flow.ip4_flow.proto ?
+				    input->flow.ip4_flow.proto :
+				    next_proto[input->flow_type];
+		ip->time_to_live = input->flow.ip4_flow.ttl ?
+				   input->flow.ip4_flow.ttl :
+				   QEDE_FDIR_IPV4_DEF_TTL;
+		ip->type_of_service = input->flow.ip4_flow.tos;
+		ip->dst_addr = input->flow.ip4_flow.dst_ip;
+		ip->src_addr = input->flow.ip4_flow.src_ip;
+		len += sizeof(struct ipv4_hdr);
+		params->ipv4 = true;
+		break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV6:
+		ip6 = (struct ipv6_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		ip6->proto = input->flow.ipv6_flow.proto ?
+					input->flow.ipv6_flow.proto :
+					next_proto[input->flow_type];
+		rte_memcpy(&ip6->src_addr, &input->flow.ipv6_flow.src_ip,
+			   IPV6_ADDR_LEN);
+		rte_memcpy(&ip6->dst_addr, &input->flow.ipv6_flow.dst_ip,
+			   IPV6_ADDR_LEN);
+		len += sizeof(struct ipv6_hdr);
+		break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %u\n",
+		       input->flow_type);
+		return 0;
+	}
+
+	/* fill the L4 header */
+	raw_pkt = (uint8_t *)buff;
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->dst_port = input->flow.udp4_flow.dst_port;
+		udp->src_port = input->flow.udp4_flow.src_port;
+		udp->dgram_len = sizeof(struct udp_hdr);
+		len += sizeof(struct udp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->src_port = input->flow.tcp4_flow.src_port;
+		tcp->dst_port = input->flow.tcp4_flow.dst_port;
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		len += sizeof(struct tcp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		tcp->src_port = input->flow.tcp6_flow.src_port;
+		tcp->dst_port = input->flow.tcp6_flow.dst_port;
+		/* adjust ip total_length */
+		len += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->src_port = input->flow.udp6_flow.src_port;
+		udp->dst_port = input->flow.udp6_flow.dst_port;
+		/* adjust ip total_length */
+		len += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %d\n", input->flow_type);
+		return 0;
+	}
+	return len;
+}
+
+int
+qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+		      enum rte_filter_op filter_op,
+		      void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_fdir_filter *fdir;
+	int ret;
+
+	fdir = (struct rte_eth_fdir_filter *)arg;
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query flowdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 1);
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 0);
+	break;
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_INFO:
+		return -ENOTSUP;
+	break;
+	default:
+		DP_ERR(edev, "unknown operation %u\n", filter_op);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op,
+			    void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_ntuple_filter *ntuple;
+	struct rte_eth_fdir_filter fdir_entry;
+	struct rte_eth_tcpv4_flow *tcpv4_flow;
+	struct rte_eth_udpv4_flow *udpv4_flow;
+	struct ecore_hwfn *p_hwfn;
+	bool add;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query fdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		add = true;
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		add = false;
+	break;
+	case RTE_ETH_FILTER_INFO:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_SET:
+	case RTE_ETH_FILTER_STATS:
+	case RTE_ETH_FILTER_OP_MAX:
+		DP_ERR(edev, "Unsupported filter_op %d\n", filter_op);
+		return -ENOTSUP;
+	}
+	ntuple = (struct rte_eth_ntuple_filter *)arg;
+	/* Internally convert ntuple to fdir entry */
+	memset(&fdir_entry, 0, sizeof(fdir_entry));
+	if (ntuple->proto == IPPROTO_TCP) {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_TCP;
+		tcpv4_flow = &fdir_entry.input.flow.tcp4_flow;
+		tcpv4_flow->ip.src_ip = ntuple->src_ip;
+		tcpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		tcpv4_flow->ip.proto = IPPROTO_TCP;
+		tcpv4_flow->src_port = ntuple->src_port;
+		tcpv4_flow->dst_port = ntuple->dst_port;
+	} else {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
+		udpv4_flow = &fdir_entry.input.flow.udp4_flow;
+		udpv4_flow->ip.src_ip = ntuple->src_ip;
+		udpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		udpv4_flow->ip.proto = IPPROTO_UDP;
+		udpv4_flow->src_port = ntuple->src_port;
+		udpv4_flow->dst_port = ntuple->dst_port;
+	}
+	return qede_config_cmn_fdir_filter(eth_dev, &fdir_entry, add);
+}
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 74856c5..307b33a 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -12,8 +12,6 @@
 
 #include "qede_ethdev.h"
 
-static uint8_t npar_tx_switching = 1;
-
 /* Alarm timeout. */
 #define QEDE_ALARM_TIMEOUT_US 100000
 
@@ -224,23 +222,34 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 static int qed_slowpath_start(struct ecore_dev *edev,
 			      struct qed_slowpath_params *params)
 {
-	bool allow_npar_tx_switching;
 	const uint8_t *data = NULL;
 	struct ecore_hwfn *hwfn;
 	struct ecore_mcp_drv_version drv_version;
 	struct ecore_hw_init_params hw_init_params;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
+	struct ecore_ptt *p_ptt;
 	int rc;
 
-#ifdef CONFIG_ECORE_BINARY_FW
 	if (IS_PF(edev)) {
+#ifdef CONFIG_ECORE_BINARY_FW
 		rc = qed_load_firmware_data(edev);
 		if (rc) {
 			DP_ERR(edev, "Failed to find fw file %s\n", fw_file);
 			goto err;
 		}
-	}
 #endif
+		hwfn = ECORE_LEADING_HWFN(edev);
+		if (edev->num_hwfns == 1) { /* skip aRFS for 100G device */
+			p_ptt = ecore_ptt_acquire(hwfn);
+			if (p_ptt) {
+				ECORE_LEADING_HWFN(edev)->p_arfs_ptt = p_ptt;
+			} else {
+				DP_ERR(edev, "Failed to acquire PTT for flowdir\n");
+				rc = -ENOMEM;
+				goto err;
+			}
+		}
+	}
 
 	rc = qed_nic_setup(edev);
 	if (rc)
@@ -268,13 +277,11 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		data = (const uint8_t *)edev->firmware + sizeof(u32);
 #endif
 
-	allow_npar_tx_switching = npar_tx_switching ? true : false;
-
 	/* Start the slowpath */
 	memset(&hw_init_params, 0, sizeof(hw_init_params));
 	hw_init_params.b_hw_start = true;
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
-	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
+	hw_init_params.allow_npar_tx_switch = true;
 	hw_init_params.bin_fw_data = data;
 	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
 	hw_init_params.avoid_eng_reset = false;
-- 
1.7.10.3


* [PATCH v4 61/62] net/qede: add LRO/TSO offloads support
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (61 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 60/62] net/qede: add ntuple and flow director filter support Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
  2017-03-28  6:52           ` [PATCH v4 62/62] net/qede: update PMD version to 2.4.0.1 Rasesh Mody
       [not found]           ` <1490683278-23776-1-git-send-email-y>
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil

From: Harish Patil <harish.patil@qlogic.com>

This patch includes the slowpath configuration and fastpath changes
needed to support LRO and TSO. A bit of revamping was needed in order
to reuse the existing packet classification scheme in the Rx fastpath
and the SG element processing in Tx.
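
As an illustration (not part of this patch), an application would opt in
to these offloads with the 17.05-era ethdev API roughly as follows; the
MSS and the header layout are made-up values:

	#include <rte_ethdev.h>
	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_mbuf.h>
	#include <rte_tcp.h>

	/* Device level: request LRO at configure time */
	static const struct rte_eth_conf port_conf = {
		.rxmode = {
			.enable_lro = 1, /* PMD also forces scattered Rx */
		},
	};

	/* Packet level: mark an IPv4/TCP mbuf (chain) for TSO */
	static void mark_for_tso(struct rte_mbuf *m)
	{
		m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
		m->l2_len = sizeof(struct ether_hdr);
		m->l3_len = sizeof(struct ipv4_hdr);
		m->l4_len = sizeof(struct tcp_hdr); /* assumes no TCP options */
		m->tso_segsz = 1448;		    /* illustrative MSS */
	}

Packets marked this way are then passed through rte_eth_tx_prepare()
(which lands on the qede_xmit_prep_pkts() hook added below) before
rte_eth_tx_burst().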

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini    |    2 +
 doc/guides/nics/features/qede_vf.ini |    2 +
 doc/guides/nics/qede.rst             |    2 +-
 drivers/net/qede/qede_eth_if.c       |    6 +-
 drivers/net/qede/qede_eth_if.h       |    3 +-
 drivers/net/qede/qede_ethdev.c       |   29 +-
 drivers/net/qede/qede_ethdev.h       |    3 +-
 drivers/net/qede/qede_rxtx.c         |  739 +++++++++++++++++++++++++---------
 drivers/net/qede/qede_rxtx.h         |   30 ++
 9 files changed, 605 insertions(+), 211 deletions(-)

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index b688914..fba5dc3 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -36,3 +36,5 @@ x86-64               = Y
 Usage doc            = Y
 N-tuple filter       = Y
 Flow director        = Y
+LRO                  = Y
+TSO                  = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index acb1b99..21ec40f 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -31,4 +31,6 @@ Stats per queue      = Y
 Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
+LRO                  = Y
+TSO                  = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index df0aaec..eacb3da 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -61,13 +61,13 @@ Supported Features
 - Scatter-Gather
 - VXLAN tunneling offload
 - N-tuple filter and flow director (limited support)
+- LRO/TSO
 
 Non-supported Features
 ----------------------
 
 - SR-IOV PF
 - GENEVE and NVGRE Tunneling offloads
-- LRO/TSO
 - NPAR
 
 Supported QLogic Adapters
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index 8e4290c..86bb129 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -18,8 +18,8 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		u8 tx_switching = 0;
 		struct ecore_sp_vport_start_params start = { 0 };
 
-		start.tpa_mode = p_params->gro_enable ? ECORE_TPA_MODE_GRO :
-		    ECORE_TPA_MODE_NONE;
+		start.tpa_mode = p_params->enable_lro ? ECORE_TPA_MODE_RSC :
+				ECORE_TPA_MODE_NONE;
 		start.remove_inner_vlan = p_params->remove_inner_vlan;
 		start.tx_switching = tx_switching;
 		start.only_untagged = false;	/* untagged only */
@@ -29,7 +29,6 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		start.concrete_fid = p_hwfn->hw_info.concrete_fid;
 		start.handle_ptp_pkts = p_params->handle_ptp_pkts;
 		start.vport_id = p_params->vport_id;
-		start.max_buffers_per_cqe = 16;	/* TODO-is this right */
 		start.mtu = p_params->mtu;
 		/* @DPDK - Disable FW placement */
 		start.zero_placement_offset = 1;
@@ -120,6 +119,7 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
 	sp_params.update_accept_any_vlan_flg =
 	    params->update_accept_any_vlan_flg;
 	sp_params.mtu = params->mtu;
+	sp_params.sge_tpa_params = params->sge_tpa_params;
 
 	for_each_hwfn(edev, i) {
 		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 12dd828..d845bac 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -59,12 +59,13 @@ struct qed_update_vport_params {
 	uint8_t accept_any_vlan;
 	uint8_t update_rss_flg;
 	uint16_t mtu;
+	struct ecore_sge_tpa_params *sge_tpa_params;
 };
 
 struct qed_start_vport_params {
 	bool remove_inner_vlan;
 	bool handle_ptp_pkts;
-	bool gro_enable;
+	bool enable_lro;
 	bool drop_ttl0;
 	uint8_t vport_id;
 	uint16_t mtu;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 22b528d..0762111 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -769,7 +769,7 @@ static int qede_init_vport(struct qede_dev *qdev)
 	int rc;
 
 	start.remove_inner_vlan = 1;
-	start.gro_enable = 0;
+	start.enable_lro = qdev->enable_lro;
 	start.mtu = ETHER_MTU + QEDE_ETH_OVERHEAD;
 	start.vport_id = 0;
 	start.drop_ttl0 = false;
@@ -866,11 +866,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	if (rxmode->enable_scatter == 1)
 		eth_dev->data->scattered_rx = 1;
 
-	if (rxmode->enable_lro == 1) {
-		DP_ERR(edev, "LRO is not supported\n");
-		return -EINVAL;
-	}
-
 	if (!rxmode->hw_strip_crc)
 		DP_INFO(edev, "L2 CRC stripping is always enabled in hw\n");
 
@@ -878,6 +873,13 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
 			      "in hw\n");
 
+	if (rxmode->enable_lro) {
+		qdev->enable_lro = true;
+		/* Enable scatter mode for LRO */
+		if (!rxmode->enable_scatter)
+			eth_dev->data->scattered_rx = 1;
+	}
+
 	/* Check for the port restart case */
 	if (qdev->state != QEDE_DEV_INIT) {
 		rc = qdev->ops->vport_stop(edev, 0);
@@ -957,13 +959,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 static const struct rte_eth_desc_lim qede_rx_desc_lim = {
 	.nb_max = NUM_RX_BDS_MAX,
 	.nb_min = 128,
-	.nb_align = 128	/* lowest common multiple */
+	.nb_align = 128 /* lowest common multiple */
 };
 
 static const struct rte_eth_desc_lim qede_tx_desc_lim = {
 	.nb_max = NUM_TX_BDS_MAX,
 	.nb_min = 256,
-	.nb_align = 256
+	.nb_align = 256,
+	.nb_seg_max = ETH_TX_MAX_BDS_PER_LSO_PACKET,
+	.nb_mtu_seg_max = ETH_TX_MAX_BDS_PER_NON_LSO_PACKET
 };
 
 static void
@@ -1005,12 +1009,16 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 				     DEV_RX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_RX_OFFLOAD_UDP_CKSUM	|
 				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_RX_OFFLOAD_TCP_LRO);
+
 	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
 				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_TX_OFFLOAD_UDP_CKSUM	|
 				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_TX_OFFLOAD_TCP_TSO |
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
 
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
@@ -2107,6 +2115,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->rx_pkt_burst = qede_recv_pkts;
 	eth_dev->tx_pkt_burst = qede_xmit_pkts;
+	eth_dev->tx_pkt_prepare = qede_xmit_prep_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		DP_NOTICE(edev, false,
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 8342b99..799a3ba 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -193,8 +193,7 @@ struct qede_dev {
 	uint16_t rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	uint64_t rss_hf;
 	uint8_t rss_key_len;
-	uint32_t flags;
-	bool gro_disable;
+	bool enable_lro;
 	uint16_t num_queues;
 	uint8_t fp_num_tx;
 	uint8_t fp_num_rx;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 85134fb..e72a693 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -6,10 +6,9 @@
  * See LICENSE.qede_pmd for copyright and licensing details.
  */
 
+#include <rte_net.h>
 #include "qede_rxtx.h"
 
-static bool gro_disable = 1;	/* mod_param */
-
 static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 {
 	struct rte_mbuf *new_mb = NULL;
@@ -352,7 +351,6 @@ static void qede_init_fp(struct qede_dev *qdev)
 		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
 	}
 
-	qdev->gro_disable = gro_disable;
 }
 
 void qede_free_fp_arrays(struct qede_dev *qdev)
@@ -509,6 +507,30 @@ qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
 	PMD_RX_LOG(DEBUG, rxq, "bd_prod %u  cqe_prod %u", bd_prod, cqe_prod);
 }
 
+static void
+qede_update_sge_tpa_params(struct ecore_sge_tpa_params *sge_tpa_params,
+			   uint16_t mtu, bool enable)
+{
+	/* Enable LRO in split mode */
+	sge_tpa_params->tpa_ipv4_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_en_flg = enable;
+	sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
+	/* set if tpa enable changes */
+	sge_tpa_params->update_tpa_en_flg = 1;
+	/* set if tpa parameters should be handled */
+	sge_tpa_params->update_tpa_param_flg = enable;
+
+	sge_tpa_params->max_buffers_per_cqe = 20;
+	sge_tpa_params->tpa_pkt_split_flg = 1;
+	sge_tpa_params->tpa_hdr_data_split_flg = 0;
+	sge_tpa_params->tpa_gro_consistent_flg = 0;
+	sge_tpa_params->tpa_max_aggs_num = ETH_TPA_MAX_AGGS_NUM;
+	sge_tpa_params->tpa_max_size = 0x7FFF;
+	sge_tpa_params->tpa_min_size_to_start = mtu / 2;
+	sge_tpa_params->tpa_min_size_to_cont = mtu / 2;
+}
+
 static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 {
 	struct qede_dev *qdev = eth_dev->data->dev_private;
@@ -516,6 +538,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	struct ecore_queue_start_common_params q_params;
 	struct qed_dev_info *qed_info = &qdev->dev_info.common;
 	struct qed_update_vport_params vport_update_params;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_tx_queue *txq;
 	struct qede_fastpath *fp;
 	dma_addr_t p_phys_table;
@@ -529,10 +552,10 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (fp->type & QEDE_FASTPATH_RX) {
 			struct ecore_rxq_start_ret_params ret_params;
 
-			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
-								rx_comp_ring);
-			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
-								rx_comp_ring);
+			p_phys_table =
+			    ecore_chain_get_pbl_phys(&fp->rxq->rx_comp_ring);
+			page_cnt =
+			    ecore_chain_get_page_cnt(&fp->rxq->rx_comp_ring);
 
 			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
@@ -625,6 +648,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		vport_update_params.tx_switching_flg = 1;
 	}
 
+	/* TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Enabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, true);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
+
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
@@ -761,6 +792,94 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
 		return RTE_PTYPE_UNKNOWN;
 }
 
+static inline void
+qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
+			     struct qede_rx_queue *rxq,
+			     struct eth_fast_path_rx_tpa_cont_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA cont[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Allocate RX mbufs on the RX BD ring for all the consumed ones */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for LRO cont\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+}
+
+static inline void
+qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
+			    struct qede_rx_queue *rxq,
+			    struct eth_fast_path_rx_tpa_end_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	struct rte_mbuf *rx_mb;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA End[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Allocate RX mbufs on the RX BD ring for all the consumed ones */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for lro end\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+
+	/* Update total length and frags based on end TPA */
+	rx_mb = rxq->tpa_info[cqe->tpa_agg_index].mbuf;
+	/* TBD: Add sanity checks here */
+	rx_mb->nb_segs = cqe->num_of_bds;
+	rx_mb->pkt_len = cqe->total_packet_len;
+	tpa_info->state = QEDE_AGG_STATE_NONE;
+}
+
 static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 {
 	uint32_t val;
@@ -875,13 +994,20 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t pkt_len; /* Sum of all BD segments */
 	uint16_t len; /* Length of first BD */
 	uint8_t num_segs = 1;
-	uint16_t pad;
 	uint16_t preload_idx;
 	uint8_t csum_flag;
 	uint16_t parse_flag;
 	enum rss_hash_type htype;
 	uint8_t tunn_parse_flag;
 	uint8_t j;
+	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
+	uint64_t ol_flags;
+	uint32_t packet_type;
+	uint16_t vlan_tci;
+	bool tpa_start_flg;
+	uint8_t bitfield_val;
+	uint8_t offset, tpa_agg_idx, flags;
+	struct qede_agg_info *tpa_info;
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
 	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -892,16 +1018,53 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return 0;
 
 	while (sw_comp_cons != hw_comp_cons) {
+		ol_flags = 0;
+		packet_type = RTE_PTYPE_UNKNOWN;
+		vlan_tci = 0;
+		tpa_start_flg = false;
+
 		/* Get the CQE from the completion ring */
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-
-		if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
-			PMD_RX_LOG(DEBUG, rxq, "Got a slowath CQE");
-
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+
+		switch (cqe_type) {
+		case ETH_RX_CQE_TYPE_REGULAR:
+			fp_cqe = &cqe->fast_path_regular;
+		break;
+		case ETH_RX_CQE_TYPE_TPA_START:
+			cqe_start_tpa = &cqe->fast_path_tpa_start;
+			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
+			tpa_start_flg = true;
+			PMD_RX_LOG(INFO, rxq,
+			    "TPA start[%u] - len %04x [header %02x]"
+			    " [bd_list[0] %04x], [seg_len %04x]\n",
+			    cqe_start_tpa->tpa_agg_index,
+			    rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
+			    cqe_start_tpa->header_len,
+			    rte_le_to_cpu_16(cqe_start_tpa->ext_bd_len_list[0]),
+			    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+
+		break;
+		case ETH_RX_CQE_TYPE_TPA_CONT:
+			qede_rx_process_tpa_cont_cqe(qdev, rxq,
+						     &cqe->fast_path_tpa_cont);
+			continue;
+		case ETH_RX_CQE_TYPE_TPA_END:
+			qede_rx_process_tpa_end_cqe(qdev, rxq,
+						    &cqe->fast_path_tpa_end);
+			tpa_agg_idx = cqe->fast_path_tpa_end.tpa_agg_index;
+			rx_mb = rxq->tpa_info[tpa_agg_idx].mbuf;
+			PMD_RX_LOG(INFO, rxq, "TPA end reason %d\n",
+				   cqe->fast_path_tpa_end.end_reason);
+			goto tpa_end;
+		case ETH_RX_CQE_TYPE_SLOW_PATH:
+			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
 			qdev->ops->eth_cqe_completion(edev, fp->id,
 				(struct eth_slow_path_rx_cqe *)cqe);
+			/* fall-thru */
+		default:
 			goto next_cqe;
 		}
 
@@ -910,69 +1073,93 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rx_mb = rxq->sw_rx_ring[sw_rx_index].mbuf;
 		assert(rx_mb != NULL);
 
-		/* non GRO */
-		fp_cqe = &cqe->fast_path_regular;
-
-		len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
-		pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
-		pad = fp_cqe->placement_offset;
-		assert((len + pad) <= rx_mb->buf_len);
-
-		PMD_RX_LOG(DEBUG, rxq,
-			   "CQE type = 0x%x, flags = 0x%x, vlan = 0x%x"
-			   " len = %u, parsing_flags = %d",
-			   cqe_type, fp_cqe->bitfields,
-			   rte_le_to_cpu_16(fp_cqe->vlan_tag),
-			   len, rte_le_to_cpu_16(fp_cqe->pars_flags.flags));
-
-		/* If this is an error packet then drop it */
-		parse_flag =
-		    rte_le_to_cpu_16(cqe->fast_path_regular.pars_flags.flags);
-
-		rx_mb->ol_flags = 0;
-
+		/* Handle regular CQE or TPA start CQE */
+		if (!tpa_start_flg) {
+			parse_flag = rte_le_to_cpu_16(fp_cqe->pars_flags.flags);
+			bitfield_val = fp_cqe->bitfields;
+			offset = fp_cqe->placement_offset;
+			len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
+			pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
+		} else {
+			parse_flag =
+			    rte_le_to_cpu_16(cqe_start_tpa->pars_flags.flags);
+			bitfield_val = cqe_start_tpa->bitfields;
+			offset = cqe_start_tpa->placement_offset;
+			/* seg_len = len_on_first_bd */
+			len = rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd);
+			tpa_info->start_cqe_bd_len = len +
+						cqe_start_tpa->header_len;
+			tpa_info->mbuf = rx_mb;
+		}
 		if (qede_tunn_exist(parse_flag)) {
-			PMD_RX_LOG(DEBUG, rxq, "Rx tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			} else {
-				tunn_parse_flag =
-						fp_cqe->tunnel_pars_flags.flags;
-				rx_mb->packet_type =
-					qede_rx_cqe_to_tunn_pkt_type(
-							tunn_parse_flag);
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				if (tpa_start_flg)
+					flags =
+					 cqe_start_tpa->tunnel_pars_flags.flags;
+				else
+					flags = fp_cqe->tunnel_pars_flags.flags;
+				tunn_parse_flag = flags;
+				packet_type =
+				qede_rx_cqe_to_tunn_pkt_type(tunn_parse_flag);
 			}
 		} else {
-			PMD_RX_LOG(DEBUG, rxq, "Rx non-tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx non-tunneled packet\n");
 			if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-			} else if (unlikely(qede_check_notunn_csum_l3(rx_mb,
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			} else {
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			}
+			if (unlikely(qede_check_notunn_csum_l3(rx_mb,
 							parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					   "IP csum failed, flags = 0x%x",
+					   "IP csum failed, flags = 0x%x\n",
 					   parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				ol_flags |= PKT_RX_IP_CKSUM_BAD;
 			} else {
-				rx_mb->packet_type =
+				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				packet_type =
 					qede_rx_cqe_to_pkt_type(parse_flag);
 			}
 		}
 
-		PMD_RX_LOG(INFO, rxq, "packet_type 0x%x", rx_mb->packet_type);
+		if (CQE_HAS_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_VLAN_PKT;
+		}
+
+		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_QINQ_PKT;
+			rx_mb->vlan_tci_outer = 0;
+		}
+
+		/* RSS Hash */
+		htype = (uint8_t)GET_FIELD(bitfield_val,
+					ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
+		if (qdev->rss_enable && htype) {
+			ol_flags |= PKT_RX_RSS_HASH;
+			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
+			PMD_RX_LOG(INFO, rxq, "Hash result 0x%x\n",
+				   rx_mb->hash.rss);
+		}
 
 		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
 			PMD_RX_LOG(ERR, rxq,
 				   "New buffer allocation failed,"
-				   "dropping incoming packet");
+				   " dropping incoming packet\n");
 			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
 			rte_eth_devices[rxq->port_id].
 			    data->rx_mbuf_alloc_failed++;
@@ -980,7 +1167,8 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 		}
 		qede_rx_bd_ring_consume(rxq);
-		if (fp_cqe->bd_num > 1) {
+
+		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
 			PMD_RX_LOG(DEBUG, rxq, "Jumbo-over-BD packet: %02x BDs"
 				   " len on first: %04x Total Len: %04x",
 				   fp_cqe->bd_num, len, pkt_len);
@@ -1008,40 +1196,24 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rte_prefetch0(rxq->sw_rx_ring[preload_idx].mbuf);
 
 		/* Update rest of the MBUF fields */
-		rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
-		rx_mb->nb_segs = fp_cqe->bd_num;
-		rx_mb->data_len = len;
-		rx_mb->pkt_len = pkt_len;
+		rx_mb->data_off = offset + RTE_PKTMBUF_HEADROOM;
 		rx_mb->port = rxq->port_id;
-
-		htype = (uint8_t)GET_FIELD(fp_cqe->bitfields,
-				ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
-		if (qdev->rss_enable && htype) {
-			rx_mb->ol_flags |= PKT_RX_RSS_HASH;
-			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
-			PMD_RX_LOG(DEBUG, rxq, "Hash result 0x%x",
-				   rx_mb->hash.rss);
+		rx_mb->ol_flags = ol_flags;
+		rx_mb->data_len = len;
+		rx_mb->vlan_tci = vlan_tci;
+		rx_mb->packet_type = packet_type;
+		PMD_RX_LOG(INFO, rxq, "pkt_type %04x len %04x flags %04lx\n",
+			   packet_type, len, (unsigned long)ol_flags);
+		if (!tpa_start_flg) {
+			rx_mb->nb_segs = fp_cqe->bd_num;
+			rx_mb->pkt_len = pkt_len;
 		}
-
 		rte_prefetch1(rte_pktmbuf_mtod(rx_mb, void *));
-
-		if (CQE_HAS_VLAN(parse_flag)) {
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_VLAN_PKT;
-		}
-
-		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
-			/* FW does not provide indication of Outer VLAN tag,
-			 * which is always stripped, so vlan_tci_outer is set
-			 * to 0. Here vlan_tag represents inner VLAN tag.
-			 */
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_QINQ_PKT;
-			rx_mb->vlan_tci_outer = 0;
+tpa_end:
+		if (!tpa_start_flg) {
+			rx_pkts[rx_pkt] = rx_mb;
+			rx_pkt++;
 		}
-
-		rx_pkts[rx_pkt] = rx_mb;
-		rx_pkt++;
 next_cqe:
 		ecore_chain_recycle_consumed(&rxq->rx_comp_ring);
 		sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -1062,101 +1234,91 @@ next_cqe:
 	return rx_pkt;
 }
 
-static inline int
-qede_free_tx_pkt(struct ecore_dev *edev, struct qede_tx_queue *txq)
+static inline void
+qede_free_tx_pkt(struct qede_tx_queue *txq)
 {
-	uint16_t nb_segs, idx = TX_CONS(txq);
-	struct eth_tx_bd *tx_data_bd;
-	struct rte_mbuf *mbuf = txq->sw_tx_ring[idx].mbuf;
-
-	if (unlikely(!mbuf)) {
-		PMD_TX_LOG(ERR, txq, "null mbuf");
-		PMD_TX_LOG(ERR, txq,
-			   "tx_desc %u tx_avail %u tx_cons %u tx_prod %u",
-			   txq->nb_tx_desc, txq->nb_tx_avail, idx,
-			   TX_PROD(txq));
-		return -1;
-	}
-
-	nb_segs = mbuf->nb_segs;
-	while (nb_segs) {
-		/* It's like consuming rxbuf in recv() */
+	struct rte_mbuf *mbuf;
+	uint16_t nb_segs;
+	uint16_t idx;
+	uint8_t nbds;
+
+	idx = TX_CONS(txq);
+	mbuf = txq->sw_tx_ring[idx].mbuf;
+	if (mbuf) {
+		nb_segs = mbuf->nb_segs;
+		PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
+		while (nb_segs) {
+			/* It's like consuming rxbuf in recv() */
+			ecore_chain_consume(&txq->tx_pbl);
+			txq->nb_tx_avail++;
+			nb_segs--;
+		}
+		rte_pktmbuf_free(mbuf);
+		txq->sw_tx_ring[idx].mbuf = NULL;
+		txq->sw_tx_cons++;
+		PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
+	} else {
 		ecore_chain_consume(&txq->tx_pbl);
 		txq->nb_tx_avail++;
-		nb_segs--;
 	}
-	rte_pktmbuf_free(mbuf);
-	txq->sw_tx_ring[idx].mbuf = NULL;
-
-	return 0;
 }
 
-static inline uint16_t
+static inline void
 qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
 {
-	uint16_t tx_compl = 0;
 	uint16_t hw_bd_cons;
+	uint16_t sw_tx_cons;
 
-	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
 	rte_compiler_barrier();
-
-	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl)) {
-		if (qede_free_tx_pkt(edev, txq)) {
-			PMD_TX_LOG(ERR, txq,
-				   "hw_bd_cons = %u, chain_cons = %u",
-				   hw_bd_cons,
-				   ecore_chain_get_cons_idx(&txq->tx_pbl));
-			break;
-		}
-		txq->sw_tx_cons++;	/* Making TXD available */
-		tx_compl++;
-	}
-
-	PMD_TX_LOG(DEBUG, txq, "Tx compl %u sw_tx_cons %u avail %u",
-		   tx_compl, txq->sw_tx_cons, txq->nb_tx_avail);
-	return tx_compl;
+	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
+	sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
+	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+		   abs(hw_bd_cons - sw_tx_cons));
+	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl))
+		qede_free_tx_pkt(txq);
 }
 
 /* Populate scatter gather buffer descriptor fields */
 static inline uint8_t
 qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
-		  struct eth_tx_1st_bd *bd1)
+		  struct eth_tx_2nd_bd **bd2, struct eth_tx_3rd_bd **bd3)
 {
 	struct qede_tx_queue *txq = p_txq;
-	struct eth_tx_2nd_bd *bd2 = NULL;
-	struct eth_tx_3rd_bd *bd3 = NULL;
 	struct eth_tx_bd *tx_bd = NULL;
 	dma_addr_t mapping;
-	uint8_t nb_segs = 1; /* min one segment per packet */
+	uint8_t nb_segs = 0;
 
 	/* Check for scattered buffers */
 	while (m_seg) {
-		if (nb_segs == 1) {
-			bd2 = (struct eth_tx_2nd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd2, 0, sizeof(*bd2));
+		if (nb_segs == 0) {
+			if (!*bd2) {
+				*bd2 = (struct eth_tx_2nd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd2, 0, sizeof(struct eth_tx_2nd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd2, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x",
-				   m_seg->data_len);
-		} else if (nb_segs == 2) {
-			bd3 = (struct eth_tx_3rd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd3, 0, sizeof(*bd3));
+			QEDE_BD_SET_ADDR_LEN(*bd2, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x", m_seg->data_len);
+		} else if (nb_segs == 1) {
+			if (!*bd3) {
+				*bd3 = (struct eth_tx_3rd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd3, 0, sizeof(struct eth_tx_3rd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd3, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x",
-				   m_seg->data_len);
+			QEDE_BD_SET_ADDR_LEN(*bd3, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x", m_seg->data_len);
 		} else {
 			tx_bd = (struct eth_tx_bd *)
 				ecore_chain_produce(&txq->tx_pbl);
 			memset(tx_bd, 0, sizeof(*tx_bd));
+			nb_segs++;
 			mapping = rte_mbuf_data_dma_addr(m_seg);
 			QEDE_BD_SET_ADDR_LEN(tx_bd, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD len %04x",
-				   m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD len %04x", m_seg->data_len);
 		}
-		nb_segs++;
 		m_seg = m_seg->next;
 	}
 
@@ -1164,59 +1326,209 @@ qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
 	return nb_segs;
 }
 
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+static inline void
+print_tx_bd_info(struct qede_tx_queue *txq,
+		 struct eth_tx_1st_bd *bd1,
+		 struct eth_tx_2nd_bd *bd2,
+		 struct eth_tx_3rd_bd *bd3,
+		 uint64_t tx_ol_flags)
+{
+	char ol_buf[256] = { 0 }; /* for verbose prints */
+
+	if (bd1)
+		PMD_TX_LOG(INFO, txq,
+			   "BD1: nbytes=%u nbds=%u bd_flags=%04x bf=%04x\n",
+			   rte_cpu_to_le_16(bd1->nbytes), bd1->data.nbds,
+			   bd1->data.bd_flags.bitfields,
+			   rte_cpu_to_le_16(bd1->data.bitfields));
+	if (bd2)
+		PMD_TX_LOG(INFO, txq,
+			   "BD2: nbytes=%u bf=%04x\n",
+			   rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1);
+	if (bd3)
+		PMD_TX_LOG(INFO, txq,
+			   "BD3: nbytes=%u bf=%04x mss=%u\n",
+			   rte_cpu_to_le_16(bd3->nbytes),
+			   rte_cpu_to_le_16(bd3->data.bitfields),
+			   rte_cpu_to_le_16(bd3->data.lso_mss));
+
+	rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
+	PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+}
+#endif
+
+/* TX prepare to check packets meets TX conditions */
+uint16_t
+qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	struct qede_tx_queue *txq = p_txq;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+	uint16_t i;
+	int ret;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+		if (ol_flags & PKT_TX_TCP_SEG) {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+			/* TBD: confirm it's ~9700B for both? */
+			if (m->tso_segsz > ETH_TX_MAX_NON_LSO_PKT_LEN) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		} else {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_NON_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		}
+		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			break;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+#endif
+		/* TBD: pseudo csum calculation required iff
+		 * ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE not set?
+		 */
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	if (unlikely(i != nb_pkts))
+		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+			   nb_pkts - i);
+	return i;
+}
+
 uint16_t
 qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct qede_tx_queue *txq = p_txq;
 	struct qede_dev *qdev = txq->qdev;
 	struct ecore_dev *edev = &qdev->edev;
-	struct qede_fastpath *fp;
-	struct eth_tx_1st_bd *bd1;
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *m_seg = NULL;
 	uint16_t nb_tx_pkts;
 	uint16_t bd_prod;
 	uint16_t idx;
-	uint16_t tx_count;
 	uint16_t nb_frags;
 	uint16_t nb_pkt_sent = 0;
-
-	fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
+	uint8_t nbds;
+	bool ipv6_ext_flg;
+	bool lso_flg;
+	bool tunn_flg;
+	struct eth_tx_1st_bd *bd1;
+	struct eth_tx_2nd_bd *bd2;
+	struct eth_tx_3rd_bd *bd3;
+	uint64_t tx_ol_flags;
+	uint16_t hdr_size;
 
 	if (unlikely(txq->nb_tx_avail < txq->tx_free_thresh)) {
 		PMD_TX_LOG(DEBUG, txq, "send=%u avail=%u free_thresh=%u",
 			   nb_pkts, txq->nb_tx_avail, txq->tx_free_thresh);
-		(void)qede_process_tx_compl(edev, txq);
-	}
-
-	nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail /
-			ETH_TX_MAX_BDS_PER_NON_LSO_PACKET));
-	if (unlikely(nb_tx_pkts == 0)) {
-		PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u",
-			   nb_pkts, txq->nb_tx_avail);
-		return 0;
+		qede_process_tx_compl(edev, txq);
 	}
 
-	tx_count = nb_tx_pkts;
+	nb_tx_pkts  = nb_pkts;
+	bd_prod = rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
 	while (nb_tx_pkts--) {
+		/* Init flags/values */
+		ipv6_ext_flg = false;
+		tunn_flg = false;
+		lso_flg = false;
+		nbds = 0;
+		bd1 = NULL;
+		bd2 = NULL;
+		bd3 = NULL;
+		hdr_size = 0;
+
+		mbuf = *tx_pkts;
+		assert(mbuf);
+
+		/* Check that enough TX BDs are available for this mbuf's segments */
+		if (unlikely(txq->nb_tx_avail < mbuf->nb_segs))
+			break;
+
+		tx_ol_flags = mbuf->ol_flags;
+
+#define RTE_ETH_IS_IPV6_HDR_EXT(ptype) ((ptype) & RTE_PTYPE_L3_IPV6_EXT)
+		if (RTE_ETH_IS_IPV6_HDR_EXT(mbuf->packet_type))
+			ipv6_ext_flg = true;
+
+		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type))
+			tunn_flg = true;
+
+		if (tx_ol_flags & PKT_TX_TCP_SEG)
+			lso_flg = true;
+
+		if (lso_flg) {
+			if (unlikely(txq->nb_tx_avail <
+						ETH_TX_MIN_BDS_PER_LSO_PKT))
+				break;
+		} else {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_NON_LSO_PKT))
+				break;
+		}
+
+		if (tunn_flg && ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+				ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		if (ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT))
+				break;
+		}
+
 		/* Fill the entry in the SW ring and the BDs in the FW ring */
 		idx = TX_PROD(txq);
-		mbuf = *tx_pkts++;
+		tx_pkts++;
 		txq->sw_tx_ring[idx].mbuf = mbuf;
+
+		/* BD1 */
 		bd1 = (struct eth_tx_1st_bd *)ecore_chain_produce(&txq->tx_pbl);
-		bd1->data.bd_flags.bitfields =
+		memset(bd1, 0, sizeof(struct eth_tx_1st_bd));
+		nbds++;
+
+		bd1->data.bd_flags.bitfields |=
 			1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
 		/* FW 8.10.x specific change */
-		bd1->data.bitfields =
+		if (!lso_flg) {
+			bd1->data.bitfields |=
 			(mbuf->pkt_len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK)
 				<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
-		/* Map MBUF linear data for DMA and set in the first BD */
-		QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
-				     mbuf->data_len);
-		PMD_TX_LOG(INFO, txq, "BD1 len %04x", mbuf->data_len);
+			/* Map MBUF linear data for DMA and set in the BD1 */
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     mbuf->data_len);
+		} else {
+			/* For LSO, packet header and payload must reside on
+			 * buffers pointed by different BDs. Using BD1 for HDR
+			 * and BD2 onwards for data.
+			 */
+			hdr_size = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     hdr_size);
+		}
 
-		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type)) {
-			PMD_TX_LOG(INFO, txq, "Tx tunnel packet");
+		if (tunn_flg) {
 			/* First indicate it's a tunnel pkt */
 			bd1->data.bd_flags.bitfields |=
 				ETH_TX_DATA_1ST_BD_TUNN_FLAG_MASK <<
@@ -1231,8 +1543,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 					1 << ETH_TX_DATA_1ST_BD_TUNN_FLAG_SHIFT;
 
 			/* Outer IP checksum offload */
-			if (mbuf->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
-				PMD_TX_LOG(INFO, txq, "OuterIP csum offload");
+			if (tx_ol_flags & PKT_TX_OUTER_IP_CKSUM) {
 				bd1->data.bd_flags.bitfields |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -1245,43 +1556,79 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
-			PMD_TX_LOG(INFO, txq, "Insert VLAN 0x%x",
-				   mbuf->vlan_tci);
+		if (tx_ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
 			bd1->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
+		if (lso_flg)
+			bd1->data.bd_flags.bitfields |=
+				1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
+
 		/* Offload the IP checksum in the hardware */
-		if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
-			PMD_TX_LOG(INFO, txq, "IP csum offload");
+		if ((lso_flg) || (tx_ol_flags & PKT_TX_IP_CKSUM))
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			PMD_TX_LOG(INFO, txq, "L4 csum offload");
+		if ((lso_flg) || (tx_ol_flags & (PKT_TX_TCP_CKSUM |
+						PKT_TX_UDP_CKSUM)))
+			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
-			/* IPv6 + extn. -> later */
+
+		/* BD2 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd2 = (struct eth_tx_2nd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd2, 0, sizeof(struct eth_tx_2nd_bd));
+			nbds++;
+			QEDE_BD_SET_ADDR_LEN(bd2,
+					    (hdr_size +
+					    rte_mbuf_data_dma_addr(mbuf)),
+					    mbuf->data_len - hdr_size);
+			/* TBD: check pseudo csum iff tx_prepare not called? */
+			if (ipv6_ext_flg) {
+				bd2->data.bitfields1 |=
+				ETH_L4_PSEUDO_CSUM_ZERO_LENGTH <<
+				ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT;
+			}
+		}
+
+		/* BD3 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd3 = (struct eth_tx_3rd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd3, 0, sizeof(struct eth_tx_3rd_bd));
+			nbds++;
+			if (lso_flg) {
+				bd3->data.lso_mss =
+					rte_cpu_to_le_16(mbuf->tso_segsz);
+				/* Using one header BD */
+				bd3->data.bitfields |=
+					rte_cpu_to_le_16(1 <<
+					ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT);
+			}
 		}
 
 		/* Handle fragmented MBUF */
 		m_seg = mbuf->next;
 		/* Encode scatter gather buffer descriptors if required */
-		nb_frags = qede_encode_sg_bd(txq, m_seg, bd1);
-		bd1->data.nbds = nb_frags;
-		txq->nb_tx_avail -= nb_frags;
+		nb_frags = qede_encode_sg_bd(txq, m_seg, &bd2, &bd3);
+		bd1->data.nbds = nbds + nb_frags;
+		txq->nb_tx_avail -= bd1->data.nbds;
 		txq->sw_tx_prod++;
 		rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
 		bd_prod =
 		    rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+		print_tx_bd_info(txq, bd1, bd2, bd3, tx_ol_flags);
+		PMD_TX_LOG(INFO, txq, "lso=%d tunn=%d ipv6_ext=%d\n",
+			   lso_flg, tunn_flg, ipv6_ext_flg);
+#endif
 		nb_pkt_sent++;
 		txq->xmit_pkts++;
-		PMD_TX_LOG(INFO, txq, "nbds = %d pkt_len = %04x",
-			   bd1->data.nbds, mbuf->pkt_len);
 	}
 
 	/* Write value of prod idx into bd_prod */
@@ -1292,10 +1639,10 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	rte_wmb();
 
 	/* Check again for Tx completions */
-	(void)qede_process_tx_compl(edev, txq);
+	qede_process_tx_compl(edev, txq);
 
-	PMD_TX_LOG(DEBUG, txq, "to_send=%u can_send=%u sent=%u core=%d",
-		   nb_pkts, tx_count, nb_pkt_sent, rte_lcore_id());
+	PMD_TX_LOG(DEBUG, txq, "to_send=%u sent=%u bd_prod=%u core=%d",
+		   nb_pkts, nb_pkt_sent, TX_PROD(txq), rte_lcore_id());
 
 	return nb_pkt_sent;
 }
@@ -1380,8 +1727,7 @@ static int qede_drain_txq(struct qede_dev *qdev,
 		qede_process_tx_compl(edev, txq);
 		if (!cnt) {
 			if (allow_drain) {
-				DP_NOTICE(edev, false,
-					  "Tx queue[%u] is stuck,"
+				DP_ERR(edev, "Tx queue[%u] is stuck,"
 					  "requesting MCP to drain\n",
 					  txq->queue_id);
 				rc = qdev->ops->common->drain(edev);
@@ -1389,13 +1735,11 @@ static int qede_drain_txq(struct qede_dev *qdev,
 					return rc;
 				return qede_drain_txq(qdev, txq, false);
 			}
-
-			DP_NOTICE(edev, false,
-				  "Timeout waiting for tx queue[%d]:"
+			DP_ERR(edev, "Timeout waiting for tx queue[%d]:"
 				  "PROD=%d, CONS=%d\n",
 				  txq->queue_id, txq->sw_tx_prod,
 				  txq->sw_tx_cons);
-			return -ENODEV;
+			return -1;
 		}
 		cnt--;
 		DELAY(1000);
@@ -1412,6 +1756,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_fastpath *fp;
 	int rc, tc, i;
 
@@ -1421,9 +1766,15 @@ static int qede_stop_queues(struct qede_dev *qdev)
 	vport_update_params.update_vport_active_flg = 1;
 	vport_update_params.vport_active_flg = 0;
 	vport_update_params.update_rss_flg = 0;
+	/* Disable TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Disabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, false);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
 
 	DP_INFO(edev, "Deactivate vport\n");
-
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Failed to update vport\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 17a2f0c..c27632e 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -126,6 +126,19 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
+#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
+				   PKT_TX_TCP_CKSUM             | \
+				   PKT_TX_UDP_CKSUM             | \
+				   PKT_TX_OUTER_IP_CKSUM        | \
+				   PKT_TX_TCP_SEG)
+
+#define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
+			      PKT_TX_QINQ_PKT           | \
+			      PKT_TX_VLAN_PKT)
+
+#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
+	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+
 /*
  * RX BD descriptor ring
  */
@@ -135,6 +148,19 @@ struct qede_rx_entry {
 	/* allows expansion .. */
 };
 
+/* TPA related structures */
+enum qede_agg_state {
+	QEDE_AGG_STATE_NONE  = 0,
+	QEDE_AGG_STATE_START = 1,
+	QEDE_AGG_STATE_ERROR = 2
+};
+
+struct qede_agg_info {
+	struct rte_mbuf *mbuf;
+	uint16_t start_cqe_bd_len;
+	uint8_t state; /* for sanity check */
+};
+
 /*
  * Structure associated with each RX queue.
  */
@@ -155,6 +181,7 @@ struct qede_rx_queue {
 	uint64_t rx_segs;
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
+	struct qede_agg_info tpa_info[ETH_TPA_MAX_AGGS_NUM];
 	struct qede_dev *qdev;
 	void *handle;
 };
@@ -232,6 +259,9 @@ void qede_free_mem_load(struct rte_eth_dev *eth_dev);
 uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 
+uint16_t qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts);
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
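
For context, qede_xmit_prep_pkts() in the hunk above becomes the PMD's
tx_prepare hook, so an application reaches it through rte_eth_tx_prepare()
before transmitting. A minimal sketch of that calling pattern, assuming
placeholder port/queue ids and an mbuf array prepared by the caller (integer
types follow the generic ethdev API and may differ slightly in the 17.05
headers):

	#include <stdio.h>
	#include <rte_ethdev.h>
	#include <rte_errno.h>

	static uint16_t
	send_with_prepare(uint16_t port_id, uint16_t queue_id,
			  struct rte_mbuf **pkts, uint16_t nb)
	{
		/* Validate offload requests first: packets that exceed the
		 * BD limits or request flags outside QEDE_TX_OFFLOAD_MASK
		 * stop the scan, with the reason left in rte_errno.
		 */
		uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id,
						      pkts, nb);

		if (nb_prep != nb)
			printf("tx_prepare stopped at %u of %u packets\n",
			       nb_prep, nb);

		/* Transmit only the packets that passed preparation */
		return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
	}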

* [PATCH v4 62/62] net/qede: update PMD version to 2.4.0.1
  2017-03-24 11:08         ` Ferruh Yigit
                             ` (62 preceding siblings ...)
  2017-03-28  6:52           ` [PATCH v4 61/62] net/qede: add LRO/TSO offloads support Rasesh Mody
@ 2017-03-28  6:52           ` Rasesh Mody
       [not found]           ` <1490683278-23776-1-git-send-email-y>
  64 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-28  6:52 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_ethdev.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 799a3ba..3c8ead8 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -49,7 +49,7 @@
 /* Driver versions */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
 #define QEDE_PMD_VERSION_MAJOR		2
-#define QEDE_PMD_VERSION_MINOR	        0
+#define QEDE_PMD_VERSION_MINOR	        4
 #define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* Re: [PATCH 00/62] net/qede/base: update PMD to 2.4.0.1
       [not found]           ` <1490683278-23776-1-git-send-email-y>
@ 2017-03-28  6:54             ` Mody, Rasesh
  0 siblings, 0 replies; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-28  6:54 UTC (permalink / raw)
  To: y, ferruh.yigit, dev; +Cc: Dept-Eng DPDK Dev

Please ignore this as it doesn't have the v4 subject prefix.

> From: y@qlogic.com [mailto:y@qlogic.com]
> Sent: Monday, March 27, 2017 11:41 PM
> 
> From: Rasesh Mody <rasesh.mody@cavium.com>
> 
> Hi Ferruh,
> 
> This patch set adds support for new firmware 8.18.9.0, adds new features
> and includes bug fixes. This patch set updates PMD version to 2.4.0.1.
> 
> Please apply to dpdk-net-next for 17.05 release.
> 
> v1..v4
>  - address all the review comments received so far
> 
> Thanks!
> Rasesh
> 
> Harish Patil (3):
>   net/qede/base: add support for arfs mode
>   net/qede: add ntuple and flow director filter support
>   net/qede: add LRO/TSO offloads support
> 
> Rasesh Mody (59):
>   net/qede/base: return an initialized return value
>   net/qede/base: send FW version driver state to MFW
>   net/qede/base: mask Rx buffer attention bits
>   net/qede/base: print various indication on Tx-timeouts
>   net/qede/base: utilize FW 8.18.9.0
>   net/qede: upgrade the FW to 8.18.9.0
>   net/qede/base: decrease maximum HW func per device
>   net/qede/base: move mask constants defining NIC type
>   net/qede/base: remove attribute from update current config
>   net/qede/base: add nvram options
>   net/qede/base: add comment
>   net/qede/base: use default MTU from shared memory
>   net/qede/base: change queue/sb-id from 8 bit to 16 bit
>   net/qede/base: update MFW when default MTU is changed
>   net/qede/base: prevent device init failure
>   net/qede/base: read card personality via MFW commands
>   net/qede/base: allow probe to succeed with minor HW-issues
>   net/qede/base: remove unneeded step in HW init
>   net/qede/base: allow only trusted VFs to be promisc
>   net/qede/base: qm initialization revamp
>   net/qede/base: print firmware MFW and MBI versions
>   net/qede/base: check active VF queues before stopping
>   net/qede/base: set driver type before sending load request
>   net/qede/base: prevent driver load with invalid resources
>   net/qede/base: add interfaces for MFW TLV request processing
>   net/qede/base: code refactoring of SP queues
>   net/qede/base: make L2 queues handle based
>   net/qede/base: add support for handling TLV request from MFW
>   net/qede/base: optimize cache-line access
>   net/qede/base: infrastructure changes for VF tunnelling
>   net/qede/base: revise tunnel APIs/structs
>   net/qede/base: add tunnelling support for VFs
>   net/qede/base: formatting changes
>   net/qede/base: prevent transmitter stuck condition
>   net/qede/base: add mask/shift defines for resource command
>   net/qede/base: add API for using MFW resource lock
>   net/qede/base: remove clock slowdown option
>   net/qede/base: add new image types
>   net/qede/base: use L2-handles for RSS configuration
>   net/qede/base: change valloc to vzalloc
>   net/qede/base: add support for previous driver unload
>   net/qede/base: add non-L2 dcbx tlv application support
>   net/qede/base: update bulletin board during VF init
>   net/qede/base: add coalescing support for VFs
>   net/qede/base: add macro for resource value message
>   net/qede/base: add mailbox for resource allocation
>   net/qede/base: add macro for unsupported command
>   net/qede/base: set max values for soft resources
>   net/qede/base: add return code check
>   net/qede/base: zero out MFW mailbox data
>   net/qede/base: move code bits
>   net/qede/base: add PF parameter
>   net/qede/base: allow PMD to control vport and RSS engine ids
>   net/qede/base: add udp ports in bulletin board message
>   net/qede/base: prevent DMAE transactions during recovery
>   net/qede/base: multi-Txq support on same queue-zone for VFs
>   net/qede/base: prevent race condition during unload
>   net/qede/base: semantic changes
>   net/qede: update PMD version to 2.4.0.1
> 
>  doc/guides/nics/features/qede.ini             |    4 +
>  doc/guides/nics/features/qede_vf.ini          |    2 +
>  doc/guides/nics/qede.rst                      |   11 +-
>  drivers/net/qede/Makefile                     |    1 +
>  drivers/net/qede/base/bcm_osal.h              |   13 +-
>  drivers/net/qede/base/common_hsi.h            |  191 ++-
>  drivers/net/qede/base/ecore.h                 |  169 +-
>  drivers/net/qede/base/ecore_chain.h           |  143 +-
>  drivers/net/qede/base/ecore_cxt.c             |  297 +++-
>  drivers/net/qede/base/ecore_cxt.h             |   64 +-
>  drivers/net/qede/base/ecore_cxt_api.h         |   13 -
>  drivers/net/qede/base/ecore_dcbx.c            |   42 +-
>  drivers/net/qede/base/ecore_dcbx.h            |    4 +-
>  drivers/net/qede/base/ecore_dcbx_api.h        |    4 +-
>  drivers/net/qede/base/ecore_dev.c             | 2137 +++++++++++++++----------
>  drivers/net/qede/base/ecore_dev_api.h         |  122 +-
>  drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
>  drivers/net/qede/base/ecore_hsi_common.h      |  816 +++++-----
>  drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++-
>  drivers/net/qede/base/ecore_hsi_eth.h         | 2069 ++++++++++++------------
>  drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
>  drivers/net/qede/base/ecore_hw.c              |   50 +-
>  drivers/net/qede/base/ecore_init_fw_funcs.c   | 1409 ++++++++++------
>  drivers/net/qede/base/ecore_init_fw_funcs.h   |  172 +-
>  drivers/net/qede/base/ecore_int.c             |   51 +-
>  drivers/net/qede/base/ecore_int.h             |   10 -
>  drivers/net/qede/base/ecore_int_api.h         |   21 +
>  drivers/net/qede/base/ecore_iov_api.h         |   45 +-
>  drivers/net/qede/base/ecore_iro.h             |    8 +
>  drivers/net/qede/base/ecore_iro_values.h      |   28 +-
>  drivers/net/qede/base/ecore_l2.c              |  853 +++++++---
>  drivers/net/qede/base/ecore_l2.h              |  149 +-
>  drivers/net/qede/base/ecore_l2_api.h          |  134 +-
>  drivers/net/qede/base/ecore_mcp.c             | 1020 ++++++++++--
>  drivers/net/qede/base/ecore_mcp.h             |  181 ++-
>  drivers/net/qede/base/ecore_mcp_api.h         |  316 +++-
>  drivers/net/qede/base/ecore_mng_tlv.c         | 1535 ++++++++++++++++++
>  drivers/net/qede/base/ecore_proto_if.h        |   16 +
>  drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++---
>  drivers/net/qede/base/ecore_sp_api.h          |   19 +
>  drivers/net/qede/base/ecore_sp_commands.c     |  372 +++--
>  drivers/net/qede/base/ecore_sp_commands.h     |   23 +-
>  drivers/net/qede/base/ecore_spq.c             |   86 +-
>  drivers/net/qede/base/ecore_spq.h             |   36 +-
>  drivers/net/qede/base/ecore_sriov.c           |  953 ++++++++---
>  drivers/net/qede/base/ecore_sriov.h           |   23 +-
>  drivers/net/qede/base/ecore_vf.c              |  348 +++-
>  drivers/net/qede/base/ecore_vf.h              |   85 +-
>  drivers/net/qede/base/ecore_vf_api.h          |   11 +
>  drivers/net/qede/base/ecore_vfpf_if.h         |   55 +-
>  drivers/net/qede/base/eth_common.h            |    2 +-
>  drivers/net/qede/base/mcp_public.h            |  271 ++--
>  drivers/net/qede/base/nvm_cfg.h               |  475 +++++-
>  drivers/net/qede/base/reg_addr.h              |   59 +
>  drivers/net/qede/qede_eth_if.c                |   56 +-
>  drivers/net/qede/qede_eth_if.h                |   25 +-
>  drivers/net/qede/qede_ethdev.c                |  115 +-
>  drivers/net/qede/qede_ethdev.h                |   44 +-
>  drivers/net/qede/qede_fdir.c                  |  487 ++++++
>  drivers/net/qede/qede_if.h                    |   58 +-
>  drivers/net/qede/qede_main.c                  |  126 +-
>  drivers/net/qede/qede_rxtx.c                  |  781 ++++++---
>  drivers/net/qede/qede_rxtx.h                  |   32 +
>  63 files changed, 12375 insertions(+), 5191 deletions(-)
>  create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
>  create mode 100644 drivers/net/qede/qede_fdir.c
> 
> --
> 1.7.10.3

^ permalink raw reply	[flat|nested] 329+ messages in thread

* Re: [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs
  2017-03-28  6:52           ` [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs Rasesh Mody
@ 2017-03-28 11:22             ` Ferruh Yigit
  2017-03-28 21:18               ` Mody, Rasesh
  0 siblings, 1 reply; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-28 11:22 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Thomas Monjalon

On 3/28/2017 7:52 AM, Rasesh Mody wrote:
> Revise tunnel APIs/structs.
>  - Unite tunnel start and update params in single struct
>    "ecore_tunnel_info"
>  - Remove A0 chip tunnelling support.
>  - Added per tunnel info - removed bitmasks.
> 
> Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>

I hate to say this, but this patch gives a build error with clang [1];
it seems it is fixed in the next patch.

This patchset is big and takes time to review and validate, and a small
error requires redoing the whole patchset. I am not suggesting updating
this one, but for future patchsets, what do you think about sending
multiple smaller patchsets?

Thanks,
ferruh


[1]
Building x86_64-native-linuxapp-clang ...
.../drivers/net/qede/base/ecore_sp_commands.c:141:25: error: implicit
conversion from enumeration type 'enum tunnel_clss' to different
enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
        p_tun->vxlan.tun_cls = type;
                             ~ ^~~~
.../drivers/net/qede/base/ecore_sp_commands.c:143:26: error: implicit
conversion from enumeration type 'enum tunnel_clss' to different
enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
        p_tun->l2_gre.tun_cls = type;
                              ~ ^~~~
.../drivers/net/qede/base/ecore_sp_commands.c:145:26: error: implicit
conversion from enumeration type 'enum tunnel_clss' to different
enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
        p_tun->ip_gre.tun_cls = type;
                              ~ ^~~~
.../drivers/net/qede/base/ecore_sp_commands.c:147:29: error: implicit
conversion from enumeration type 'enum tunnel_clss' to different
enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
        p_tun->l2_geneve.tun_cls = type;
                                 ~ ^~~~
.../drivers/net/qede/base/ecore_sp_commands.c:149:29: error: implicit
conversion from enumeration type 'enum tunnel_clss' to different
enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
        p_tun->ip_geneve.tun_cls = type;
                                 ~ ^~~~
5 errors generated.
make[10]: *** [base/ecore_sp_commands.o] Error 1
make[10]: *** Waiting for unfinished jobs....
.../drivers/net/qede/qede_ethdev.c:1724:45: error: variable 'p_tunn' is
uninitialized when used here [-Werror,-Wuninitialized]
                        rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
                                                                 ^~~~~~
.../drivers/net/qede/qede_ethdev.c:1711:34: note: initialize the
variable 'p_tunn' to silence this warning
        struct ecore_tunnel_info *p_tunn;
                                        ^
                                         = NULL
.../drivers/net/qede/qede_ethdev.c:1877:5: error: variable 'p_tunn' is
uninitialized when used here [-Werror,-Wuninitialized]
                                p_tunn, ECORE_SPQ_MODE_CB, NULL);
                                ^~~~~~
.../drivers/net/qede/qede_ethdev.c:1822:34: note: initialize the
variable 'p_tunn' to silence this warning
        struct ecore_tunnel_info *p_tunn;
                                        ^
                                         = NULL
2 errors generated.

^ permalink raw reply	[flat|nested] 329+ messages in thread
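
The errors above are clang's -Wenum-conversion diagnostic firing on implicit
assignment between two distinct enumeration types. A reduced, standalone
reproduction follows; the type and field names are invented for illustration,
and the explicit cast is only one way to silence the warning (the series'
actual fix may differ):

	enum tunnel_clss { TUNNEL_CLSS_MAC_VLAN };          /* HSI-side type */
	enum ecore_tunn_clss { ECORE_TUNN_CLSS_MAC_VLAN };  /* ecore-side type */

	struct tun_cfg {
		enum ecore_tunn_clss tun_cls;
	};

	static void set_cls(struct tun_cfg *t, enum tunnel_clss type)
	{
		/* t->tun_cls = type; triggers -Wenum-conversion under clang */
		t->tun_cls = (enum ecore_tunn_clss)type; /* compiles clean */
	}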

* Re: [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs
  2017-03-28 11:22             ` Ferruh Yigit
@ 2017-03-28 21:18               ` Mody, Rasesh
  2017-03-29  9:23                 ` Ferruh Yigit
  0 siblings, 1 reply; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-28 21:18 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Thomas Monjalon

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
> Sent: Tuesday, March 28, 2017 4:23 AM
> 
> On 3/28/2017 7:52 AM, Rasesh Mody wrote:
> > Revise tunnel APIs/structs.
> >  - Unite tunnel start and update params in single struct
> >    "ecore_tunnel_info"
> >  - Remove A0 chip tunnelling support.
> >  - Added per tunnel info - removed bitmasks.
> >
> > Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
> 
> I hate to say this, but this patch gives a build error with clang [1];
> it seems it is fixed in the next patch.

We also observed this error with clang; however, the fix was wrongly applied to the next patch. Sorry about that.

> 
> This patchset is big and takes time to review and validate, and a small
> error requires redoing the whole patchset. I am not suggesting updating
> this one, but for future patchsets, what do you think about sending
> multiple smaller patchsets?

Please let us know if we need to refresh the current v4 patchset to address the clang issue.
It's a good suggestion; for future patchsets, we can send multiple smaller patchsets.

Thanks!
-Rasesh

> 
> Thanks,
> ferruh
> 
> 
> [1]
> Building x86_64-native-linuxapp-clang ...
> .../drivers/net/qede/base/ecore_sp_commands.c:141:25: error: implicit
> conversion from enumeration type 'enum tunnel_clss' to different
> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>         p_tun->vxlan.tun_cls = type;
>                              ~ ^~~~
> .../drivers/net/qede/base/ecore_sp_commands.c:143:26: error: implicit
> conversion from enumeration type 'enum tunnel_clss' to different
> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>         p_tun->l2_gre.tun_cls = type;
>                               ~ ^~~~
> .../drivers/net/qede/base/ecore_sp_commands.c:145:26: error: implicit
> conversion from enumeration type 'enum tunnel_clss' to different
> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>         p_tun->ip_gre.tun_cls = type;
>                               ~ ^~~~
> .../drivers/net/qede/base/ecore_sp_commands.c:147:29: error: implicit
> conversion from enumeration type 'enum tunnel_clss' to different
> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>         p_tun->l2_geneve.tun_cls = type;
>                                  ~ ^~~~
> .../drivers/net/qede/base/ecore_sp_commands.c:149:29: error: implicit
> conversion from enumeration type 'enum tunnel_clss' to different
> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>         p_tun->ip_geneve.tun_cls = type;
>                                  ~ ^~~~
> 5 errors generated.
> make[10]: *** [base/ecore_sp_commands.o] Error 1
> make[10]: *** Waiting for unfinished jobs....
> .../drivers/net/qede/qede_ethdev.c:1724:45: error: variable 'p_tunn' is
> uninitialized when used here [-Werror,-Wuninitialized]
>                         rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
>                                                                  ^~~~~~
> .../drivers/net/qede/qede_ethdev.c:1711:34: note: initialize the variable
> 'p_tunn' to silence this warning
>         struct ecore_tunnel_info *p_tunn;
>                                         ^
>                                          = NULL
> .../drivers/net/qede/qede_ethdev.c:1877:5: error: variable 'p_tunn' is
> uninitialized when used here [-Werror,-Wuninitialized]
>                                 p_tunn, ECORE_SPQ_MODE_CB, NULL);
>                                 ^~~~~~
> .../drivers/net/qede/qede_ethdev.c:1822:34: note: initialize the variable
> 'p_tunn' to silence this warning
>         struct ecore_tunnel_info *p_tunn;
>                                         ^
>                                          = NULL
> 2 errors generated.

^ permalink raw reply	[flat|nested] 329+ messages in thread

* Re: [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs
  2017-03-28 21:18               ` Mody, Rasesh
@ 2017-03-29  9:23                 ` Ferruh Yigit
  2017-03-29 20:48                   ` Mody, Rasesh
  0 siblings, 1 reply; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-29  9:23 UTC (permalink / raw)
  To: Mody, Rasesh, dev; +Cc: Thomas Monjalon

On 3/28/2017 10:18 PM, Mody, Rasesh wrote:
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
>> Sent: Tuesday, March 28, 2017 4:23 AM
>>
>> On 3/28/2017 7:52 AM, Rasesh Mody wrote:
>>> Revise tunnel APIs/structs.
>>>  - Unite tunnel start and update params in single struct
>>>    "ecore_tunnel_info"
>>>  - Remove A0 chip tunnelling support.
>>>  - Added per tunnel info - removed bitmasks.
>>>
>>> Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
>>
>> I hate to say this, but this patch gives a build error with clang [1];
>> it seems it is fixed in the next patch.
> 
> We also observed this error with clang; however, the fix was wrongly applied to the next patch. Sorry about that.
> 
>>
>> This patchset is big and takes time to review and validate, and a small
>> error requires redoing the whole patchset. I am not suggesting updating
>> this one, but for future patchsets, what do you think about sending
>> multiple smaller patchsets?
> 
> Please let us know if we need to refresh the current v4 patchset to address the clang issue.

Yes, can you please send a new version of the patchset?

> It's a good suggestion; for future patchsets, we can send multiple smaller patchsets.
> 
> Thanks!
> -Rasesh
> 
>>
>> Thanks,
>> ferruh
>>
>>
>> [1]
>> Building x86_64-native-linuxapp-clang ...
>> .../drivers/net/qede/base/ecore_sp_commands.c:141:25: error: implicit
>> conversion from enumeration type 'enum tunnel_clss' to different
>> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>>         p_tun->vxlan.tun_cls = type;
>>                              ~ ^~~~
>> .../drivers/net/qede/base/ecore_sp_commands.c:143:26: error: implicit
>> conversion from enumeration type 'enum tunnel_clss' to different
>> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>>         p_tun->l2_gre.tun_cls = type;
>>                               ~ ^~~~
>> .../drivers/net/qede/base/ecore_sp_commands.c:145:26: error: implicit
>> conversion from enumeration type 'enum tunnel_clss' to different
>> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>>         p_tun->ip_gre.tun_cls = type;
>>                               ~ ^~~~
>> .../drivers/net/qede/base/ecore_sp_commands.c:147:29: error: implicit
>> conversion from enumeration type 'enum tunnel_clss' to different
>> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>>         p_tun->l2_geneve.tun_cls = type;
>>                                  ~ ^~~~
>> .../drivers/net/qede/base/ecore_sp_commands.c:149:29: error: implicit
>> conversion from enumeration type 'enum tunnel_clss' to different
>> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
>>         p_tun->ip_geneve.tun_cls = type;
>>                                  ~ ^~~~
>> 5 errors generated.
>> make[10]: *** [base/ecore_sp_commands.o] Error 1
>> make[10]: *** Waiting for unfinished jobs....
>> .../drivers/net/qede/qede_ethdev.c:1724:45: error: variable 'p_tunn' is
>> uninitialized when used here [-Werror,-Wuninitialized]
>>                         rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
>>                                                                  ^~~~~~
>> .../drivers/net/qede/qede_ethdev.c:1711:34: note: initialize the variable
>> 'p_tunn' to silence this warning
>>         struct ecore_tunnel_info *p_tunn;
>>                                         ^
>>                                          = NULL
>> .../drivers/net/qede/qede_ethdev.c:1877:5: error: variable 'p_tunn' is
>> uninitialized when used here [-Werror,-Wuninitialized]
>>                                 p_tunn, ECORE_SPQ_MODE_CB, NULL);
>>                                 ^~~~~~
>> .../drivers/net/qede/qede_ethdev.c:1822:34: note: initialize the variable
>> 'p_tunn' to silence this warning
>>         struct ecore_tunnel_info *p_tunn;
>>                                         ^
>>                                          = NULL
>> 2 errors generated.
> 

^ permalink raw reply	[flat|nested] 329+ messages in thread

* [PATCH v5 00/62] net/qede/base: update PMD to 2.4.0.1
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-30 12:23               ` Ferruh Yigit
  2017-03-29 20:36             ` [PATCH v5 01/62] net/qede/base: return an initialized return value Rasesh Mody
                               ` (61 subsequent siblings)
  62 siblings, 1 reply; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Hi Ferruh,

This patch set adds support for new firmware 8.18.9.0, adds new features
and includes bug fixes. This patch set updates PMD version to 2.4.0.1.

Please apply to dpdk-net-next for 17.05 release.

v4..v5
 - properly fix clang compilation
v1..v4
 - address all the review comments received

Thanks!
Rasesh

Harish Patil (3):
  net/qede/base: add support for arfs mode
  net/qede: add ntuple and flow director filter support
  net/qede: add LRO/TSO offloads support

Rasesh Mody (59):
  net/qede/base: return an initialized return value
  net/qede/base: send FW version driver state to MFW
  net/qede/base: mask Rx buffer attention bits
  net/qede/base: print various indication on Tx-timeouts
  net/qede/base: utilize FW 8.18.9.0
  net/qede: upgrade the FW to 8.18.9.0
  net/qede/base: decrease maximum HW func per device
  net/qede/base: move mask constants defining NIC type
  net/qede/base: remove attribute from update current config
  net/qede/base: add nvram options
  net/qede/base: add comment
  net/qede/base: use default MTU from shared memory
  net/qede/base: change queue/sb-id from 8 bit to 16 bit
  net/qede/base: update MFW when default MTU is changed
  net/qede/base: prevent device init failure
  net/qede/base: read card personality via MFW commands
  net/qede/base: allow probe to succeed with minor HW-issues
  net/qede/base: remove unneeded step in HW init
  net/qede/base: allow only trusted VFs to be promisc
  net/qede/base: qm initialization revamp
  net/qede/base: print firmware MFW and MBI versions
  net/qede/base: check active VF queues before stopping
  net/qede/base: set driver type before sending load request
  net/qede/base: prevent driver load with invalid resources
  net/qede/base: add interfaces for MFW TLV request processing
  net/qede/base: code refactoring of SP queues
  net/qede/base: make L2 queues handle based
  net/qede/base: add support for handling TLV request from MFW
  net/qede/base: optimize cache-line access
  net/qede/base: infrastructure changes for VF tunnelling
  net/qede/base: revise tunnel APIs/structs
  net/qede/base: add tunnelling support for VFs
  net/qede/base: formatting changes
  net/qede/base: prevent transmitter stuck condition
  net/qede/base: add mask/shift defines for resource command
  net/qede/base: add API for using MFW resource lock
  net/qede/base: remove clock slowdown option
  net/qede/base: add new image types
  net/qede/base: use L2-handles for RSS configuration
  net/qede/base: change valloc to vzalloc
  net/qede/base: add support for previous driver unload
  net/qede/base: add non-L2 dcbx tlv application support
  net/qede/base: update bulletin board during VF init
  net/qede/base: add coalescing support for VFs
  net/qede/base: add macro for resource value message
  net/qede/base: add mailbox for resource allocation
  net/qede/base: add macro for unsupported command
  net/qede/base: set max values for soft resources
  net/qede/base: add return code check
  net/qede/base: zero out MFW mailbox data
  net/qede/base: move code bits
  net/qede/base: add PF parameter
  net/qede/base: allow PMD to control vport and RSS engine ids
  net/qede/base: add udp ports in bulletin board message
  net/qede/base: prevent DMAE transactions during recovery
  net/qede/base: multi-Txq support on same queue-zone for VFs
  net/qede/base: prevent race condition during unload
  net/qede/base: semantic changes
  net/qede: update PMD version to 2.4.0.1

 doc/guides/nics/features/qede.ini             |    4 +
 doc/guides/nics/features/qede_vf.ini          |    2 +
 doc/guides/nics/qede.rst                      |   11 +-
 drivers/net/qede/Makefile                     |    1 +
 drivers/net/qede/base/bcm_osal.h              |   13 +-
 drivers/net/qede/base/common_hsi.h            |  191 ++-
 drivers/net/qede/base/ecore.h                 |  169 +-
 drivers/net/qede/base/ecore_chain.h           |  143 +-
 drivers/net/qede/base/ecore_cxt.c             |  297 +++-
 drivers/net/qede/base/ecore_cxt.h             |   64 +-
 drivers/net/qede/base/ecore_cxt_api.h         |   13 -
 drivers/net/qede/base/ecore_dcbx.c            |   42 +-
 drivers/net/qede/base/ecore_dcbx.h            |    4 +-
 drivers/net/qede/base/ecore_dcbx_api.h        |    4 +-
 drivers/net/qede/base/ecore_dev.c             | 2137 +++++++++++++++----------
 drivers/net/qede/base/ecore_dev_api.h         |  122 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |  816 +++++-----
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++-
 drivers/net/qede/base/ecore_hsi_eth.h         | 2069 ++++++++++++------------
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_hw.c              |   50 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1409 ++++++++++------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  172 +-
 drivers/net/qede/base/ecore_int.c             |   51 +-
 drivers/net/qede/base/ecore_int.h             |   10 -
 drivers/net/qede/base/ecore_int_api.h         |   21 +
 drivers/net/qede/base/ecore_iov_api.h         |   45 +-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_l2.c              |  853 +++++++---
 drivers/net/qede/base/ecore_l2.h              |  149 +-
 drivers/net/qede/base/ecore_l2_api.h          |  134 +-
 drivers/net/qede/base/ecore_mcp.c             | 1020 ++++++++++--
 drivers/net/qede/base/ecore_mcp.h             |  181 ++-
 drivers/net/qede/base/ecore_mcp_api.h         |  316 +++-
 drivers/net/qede/base/ecore_mng_tlv.c         | 1535 ++++++++++++++++++
 drivers/net/qede/base/ecore_proto_if.h        |   16 +
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++---
 drivers/net/qede/base/ecore_sp_api.h          |   19 +
 drivers/net/qede/base/ecore_sp_commands.c     |  372 +++--
 drivers/net/qede/base/ecore_sp_commands.h     |   23 +-
 drivers/net/qede/base/ecore_spq.c             |   86 +-
 drivers/net/qede/base/ecore_spq.h             |   36 +-
 drivers/net/qede/base/ecore_sriov.c           |  953 ++++++++---
 drivers/net/qede/base/ecore_sriov.h           |   23 +-
 drivers/net/qede/base/ecore_vf.c              |  348 +++-
 drivers/net/qede/base/ecore_vf.h              |   85 +-
 drivers/net/qede/base/ecore_vf_api.h          |   11 +
 drivers/net/qede/base/ecore_vfpf_if.h         |   55 +-
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/mcp_public.h            |  271 ++--
 drivers/net/qede/base/nvm_cfg.h               |  475 +++++-
 drivers/net/qede/base/reg_addr.h              |   59 +
 drivers/net/qede/qede_eth_if.c                |   56 +-
 drivers/net/qede/qede_eth_if.h                |   25 +-
 drivers/net/qede/qede_ethdev.c                |  115 +-
 drivers/net/qede/qede_ethdev.h                |   44 +-
 drivers/net/qede/qede_fdir.c                  |  487 ++++++
 drivers/net/qede/qede_if.h                    |   58 +-
 drivers/net/qede/qede_main.c                  |  126 +-
 drivers/net/qede/qede_rxtx.c                  |  781 ++++++---
 drivers/net/qede/qede_rxtx.h                  |   32 +
 63 files changed, 12375 insertions(+), 5191 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c
 create mode 100644 drivers/net/qede/qede_fdir.c

-- 
1.7.10.3

^ permalink raw reply	[flat|nested] 329+ messages in thread

* [PATCH v5 01/62] net/qede/base: return an initialized return value
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 " Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 02/62] net/qede/base: send FW version driver state to MFW Rasesh Mody
                               ` (60 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure ecore_iov_mark_vf_flr() always returns an initialized return
value.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6912cf8..d1c809c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3164,7 +3164,7 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
-	bool found;
+	bool found = false;
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
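
The one-line initializer matters because a call in which no VF is marked
never executes the assignment, and returning an uninitialized automatic
variable is undefined behaviour in C. A reduced illustration of the bug
class, with invented names:

	#include <stdbool.h>

	static bool any_set(const unsigned int *bits, unsigned int n)
	{
		bool found;         /* indeterminate if no element matches */
		unsigned int i;

		for (i = 0; i < n; i++)
			if (bits[i])
				found = true;

		return found;       /* the fix: declare 'bool found = false;' */
	}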

* [PATCH v5 02/62] net/qede/base: send FW version driver state to MFW
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 " Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 01/62] net/qede/base: return an initialized return value Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 03/62] net/qede/base: mask Rx buffer attention bits Rasesh Mody
                               ` (59 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to send FW version and driver state to Management FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   31 ++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.c     |    7 +++++--
 drivers/net/qede/base/ecore_mcp_api.h |    3 ++-
 drivers/net/qede/qede_if.h            |    3 +++
 drivers/net/qede/qede_main.c          |   20 ++++++++++++++++++++
 5 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index da9cdc9..2d1e031 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1609,8 +1609,9 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc, mfw_rc;
-	u32 load_code, param;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	u32 load_code, param, drv_mb_param;
+	struct ecore_hwfn *p_hwfn;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1743,7 +1744,26 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		p_hwfn->hw_init_done = true;
 	}
 
-	return ECORE_SUCCESS;
+	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		drv_mb_param = (FW_MAJOR_VERSION << 24) |
+			       (FW_MINOR_VERSION << 16) |
+			       (FW_REVISION_VERSION << 8) |
+			       (FW_ENGINEERING_VERSION);
+		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+				   drv_mb_param, &load_code, &param);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(p_hwfn, "Failed to send firmware version\n");
+			return rc;
+		}
+
+		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
+						      p_hwfn->p_main_ptt,
+						ECORE_OV_DRIVER_STATE_DISABLED);
+	}
+
+	return rc;
 }
 
 #define ECORE_HW_STOP_RETRY_LIMIT	(10)
@@ -3130,8 +3150,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 void ecore_hw_remove(struct ecore_dev *p_dev)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	int i;
 
+	if (IS_PF(p_dev))
+		ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt,
+					ECORE_OV_DRIVER_STATE_NOT_LOADED);
+
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cb3e0bd..e236f39 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1723,6 +1723,9 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 	case ECORE_OV_CLIENT_USER:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
 		break;
+	case ECORE_OV_CLIENT_VENDOR_SPEC:
+		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
 		return ECORE_INVAL;
@@ -1761,9 +1764,9 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE,
-			   drv_state, &resp, &param);
+			   drv_mb_param, &resp, &param);
 	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+		DP_ERR(p_hwfn, "Failed to send driver state\n");
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 4e954bd..614cf67 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -181,7 +181,8 @@ enum ecore_ov_config_method {
 
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
-	ECORE_OV_CLIENT_USER
+	ECORE_OV_CLIENT_USER,
+	ECORE_OV_CLIENT_VENDOR_SPEC
 };
 
 enum ecore_ov_driver_state {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4289d0b..4b23bb9 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -150,8 +150,11 @@ struct qed_common_ops {
 			    uint16_t sb_id, enum qed_sb_type type);
 
 	bool (*can_link_change)(struct ecore_dev *edev);
+
 	void (*update_msglvl)(struct ecore_dev *edev,
 			      uint32_t dp_module, uint8_t dp_level);
+
+	int (*send_drv_state)(struct ecore_dev *edev, bool active);
 };
 
 #endif /* _QEDE_IF_H */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a4d68a..f0033a1 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -668,6 +668,25 @@ static void qed_remove(struct ecore_dev *edev)
 	ecore_hw_remove(edev);
 }
 
+static int qed_send_drv_state(struct ecore_dev *edev, bool active)
+{
+	struct ecore_hwfn *hwfn = ECORE_LEADING_HWFN(edev);
+	struct ecore_ptt *ptt;
+	int status = 0;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt)
+		return -EAGAIN;
+
+	status = ecore_mcp_ov_update_driver_state(hwfn, ptt, active ?
+						  ECORE_OV_DRIVER_STATE_ACTIVE :
+						ECORE_OV_DRIVER_STATE_DISABLED);
+
+	ecore_ptt_release(hwfn, ptt);
+
+	return status;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
@@ -681,4 +700,5 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(drain, &qed_drain),
 	INIT_STRUCT_FIELD(slowpath_stop, &qed_slowpath_stop),
 	INIT_STRUCT_FIELD(remove, &qed_remove),
+	INIT_STRUCT_FIELD(send_drv_state, &qed_send_drv_state),
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
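
The drv_mb_param value in the hunk above packs the four firmware version
components into a single 32-bit word, one byte per component. A small helper
showing the same packing; the function name is illustrative, not part of the
driver:

	#include <stdint.h>

	static uint32_t pack_fw_ver(uint8_t major, uint8_t minor,
				    uint8_t rev, uint8_t eng)
	{
		return ((uint32_t)major << 24) | ((uint32_t)minor << 16) |
		       ((uint32_t)rev << 8) | eng;
	}

	/* pack_fw_ver(8, 18, 9, 0) == 0x08120900 for FW 8.18.9.0 */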

* [PATCH v5 03/62] net/qede/base: mask Rx buffer attention bits
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (2 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 02/62] net/qede/base: send FW version driver state to MFW Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 04/62] net/qede/base: print various indication on Tx-timeouts Rasesh Mody
                               ` (58 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    6 ++++++
 drivers/net/qede/base/reg_addr.h  |    3 +++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2d1e031..eef24cd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1051,6 +1051,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
+	/* @@@TMP:
+	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
+	 */
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
+
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3c369aa..21cbdbd 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1141,3 +1141,6 @@
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
 #define BAR0_MAP_REG_XSDM_RAM 0x1e00000UL
+
+/* 8.18.7.0 FW */
+#define BRB_REG_INT_MASK_10 0x3401b8UL
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 04/62] net/qede/base: print various indication on Tx-timeouts
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (3 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 03/62] net/qede/base: mask Rx buffer attention bits Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 05/62] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
                               ` (57 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Print various indications on Tx timeouts.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_int.c     |   27 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_int_api.h |   21 +++++++++++++++++++++
 drivers/net/qede/base/reg_addr.h      |    3 +++
 drivers/net/qede/qede_main.c          |   23 +++++++++++++++++++++++
 4 files changed, 74 insertions(+)

diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index b6b8e2d..e5a4359 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2255,3 +2255,30 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
 
 	return rc;
 }
+
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info)
+{
+	u16 sbid = p_sb->igu_sb_id;
+	int i;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_PRODUCER_MEMORY + sbid * 4);
+	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
+				    IGU_REG_CONSUMER_MEM + sbid * 4);
+
+	for (i = 0; i < PIS_PER_SB; i++)
+		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
+					      CAU_REG_PI_MEMORY +
+					      sbid * 4 * PIS_PER_SB +  i * 4);
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index a0d6a43..fdfcba8 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -41,6 +41,12 @@ struct ecore_sb_info {
 	struct ecore_dev *p_dev;
 };
 
+struct ecore_sb_info_dbg {
+	u32 igu_prod;
+	u32 igu_cons;
+	u16 pi[PIS_PER_SB];
+};
+
 struct ecore_sb_cnt_info {
 	int sb_cnt;
 	int sb_iov_cnt;
@@ -303,4 +309,19 @@ void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
  */
 void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable);
 
+/**
+ * @brief Read debug information regarding a given SB.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_sb - point to Status block for which we want to get info.
+ * @param p_info - pointer to struct to fill with information regarding SB.
+ *
+ * @return ECORE_SUCCESS if pointer is filled; failure otherwise.
+ */
+enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  struct ecore_sb_info *p_sb,
+					  struct ecore_sb_info_dbg *p_info);
+
 #endif
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 21cbdbd..3cc7fd4 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1144,3 +1144,6 @@
 
 /* 8.18.7.0 FW */
 #define BRB_REG_INT_MASK_10 0x3401b8UL
+
+#define IGU_REG_PRODUCER_MEMORY 0x182000UL
+#define IGU_REG_CONSUMER_MEM 0x183000UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index f0033a1..a604a5b 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -687,6 +687,29 @@ static int qed_send_drv_state(struct ecore_dev *edev, bool active)
 	return status;
 }
 
+static int qed_get_sb_info(struct ecore_dev *edev, struct ecore_sb_info *sb,
+			   u16 qid, struct ecore_sb_info_dbg *sb_dbg)
+{
+	struct ecore_hwfn *hwfn = &edev->hwfns[qid % edev->num_hwfns];
+	struct ecore_ptt *ptt;
+	int rc;
+
+	if (IS_VF(edev))
+		return -EINVAL;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "Can't acquire PTT\n");
+		return -EAGAIN;
+	}
+
+	memset(sb_dbg, 0, sizeof(*sb_dbg));
+	rc = ecore_int_get_sb_dbg(hwfn, ptt, sb, sb_dbg);
+
+	ecore_ptt_release(hwfn, ptt);
+	return rc;
+}
+
 const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
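
A sketch of how a caller might use the new helper when diagnosing a
Tx-timeout; 'edev', 'sb_info' and 'qid' are assumed to come from the
surrounding driver context, and calling qed_get_sb_info() directly assumes
it is reachable from that context (the hunk above only shows the helper
itself):

	struct ecore_sb_info_dbg sb_dbg;
	int i;

	/* Dump IGU producer/consumer and the CAU PI array for the queue's
	 * status block to see whether the device or the driver is stuck.
	 */
	if (!qed_get_sb_info(edev, sb_info, qid, &sb_dbg)) {
		DP_NOTICE(edev, false, "SB %#06x: prod=%u cons=%u\n",
			  sb_info->igu_sb_id, sb_dbg.igu_prod, sb_dbg.igu_cons);
		for (i = 0; i < PIS_PER_SB; i++)
			DP_NOTICE(edev, false, "  PI[%d]=%u\n",
				  i, sb_dbg.pi[i]);
	}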

* [PATCH v5 05/62] net/qede/base: utilize FW 8.18.9.0
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (4 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 04/62] net/qede/base: print various indication on Tx-timeouts Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 06/62] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
                               ` (56 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This change is in preparation for working with the new FW 8.18.9.0.
Rename the defines to use an E4_ prefix and the structs to use e4_.
This renaming adds support for future chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/common_hsi.h       |   15 +-
 drivers/net/qede/base/ecore_hsi_common.h |  770 +++++------
 drivers/net/qede/base/ecore_hsi_eth.h    | 2052 +++++++++++++++---------------
 drivers/net/qede/base/ecore_iov_api.h    |    4 +-
 drivers/net/qede/base/ecore_spq.c        |   20 +-
 drivers/net/qede/base/ecore_sriov.c      |    2 +-
 drivers/net/qede/base/ecore_sriov.h      |    4 +-
 7 files changed, 1447 insertions(+), 1420 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 2f84148..59e751f 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -107,20 +107,20 @@
 #define MAX_NUM_PFS	(MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
 
-#define MAX_NUM_VFS_K2	(192)
 #define MAX_NUM_VFS_BB	(120)
-#define MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2	(192)
+#define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 /* In both BB and K2, the VF number starts from 16, so arrays covering all
  * possible PFs and VFs need a constant for this size.
  */
 #define MAX_FUNCTION_NUMBER_BB	(MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2	(MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER	(MAX_NUM_PFS + E4_MAX_NUM_VFS)
 
 #define MAX_NUM_VPORTS_K2	(208)
 #define MAX_NUM_VPORTS_BB	(160)
@@ -149,9 +149,10 @@
 #define MAX_PHYS_VOQS		(NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES	(8)
-#define NUM_OF_LCIDS		(320)
-#define NUM_OF_LTIDS		(320)
+#define E4_NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_TASK_TYPES		(8)
+#define NUM_OF_LCIDS			(320)
+#define NUM_OF_LTIDS			(320)
 
 /* Clock values */
 #define MASTER_CLK_FREQ_E4		(375e6)
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index d978bb0..f934e68 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -75,306 +75,306 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct xstorm_core_conn_ag_ctx {
+struct e4_xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
 /* exist_in_qm1 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
 /* exist_in_qm2 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
 /* exist_in_qm3 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
 /* bit4 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
 /* bit6 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
 /* bit7 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
 /* bit9 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
 /* bit10 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
 /* bit11 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
 /* bit12 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
 /* bit13 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
 /* bit14 */
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
 /* bit15 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
 /* timer1cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
 /* timer2cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
 /* cf10 */
-#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
 /* cf11 */
-#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
 /* cf13 */
-#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
 /* cf14 */
-#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
 /* cf15 */
-#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
 /* cf_array_cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
 /* cf18 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
 /* cf19 */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
 /* cf21 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
 /* cf22 */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
 /* cf0en */
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
 /* cf1en */
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
 /* cf3en */
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
 /* cf4en */
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
 /* cf5en */
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
 /* cf6en */
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
 /* cf7en */
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
 /* cf8en */
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
 /* cf9en */
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
 /* cf11en */
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
 /* cf12en */
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
 /* cf13en */
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
 /* cf14en */
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
 /* cf15en */
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
 /* cf16en */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
 /* cf19en */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
 /* cf20en */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
 /* cf21en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
 /* cf22en */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
 /* cf23en */
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
 /* rule0en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
 /* rule1en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
 /* rule3en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
 /* rule4en */
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
 /* rule5en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
 /* rule6en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
 /* rule7en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
 /* rule8en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
 /* rule9en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
 /* rule11en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
 /* rule12en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
 /* rule13en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
 /* rule14en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
 /* rule15en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
 /* rule16en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
 /* rule17en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
 /* rule19en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
 /* rule20en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
 /* rule21en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
 /* rule22en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
 /* rule23en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
 /* rule24en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
 /* rule25en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
 /* bit17 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
 /* bit18 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
 /* bit19 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
 /* bit20 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
 /* bit21 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
 /* cf23 */
-#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -410,7 +410,7 @@ struct xstorm_core_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -428,89 +428,89 @@ struct xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct tstorm_core_conn_ag_ctx {
+struct e4_tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -532,63 +532,63 @@ struct tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_core_conn_ag_ctx {
+struct e4_ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -628,11 +628,11 @@ struct core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -1934,6 +1934,92 @@ enum dmae_cmd_src_enum {
 };
 
 
+struct e4_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
 /*
  * IGU cleanup command
  */
@@ -2017,44 +2103,6 @@ struct igu_msix_vector {
 };
 
 
-struct mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
 /*
  * per encapsulation type enabling flags
  */
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index e8373d7..9d2a118 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,315 +34,315 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct xstorm_eth_conn_ag_ctx {
+struct e4_xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
 /* bit4 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
 /* bit6 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
 /* bit7 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
 /* bit8 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
 /* bit9 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
 /* bit10 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
 /* bit11 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
 /* bit12 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
 /* bit13 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
 /* bit14 */
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
 /* bit15 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
 /* cf4 */
-#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
 /* cf5 */
-#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
 /* cf6 */
-#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
 /* cf7 */
-#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
 /* cf8 */
-#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
 /* cf9 */
-#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
 /* cf10 */
-#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
 /* cf11 */
-#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
 /* cf12 */
-#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
 /* cf13 */
-#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
 /* cf14 */
-#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
 /* cf15 */
-#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
 /* cf16 */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
 /* cf18 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
 /* cf19 */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
 /* cf20 */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
 /* cf21 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
 /* cf22 */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
 /* cf0en */
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
 /* cf1en */
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
 /* cf2en */
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
 /* cf3en */
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
 /* cf4en */
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
 /* cf5en */
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
 /* cf6en */
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
 /* cf7en */
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
 /* cf8en */
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
 /* cf9en */
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
 /* cf10en */
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
 /* cf11en */
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
 /* cf12en */
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
 /* cf13en */
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
 /* cf14en */
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
 /* cf15en */
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
 /* cf16en */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
 /* cf18en */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
 /* cf19en */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
 /* cf20en */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
 /* cf21en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
 /* cf22en */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
 /* cf23en */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
 /* rule0en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
 /* rule1en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
 /* rule2en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
 /* rule3en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
 /* rule4en */
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
 /* rule5en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
 /* rule6en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
 /* rule7en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
 /* rule8en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
 /* rule9en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
 /* bit16 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
 /* bit17 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
 /* bit18 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
 /* bit19 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
 /* bit20 */
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
 /* bit21 */
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
 /* cf23 */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
+	__le16 e5_reserved1 /* physical_q1 */;
 	__le16 edpm_num_bds /* physical_q2 */;
 	__le16 tx_bd_cons /* word3 */;
 	__le16 tx_bd_prod /* word4 */;
@@ -375,7 +375,7 @@ struct xstorm_eth_conn_ag_ctx {
 	u8 byte13 /* byte13 */;
 	u8 byte14 /* byte14 */;
 	u8 byte15 /* byte15 */;
-	u8 byte16 /* byte16 */;
+	u8 e5_reserved /* e5_reserved */;
 	__le16 word11 /* word11 */;
 	__le32 reg10 /* reg10 */;
 	__le32 reg11 /* reg11 */;
@@ -400,47 +400,47 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct ystorm_eth_conn_ag_ctx {
+struct e4_ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -454,89 +454,89 @@ struct ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct tstorm_eth_conn_ag_ctx {
+struct e4_tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -558,88 +558,88 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct ustorm_eth_conn_ag_ctx {
+struct e4_ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
 /* timer0cf */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
 /* timer1cf */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
 /* timer2cf */
-#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
 /* cf4 */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
 /* cf5 */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
 /* cf6 */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
 /* cf0en */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
 /* cf1en */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
 /* cf2en */
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
 /* cf3en */
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
 /* cf4en */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
 /* cf5en */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
 /* cf6en */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
 /* rule0en */
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
 /* rule1en */
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
 /* rule2en */
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
 /* rule3en */
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
 /* rule4en */
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
 /* rule5en */
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
 /* rule6en */
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
 /* rule7en */
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
 /* rule8en */
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -678,15 +678,15 @@ struct eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1480,6 +1480,668 @@ struct vport_update_ramrod_data {
 
 
 
+struct E4XstormEthConnAgCtxDqExtLdPart {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
+/* bit6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
+/* bit7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
+	u8 flags1;
+/* bit8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
+/* bit9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
+/* bit12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
+/* bit13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
+/* bit14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
+/* timer1cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
+/* timer2cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
+	u8 flags3;
+/* cf4 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
+/* cf5 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
+/* cf6 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
+/* cf7 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
+	u8 flags4;
+/* cf8 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
+/* cf9 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
+/* cf10 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
+/* cf11 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
+	u8 flags5;
+/* cf12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
+/* cf13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
+/* cf14 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
+/* cf15 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
+	u8 flags6;
+/* cf16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
+/* cf18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
+/* cf19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+/* cf20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
+/* cf21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
+/* cf22 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
+/* cf23 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+};
+
+
+struct e4_mstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_xstorm_eth_hw_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 eth_state /* state */;
+	u8 flags0;
+/* exist_in_qm0 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+/* exist_in_qm1 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+/* exist_in_qm2 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+/* exist_in_qm3 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+/* bit4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+/* cf_array_active */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+	u8 flags1;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+/* bit10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+/* bit11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+/* bit12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
+/* bit13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
+/* bit14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+/* bit15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+/* timer0cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+/* timer1cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+/* timer2cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+/* timer_stop_all */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+	u8 flags3;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+	u8 flags4;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+	u8 flags5;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+	u8 flags6;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+/* cf_array_cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+/* cf0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+/* cf1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+	u8 flags8;
+/* cf2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+/* cf3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+/* cf4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+/* cf5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+/* cf6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+/* cf7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+/* cf8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+/* cf9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+	u8 flags9;
+/* cf10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+/* cf11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+/* cf12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+/* cf13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+/* cf14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+/* cf15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+/* cf16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+/* cf_array_cf_en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+/* cf18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+/* cf19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+/* cf20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+/* cf21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+/* cf22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+/* cf23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+/* rule1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+	u8 flags11;
+/* rule2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+/* rule3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+/* rule4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+/* rule5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+/* rule6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+/* rule7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+/* rule8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+/* rule9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+	u8 flags12;
+/* rule10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+/* rule11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+/* rule12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+/* rule13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+/* rule14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+/* rule15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+/* rule16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+/* rule17en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+	u8 flags13;
+/* rule18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+/* rule19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+/* rule20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+/* rule21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+/* rule22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+/* rule23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+/* rule24en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+/* rule25en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+/* bit16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+/* bit17 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+/* bit18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+/* bit19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+/* bit20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+/* bit21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+	u8 edpm_event_id /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 e5_reserved1 /* physical_q1 */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 tx_class /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+};
+
+
+
 /*
  * GFT CAM line struct
  */
@@ -1730,690 +2392,4 @@ enum gft_vlan_select {
 };
 
 
-struct mstorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-/* exist_in_qm1 */
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
-	u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-struct xstormEthConnAgCtxDqExtLdPart {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
-#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-	u8 byte3 /* byte3 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	u8 byte6 /* byte6 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-};
-
-
-
-struct xstorm_eth_hw_conn_ag_ctx {
-	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
-	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
-	u8 flags1;
-/* bit8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
-	u8 flags2;
-/* timer0cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
-	u8 flags3;
-/* cf4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
-	u8 flags4;
-/* cf8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
-	u8 flags5;
-/* cf12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
-	u8 flags6;
-/* cf16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
-	u8 flags7;
-/* cf20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
-	u8 flags8;
-/* cf2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
-	u8 flags9;
-/* cf10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
-	u8 flags10;
-/* cf18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
-	u8 flags11;
-/* rule2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
-	u8 flags12;
-/* rule10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
-	u8 flags13;
-/* rule18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
-	u8 flags14;
-/* bit16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
-	u8 edpm_event_id /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 quota /* physical_q1 */;
-	__le16 edpm_num_bds /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_prod /* word4 */;
-	__le16 tx_class /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-};
-
-
 #endif /* __ECORE_HSI_ETH__ */
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 24a43d3..9775360 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -701,7 +701,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ * @return E4_MAX_NUM_VFS in case no further active VFs, otherwise index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -709,7 +709,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS;						\
+	     _i < E4_MAX_NUM_VFS;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 1f35d6c..9035d3b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -191,15 +191,17 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
-	SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-	SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-		  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-	/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-	 *           XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-	 */
-	SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-		  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
+		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
+			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
+		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
+		 */
+		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
+			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+	}
 
 	/* CDU validation - FIXME currently disabled */
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d1c809c..b051678 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3487,7 +3487,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS;
+	return E4_MAX_NUM_VFS;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 884a90c..e9ccc79 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -15,7 +15,7 @@
 #include "ecore_hsi_common.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -152,7 +152,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS];
+	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 	u16			base_vport_id;
-- 
1.7.10.3

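The paired *_MASK/*_SHIFT defines that dominate this patch are never read or
written directly; the base driver goes through generic field macros, as the
ecore_spq.c hunk shows with SET_FIELD(). Below is a minimal, self-contained
sketch of that convention. The simplified SET_FIELD()/GET_FIELD() bodies are
assumptions written for illustration (the real token-pasting macros are
defined in the ecore base headers); the E4_MSTORM_* defines are copied from
the e4_mstorm_eth_conn_ag_ctx flags1 block above.

/*
 * Minimal sketch of the MASK/SHIFT convention used by the E4_* context
 * defines. The SET_FIELD()/GET_FIELD() bodies here are simplified
 * illustrations, not the exact ecore definitions.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint8_t u8;

/* Copied from the e4_mstorm_eth_conn_ag_ctx flags1 defines above. */
#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK  0x1
#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT 0
#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK  0x1
#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT 1

/* Clear the field's bits, then OR in the new value at its position. */
#define SET_FIELD(value, name, flag)					\
do {									\
	(value) &= ~((name##_MASK) << (name##_SHIFT));			\
	(value) |= (((flag) & (name##_MASK)) << (name##_SHIFT));	\
} while (0)

/* Shift the field down and strip off the neighboring fields. */
#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))

int main(void)
{
	u8 flags1 = 0;

	SET_FIELD(flags1, E4_MSTORM_ETH_CONN_AG_CTX_CF0EN, 1);
	SET_FIELD(flags1, E4_MSTORM_ETH_CONN_AG_CTX_CF1EN, 1);

	printf("flags1=0x%02x cf0en=%u\n", flags1,
	       (unsigned int)GET_FIELD(flags1,
				       E4_MSTORM_ETH_CONN_AG_CTX_CF0EN));
	return 0;
}

Because each MASK is the unshifted width of its field, the same two macros
serve the 1-bit *EN flags and the 2-bit CF fields (mask 0x3) alike, which is
why the generated HSI headers only ever emit MASK/SHIFT pairs.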
^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 06/62] net/qede: upgrade the FW to 8.18.9.0
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (5 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 05/62] net/qede/base: utilize FW 8.18.9.0 Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 07/62] net/qede/base: decrease maximum HW func per device Rasesh Mody
                               ` (55 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch adds the changes required to upgrade the FW to 8.18.9.0.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 doc/guides/nics/qede.rst                      |    8 +-
 drivers/net/qede/base/bcm_osal.h              |    1 +
 drivers/net/qede/base/common_hsi.h            |  176 +++-
 drivers/net/qede/base/ecore_dcbx.c            |    4 +-
 drivers/net/qede/base/ecore_dev.c             |  204 ++--
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   46 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++--
 drivers/net/qede/base/ecore_hsi_eth.h         |   17 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1378 ++++++++++++++++---------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  161 ++-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++++-----
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/reg_addr.h              |   53 +
 drivers/net/qede/qede_main.c                  |    2 +-
 18 files changed, 1886 insertions(+), 1126 deletions(-)

diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 4694ec0..36b26b3 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -77,10 +77,10 @@ Supported QLogic Adapters
 Prerequisites
 -------------
 
-- Requires firmware version **8.14.x.** and management firmware
-  version **8.14.x or higher**. Firmware may be available
+- Requires firmware version **8.18.x.** and management firmware
+  version **8.18.x or higher**. Firmware may be available
   inbox in certain newer Linux distros under the standard directory
-  ``E.g. /lib/firmware/qed/qed_init_values-8.14.6.0.bin``
+  ``E.g. /lib/firmware/qed/qed_init_values-8.18.9.0.bin``
 
 - If the required firmware files are not available then visit
   `QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
@@ -119,7 +119,7 @@ enabling debugging options may affect system performance.
 - ``CONFIG_RTE_LIBRTE_QEDE_FW`` (default **""**)
 
   Gives absolute path of firmware file.
-  ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.14.6.0.bin"``
+  ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.18.9.0.bin"``
   Empty string indicates driver will pick up the firmware file
   from the default location.
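
  The empty-string fallback above is a simple path-selection rule; a minimal
  sketch of that rule in C (``qede_select_fw_path`` is a hypothetical helper,
  not the driver's actual loader code):

  static const char *qede_select_fw_path(const char *cfg_path)
  {
  	/* A non-empty build-time path overrides the default location. */
  	static const char def_path[] =
  		"/lib/firmware/qed/qed_init_values_zipped-8.18.9.0.bin";

  	return (cfg_path != NULL && cfg_path[0] != '\0') ? cfg_path : def_path;
  }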
 
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 88246b7..0d239c9 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -398,6 +398,7 @@ u32 qede_osal_log2(u32);
 #define OSAL_STRCPY(dst, string) strcpy(dst, string)
 #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
 #define OSAL_STRCMP(str1, str2) strcmp(str1, str2)
+#define OSAL_STRTOUL(str, base, res) 0
 
 #define OSAL_INLINE inline
 #define OSAL_REG_ADDR(_p_hwfn, _offset) \
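
OSAL_STRTOUL is stubbed out to 0 above. On a platform that needs a working
mapping, a sketch along the lines of the neighbouring string wrappers could
look like this (an assumption, not the in-tree definition):

#include <stdlib.h>

/* Parse str in the given base into *res and return 0 for success,
 * matching the stub's return value.
 */
#define OSAL_STRTOUL(str, base, res) \
	((*(res) = strtoul((str), NULL, (base))), 0)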
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 59e751f..cbcde22 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -78,8 +78,16 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-#define MAX_NUM_LL2_RX_QUEUES					32
-#define MAX_NUM_LL2_TX_STATS_COUNTERS			32
+/*
+ * Usually LL2 queues are opened in TX-RX pairs.
+ * There is a hard restriction on the number of RX queues (limited by Tstorm
+ * RAM) and TX counters (Pstorm RAM).
+ * The number of TX queues is almost unlimited.
+ * The constants differ so as to allow asymmetric LL2 connections.
+ */
+
+#define MAX_NUM_LL2_RX_QUEUES					48
+#define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
 /****************************************************************************/
@@ -89,8 +97,8 @@
 
 
 #define FW_MAJOR_VERSION		8
-#define FW_MINOR_VERSION		14
-#define FW_REVISION_VERSION		6
+#define FW_MINOR_VERSION		18
+#define FW_REVISION_VERSION		9
 #define FW_ENGINEERING_VERSION	0
 
 /***********************/
@@ -110,6 +118,7 @@
 #define MAX_NUM_VFS_BB	(120)
 #define MAX_NUM_VFS_K2	(192)
 #define E4_MAX_NUM_VFS	(MAX_NUM_VFS_K2)
+#define COMMON_MAX_NUM_VFS (240)
 
 #define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
@@ -177,6 +186,13 @@
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
 
+#define	CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT				(0)
+#define	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT	(1)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE				(2)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_REGION				(3)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
+
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -472,7 +488,6 @@
 #define PXP_BAR_DQ                                          1
 
 /* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS		12
 #define PXP_PER_PF_ENTRY_SIZE		8
 #define PXP_NUM_GLOBAL_WINDOWS		243
 #define PXP_GLOBAL_ENTRY_SIZE		4
@@ -497,6 +512,8 @@
 #define PXP_PF_ME_OPAQUE_ADDR		0x1f8
 #define PXP_PF_ME_CONCRETE_ADDR		0x1fc
 
+#define PXP_NUM_PF_WINDOWS		12
+
 #define PXP_EXTERNAL_BAR_PF_WINDOW_START	0x1000
 #define PXP_EXTERNAL_BAR_PF_WINDOW_NUM		PXP_NUM_PF_WINDOWS
 #define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE	0x1000
@@ -519,8 +536,6 @@
 	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
 
 /* PF BAR */
-/*#define PXP_BAR0_START_GRC 0x1000 */
-/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */
 #define PXP_BAR0_START_GRC                      0x0000
 #define PXP_BAR0_GRC_LENGTH                     0x1C00000
 #define PXP_BAR0_END_GRC                        \
@@ -589,7 +604,7 @@
 #define SDM_OP_GEN_TRIG_AGG_INT			2
 #define SDM_OP_GEN_TRIG_LOADER			4
 #define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
-#define SDM_OP_GEN_TRIG_RELEASE_THREAD	7
+#define SDM_OP_GEN_TRIG_INC_ORDER_CNT	9
 
 /***********************************************************/
 /* Completion types                                        */
@@ -612,6 +627,7 @@
 #define SDM_COMP_TYPE_RELEASE_THREAD	7
 /* Write to local RAM as a completion */
 #define SDM_COMP_TYPE_RAM		8
+#define SDM_COMP_TYPE_INC_ORDER_CNT	9 /* Applicable only for E4 */
 
 
 /******************/
@@ -881,7 +897,7 @@ enum db_dest {
  */
 enum db_dpm_type {
 	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
-	DPM_ROCE /* RoCE DPM- to NIG */,
+	DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */,
 /* L2 DPM inline- to PBF, with packet data on doorbell */
 	DPM_L2_INLINE,
 	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
@@ -968,42 +984,42 @@ struct db_pwm_addr {
 };
 
 /*
- * Parameters to RoCE firmware, passed in EDPM doorbell
+ * Parameters to RDMA firmware, passed in EDPM doorbell
  */
-struct db_roce_dpm_params {
+struct db_rdma_dpm_params {
 	__le32 params;
 /* Size in QWORD-s of the DPM burst */
-#define DB_ROCE_DPM_PARAMS_SIZE_MASK            0x3F
-#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT           0
-/* Type of DPM transacation (DPM_ROCE) (use enum db_dpm_type) */
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK        0x3
-#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT       6
-/* opcode for ROCE operation */
-#define DB_ROCE_DPM_PARAMS_OPCODE_MASK          0xFF
-#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT         8
+#define DB_RDMA_DPM_PARAMS_SIZE_MASK            0x3F
+#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT           0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK        0x3
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT       6
+/* opcode for RDMA operation */
+#define DB_RDMA_DPM_PARAMS_OPCODE_MASK          0xFF
+#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT         8
 /* the size of the WQE payload in bytes */
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
-#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT       16
-#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK       0x1
-#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT      27
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK        0x7FF
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT       16
+#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK       0x1
+#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT      27
 /* RoCE completion flag */
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
-#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
-#define DB_ROCE_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
-#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT          29
-#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK       0x3
-#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT      30
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK  0x1
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_RDMA_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
+#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT          29
+#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK       0x3
+#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT      30
 };
 
 /*
- * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a
+ * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
  */
-struct db_roce_dpm_data {
+struct db_rdma_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 prod_val /* aggregated value to update */;
-/* parameters passed to RoCE firmware */
-	struct db_roce_dpm_params params;
+/* parameters passed to RDMA firmware */
+	struct db_rdma_dpm_params params;
 };
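
For reference, the MASK/SHIFT pairs above pack into a single params dword.
A hedged sketch of that composition (host-endian for brevity; the real field
is __le32, and the helper name is illustrative):

#include <stdint.h>

static uint32_t rdma_dpm_params(uint32_t size_qw, uint32_t opcode,
				uint32_t wqe_size)
{
	uint32_t p = 0;

	p |= (size_qw & 0x3F) << 0;	/* DB_RDMA_DPM_PARAMS_SIZE */
	p |= (1u & 0x3) << 6;		/* DPM_TYPE: DPM_RDMA is 1 per the
					 * enum db_dpm_type ordering above
					 */
	p |= (opcode & 0xFF) << 8;	/* DB_RDMA_DPM_PARAMS_OPCODE */
	p |= (wqe_size & 0x7FF) << 16;	/* DB_RDMA_DPM_PARAMS_WQE_SIZE */

	return p;
}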
 
 /* Igu interrupt command */
@@ -1136,6 +1152,68 @@ struct parsing_and_err_flags {
 
 
 /*
+ * Parsing error flags bitmap.
+ */
+struct parsing_err_flags {
+	__le16 flags;
+/* MAC error indication */
+#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1
+#define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT                         0
+/* truncation error indication */
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK                        0x1
+#define PARSING_ERR_FLAGS_TRUNC_ERROR_SHIFT                       1
+/* packet too small indication */
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK                      0x1
+#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT                     2
+/* Header Missing Tag */
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT               3
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK             0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT            4
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK    0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT   5
+/* set this error if: 1. total-len is smaller than hdr-len 2. total-ip-len
+ * indicates number that is bigger than real packet length 3. tunneling:
+ * total-ip-length of the outer header points to offset that is smaller than
+ * the one pointed to by the total-ip-len of the inner hdr.
+ */
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK           0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT          6
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK                  0x1
+#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT                 7
+/* from frame cracker output. for either TCP or UDP */
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK          0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT         8
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK               0x1
+#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT              9
+/* cksm calculated and value isn't 0xffff or L4-cksm-wasnt-calculated for any
+ * reason, like: udp/ipv4 checksum is 0 etc.
+ */
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK               0x1
+#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT              10
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK        0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT       11
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK  0x1
+#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12
+/* set if geneve option size was over 32 byte */
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK            0x1
+#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT           13
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK           0x1
+#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT          14
+/* from frame cracker output */
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK              0x1
+#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT             15
+};
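
Each error above is a single bit of flags; a minimal decode sketch (the
helper name and host byte order are assumptions for the example):

#include <stdint.h>
#include <stdio.h>

static void dump_parse_errors(uint16_t flags)
{
	if (flags & (1u << 7))	/* PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR */
		printf("IPv4 header checksum error\n");
	if (flags & (1u << 10))	/* PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR */
		printf("inner L4 checksum error\n");
}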
+
+
+/*
  * Pb context
  */
 struct pb_context {
@@ -1492,49 +1570,57 @@ struct tdif_task_context {
 struct timers_context {
 	__le32 logical_client_0;
 /* Expiration time of logical client 0 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED0_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED0_SHIFT            27
 /* Valid bit of logical client 0 */
 #define TIMERS_CONTEXT_VALIDLC0_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
 /* Active bit of logical client 0 */
 #define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED0_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED0_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
 	__le32 logical_client_1;
 /* Expiration time of logical client 1 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED2_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED2_SHIFT            27
 /* Valid bit of logical client 1 */
 #define TIMERS_CONTEXT_VALIDLC1_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
 /* Active bit of logical client 1 */
 #define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED3_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED3_SHIFT            30
 	__le32 logical_client_2;
 /* Expiration time of logical client 2 */
-#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0x7FFFFFF
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
+#define TIMERS_CONTEXT_RESERVED4_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED4_SHIFT            27
 /* Valid bit of logical client 2 */
 #define TIMERS_CONTEXT_VALIDLC2_MASK              0x1
 #define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
 /* Active bit of logical client 2 */
 #define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1
 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
-#define TIMERS_CONTEXT_RESERVED2_MASK             0x3
-#define TIMERS_CONTEXT_RESERVED2_SHIFT            30
+#define TIMERS_CONTEXT_RESERVED5_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED5_SHIFT            30
 	__le32 host_expiration_fields;
 /* Expiration time on host (closest one) */
-#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0xFFFFFFF
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0x7FFFFFF
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_RESERVED6_MASK             0x1
+#define TIMERS_CONTEXT_RESERVED6_SHIFT            27
 /* Valid bit of host expiration */
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
-#define TIMERS_CONTEXT_RESERVED3_MASK             0x7
-#define TIMERS_CONTEXT_RESERVED3_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED7_MASK             0x7
+#define TIMERS_CONTEXT_RESERVED7_SHIFT            29
 };
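
The expiration-time fields above shrink from 28 to 27 bits, with bit 27
becoming reserved. A short sketch of how such MASK/SHIFT pairs are read,
using the GET_FIELD convention of these headers (restated here as an
assumption):

#include <stdint.h>

#define GET_FIELD(value, name) \
	(((value) >> name##_SHIFT) & name##_MASK)

#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK	0x7FFFFFF
#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT	0

/* Bits 0..26 carry the expiration time; bit 27 is now reserved. */
static uint32_t lc0_expiration(uint32_t logical_client_0)
{
	return GET_FIELD(logical_client_0, TIMERS_CONTEXT_EXPIRATIONTIMELC0);
}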
 
 
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 7380fd8..102774d 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -126,7 +126,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 	else if (enable)
 		p_data->arr[type].update = UPDATE_DCB;
 	else
-		p_data->arr[type].update = DONT_UPDATE_DCB_DHCP;
+		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
 	if (p_hwfn->hw_info.personality == personality) {
@@ -938,7 +938,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 	p_dest->pf_id = p_src->pf_id;
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
-	p_dest->update_eth_dcb_data_flag = update_flag;
+	p_dest->update_eth_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index eef24cd..f82f5e6 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -814,7 +814,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 	int hw_mode = 0;
 
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
-		hw_mode |= 1 << MODE_BB_B0;
+		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
 	} else {
@@ -886,29 +886,36 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u32 pl_hv = 1;
 	int i;
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		pl_hv |= 0x600;
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev))
+			pl_hv |= 0x600;
+	}
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
+	if (CHIP_REV_IS_EMUL(p_dev) &&
+	    (ECORE_IS_AH(p_dev)))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
+			 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB_B0, 4);
+	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-			 (p_hwfn->p_dev->num_ports_in_engines >> 1));
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_AH(p_dev)) {
+			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+				 (p_dev->num_ports_in_engines >> 1));
 
-		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-			 p_hwfn->p_dev->num_ports_in_engines == 4 ? 0 : 3);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+				 p_dev->num_ports_in_engines == 4 ? 0 : 3);
+		}
 	}
 
 	/* Poll on RBC */
@@ -1051,12 +1058,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
-	/* @@@TMP:
-	 * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention.
-	 */
-	if (ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000);
-
 	return rc;
 }
 
@@ -1072,20 +1073,19 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n",
-		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) |
+		   ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) |
 		   (8 << PMEG_IF_BYTE_COUNT),
 		   (reg_type << 25) | (addr << 8) | port,
 		   (u32)((data >> 32) & 0xffffffff),
 		   (u32)(data & 0xffffffff));
 
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0,
-		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) &
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB,
+		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) &
 		  0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT));
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB,
 		 (reg_type << 25) | (addr << 8) | port);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
-		 data & 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0,
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff);
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB,
 		 (data >> 32) & 0xffffffff);
 }
 
@@ -1101,48 +1101,13 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 #define XLMAC_PAUSE_CTRL (0x60d)
 #define XLMAC_PFC_CTRL (0x60e)
 
-static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
-	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0 + (port << 2) * NWM_REG_MAC0_SIZE;
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_SHIFT) |
-		 (port << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_SHIFT)
-		 | (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_SHIFT);
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS,
-		 (0xA << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_SHIFT) |
-		 (8 << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_SHIFT));
-
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG, 0xa853);
-}
-
-static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt)
-{
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
 	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
 
-	if (ECORE_IS_AH(p_hwfn->p_dev)) {
-		ecore_emul_link_init_ah(p_hwfn, p_ptt);
-		return;
-	}
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -1171,8 +1136,53 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_link_init(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt, u8 port)
+static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	u8 port = p_hwfn->port_id;
+	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
+
+	DP_INFO(p_hwfn->p_dev, "Configuring Emulation Link %02x\n", port);
+
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+		 (port <<
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+		 (0xA <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		 (8 <<
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
+		 0xa853);
+}
+
+static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt)
+{
+	if (ECORE_IS_AH(p_hwfn->p_dev))
+		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
+	else /* BB */
+		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+}
+
+static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,  u8 port)
 {
 	int port_offset = port ? 0x800 : 0;
 	u32 xmac_rxctrl = 0;
@@ -1185,10 +1195,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
 		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Set */
 
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1);
 
 	/* Set the number of ports on the Warp Core to 10G */
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE, 3);
+	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE_BB, 3);
 
 	/* Soft reset of XMAC */
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
@@ -1199,20 +1209,21 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
 
 	/* FIXME: move to common end */
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE + port_offset, 0x20);
+		ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE_BB + port_offset, 0x20);
 
 	/* Set Max packet size: initialize XMAC block register for port 0 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE + port_offset, 0x2710);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE_BB + port_offset, 0x2710);
 
 	/* CRC append for Tx packets: init XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO + port_offset, 0xC800);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO_BB + port_offset, 0xC800);
 
 	/* Enable TX and RX: initialize XMAC block register for port 1 */
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL + port_offset,
-		 XMAC_REG_CTRL_TX_EN | XMAC_REG_CTRL_RX_EN);
-	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset);
-	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE;
-	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset, xmac_rxctrl);
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL_BB + port_offset,
+		 XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB);
+	xmac_rxctrl = ecore_rd(p_hwfn, p_ptt,
+			       XMAC_REG_RX_CTRL_BB + port_offset);
+	xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB;
+	ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL_BB + port_offset, xmac_rxctrl);
 }
 #endif
 
@@ -1233,7 +1244,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
 		if (ECORE_IS_AH(p_hwfn->p_dev))
 			return ECORE_SUCCESS;
-		ecore_link_init(p_hwfn, p_ptt, p_hwfn->port_id);
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		if (p_hwfn->p_dev->num_hwfns > 1) {
 			/* Activate OPTE in CMT */
@@ -1667,7 +1679,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 * out that these registers get initialized during the call to
 		 * ecore_mcp_load_req request. So we need to reread them here
 		 * to get the proper shadow register value.
-		 * Note: This is a workaround for the missinginig MFW
+		 * Note: This is a workaround for the missing MFW
 		 * initialization. It may be removed once the implementation
 		 * is done.
 		 */
@@ -2033,22 +2045,22 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_88_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_8C_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_90_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_90_F0_BB, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+			 PGLUE_B_REG_PGL_ADDR_94_F0_BB, 0);
 	}
 
 	/* Clean Previous errors if such exist */
@@ -2643,7 +2655,12 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 	 * In case of CMT in BB, only the "even" functions are enabled, and thus
 	 * the number of functions for both hwfns is learnt from the same bits.
 	 */
-	reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
+	if (ECORE_IS_BB(p_dev) || ECORE_IS_AH(p_dev)) {
+		reg_function_hide = ecore_rd(p_hwfn, p_ptt,
+					     MISCS_REG_FUNCTION_HIDE_BB_K2);
+	} else { /* E5 */
+		reg_function_hide = 0;
+	}
 
 	if (reg_function_hide & 0x1) {
 		if (ECORE_IS_BB(p_dev)) {
@@ -2709,8 +2726,7 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 		port_mode = 1;
 	else
 #endif
-		port_mode = ecore_rd(p_hwfn, p_ptt,
-				     CNIG_REG_NW_PORT_MODE_BB_B0);
+	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
 
 	if (port_mode < 3) {
 		p_hwfn->p_dev->num_ports_in_engines = 1;
@@ -2725,8 +2741,8 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
+static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt)
 {
 	u32 port;
 	int i;
@@ -2755,7 +2771,8 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
+					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					(i * 4));
 			if (port & 1)
 				p_hwfn->p_dev->num_ports_in_engines++;
 		}
@@ -2767,7 +2784,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
 	else
-		ecore_hw_info_port_num_ah(p_hwfn, p_ptt);
+		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
 }
 
 static enum _ecore_status_t
@@ -3076,12 +3093,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7);
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
 	}
 #endif
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 070588d..2acd864 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,43 +10,43 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    Chips: BB_B0 K2 E5 */
+/* Access:RW   DataWidth:0x20    */
 #define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
 
 #endif
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index f934e68..3042ed5 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -836,7 +836,12 @@ struct core_rx_fast_path_cqe {
 	__le16 packet_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-	__le32 reserved[4];
+/* bitmap: each bit represents a specific error. Error indications are
+ * provided by the cracker; see the spec for a detailed description.
+ */
+	struct parsing_err_flags err_flags;
+	__le16 reserved0;
+	__le32 reserved1[3];
 };
 
 /*
@@ -1042,13 +1047,13 @@ struct core_tx_stop_ramrod_data {
 /*
  * Enum flag for what type of dcb data to update
  */
-enum dcb_dhcp_update_flag {
+enum dcb_dscp_update_mode {
 /* use when no change should be done to dcb data */
-	DONT_UPDATE_DCB_DHCP,
+	DONT_UPDATE_DCB_DSCP,
 	UPDATE_DCB /* use to update only l2 (vlan) priority */,
-	UPDATE_DSCP /* use to update only l3 dhcp */,
-	UPDATE_DCB_DSCP /* update vlan pri and dhcp */,
-	MAX_DCB_DHCP_UPDATE_FLAG
+	UPDATE_DSCP /* use to update only l3 dscp */,
+	UPDATE_DCB_DSCP /* update vlan pri and dscp */,
+	MAX_DCB_DSCP_UPDATE_FLAG
 };
 
 
@@ -1232,6 +1237,10 @@ enum iwarp_ll2_tx_queues {
 	IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
 /* LL2 queue for unaligned packets sent aligned by the driver */
 	IWARP_LL2_ALIGNED_TX_QUEUE,
+/* LL2 queue for unaligned packets sent aligned and right-trimmed by the
+ * driver
+ */
+	IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE,
 	IWARP_LL2_ERROR /* Error indication */,
 	MAX_IWARP_LL2_TX_QUEUES
 };
@@ -1446,13 +1455,13 @@ struct pf_update_tunnel_config {
  */
 struct pf_update_ramrod_data {
 	u8 pf_id;
-	u8 update_eth_dcb_data_flag /* Update Eth DCB  data indication */;
-	u8 update_fcoe_dcb_data_flag /* Update FCOE DCB  data indication */;
-	u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB  data indication */;
-	u8 update_roce_dcb_data_flag /* Update ROCE DCB  data indication */;
+	u8 update_eth_dcb_data_mode /* Update Eth DCB  data indication */;
+	u8 update_fcoe_dcb_data_mode /* Update FCOE DCB  data indication */;
+	u8 update_iscsi_dcb_data_mode /* Update iSCSI DCB  data indication */;
+	u8 update_roce_dcb_data_mode /* Update ROCE DCB  data indication */;
 /* Update RROCE (RoceV2) DCB  data indication */
-	u8 update_rroce_dcb_data_flag;
-	u8 update_iwarp_dcb_data_flag /* Update IWARP DCB  data indication */;
+	u8 update_rroce_dcb_data_mode;
+	u8 update_iwarp_dcb_data_mode /* Update IWARP DCB  data indication */;
 	u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
 	struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
 	struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
@@ -1611,6 +1620,8 @@ struct tstorm_per_port_stat {
 	struct regpair fcoe_irregular_pkt;
 /* packet is an ROCE irregular packet */
 	struct regpair roce_irregular_pkt;
+/* packet is an IWARP irregular packet */
+	struct regpair iwarp_irregular_pkt;
 /* packet is an ETH irregular packet */
 	struct regpair eth_irregular_pkt;
 /* packet is an TOE irregular packet */
@@ -1861,8 +1872,11 @@ struct dmae_cmd {
 #define DMAE_CMD_SRC_VF_ID_SHIFT       0
 #define DMAE_CMD_DST_VF_ID_MASK        0xFF /* Destination VF id */
 #define DMAE_CMD_DST_VF_ID_SHIFT       8
-	__le32 comp_addr_lo /* PCIe completion address low or grc address */;
-/* PCIe completion address high or reserved (if completion address is in GRC) */
+/* PCIe completion address low in bytes or GRC completion address in DW */
+	__le32 comp_addr_lo;
+/* PCIe completion address high in bytes or reserved (if completion address is
+ * GRC)
+ */
 	__le32 comp_addr_hi;
 	__le32 comp_val /* Value to write to completion address */;
 	__le32 crc32 /* crc16 result */;
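
The reworded comments pin down a unit difference: PCIe completion addresses
are byte addresses, while GRC completion addresses are dword-indexed. A
hedged helper sketch of that conversion (the function name is illustrative):

#include <stdint.h>

static uint32_t dmae_comp_addr_lo(uint64_t byte_addr, int is_grc)
{
	if (is_grc)
		return (uint32_t)(byte_addr >> 2);	/* bytes -> dwords */

	return (uint32_t)(byte_addr & 0xffffffffULL);	/* low 32 bits */
}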
@@ -2250,10 +2264,6 @@ struct sdm_op_gen {
 #define SDM_OP_GEN_RESERVED_SHIFT   20
 };
 
-
-
-
-
 struct ystorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index effb6ed..917e8f4 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -93,10 +93,12 @@ enum block_addr {
 	GRCBASE_PHY_PCIE = 0x620000,
 	GRCBASE_LED = 0x6b8000,
 	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_RGFS = 0x19d0000,
-	GRCBASE_TGFS = 0x19e0000,
-	GRCBASE_PTLD = 0x19f0000,
-	GRCBASE_YPLD = 0x1a10000,
+	GRCBASE_RGFS = 0x1fa0000,
+	GRCBASE_RGSRC = 0x1fa8000,
+	GRCBASE_TGFS = 0x1fb0000,
+	GRCBASE_TGSRC = 0x1fb8000,
+	GRCBASE_PTLD = 0x1fc0000,
+	GRCBASE_YPLD = 0x1fe0000,
 	GRCBASE_MISC_AEU = 0x8000,
 	GRCBASE_BAR0_MAP = 0x1c00000,
 	MAX_BLOCK_ADDR
@@ -184,7 +186,9 @@ enum block_id {
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
 	BLOCK_RGFS,
+	BLOCK_RGSRC,
 	BLOCK_TGFS,
+	BLOCK_TGSRC,
 	BLOCK_PTLD,
 	BLOCK_YPLD,
 	BLOCK_MISC_AEU,
@@ -208,6 +212,10 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
+	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
+	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
+	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -219,8 +227,8 @@ enum bin_dbg_buffer_type {
 struct dbg_attn_bit_mapping {
 	__le16 data;
 /* The index of an attention in the blocks attentions list
- * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits
- * (if is_unused_idx_cnt=1)
+ * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits
+ * (if is_unused_bit_cnt=1)
  */
 #define DBG_ATTN_BIT_MAPPING_VAL_MASK                0x7FFF
 #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT               0
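
The corrected comment describes a tagged encoding: the low 15 bits of data
hold either an attention index or a count of unused bits, selected by the
remaining bit. A minimal decode sketch (the selector define falls outside
this hunk, so treating bit 15 as the flag is a deduction from the 0x7FFF
value mask):

#include <stdint.h>

static void decode_attn_mapping(uint16_t data, uint16_t *val,
				int *is_unused_bit_cnt)
{
	*val = data & 0x7FFF;		/* DBG_ATTN_BIT_MAPPING_VAL */
	*is_unused_bit_cnt = (data >> 15) & 0x1;
}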
@@ -269,10 +277,10 @@ struct dbg_attn_reg_result {
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT  0
 /* Number of attention indexes in this register */
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24
+/* The offset of this register's attentions within the block's attention
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le16 reserved;
@@ -289,7 +297,7 @@ struct dbg_attn_block_result {
 /* Value from dbg_attn_type enum */
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
-/* Number of registers in the blok in which at least one attention bit is set */
+/* Number of registers in block in which at least one attention bit is set */
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK   0x3F
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT  2
 /* Offset of this registers block attention names in the attention name offsets
@@ -324,17 +332,17 @@ struct dbg_mode_hdr {
  */
 struct dbg_attn_reg {
 	struct dbg_mode_hdr mode /* Mode header */;
-/* Offset of this registers block attention indexes (values in the range
- * 0..number of block attentions)
+/* The offset of this register's attentions within the block's attention
+ * list (a value in the range 0..number of block attentions-1)
  */
 	__le16 attn_idx_offset;
 	__le32 data;
 /* STS attention register GRC address (in dwords) */
 #define DBG_ATTN_REG_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_STS_ADDRESS_SHIFT  0
-/* Number of attention indexes in this register */
-#define DBG_ATTN_REG_NUM_ATTN_IDX_MASK  0xFF
-#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24
+/* Number of attentions in this register */
+#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24
 /* STS_CLR attention register GRC address (in dwords) */
 	__le32 sts_clr_address;
 /* MASK attention register GRC address (in dwords) */
@@ -354,6 +362,53 @@ enum dbg_attn_type {
 
 
 /*
+ * Debug Bus block data
+ */
+struct dbg_bus_block {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the Debug Bus lines array. */
+	__le16 lines_offset;
+};
+
+
+/*
+ * Debug Bus block user data
+ */
+struct dbg_bus_block_user_data {
+/* Number of debug lines in this block (excluding signature & latency events) */
+	u8 num_of_lines;
+/* Indicates if this block has a latency events debug line (0/1). */
+	u8 has_latency_events;
+/* Offset of this block's lines in the debug bus line name offsets array. */
+	__le16 names_offset;
+};
+
+
+/*
+ * Block Debug line data
+ */
+struct dbg_bus_line {
+	u8 data;
+/* Number of groups in the line (0-3) */
+#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF
+#define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0
+/* Indicates if this is a 128b line (0) or a 256b line (1). */
+#define DBG_BUS_LINE_IS_256B_MASK        0x1
+#define DBG_BUS_LINE_IS_256B_SHIFT       4
+#define DBG_BUS_LINE_RESERVED_MASK       0x7
+#define DBG_BUS_LINE_RESERVED_SHIFT      5
+/* Four 2-bit values, indicating the size of each group minus 1 (i.e.
+ * value=0 means size=1, value=1 means size=2, etc), starting from lsb.
+ * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1).
+ */
+	u8 group_sizes;
+};
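
The packed group_sizes byte decodes as four 2-bit (size - 1) fields, as the
comment above spells out; a small sketch:

#include <stdint.h>

/* idx is the group number, 0..3; the result is in dwords (is_256b=0)
 * or qwords (is_256b=1).
 */
static unsigned int dbg_bus_group_size(uint8_t group_sizes, unsigned int idx)
{
	return ((group_sizes >> (2 * idx)) & 0x3) + 1;
}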
+
+
+/*
  * condition header for registers dump
  */
 struct dbg_dump_cond_hdr {
@@ -377,8 +432,11 @@ struct dbg_dump_mem {
 /* register size (in dwords) */
 #define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF
 #define DBG_DUMP_MEM_LENGTH_SHIFT       0
-#define DBG_DUMP_MEM_RESERVED_MASK      0xFF
-#define DBG_DUMP_MEM_RESERVED_SHIFT     24
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1
+#define DBG_DUMP_MEM_WIDE_BUS_SHIFT     24
+#define DBG_DUMP_MEM_RESERVED_MASK      0x7F
+#define DBG_DUMP_MEM_RESERVED_SHIFT     25
 };
 
 
@@ -388,10 +446,13 @@ struct dbg_dump_mem {
 struct dbg_dump_reg {
 	__le32 data;
 /* register address (in dwords) */
-#define DBG_DUMP_REG_ADDRESS_MASK  0xFFFFFF
-#define DBG_DUMP_REG_ADDRESS_SHIFT 0
-#define DBG_DUMP_REG_LENGTH_MASK   0xFF /* register size (in dwords) */
-#define DBG_DUMP_REG_LENGTH_SHIFT  24
+#define DBG_DUMP_REG_ADDRESS_MASK   0x7FFFFF /* register address (in dwords) */
+#define DBG_DUMP_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1
+#define DBG_DUMP_REG_WIDE_BUS_SHIFT 23
+#define DBG_DUMP_REG_LENGTH_MASK    0xFF /* register size (in dwords) */
+#define DBG_DUMP_REG_LENGTH_SHIFT   24
 };
 
 
@@ -424,8 +485,11 @@ struct dbg_idle_chk_cond_hdr {
 struct dbg_idle_chk_cond_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
@@ -441,8 +505,11 @@ struct dbg_idle_chk_cond_reg {
 struct dbg_idle_chk_info_reg {
 	__le32 data;
 /* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0xFFFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF
 #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT  0
+/* indicates if the register is wide-bus */
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23
 /* value from block_id enum */
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
@@ -544,17 +611,21 @@ enum dbg_idle_chk_severity_types {
  * Debug Bus block data
  */
 struct dbg_bus_block_data {
-/* Indicates if the block is enabled for recording (0/1) */
-	u8 enabled;
-	u8 hw_id /* HW ID associated with the block */;
+	__le16 data;
+/* 4-bit value: bit i set -> dword/qword i is enabled. */
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
+/* Number of dwords/qwords to shift right the debug data (0-3) */
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
+#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
+/* 4-bit value: bit i set -> dword/qword i is forced valid. */
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
+/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
+#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
 	u8 line_num /* Debug line number to select */;
-	u8 right_shift /* Number of units to  right the debug data (0-3) */;
-	u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
-/* 4-bit value: bit i set -> unit i is forced valid. */
-	u8 force_valid;
-/* 4-bit value: bit i set -> unit i frame bit is forced. */
-	u8 force_frame;
-	u8 reserved;
+	u8 hw_id /* HW ID associated with the block */;
 };
 
 
@@ -604,6 +675,21 @@ enum dbg_bus_constraint_ops {
 
 
 /*
+ * Debug Bus trigger state data
+ */
+struct dbg_bus_trigger_state_data {
+	u8 data;
+/* 4-bit value: bit i set -> dword i of the trigger state block
+ * (after right shift) is enabled.
+ */
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
+/* 4-bit value: bit i set -> dword i is compared by a constraint */
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
+#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+};
+
+/*
  * Debug Bus memory address
  */
 struct dbg_bus_mem_addr {
@@ -650,14 +736,8 @@ union dbg_bus_storm_eid_params {
  * Debug Bus Storm data
  */
 struct dbg_bus_storm_data {
-/* Indicates if the Storm is enabled for fast debug recording (0/1) */
-	u8 fast_enabled;
-/* Fast debug Storm mode, valid only if fast_enabled is set */
-	u8 fast_mode;
-/* Indicates if the Storm is enabled for slow debug recording (0/1) */
-	u8 slow_enabled;
-/* Slow debug Storm mode, valid only if slow_enabled is set */
-	u8 slow_mode;
+	u8 enabled /* indicates if the Storm is enabled for recording */;
+	u8 mode /* Storm debug mode, valid only if the Storm is enabled */;
 	u8 hw_id /* HW ID associated with the Storm */;
 	u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
 /* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
@@ -667,7 +747,6 @@ struct dbg_bus_storm_data {
 	u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
 /* EID filter params to filter on. Valid only if eid_filter_en is set. */
 	union dbg_bus_storm_eid_params eid_filter_params;
-	__le16 reserved;
 /* CID to filter on. Valid only if cid_filter_en is set. */
 	__le32 cid;
 };
@@ -679,20 +758,18 @@ struct dbg_bus_data {
 	__le32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
 	u8 hw_dwords /* HW dwords per cycle */;
-	u8 next_hw_id /* Next HW ID to be associated with an input */;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	__le16 hw_id_mask;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
-	u8 next_trigger_state /* ID of next trigger state to be added */;
-/* ID of next filter/trigger constraint to be added */
-	u8 next_constraint_id;
 	u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
 	u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
 /* Indicates if timestamp recording is enabled (0/1) */
 	u8 timestamp_input_en;
 	u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
-/* Indicates if the recording trigger is enabled (0/1) */
-	u8 trigger_en;
 /* If true, the next added constraint belong to the filter. Otherwise,
  * it belongs to the last added trigger state. Valid only if either filter or
  * triggers are enabled.
@@ -706,6 +783,14 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
+	__le16 reserved;
+/* Indicates if the recording trigger is enabled (0/1) */
+	u8 trigger_en;
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+	u8 next_trigger_state /* ID of next trigger state to be added */;
+/* ID of next filter/trigger constraint to be added */
+	u8 next_constraint_id;
 /* If true, all inputs are associated with HW ID 0. Otherwise, each input is
  * assigned a different HW ID (0/1)
  */
@@ -716,7 +801,6 @@ struct dbg_bus_data {
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
-	__le16 reserved;
 /* Debug Bus data for each block */
 	struct dbg_bus_block_data blocks[88];
 /* Debug Bus data for each block */
@@ -748,17 +832,6 @@ enum dbg_bus_frame_modes {
 
 
 /*
- * Debug bus input types
- */
-enum dbg_bus_input_types {
-	DBG_BUS_INPUT_TYPE_STORM,
-	DBG_BUS_INPUT_TYPE_BLOCK,
-	MAX_DBG_BUS_INPUT_TYPES
-};
-
-
-
-/*
  * Debug bus other engine mode
  */
 enum dbg_bus_other_engine_modes {
@@ -852,6 +925,7 @@ enum dbg_bus_targets {
 };
 
 
+
 /*
  * GRC Dump data
  */
@@ -987,7 +1061,10 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_MULTI_BLOCKS_WITH_FILTER,
+	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_NON_MATCHING_LINES,
+	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_DBG_BUS_IN_USE,
 	MAX_DBG_STATUS
 };
 
@@ -1028,7 +1105,7 @@ struct dbg_tools_data {
 /* Indicates if a block is in reset state (0/1) */
 	u8 block_in_reset[88];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID (from enum platform_ids) */;
+	u8 platform_id /* Platform ID */;
 	u8 initialized /* Indicates if the data was initialized */;
 	u8 reserved;
 };
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 9d2a118..397c408 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -739,6 +739,7 @@ enum eth_error_code {
 	ETH_FILTERS_VNI_ADD_FAIL_FULL,
 /* vni add filters command failed due to duplicate VNI filter */
 	ETH_FILTERS_VNI_ADD_FAIL_DUP,
+	ETH_FILTERS_GFT_UPDATE_FAIL /* Fail update GFT filter. */,
 	MAX_ETH_ERROR_CODE
 };
 
@@ -982,8 +983,10 @@ struct eth_vport_rss_config {
 	u8 rss_id;
 	u8 rss_mode /* The RSS mode for this function */;
 	u8 update_rss_key /* if set update the rss key */;
-	u8 update_rss_ind_table /* if set update the indirection table */;
-	u8 update_rss_capabilities /* if set update the capabilities */;
+/* if set update the indirection table values */
+	u8 update_rss_ind_table;
+/* if set update the capabilities and indirection table size. */
+	u8 update_rss_capabilities;
 	u8 tbl_size /* rss mask (Tbl size) */;
 	__le32 reserved2[2];
 /* RSS indirection table */
@@ -1267,7 +1270,10 @@ struct rx_update_gft_filter_data {
 /* Use enum to set type of flow using gft HW logic blocks */
 	u8 filter_type;
 	u8 filter_action /* Use to set type of action on filter */;
-	u8 reserved;
+/* 0 - don't assert in case of error, just return an error code. 1 - assert
+ * in case of error.
+ */
+	u8 assert_on_error;
 };
 
 
@@ -2290,8 +2296,7 @@ enum gft_profile_upper_protocol_type {
  * GFT RAM line struct
  */
 struct gft_ram_line {
-	__le32 low32bits;
-/*  (use enum gft_vlan_select) */
+	__le32 lo;
 #define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3
 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT             0
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK          0x1
@@ -2354,7 +2359,7 @@ struct gft_ram_line {
 #define GFT_RAM_LINE_DST_PORT_SHIFT                30
 #define GFT_RAM_LINE_SRC_PORT_MASK                 0x1
 #define GFT_RAM_LINE_SRC_PORT_SHIFT                31
-	__le32 high32bits;
+	__le32 hi;
 #define GFT_RAM_LINE_DSCP_MASK                     0x1
 #define GFT_RAM_LINE_DSCP_SHIFT                    0
 #define GFT_RAM_LINE_OVER_IP_PROTOCOL_MASK         0x1
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index d07549c..1f57e9b 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -22,43 +22,13 @@
 /* Max size in dwords of a zipped array */
 #define MAX_ZIPPED_SIZE			8192
 
-enum init_modes {
-	MODE_BB_A0_DEPRECATED,
-	MODE_BB_B0,
-	MODE_K2,
-	MODE_ASIC,
-	MODE_EMUL_REDUCED,
-	MODE_EMUL_FULL,
-	MODE_FPGA,
-	MODE_CHIPSIM,
-	MODE_SF,
-	MODE_MF_SD,
-	MODE_MF_SI,
-	MODE_PORTS_PER_ENG_1,
-	MODE_PORTS_PER_ENG_2,
-	MODE_PORTS_PER_ENG_4,
-	MODE_100G,
-	MODE_E5,
-	MAX_INIT_MODES
-};
-
-enum init_phases {
-	PHASE_ENGINE,
-	PHASE_PORT,
-	PHASE_PF,
-	PHASE_VF,
-	PHASE_QM_PF,
-	MAX_INIT_PHASES
+enum chip_ids {
+	CHIP_BB,
+	CHIP_K2,
+	CHIP_E5,
+	MAX_CHIP_IDS
 };
 
-enum init_split_types {
-	SPLIT_TYPE_NONE,
-	SPLIT_TYPE_PORT,
-	SPLIT_TYPE_PF,
-	SPLIT_TYPE_PORT_PF,
-	SPLIT_TYPE_VF,
-	MAX_INIT_SPLIT_TYPES
-};
 
 struct fw_asserts_ram_section {
 /* The offset of the section in the RAM in RAM lines (64-bit units) */
@@ -196,8 +166,46 @@ union init_array_hdr {
 };
 
 
+enum init_modes {
+	MODE_BB_A0_DEPRECATED,
+	MODE_BB,
+	MODE_K2,
+	MODE_ASIC,
+	MODE_EMUL_REDUCED,
+	MODE_EMUL_FULL,
+	MODE_FPGA,
+	MODE_CHIPSIM,
+	MODE_SF,
+	MODE_MF_SD,
+	MODE_MF_SI,
+	MODE_PORTS_PER_ENG_1,
+	MODE_PORTS_PER_ENG_2,
+	MODE_PORTS_PER_ENG_4,
+	MODE_100G,
+	MODE_E5,
+	MAX_INIT_MODES
+};
 
 
+enum init_phases {
+	PHASE_ENGINE,
+	PHASE_PORT,
+	PHASE_PF,
+	PHASE_VF,
+	PHASE_QM_PF,
+	MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+	SPLIT_TYPE_NONE,
+	SPLIT_TYPE_PORT,
+	SPLIT_TYPE_PF,
+	SPLIT_TYPE_PORT_PF,
+	SPLIT_TYPE_VF,
+	MAX_INIT_SPLIT_TYPES
+};
+
 
 /*
  * init array types
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 77f9152..af0deaa 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -17,112 +17,156 @@
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-enum CmInterfaceEnum {
-	MCM_SEC,
-	MCM_PRI,
-	UCM_SEC,
-	UCM_PRI,
-	TCM_SEC,
-	TCM_PRI,
-	YCM_SEC,
-	YCM_PRI,
-	XCM_SEC,
-	XCM_PRI,
-	NUM_OF_CM_INTERFACES
+
+#define CDU_VALIDATION_DEFAULT_CFG 61
+
+static u16 con_region_offsets[3][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+};
+static u16 task_region_offsets[1][E4_NUM_OF_CONNECTION_TYPES] = {
+	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
-/* general constants */
-#define QM_PQ_MEM_4KB(pq_size) \
-(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) \
-(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
-#define QM_INVALID_PQ_ID			0xffff
-/* feature enable */
-#define QM_BYPASS_EN				1
-#define QM_BYTE_CRD_EN				1
-/* other PQ constants */
-#define QM_OTHER_PQS_PER_PF			4
-/* WFQ constants */
-#define QM_WFQ_UPPER_BOUND			62500000
+
+/* General constants */
+#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
+				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
+				  0)
+#define QM_INVALID_PQ_ID		0xffff
+
+/* Feature enable */
+#define QM_BYPASS_EN			1
+#define QM_BYTE_CRD_EN			1
+
+/* Other PQ constants */
+#define QM_OTHER_PQS_PER_PF		4
+
+/* WFQ constants: */
+
+/* Upper bound in MB, 10 * burst size of 1ms in 50Gbps */
+#define QM_WFQ_UPPER_BOUND		62500000
+
+/* Bit  of VOQ in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
+
+/* Bit  of PF in WFQ VP PQ map */
 #define QM_WFQ_VP_PQ_PF_SHIFT		5
+
+/* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
-#define QM_WFQ_MAX_INC_VAL			43750000
-/* RL constants */
-#define QM_RL_UPPER_BOUND			62500000
-#define QM_RL_PERIOD				5
+
+/* 0.7 * upper bound (62500000) */
+#define QM_WFQ_MAX_INC_VAL		43750000
+
+/* RL constants: */
+
+/* Upper bound is set to 10 * burst size of 1ms in 50Gbps */
+#define QM_RL_UPPER_BOUND		62500000
+
+/* Period in us */
+#define QM_RL_PERIOD			5
+
+/* Period in 25MHz cycles */
 #define QM_RL_PERIOD_CLK_25M		(25 * QM_RL_PERIOD)
-#define QM_RL_MAX_INC_VAL			43750000
-/* RL increment value - the factor of 1.01 was added after seeing only
- * 99% factor reached in a 25Gbps port with DPDK RFC 2544 test.
- * In this scenario the PF RL was reducing the line rate to 99% although
- * the credit increment value was the correct one and FW calculated
- * correct packet sizes. The reason for the inaccuracy of the RL is
- * unknown at this point.
+
+/* 0.7 * upper bound (62500000) */
+#define QM_RL_MAX_INC_VAL		43750000
+
+/* RL increment value - rate is specified in mbps. the factor of 1.01 was
+ * added after seeing only 99% factor reached in a 25Gbps port with DPDK RFC
+ * 2544 test. In this scenario the PF RL was reducing the line rate to 99%
+ * although the credit increment value was the correct one and FW calculated
+ * correct packet sizes. The reason for the inaccuracy of the RL is unknown at
+ * this point.
  */
-/* rate in mbps */
 #define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? rate : 1000000) * \
-					QM_RL_PERIOD * 101) / (8 * 100)), 1)
+				       QM_RL_PERIOD * 101) / (8 * 100)), 1)
+
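As a worked example of the increment formula (a sketch with illustrative
numbers, not part of the patch):

#include <stdint.h>

/* Mirrors QM_RL_INC_VAL: rate in mbps, 5 us period, 1.01 fudge factor */
static uint32_t qm_rl_inc_val(uint32_t rate_mbps)
{
	uint32_t rate = rate_mbps ? rate_mbps : 1000000; /* 1 Tbps fallback */
	/* e.g. 25000 mbps -> (25000 * 5 * 101) / 800 = 15781 credit units */
	uint32_t inc = (rate * 5 * 101) / (8 * 100);

	return inc ? inc : 1;
}
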
 /* AFullOprtnstcCrdMask constants */
 #define QM_OPPOR_LINE_VOQ_DEF		1
 #define QM_OPPOR_FW_STOP_DEF		0
 #define QM_OPPOR_PQ_EMPTY_DEF		1
-/* Command Queue constants */
-#define PBF_CMDQ_PURE_LB_LINES			150
+
+/* Command Queue constants: */
+
+/* Pure LB CmdQ lines (+spare) */
+#define PBF_CMDQ_PURE_LB_LINES		150
+
 #define PBF_CMDQ_LINES_RT_OFFSET(voq) \
-(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
-voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
-- PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \
+	  PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+
 #define PBF_BTB_GUARANTEED_RT_OFFSET(voq) \
-(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
-(PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \
+	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \
+	  PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+
 #define QM_VOQ_LINE_CRD(pbf_cmd_lines) \
 ((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
+
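Both RT-offset macros above follow the same pattern: a base register plus a
per-VOQ stride taken from the distance between the VOQ0 and VOQ1 registers.
A hedged sketch (the register arguments are placeholders, not real offsets):

#include <stdint.h>

static uint32_t voq_rt_offset(uint32_t voq0_reg, uint32_t voq1_reg, uint8_t voq)
{
	/* Stride = distance between two consecutive per-VOQ registers */
	return voq0_reg + voq * (voq1_reg - voq0_reg);
}

static uint32_t voq_line_crd(uint32_t pbf_cmd_lines, uint32_t crd_sign_bit)
{
	/* Mirrors QM_VOQ_LINE_CRD: (lines - 4) doubled, sign bit set */
	return ((pbf_cmd_lines - 4) * 2) | crd_sign_bit;
}
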
 /* BTB: blocks constants (block size = 256B) */
-#define BTB_JUMBO_PKT_BLOCKS 38	/* 256B blocks in 9700B packet */
-/* headroom per-port */
-#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS
+
+/* 256B blocks in 9700B packet */
+#define BTB_JUMBO_PKT_BLOCKS		38
+
+/* Headroom per-port */
+#define BTB_HEADROOM_BLOCKS		BTB_JUMBO_PKT_BLOCKS
 #define BTB_PURE_LB_FACTOR		10
-#define BTB_PURE_LB_RATIO		7 /* factored (hence really 0.7) */
+
+/* Factored (hence really 0.7) */
+#define BTB_PURE_LB_RATIO		7
+
 /* QM stop command constants */
-#define QM_STOP_PQ_MASK_WIDTH			32
-#define QM_STOP_CMD_ADDR				0x2
-#define QM_STOP_CMD_STRUCT_SIZE			2
+#define QM_STOP_PQ_MASK_WIDTH		32
+#define QM_STOP_CMD_ADDR		2
+#define QM_STOP_CMD_STRUCT_SIZE		2
 #define QM_STOP_CMD_PAUSE_MASK_OFFSET	0
 #define QM_STOP_CMD_PAUSE_MASK_SHIFT	0
-#define QM_STOP_CMD_PAUSE_MASK_MASK		0xffffffff /* @DPDK */
-#define QM_STOP_CMD_GROUP_ID_OFFSET		1
-#define QM_STOP_CMD_GROUP_ID_SHIFT		16
-#define QM_STOP_CMD_GROUP_ID_MASK		15
-#define QM_STOP_CMD_PQ_TYPE_OFFSET		1
-#define QM_STOP_CMD_PQ_TYPE_SHIFT		24
-#define QM_STOP_CMD_PQ_TYPE_MASK		1
-#define QM_STOP_CMD_MAX_POLL_COUNT		100
-#define QM_STOP_CMD_POLL_PERIOD_US		500
+#define QM_STOP_CMD_PAUSE_MASK_MASK	0xffffffff /* @DPDK */
+#define QM_STOP_CMD_GROUP_ID_OFFSET	1
+#define QM_STOP_CMD_GROUP_ID_SHIFT	16
+#define QM_STOP_CMD_GROUP_ID_MASK	15
+#define QM_STOP_CMD_PQ_TYPE_OFFSET	1
+#define QM_STOP_CMD_PQ_TYPE_SHIFT	24
+#define QM_STOP_CMD_PQ_TYPE_MASK	1
+#define QM_STOP_CMD_MAX_POLL_COUNT	100
+#define QM_STOP_CMD_POLL_PERIOD_US	500
+
 /* QM command macros */
-#define QM_CMD_STRUCT_SIZE(cmd)	cmd##_STRUCT_SIZE
+#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
-SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
+
 /* QM: VOQ macros */
 #define PHYS_VOQ(port, tc, max_phys_tcs_per_port) \
-((port) * (max_phys_tcs_per_port) + (tc))
-#define LB_VOQ(port)				(MAX_PHYS_VOQS + (port))
+	((port) * (max_phys_tcs_per_port) + (tc))
+#define LB_VOQ(port)				 (MAX_PHYS_VOQS + (port))
 #define VOQ(port, tc, max_phys_tcs_per_port) \
-((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : LB_VOQ(port))
+	((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : \
+				 LB_VOQ(port))
+
+
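A small sketch of the VOQ mapping defined above; LB_TC and MAX_PHYS_VOQS are
assumed values here for illustration (the real ones come from the HSI headers):

#include <stdint.h>

#define EX_LB_TC		4	/* assumed pure-LB TC id */
#define EX_MAX_PHYS_VOQS	32	/* assumed physical VOQ count */

static uint8_t voq(uint8_t port, uint8_t tc, uint8_t max_phys_tcs_per_port)
{
	/* e.g. port 1, tc 2, 4 TCs/port -> physical VOQ 6;
	 * the pure-LB TC of port 1 -> VOQ 32 + 1 = 33.
	 */
	return tc < EX_LB_TC ? port * max_phys_tcs_per_port + tc
			     : EX_MAX_PHYS_VOQS + port;
}
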
 /******************** INTERNAL IMPLEMENTATION *********************/
+
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		/* enable RLs for all VOQs */
+		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
 			     (1 << MAX_NUM_VOQS) - 1);
-		/* write RL period */
+
+		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET,
 				     QM_RL_UPPER_BOUND);
@@ -133,7 +177,8 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (pf_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -145,12 +190,13 @@ static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
 		     vport_rl_en ? 1 : 0);
 	if (vport_rl_en) {
-		/* write RL period (use timer 0 only) */
+		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
-		/* set credit threshold for QM bypass flow */
+
+		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
@@ -163,7 +209,8 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET,
 		     vport_wfq_en ? 1 : 0);
-	/* set credit threshold for QM bypass flow */
+
+	/* Set credit threshold for QM bypass flow */
 	if (vport_wfq_en && QM_BYPASS_EN)
 		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET,
 			     QM_WFQ_UPPER_BOUND);
@@ -176,7 +223,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
 					 u8 voq, u16 cmdq_lines)
 {
 	u32 qm_line_crd;
+
 	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
+
 	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
 	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
@@ -192,38 +241,43 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u8 tc, voq, port_id, num_tcs_in_port;
-	/* clear PBF lines for all VOQs */
+
+	/* Clear PBF lines for all VOQs */
 	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
 		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			u16 phys_lines, phys_lines_per_tc;
-			/* find #lines to divide between active physical TCs */
-			phys_lines =
-			    port_params[port_id].num_pbf_cmd_lines -
-			    PBF_CMDQ_PURE_LB_LINES;
-			/* find #lines per active physical TC */
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-						tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			}
-			phys_lines_per_tc = phys_lines / num_tcs_in_port;
-			/* init registers per active TC */
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-							max_phys_tcs_per_port);
-					ecore_cmdq_lines_voq_rt_init(p_hwfn,
-							voq, phys_lines_per_tc);
-				}
+		u16 phys_lines, phys_lines_per_tc;
+
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Find #lines to divide between the active physical TCs */
+		phys_lines = port_params[port_id].num_pbf_cmd_lines -
+			     PBF_CMDQ_PURE_LB_LINES;
+
+		/* Find #lines per active physical TC */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+		phys_lines_per_tc = phys_lines / num_tcs_in_port;
+
+		/* Init registers per active TC */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
+							     phys_lines_per_tc);
 			}
-			/* init registers for pure LB TC */
-			ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
-						     PBF_CMDQ_PURE_LB_LINES);
 		}
+
+		/* Init registers for pure LB TC */
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
+					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
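
The split above reduces to plain integer arithmetic; a standalone sketch with
illustrative numbers (like the code above, it assumes an active port has at
least one active TC):

#include <stdint.h>

#define EX_PURE_LB_LINES	150	/* PBF_CMDQ_PURE_LB_LINES */

static uint16_t lines_per_phys_tc(uint16_t num_pbf_cmd_lines,
				  uint8_t active_phys_tcs)
{
	uint16_t phys_lines = num_pbf_cmd_lines - EX_PURE_LB_LINES;
	uint8_t tc, num_tcs_in_port = 0;

	/* Count set bits in the active-TC mask */
	for (tc = 0; tc < 8; tc++)
		if ((active_phys_tcs >> tc) & 0x1)
			num_tcs_in_port++;

	/* e.g. 3150 lines, 4 active TCs -> (3150 - 150) / 4 = 750 lines/TC */
	return phys_lines / num_tcs_in_port;
}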
 
@@ -253,50 +307,51 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, voq, port_id, num_tcs_in_port;
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
+	u8 tc, voq, port_id, num_tcs_in_port;
+
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		if (port_params[port_id].active) {
-			/* subtract headroom blocks */
-			usable_blocks =
-			    port_params[port_id].num_btb_blocks -
-			    BTB_HEADROOM_BLOCKS;
-/* find blocks per physical TC. use factor to avoid floating arithmethic */
-
-			num_tcs_in_port = 0;
-			for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
-				if (((port_params[port_id].active_phys_tcs >>
-								tc) & 0x1) == 1)
-					num_tcs_in_port++;
-			pure_lb_blocks =
-			    (usable_blocks * BTB_PURE_LB_FACTOR) /
-			    (num_tcs_in_port *
-			     BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
-			pure_lb_blocks =
-			    OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
-				       pure_lb_blocks / BTB_PURE_LB_FACTOR);
-			phys_blocks =
-			    (usable_blocks -
-			     pure_lb_blocks) /
-			     num_tcs_in_port;
-			/* init physical TCs */
-			for (tc = 0;
-			     tc < NUM_OF_PHYS_TCS;
-			     tc++) {
-				if (((port_params[port_id].active_phys_tcs >>
-							tc) & 0x1) == 1) {
-					voq = PHYS_VOQ(port_id, tc,
-						       max_phys_tcs_per_port);
-					STORE_RT_REG(p_hwfn,
+		if (!port_params[port_id].active)
+			continue;
+
+		/* Subtract headroom blocks */
+		usable_blocks = port_params[port_id].num_btb_blocks -
+				BTB_HEADROOM_BLOCKS;
+
+		/* Find blocks per physical TC. Use a scale factor to avoid
+		 * floating-point arithmetic.
+		 */
+		num_tcs_in_port = 0;
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1)
+				num_tcs_in_port++;
+
+		pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) /
+				  (num_tcs_in_port * BTB_PURE_LB_FACTOR +
+				   BTB_PURE_LB_RATIO);
+		pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
+					    pure_lb_blocks /
+					    BTB_PURE_LB_FACTOR);
+		phys_blocks = (usable_blocks - pure_lb_blocks) /
+			      num_tcs_in_port;
+
+		/* Init physical TCs */
+		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+			if (((port_params[port_id].active_phys_tcs >> tc) &
+			      0x1) == 1) {
+				voq = PHYS_VOQ(port_id, tc,
+					       max_phys_tcs_per_port);
+				STORE_RT_REG(p_hwfn,
 					     PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					     phys_blocks);
-				}
 			}
-			/* init pure LB TC */
-			STORE_RT_REG(p_hwfn,
-				     PBF_BTB_GUARANTEED_RT_OFFSET(
-					LB_VOQ(port_id)), pure_lb_blocks);
 		}
+
+		/* Init pure LB TC */
+		STORE_RT_REG(p_hwfn,
+			     PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ(port_id)),
+			     pure_lb_blocks);
 	}
 }
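
The scaled-integer trick above keeps the 0.7 pure-LB ratio in whole-number
arithmetic (BTB_PURE_LB_RATIO over BTB_PURE_LB_FACTOR); a sketch with
illustrative values:

#include <stdint.h>

#define EX_LB_FACTOR	10	/* BTB_PURE_LB_FACTOR */
#define EX_LB_RATIO	7	/* BTB_PURE_LB_RATIO, i.e. 0.7 scaled by 10 */
#define EX_JUMBO_BLOCKS	38	/* BTB_JUMBO_PKT_BLOCKS */

static uint32_t pure_lb_blocks(uint32_t usable_blocks, uint8_t num_tcs_in_port)
{
	uint32_t scaled = (usable_blocks * EX_LB_FACTOR) /
			  (num_tcs_in_port * EX_LB_FACTOR + EX_LB_RATIO);

	/* e.g. 1100 usable blocks, 4 TCs -> 11000 / 47 = 234, scaled back
	 * down to 23, then clamped up to the 38-block jumbo minimum.
	 */
	scaled /= EX_LB_FACTOR;
	return scaled > EX_JUMBO_BLOCKS ? scaled : EX_JUMBO_BLOCKS;
}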
 
@@ -317,57 +372,69 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
 {
-	u16 i, pq_id, pq_group;
-	u16 num_pqs = num_pf_pqs + num_vf_pqs;
-	u16 first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
-	u16 last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
-	/* a bit per Tx PQ indicating if the PQ is associated with a VF */
+	/* A bit per Tx PQ indicating if the PQ is associated with a VF */
 	u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 };
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
-	u32 vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* set mapping from PQ group to PF */
+	u16 num_pqs, first_pq_group, last_pq_group, i, pq_id, pq_group;
+	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+
+	num_pqs = num_pf_pqs + num_vf_pqs;
+
+	first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE;
+	last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE;
+
+	pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids);
+	vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Set mapping from PQ group to PF */
 	for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++)
 		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group,
 			     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_pf_cids));
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET,
 		     QM_PQ_SIZE_256B(num_vf_cids));
-	/* go over all Tx PQs */
+
+	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		struct qm_rf_pq_map tx_pq_map;
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
-		bool is_vf_pq = (i >= num_pf_pqs);
-		/* added to avoid compilation warning */
 		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		bool rl_valid = pq_params[i].rl_valid &&
-				pq_params[i].vport_id < max_qm_global_rls;
-		/* update first Tx PQ of VPORT/TC */
-		u8 vport_id_in_pf = pq_params[i].vport_id - start_vport;
-		u16 first_tx_pq_id =
-		    vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].
-								tc_id];
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq, rl_valid;
+		u8 voq, vport_id_in_pf;
+		u16 first_tx_pq_id;
+
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		is_vf_pq = (i >= num_pf_pqs);
+		rl_valid = pq_params[i].rl_valid && pq_params[i].vport_id <
+			   max_qm_global_rls;
+
+		/* Update first Tx PQ of VPORT/TC */
+		vport_id_in_pf = pq_params[i].vport_id - start_vport;
+		first_tx_pq_id =
+		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			/* create new VP PQ */
+			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
 			    first_tx_pq_id[pq_params[i].tc_id] = pq_id;
 			first_tx_pq_id = pq_id;
-			/* map VP PQ to VOQ and PF */
+
+			/* Map VP PQ to VOQ and PF */
 			STORE_RT_REG(p_hwfn,
 				     QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id,
 				     (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id <<
 							QM_WFQ_VP_PQ_PF_SHIFT));
 		}
-		/* check RL ID */
+
+		/* Check RL ID */
 		if (pq_params[i].rl_valid && pq_params[i].vport_id >=
 							max_qm_global_rls)
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config");
-		/* fill PQ map entry */
+				  "Invalid VPORT ID for rate limiter config\n");
+
+		/* Fill PQ map entry */
 		OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map));
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID,
@@ -378,17 +445,17 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq);
 		SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP,
 			  pq_params[i].wrr_group);
-		/* write PQ map entry to CAM */
+
+		/* Write PQ map entry to CAM */
 		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
 			     *((u32 *)&tx_pq_map));
-		/* set base address */
+
+		/* Set base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
 			     mem_addr_4kb);
-		/* check if VF PQ */
+
+		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
-			/* if PQ is associated with a VF, add indication to PQ
-			 * VF mask
-			 */
 			tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |=
 				(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
 			mem_addr_4kb += vport_pq_mem_4kb;
@@ -396,12 +463,12 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 			mem_addr_4kb += pq_mem_4kb;
 		}
 	}
-	/* store Tx PQ VF mask to size select register */
-	for (i = 0; i < num_tx_pq_vf_masks; i++) {
+
+	/* Store Tx PQ VF mask to size select register */
+	for (i = 0; i < num_tx_pq_vf_masks; i++)
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
-	}
 }
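
The VF bookkeeping above packs one bit per Tx PQ into 32-bit words, one word
per PQ group; a sketch assuming, for illustration, a group size of 16 (the
real value is QM_PF_QUEUE_GROUP_SIZE):

#include <stdint.h>

#define EX_PQ_GROUP_SIZE	16	/* assumed group size, illustrative */

static void mark_vf_pq(uint32_t *tx_pq_vf_mask, uint16_t pq_id)
{
	/* e.g. pq_id 37 -> word 37 / 16 = 2, bit 37 % 16 = 5 */
	tx_pq_vf_mask[pq_id / EX_PQ_GROUP_SIZE] |=
		1u << (pq_id % EX_PQ_GROUP_SIZE);
}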
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -411,20 +478,26 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				       u32 num_pf_cids,
 				       u32 num_tids, u32 base_mem_addr_4kb)
 {
-	u16 i, pq_id;
-/* a single other PQ grp is used in each PF, where PQ group i is used in PF i */
-
-	u16 pq_group = pf_id;
-	u32 pq_size = num_pf_cids + num_tids;
-	u32 pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
-	u32 mem_addr_4kb = base_mem_addr_4kb;
-	/* map PQ group to PF */
+	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
+	u16 i, pq_id, pq_group;
+
+	/* A single other PQ group is used in each PF, where PQ group i is used
+	 * in PF i.
+	 */
+	pq_group = pf_id;
+	pq_size = num_pf_cids + num_tids;
+	pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
+	mem_addr_4kb = base_mem_addr_4kb;
+
+	/* Map PQ group to PF */
 	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group,
 		     (u32)(pf_id));
-	/* set PQ sizes */
+
+	/* Set PQ sizes */
 	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET,
 		     QM_PQ_SIZE_256B(pq_size));
-	/* set base address */
+
+	/* Set base address */
 	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE;
 	     i < QM_OTHER_PQS_PER_PF; i++, pq_id++) {
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id,
@@ -432,7 +505,10 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		mem_addr_4kb += pq_mem_4kb;
 	}
 }
-/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF WFQ runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 port_id,
 				u8 pf_id,
@@ -441,76 +517,89 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u16 num_tx_pqs,
 				struct init_qm_pq_params *pq_params)
 {
+	u32 inc_val, crd_reg_offset;
+	u8 voq;
 	u16 i;
-	u32 inc_val;
-	u32 crd_reg_offset =
-	    (pf_id <
-	     MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
-	     QM_REG_WFQPFCRD_MSB_RT_OFFSET) + (pf_id % MAX_NUM_PFS_BB);
+
+	crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
+			  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			 (pf_id % MAX_NUM_PFS_BB);
+
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (i = 0; i < num_tx_pqs; i++) {
-		u8 voq =
-		    VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
+		voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
 	return 0;
 }
-/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */
+
+/* Prepare PF RL runtime init values for the specified PF.
+ * Return -1 on error.
+ */
 static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
 		     (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
 		     QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
+
 	return 0;
 }
-/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on
- * error.
+
+/* Prepare VPORT WFQ runtime init values for the specified VPORTs.
+ * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				u8 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u8 tc, i;
+	u16 vport_pq_id;
 	u32 inc_val;
-	/* go over all PF VPORTs */
+	u8 tc, i;
+
+	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (vport_params[i].vport_wfq) {
-			inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
-			if (inc_val > QM_WFQ_MAX_INC_VAL) {
-				DP_NOTICE(p_hwfn, true,
-					  "Invalid VPORT WFQ weight config");
-				return -1;
-			}
-			/* each VPORT can have several VPORT PQ IDs for
-			 * different TCs
-			 */
-			for (tc = 0; tc < NUM_OF_TCS; tc++) {
-				u16 vport_pq_id =
-				    vport_params[i].first_tx_pq_id[tc];
-				if (vport_pq_id != QM_INVALID_PQ_ID) {
-					STORE_RT_REG(p_hwfn,
-						  QM_REG_WFQVPCRD_RT_OFFSET +
-						  vport_pq_id,
-						  (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-					STORE_RT_REG(p_hwfn,
-						QM_REG_WFQVPWEIGHT_RT_OFFSET
-						     + vport_pq_id, inc_val);
-				}
+		if (!vport_params[i].vport_wfq)
+			continue;
+
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		if (inc_val > QM_WFQ_MAX_INC_VAL) {
+			DP_NOTICE(p_hwfn, true,
+				  "Invalid VPORT WFQ weight configuration\n");
+			return -1;
+		}
+
+		/* Each VPORT can have several VPORT PQ IDs for various TCs */
+		for (tc = 0; tc < NUM_OF_TCS; tc++) {
+			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
+			if (vport_pq_id != QM_INVALID_PQ_ID) {
+				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+					     vport_pq_id,
+					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+				STORE_RT_REG(p_hwfn,
+					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
+					     vport_pq_id, inc_val);
 			}
 		}
 	}
@@ -526,19 +615,23 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 				  struct init_qm_vport_params *vport_params)
 {
 	u8 i, vport_id;
+	u32 inc_val;
+
 	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
-	/* go over all PF VPORTs */
+
+	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
 		u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
 		if (inc_val > QM_RL_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration");
+				  "Invalid VPORT rate-limit configuration\n");
 			return -1;
 		}
+
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
 			     (u32)QM_RL_CRD_REG_SIGN_BIT);
 		STORE_RT_REG(p_hwfn,
@@ -547,6 +640,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
 			     inc_val);
 	}
+
 	return 0;
 }
 
@@ -554,17 +648,20 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
 	u32 reg_val, i;
-	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && reg_val == 0;
+
+	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val;
 	     i++) {
 		OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US);
 		reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY);
 	}
-	/* check if timeout while waiting for SDM command ready */
+
+	/* Check if timeout while waiting for SDM command ready */
 	if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
 			   "Timeout waiting for QM SDM cmd ready signal\n");
 		return false;
 	}
+
 	return true;
 }
 
@@ -574,15 +671,19 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 {
 	if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt))
 		return false;
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDADDR, cmd_addr);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATALSB, cmd_data_lsb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATAMSB, cmd_data_msb);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 1);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 0);
+
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
+
 /******************** INTERFACE IMPLEMENTATION *********************/
+
 u32 ecore_qm_pf_mem_size(u8 pf_id,
 			 u32 num_pf_cids,
 			 u32 num_vf_cids,
@@ -603,32 +704,42 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    struct init_qm_port_params
 			    port_params[MAX_NUM_PORTS])
 {
-	/* init AFullOprtnstcCrdMask */
-	u32 mask =
-	    (QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-	    (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-	    (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-	    (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-	    (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-	    (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-	    (QM_OPPOR_FW_STOP_DEF << QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-	    (QM_OPPOR_PQ_EMPTY_DEF <<
-	     QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	u32 mask;
+
+	/* Init AFullOprtnstcCrdMask */
+	mask = (QM_OPPOR_LINE_VOQ_DEF <<
+		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
+		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
+		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
+		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
+		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
+		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
+		(QM_OPPOR_FW_STOP_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
+		(QM_OPPOR_PQ_EMPTY_DEF <<
+		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
-	/* enable/disable PF RL */
+
+	/* Enable/disable PF RL */
 	ecore_enable_pf_rl(p_hwfn, pf_rl_en);
-	/* enable/disable PF WFQ */
+
+	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
-	/* enable/disable VPORT RL */
+
+	/* Enable/disable VPORT RL */
 	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
-	/* enable/disable VPORT WFQ */
+
+	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
-	/* init PBF CMDQ line credit */
+
+	/* Init PBF CMDQ line credit */
 	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
-	/* init BTB blocks in PBF */
+
+	/* Init BTB blocks in PBF */
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
+
 	return 0;
 }
 
@@ -651,66 +762,86 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
+	u32 other_mem_size_4kb;
 	u8 tc, i;
-	u32 other_mem_size_4kb =
-	    QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF;
-	/* clear first Tx PQ ID array for each VPORT */
+
+	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
+			     QM_OTHER_PQS_PER_PF;
+
+	/* Clear first Tx PQ ID array for each VPORT */
 	for (i = 0; i < num_vports; i++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
-	/* map Other PQs (if any) */
+
+	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
 	ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids,
 				   num_tids, 0);
 #endif
-	/* map Tx PQs */
+
+	/* Map Tx PQs */
 	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id,
 				max_phys_tcs_per_port, is_first_pf, num_pf_cids,
 				num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
 				start_vport, other_mem_size_4kb, pq_params,
 				vport_params);
-	/* init PF WFQ */
+
+	/* Init PF WFQ */
 	if (pf_wfq)
 		if (ecore_pf_wfq_rt_init
 		    (p_hwfn, port_id, pf_id, pf_wfq, max_phys_tcs_per_port,
-		     num_pf_pqs + num_vf_pqs, pq_params) != 0)
+		     num_pf_pqs + num_vf_pqs, pq_params))
 			return -1;
-	/* init PF RL */
-	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl) != 0)
+
+	/* Init PF RL */
+	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
-	/* set VPORT WFQ */
-	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params) != 0)
+
+	/* Set VPORT WFQ */
+	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
-	/* set VPORT RL */
+
+	/* Set VPORT RL */
 	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, vport_params) != 0)
+	    (p_hwfn, start_vport, num_vports, vport_params))
 		return -1;
+
 	return 0;
 }
 
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq)
 {
-	u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration");
+	u32 inc_val;
+
+	inc_val = QM_WFQ_INC_VAL(pf_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+
 	return 0;
 }
 
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl)
 {
-	u32 inc_val = QM_RL_INC_VAL(pf_rl);
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration");
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid PF rate limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -718,20 +849,25 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
 {
+	u16 vport_pq_id;
+	u32 inc_val;
 	u8 tc;
-	u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
-	if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) {
+
+	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT WFQ weight configuration");
+			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		u16 vport_pq_id = first_tx_pq_id[tc];
+		vport_pq_id = first_tx_pq_id[tc];
 		if (vport_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
 				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
 		}
 	}
+
 	return 0;
 }
 
@@ -739,20 +875,24 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl)
 {
 	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+
 	if (vport_id >= max_qm_global_rls) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration");
+			  "Invalid VPORT ID for rate limiter configuration\n");
 		return -1;
 	}
+
 	inc_val = QM_RL_INC_VAL(vport_rl);
 	if (inc_val > QM_RL_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration");
+			  "Invalid VPORT rate-limit configuration\n");
 		return -1;
 	}
+
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
 		 (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -762,15 +902,20 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
 {
 	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
-	u32 pq_mask = 0, last_pq = start_pq + num_pqs - 1, pq_id;
-	/* set command's PQ type */
+	u32 pq_mask = 0, last_pq, pq_id;
+
+	last_pq = start_pq + num_pqs - 1;
+
+	/* Set command's PQ type */
 	QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PQ_TYPE, is_tx_pq ? 0 : 1);
-	/* go over requested PQs */
+
+	/* Go over requested PQs */
 	for (pq_id = start_pq; pq_id <= last_pq; pq_id++) {
-		/* set PQ bit in mask (stop command only) */
+		/* Set PQ bit in mask (stop command only) */
 		if (!is_release_cmd)
 			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));
-		/* if last PQ or end of PQ mask, write command */
+
+		/* If last PQ or end of PQ mask, write command */
 		if ((pq_id == last_pq) ||
 		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
 		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
@@ -785,68 +930,92 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			pq_mask = 0;
 		}
 	}
+
 	return true;
 }
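
Using the QM_STOP_CMD_* offset/shift/mask constants defined earlier, the two
command dwords end up laid out as in this hedged sketch (not the driver's
actual code path):

#include <stdint.h>

static void build_qm_stop_cmd(uint32_t cmd_arr[2], uint32_t pq_mask,
			      uint32_t group_id, uint32_t is_other_pq)
{
	cmd_arr[0] = pq_mask;			/* PAUSE_MASK: dword 0 */
	cmd_arr[1] = (group_id & 15) << 16;	/* GROUP_ID: dword 1, bit 16 */
	cmd_arr[1] |= (is_other_pq & 1) << 24;	/* PQ_TYPE: dword 1, bit 24 */
}

One such command covers QM_STOP_PQ_MASK_WIDTH (32) PQs, which is why the loop
above flushes the accumulated mask on every 32nd PQ or on the last one.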
 
+
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
 #define NIG_LB_ETS_CLIENT_OFFSET	1
 #define NIG_ETS_MIN_WFQ_BYTES		1600
+
 /* NIG: ETS constants */
 #define NIG_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
 /* NIG: RL constants */
-#define NIG_RL_BASE_TYPE			1	/* byte base type */
-#define NIG_RL_PERIOD				1	/* in us */
+
+/* Byte base type value */
+#define NIG_RL_BASE_TYPE		1
+
+/* Period in us */
+#define NIG_RL_PERIOD			1
+
+/* Period in 25MHz cycles */
 #define NIG_RL_PERIOD_CLK_25M		(25 * NIG_RL_PERIOD)
+
+/* Rate in mbps */
 #define NIG_RL_INC_VAL(rate)		(((rate) * NIG_RL_PERIOD) / 8)
+
 #define NIG_RL_MAX_VAL(inc_val, mtu) \
-(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+
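With a 1 us period, NIG_RL_INC_VAL reduces to the byte rate per microsecond;
a sketch (numbers illustrative):

#include <stdint.h>

static uint32_t nig_rl_inc_val(uint32_t rate_mbps)
{
	/* e.g. 10000 mbps -> 10000 * 1 / 8 = 1250 bytes per 1 us period */
	return (rate_mbps * 1 /* NIG_RL_PERIOD */) / 8;
}
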
 /* NIG: packet prioritry configuration constants */
-#define NIG_PRIORITY_MAP_TC_BITS 4
+#define NIG_PRIORITY_MAP_TC_BITS	4
+
+
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct init_ets_req *req, bool is_lb)
 {
-	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u8 num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	u8 tc_client_offset =
-	    is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_weight_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
-	    NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_base_addr =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-	u32 tc_bound_addr_diff =
-	    is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
-	    NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
+	u32 tc_bound_base_addr, tc_bound_addr_diff;
+	u8 sp_tc_map = 0, wfq_tc_map = 0;
+	u8 tc, num_tc, tc_client_offset;
+
+	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
+				   NIG_TX_ETS_CLIENT_OFFSET;
+	min_weight = 0xffffffff;
+	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
+				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
+	tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+	tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
+				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < num_tc; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
-	/* write SP map */
+
+	/* Write SP map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
 		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
 		 (sp_tc_map << tc_client_offset));
-	/* write WFQ map */
+
+	/* Write WFQ map */
 	ecore_wr(p_hwfn, p_ptt,
 		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
 		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
@@ -854,22 +1023,23 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	/* write WFQ weights */
 	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_weight_base_addr +
-				 tc_weight_addr_diff * tc_client_offset,
-				 byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 tc_bound_base_addr +
-				 tc_bound_addr_diff * tc_client_offset,
-				 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
+			 tc_weight_addr_diff * tc_client_offset, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
+			 tc_bound_addr_diff * tc_client_offset,
+			 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
 	}
 }
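
The translation above anchors the smallest WFQ weight to NIG_ETS_MIN_WFQ_BYTES
and scales the rest linearly; a sketch with illustrative numbers:

#include <stdint.h>

#define EX_MIN_WFQ_BYTES	1600	/* NIG_ETS_MIN_WFQ_BYTES */

static uint32_t ets_byte_weight(uint32_t weight, uint32_t min_weight)
{
	/* e.g. weights {1, 2, 5} -> {1600, 3200, 8000} bytes */
	return (EX_MIN_WFQ_BYTES * weight) / min_weight;
}

static uint32_t ets_upper_bound(uint32_t byte_weight, uint32_t mtu)
{
	/* Mirrors NIG_ETS_UP_BOUND: twice the larger of weight and MTU */
	return 2 * (byte_weight > mtu ? byte_weight : mtu);
}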
 
@@ -877,16 +1047,18 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  struct init_nig_lb_rl_req *req)
 {
-	u8 tc;
 	u32 ctrl, inc_val, reg_offset;
-	/* disable global MAC+LB RL */
+	u8 tc;
+
+	/* Disable global MAC+LB RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global MAC+LB RL */
+
+	/* Configure and enable global MAC+LB RL */
 	if (req->lb_mac_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
@@ -894,20 +1066,23 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 <<
 		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 	}
-	/* disable global LB-only RL */
+
+	/* Disable global LB-only RL */
 	ctrl =
 	    NIG_RL_BASE_TYPE <<
 	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-	/* configure and enable global LB-only RL */
+
+	/* Configure and enable global LB-only RL */
 	if (req->lb_rate) {
-		/* configure  */
+		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_rate);
@@ -915,41 +1090,41 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 			 inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-		/* enable */
+
+		/* Enable */
 		ctrl |=
 		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 	}
-	/* per-TC RLs */
+
+	/* Per-TC RLs */
 	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
 	     tc++, reg_offset += 4) {
-		/* disable TC RL */
+		/* Disable TC RL */
 		ctrl =
 		    NIG_RL_BASE_TYPE <<
 		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-		/* configure and enable TC RL */
-		if (req->tc_rate[tc]) {
-			/* configure */
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
-				 reg_offset, NIG_RL_PERIOD_CLK_25M);
-			inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-				 reg_offset, inc_val);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
-				 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-			/* enable */
-			ctrl |=
-			    1 <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset,
-				 ctrl);
-		}
+
+		/* Configure and enable TC RL */
+		if (!req->tc_rate[tc])
+			continue;
+
+		/* Configure */
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
+			 reg_offset, NIG_RL_PERIOD_CLK_25M);
+		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
+			 reg_offset, inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
+			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
+
+		/* Enable */
+		ctrl |= 1 <<
+			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
+			 reg_offset, ctrl);
 	}
 }
 
@@ -957,20 +1132,23 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct init_nig_pri_tc_map_req *req)
 {
-	u8 pri, tc;
-	u32 pri_tc_mask = 0;
 	u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
+	u32 pri_tc_mask = 0;
+	u8 pri, tc;
+
 	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
-		if (req->pri[pri].valid) {
-			pri_tc_mask |=
-			    (req->pri[pri].
-			     tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS));
-			tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
-		}
+		if (!req->pri[pri].valid)
+			continue;
+
+		pri_tc_mask |= (req->pri[pri].tc_id <<
+				(pri * NIG_PRIORITY_MAP_TC_BITS));
+		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
 	}
-	/* write priority -> TC mask */
+
+	/* Write priority -> TC mask */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-	/* write TC -> priority mask */
+
+	/* Write TC -> priority mask */
 	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
 			 tc_pri_mask[tc]);
@@ -979,110 +1157,133 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+
 /* PRS: ETS configuration constants */
-#define PRS_ETS_MIN_WFQ_BYTES			1600
+#define PRS_ETS_MIN_WFQ_BYTES		1600
 #define PRS_ETS_UP_BOUND(weight, mtu) \
-(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+
+
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_ets_req *req)
 {
+	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
 	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-	u32 min_weight = 0xffffffff;
-	u32 tc_weight_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
-	u32 tc_bound_addr_diff =
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
-	    PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
+	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
+			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
+	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
+			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
+
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		/* update SP map */
+
+		/* Update SP map */
 		if (tc_req->use_sp)
 			sp_tc_map |= (1 << tc);
-		if (tc_req->use_wfq) {
-			/* update WFQ map */
-			wfq_tc_map |= (1 << tc);
-			/* find minimal weight */
-			if (tc_req->weight < min_weight)
-				min_weight = tc_req->weight;
-		}
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Update WFQ map */
+		wfq_tc_map |= (1 << tc);
+
+		/* Find minimal weight */
+		if (tc_req->weight < min_weight)
+			min_weight = tc_req->weight;
 	}
+
 	/* write SP map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
+
 	/* write WFQ map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
 		 wfq_tc_map);
+
 	/* write WFQ weights */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		if (tc_req->use_wfq) {
-			/* translate weight to bytes */
-			u32 byte_weight =
-			    (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			    min_weight;
-			/* write WFQ weight */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 +
-				 tc * tc_weight_addr_diff, byte_weight);
-			/* write WFQ upper bound */
-			ecore_wr(p_hwfn, p_ptt,
-				 PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-				 tc * tc_bound_addr_diff,
-				 PRS_ETS_UP_BOUND(byte_weight, req->mtu));
-		}
+		u32 byte_weight;
+
+		if (!tc_req->use_wfq)
+			continue;
+
+		/* Translate weight to bytes */
+		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
+			      min_weight;
+
+		/* Write WFQ weight */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
+			 tc_weight_addr_diff, byte_weight);
+
+		/* Write WFQ upper bound */
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
+			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
+								   req->mtu));
 	}
 }
 
+
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
 #define BRB_TOTAL_RAM_BLOCKS_K2	5632
-#define BRB_BLOCK_SIZE			128	/* in bytes */
+#define BRB_BLOCK_SIZE		128
 #define BRB_MIN_BLOCKS_PER_TC	9
-#define BRB_HYST_BYTES			10240
-#define BRB_HYST_BLOCKS			(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-/*
- * temporary big RAM allocation - should be updated
- */
+#define BRB_HYST_BYTES		10240
+#define BRB_HYST_BLOCKS		(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
+
+/* Temporary big RAM allocation - should be updated */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
 {
-	u8 port, active_ports = 0;
+	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
 	u32 active_port_blocks, reg_offset = 0;
-	u32 tc_headroom_blocks =
-	    (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE);
-	u32 min_pkt_size_blocks =
-	    (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE);
-	u32 total_blocks =
-	    ECORE_IS_K2(p_hwfn->
-			p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
-	    BRB_TOTAL_RAM_BLOCKS_BB;
-	/* find number of active ports */
+	u8 port, active_ports = 0;
+
+	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
+					       BRB_BLOCK_SIZE);
+	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
+						BRB_BLOCK_SIZE);
+	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
+						    BRB_TOTAL_RAM_BLOCKS_BB;
+
+	/* Find number of active ports */
 	for (port = 0; port < MAX_NUM_PORTS; port++)
 		if (req->num_active_tcs[port])
 			active_ports++;
+
 	active_port_blocks = (u32)(total_blocks / active_ports);
+
 	for (port = 0; port < req->max_ports_per_engine; port++) {
-		/* calculate per-port sizes */
-		u32 tc_guaranteed_blocks =
-		    (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE);
-		u32 port_blocks =
-		    req->num_active_tcs[port] ? active_port_blocks : 0;
-		u32 port_guaranteed_blocks =
-		    req->num_active_tcs[port] * tc_guaranteed_blocks;
-		u32 port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		u32 full_xoff_th =
-		    req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC;
-		u32 full_xon_th = full_xoff_th + min_pkt_size_blocks;
-		u32 pause_xoff_th = tc_headroom_blocks;
-		u32 pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+		u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
+		u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
+		u32 tc_guaranteed_blocks;
 		u8 tc;
-		/* init total size per port */
+
+		/* Calculate per-port sizes */
+		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
+							 BRB_BLOCK_SIZE);
+		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
+							  0;
+		port_guaranteed_blocks = req->num_active_tcs[port] *
+					 tc_guaranteed_blocks;
+		port_shared_blocks = port_blocks - port_guaranteed_blocks;
+		full_xoff_th = req->num_active_tcs[port] *
+			       BRB_MIN_BLOCKS_PER_TC;
+		full_xon_th = full_xoff_th + min_pkt_size_blocks;
+		pause_xoff_th = tc_headroom_blocks;
+		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
+
+		/* Init total size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
 			 port_blocks);
-		/* init shared size per port */
+
+		/* Init shared size per port */
 		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
 			 port_shared_blocks);
+
 		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
-			/* clear init values for non-active TCs */
+			/* Clear init values for non-active TCs */
 			if (tc == req->num_active_tcs[port]) {
 				tc_guaranteed_blocks = 0;
 				full_xoff_th = 0;
@@ -1090,15 +1291,18 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 				pause_xoff_th = 0;
 				pause_xon_th = 0;
 			}
-			/* init guaranteed size per TC */
+
+			/* Init guaranteed size per TC */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
 				 tc_guaranteed_blocks);
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
 				 BRB_HYST_BLOCKS);
-/* init pause/full thresholds per physical TC - for loopback traffic */
 
+			/* Init pause/full thresholds per physical TC - for
+			 * loopback traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1111,7 +1315,10 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
-/* init pause/full thresholds per physical TC - for main traffic */
+
+			/* Init pause/full thresholds per physical TC - for
+			 * main traffic.
+			 */
 			ecore_wr(p_hwfn, p_ptt,
 				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
@@ -1128,23 +1335,25 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
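
A sketch of the per-port split done above, assuming for illustration a BB
device (4800 blocks of 128B) with two active ports; the numbers are not from
the patch:

#include <stdint.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define EX_TOTAL_BLOCKS		4800	/* BRB_TOTAL_RAM_BLOCKS_BB */
#define EX_BLOCK_SIZE		128	/* BRB_BLOCK_SIZE, in bytes */
#define EX_ACTIVE_PORTS		2	/* assumed, illustrative */

static void brb_port_split(uint32_t guaranteed_per_tc_bytes,
			   uint8_t num_active_tcs,
			   uint32_t *port_blocks, uint32_t *shared_blocks)
{
	uint32_t tc_guaranteed = DIV_ROUND_UP(guaranteed_per_tc_bytes,
					      EX_BLOCK_SIZE);

	*port_blocks = EX_TOTAL_BLOCKS / EX_ACTIVE_PORTS;	/* 2400 */
	*shared_blocks = *port_blocks -
			 num_active_tcs * tc_guaranteed;
}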
 
-/*In MF should be called once per engine to set EtherType of OuterTag*/
+/* In MF should be called once per engine to set EtherType of OuterTag */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update NIG register */
+
+	/* Update NIG register */
 	STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-	/* update PBF register */
+
+	/* Update PBF register */
 	STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
 }
 
-/*In MF should be called once per port to set EtherType of OuterTag*/
+/* In MF should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt, u32 ethType)
 {
-	/* update DORQ register */
+	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
@@ -1154,11 +1363,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
 }
 
@@ -1166,23 +1377,26 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt, bool vxlan_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 			   PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT,
 			   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 				   NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT,
 				   vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ register */
+
+	/* Update DORQ register */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN,
 		 vxlan_enable ? 1 : 0);
 }
@@ -1192,7 +1406,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  bool eth_gre_enable, bool ip_gre_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1202,10 +1417,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
@@ -1214,7 +1430,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		   NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
 		   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
-	/* update DORQ registers */
+
+	/* Update DORQ registers */
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN,
 		 eth_gre_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN,
@@ -1224,11 +1441,13 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 dest_port)
 {
-	/* update PRS register */
+	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port);
-	/* update PBF register */
+
+	/* Update PBF register */
 	ecore_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port);
 }
 
@@ -1237,7 +1456,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable, bool ip_geneve_enable)
 {
 	u32 reg_val;
-	/* update PRS register */
+
+	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
 		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT,
@@ -1247,37 +1467,44 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		   ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) {
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 			 (u32)PRS_ETH_TUNN_FIC_FORMAT);
 	}
-	/* update NIG register */
+
+	/* Update NIG register */
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
 		 eth_geneve_enable ? 1 : 0);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE,
 		 ip_geneve_enable ? 1 : 0);
-	/* EDPM with geneve tunnel not supported in BB_B0 */
+
+	/* EDPM with geneve tunnel not supported in BB */
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev))
 		return;
-	/* update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN,
+
+	/* Update DORQ registers */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
 		 ip_geneve_enable ? 1 : 0);
 }
 
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
-#define PARSER_ETH_CONN_CM_HDR (0x0)
+#define PARSER_ETH_CONN_CM_HDR 0
 #define CAM_LINE_SIZE sizeof(u32)
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
-	/* set RFS event ID to be awakened i Tstorm By Prs */
-	u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+	u32 rfs_cm_hdr_event_id;
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
+	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
 	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
@@ -1298,39 +1525,48 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct gft_ram_line ramLine;
 	u32 *ramLinePointer = (u32 *)&ramLine;
 	int i;
+
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - ipv4 or ipv6");
+
 	if (!tcp && !udp)
 		DP_NOTICE(p_hwfn, true,
 			  "set_rfs_mode_enable: must accept at "
 			  "least on of - udp or tcp");
-	/* set RFS event ID to be awakened i Tstorm By Prs */
+
+	/* Set RFS event ID to be awakened in Tstorm by PRS */
 	rfs_cm_hdr_event_id |=  T_ETH_PACKET_MATCH_RFS_EVENTID <<
 	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	rfs_cm_hdr_event_id |=  PARSER_ETH_CONN_CM_HDR <<
 	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+
 	/* Configure Registers for RFS mode */
-/* enable gft search */
+
+	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0); /* do not load
 							     * context, only cid,
 							     * in PRS on match
 							     */
 	camLine.cam_line_mapped.camline = 0;
-	/* cam line is now valid!! */
+
+	/* CAM line is now valid!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_VALID, 1);
-	/* filters are per PF!! */
+
+	/* Filters are per PF!! */
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1);
 	SET_FIELD(camLine.cam_line_mapped.camline,
 		  GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
+
 	if (!(tcp && udp)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1);
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
 		if (tcp)
 			SET_FIELD(camLine.cam_line_mapped.camline,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
@@ -1340,6 +1576,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
 				  GFT_PROFILE_UDP_PROTOCOL);
 	}
+
 	if (!(ipv4 && ipv6)) {
 		SET_FIELD(camLine.cam_line_mapped.camline,
 			  GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1);
@@ -1352,44 +1589,53 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 				  GFT_CAM_LINE_MAPPED_IP_VERSION,
 				  GFT_PROFILE_IPV6);
 	}
-	/* write characteristics to cam */
+
+	/* Write characteristics to CAM */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
 	    camLine.cam_line_mapped.camline);
 	camLine.cam_line_mapped.camline =
 	    ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
-	/* write line to RAM - compare to filter 4 tuple */
-	ramLine.low32bits = 0;
-	ramLine.high32bits = 0;
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1);
-	SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1);
-	SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1);
-	/* each iteration write to reg */
+
+	/* Write line to RAM - compare to filter 4 tuple */
+	ramLine.lo = 0;
+	ramLine.hi = 0;
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_DST_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_SRC_IP, 1);
+	SET_FIELD(ramLine.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_SRC_PORT, 1);
+	SET_FIELD(ramLine.lo, GFT_RAM_LINE_DST_PORT, 1);
+
+	/* Each iteration write to reg */
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * pf_id +
 			 i * REG_SIZE, *(ramLinePointer + i));
-	/* set default profile so that no filter match will happen */
-	ramLine.low32bits = 0xffff;
-	ramLine.high32bits = 0xffff;
+
+	/* Set default profile so that no filter match will happen */
+	ramLine.lo = 0xffff;
+	ramLine.hi = 0xffff;
 	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
 			 RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH +
 			 i * REG_SIZE, *(ramLinePointer + i));
 }
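
For reference, enabling a 4-tuple TCP-over-IPv4 profile for a PF would be
called as below (hypothetical wrapper, assuming the ecore headers are in
scope):

/* Enable RFS/GFT matching on src/dst IP + src/dst port for TCP/IPv4 only;
 * ecore_set_rfs_mode_enable() then programs the CAM line and RAM profile
 * mask shown above.
 */
static void example_enable_rfs_tcp4(struct ecore_hwfn *p_hwfn,
				    struct ecore_ptt *p_ptt, u16 pf_id)
{
	ecore_set_rfs_mode_enable(p_hwfn, p_ptt, pf_id,
				  true,    /* tcp */
				  false,   /* udp */
				  true,    /* ipv4 */
				  false);  /* ipv6 */
}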
 
-/* Configure VF zone size mode*/
+/* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
 				    bool runtime_init)
 {
 	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
 	u32 msdm_vf_offset_mask;
+
 	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
 		msdm_vf_size_log += 1;
 	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
 		msdm_vf_size_log += 2;
+
 	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
+
 	if (runtime_init) {
 		STORE_RT_REG(p_hwfn,
 			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
@@ -1405,12 +1651,13 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 	}
 }
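
The mask derivation above is plain power-of-two arithmetic; a worked
example (the real MSTORM_VF_ZONE_DEFAULT_SIZE_LOG value comes from the
HSI headers, 7 is assumed here purely for illustration):

u32 size_log = 7;                      /* assumed default size log */
size_log += 1;                         /* VF_ZONE_SIZE_MODE_DOUBLE */
u32 offset_mask = (1 << size_log) - 1; /* 0xff: low bits address within
					* a VF zone, high bits select it
					*/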
 
-/* get mstorm statistics for offset by VF zone size mode*/
+/* Get mstorm statistics for offset by VF zone size mode */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id,
 				       u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
+
 	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
 	    (stat_cnt_id > MAX_NUM_PFS)) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
@@ -1420,16 +1667,18 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 			    (stat_cnt_id - MAX_NUM_PFS);
 	}
+
 	return offset;
 }
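
A worked example of the correction above (MSTORM_VF_ZONE_DEFAULT_SIZE_LOG
and MAX_NUM_PFS are HSI constants; the values 7 and 16 below are assumed
purely for illustration):

/* DOUBLE mode, stat_cnt_id = 20: extra = (1 << 7) * (20 - 16) = 512,
 * so the returned offset is MSTORM_QUEUE_STAT_OFFSET(20) + 512.
 * QUAD mode would add 3 * (1 << 7) * (20 - 16) = 1536 instead.
 */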
 
-/* get mstorm VF producer offset by VF zone size mode*/
+/* Get mstorm VF producer offset by VF zone size mode */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 					 u8 vf_id,
 					 u8 vf_queue_id,
 					 u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
+
 	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
@@ -1438,5 +1687,166 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 				  vf_id;
 	}
+
 	return offset;
 }
+
+/* Calculate CRC8 of first 4 bytes in buf */
+static u8 ecore_calc_crc8(const u8 *buf)
+{
+	u32 i, j, crc = 0xff << 8;
+
+	/* CRC-8 polynomial */
+	#define POLY 0x1070
+
+	for (j = 0; j < 4; j++, buf++) {
+		crc ^= (*buf << 8);
+		for (i = 0; i < 8; i++) {
+			if (crc & 0x8000)
+				crc ^= (POLY << 3);
+
+			crc <<= 1;
+		}
+	}
+
+	return (u8)(crc >> 8);
+}
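
The CRC-8 above runs bit-serially over a 16-bit work register; a
standalone harness for sanity-checking it outside the driver (plain C,
u8/u32 spelled out) could look like:

#include <stdio.h>
#include <stdint.h>

#define POLY 0x1070	/* same CRC-8 polynomial, pre-shifted by 4 */

static uint8_t calc_crc8(const uint8_t *buf)
{
	uint32_t i, j, crc = 0xff << 8;

	for (j = 0; j < 4; j++, buf++) {
		crc ^= (*buf << 8);
		for (i = 0; i < 8; i++) {
			if (crc & 0x8000)
				crc ^= (POLY << 3);
			crc <<= 1;
		}
	}
	return (uint8_t)(crc >> 8);
}

int main(void)
{
	const uint8_t msg[4] = { 0x12, 0x34, 0x56, 0x78 };

	printf("crc8 = 0x%02x\n", calc_crc8(msg));
	return 0;
}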
+
+/* Calculate and return CDU validation byte per connection type / region /
+ * cid
+ */
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region,
+					 u32 cid)
+{
+	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
+	u8 crc, validation_byte = 0;
+	u32 validation_string = 0;
+	const u8 *data_to_crc_rev;
+	u8 data_to_crc[4];
+
+	data_to_crc_rev = (const u8 *)&validation_string;
+
+	/*
+	 * The CRC is calculated on the String-to-compress:
+	 * [31:8]  = {CID[31:20],CID[11:0]}
+	 * [7:4]   = Region
+	 * [3:0]   = Type
+	 */
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+		validation_string |= ((region & 0xF) << 4);
+
+	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+		validation_string |= (conn_type & 0xF);
+
+	/* Convert to big-endian (ntoh()) */
+	data_to_crc[0] = data_to_crc_rev[3];
+	data_to_crc[1] = data_to_crc_rev[2];
+	data_to_crc[2] = data_to_crc_rev[1];
+	data_to_crc[3] = data_to_crc_rev[0];
+
+	crc = ecore_calc_crc8(data_to_crc);
+
+	validation_byte |= ((validation_cfg >>
+			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
+
+	if ((validation_cfg >>
+	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+	else
+		validation_byte |= crc & 0x7F;
+
+	return validation_byte;
+}
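
A worked packing of the string-to-compress for cid = 0x12345678,
region = 3, type = 2 (assuming all three USE_* bits are set in
CDU_VALIDATION_DEFAULT_CFG):

u32 cid = 0x12345678;
u32 s;

s  = (cid & 0xFFF00000) | ((cid & 0xFFF) << 8); /* [31:8] -> 0x12367800 */
s |= (3 & 0xF) << 4;                            /* region -> 0x00000030 */
s |= (2 & 0xF);                                 /* type   -> 0x00000002 */
/* s == 0x12367832; its bytes are then reversed and fed to the CRC-8 */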
+
+/* Calculate and set validation bytes for session context */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+}
+
+/* Calculate and set validation bytes for task context */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
+{
+	u8 *p_ctx, *region1_val_ptr;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+}
+
+/* Memset session context to 0 while preserving validation bytes */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+{
+	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
+	u8 x_val, t_val, u_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
+	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
+	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+
+	x_val = *x_val_ptr;
+	t_val = *t_val_ptr;
+	u_val = *u_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*x_val_ptr = x_val;
+	*t_val_ptr = t_val;
+	*u_val_ptr = u_val;
+}
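
Taken together, these helpers form a stamp-then-preserve lifecycle; a
hypothetical sketch (function name invented, assuming the ecore headers
are in scope):

/* Stamp the X/T/U validation bytes when the session context is first
 * built for a cid; on reuse, clear it with the preserving memset so the
 * CDU keeps treating the context as valid.
 */
static void example_session_ctx_lifecycle(void *ctx, u16 ctx_size,
					  u8 conn_type, u32 cid)
{
	ecore_calc_session_ctx_validation(ctx, ctx_size, conn_type, cid);

	/* ... connection runs; later the same cid/context is recycled ... */

	ecore_memset_session_ctx(ctx, ctx_size, conn_type);
}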
+
+/* Memset task context to 0 while preserving validation bytes */
+void ecore_memset_task_ctx(void *p_ctx_mem, const u32 ctx_size,
+			   const u8 ctx_type)
+{
+	u8 *p_ctx, *region1_val_ptr;
+	u8 region1_val;
+
+	p_ctx = (u8 *)p_ctx_mem;
+	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+
+	region1_val = *region1_val_ptr;
+
+	OSAL_MEMSET(p_ctx, 0, ctx_size);
+
+	*region1_val_ptr = region1_val;
+}
+
+/* Enable and configure context validation */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	u32 ctx_validation;
+
+	/* Enable validation for connection region 3 - bits [31:24] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
+
+	/* Enable validation for connection region 5 - bits [15:8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
+
+	/* Enable validation for task region 1 - bits [15:8] */
+	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
+}
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 9df0e7d..2d1ab7c 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -8,20 +8,22 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* forward declarations */
+/* Forward declarations */
+
 struct init_qm_pq_params;
+
 /**
- * @brief ecore_qm_pf_mem_size - prepare QM ILT sizes
+ * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes
  *
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
- * @param pf_id			- physical function ID
- * @param num_pf_cids	- number of connections used by this PF
- * @param num_vf_cids	- number of connections used by VFs of this PF
- * @param num_tids		- number of tasks used by this PF
- * @param num_pf_pqs	- number of PQs used by this PF
- * @param num_vf_pqs	- number of PQs used by VFs of this PF
+ * @param pf_id -	physical function ID
+ * @param num_pf_cids - number of connections used by this PF
+ * @param num_vf_cids -	number of connections used by VFs of this PF
+ * @param num_tids -	number of tasks used by this PF
+ * @param num_pf_pqs -	number of PQs used by this PF
+ * @param num_vf_pqs -	number of PQs used by VFs of this PF
  *
  * @return The required host memory size in 4KB units.
  */
@@ -31,6 +33,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
 						 u16 num_vf_pqs);
+
 /**
  * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
  *                                  phase
@@ -38,10 +41,10 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
  * @param p_hwfn
  * @param max_ports_per_engine	- max number of ports per engine in HW
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param pf_rl_en				- enable per-PF rate limiters
- * @param pf_wfq_en				- enable per-PF WFQ
- * @param vport_rl_en			- enable per-VPORT rate limiters
- * @param vport_wfq_en			- enable per-VPORT WFQ
+ * @param pf_rl_en		- enable per-PF rate limiters
+ * @param pf_wfq_en		- enable per-PF WFQ
+ * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param vport_wfq_en		- enable per-VPORT WFQ
  * @param port_params - array of size MAX_NUM_PORTS with params for each port
  *
  * @return 0 on success, -1 on error.
@@ -54,22 +57,24 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			 bool vport_rl_en,
 			 bool vport_wfq_en,
 			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
  *
  * @param p_hwfn
  * @param p_ptt			- ptt window used for writing the registers
- * @param port_id				- port ID
- * @param pf_id					- PF ID
+ * @param port_id		- port ID
+ * @param pf_id			- PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param is_first_pf			- 1 = first PF in engine, 0 = othwerwise
- * @param num_pf_cids			- number of connections used by this PF
+ * @param is_first_pf		- 1 = first PF in engine, 0 = otherwise
+ * @param num_pf_cids		- number of connections used by this PF
  * @param num_vf_cids		- number of connections used by VFs of this PF
- * @param num_tids			- number of tasks used by this PF
- * @param start_pq			- first Tx PQ ID associated with this PF
- * @param num_pf_pqs	- number of Tx PQs associated with this PF (non-VF)
- * @param num_vf_pqs			- number of Tx PQs associated with a VF
- * @param start_vport			- first VPORT ID associated with this PF
+ * @param num_tids		- number of tasks used by this PF
+ * @param start_pq		- first Tx PQ ID associated with this PF
+ * @param num_pf_pqs		- number of Tx PQs associated with this PF
+ *                                (non-VF)
+ * @param num_vf_pqs		- number of Tx PQs associated with a VF
+ * @param start_vport		- first VPORT ID associated with this PF
  * @param num_vports - number of VPORTs associated with this PF
 * @param pf_wfq - WFQ weight. If PF WFQ is globally disabled, the weight must
 *		   be 0; otherwise, the weight must be non-zero.
@@ -100,6 +105,7 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 				u32 pf_rl,
 				struct init_qm_pq_params *pq_params,
 				struct init_qm_vport_params *vport_params);
+
 /**
  * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
  *
@@ -114,11 +120,12 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u8 pf_id,
 					  u16 pf_wfq);
+
 /**
- * @brief ecore_init_pf_rl  Initializes the rate limit of the specified PF
+ * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF
  *
  * @param p_hwfn
- * @param p_ptt	- ptt window used for writing the registers
+ * @param p_ptt - ptt window used for writing the registers
  * @param pf_id	- PF ID
  * @param pf_rl	- rate limit in Mb/sec units
  *
@@ -128,6 +135,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 u8 pf_id,
 					 u32 pf_rl);
+
 /**
  * @brief ecore_init_vport_wfq  Initializes the WFQ weight of specified VPORT
  *
@@ -144,10 +152,12 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
 						 u16 vport_wfq);
+
 /**
- * @brief ecore_init_vport_rl  Initializes the rate limit of the specified VPORT
+ * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
+ * VPORT.
  *
- * @param p_hwfn
+ * @param p_hwfn	- HW device data
  * @param p_ptt		- ptt window used for writing the registers
  * @param vport_id	- VPORT ID
  * @param vport_rl	- rate limit in Mb/sec units
@@ -158,6 +168,7 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u8 vport_id,
 						u32 vport_rl);
+
 /**
  * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
  *
@@ -178,6 +189,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 							u16 start_pq,
 							u16 num_pqs);
 #ifndef UNUSED_HSI_FUNC
+
 /**
  * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
  *
@@ -193,6 +205,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req,
 						bool is_lb);
+
 /**
  * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
  *
@@ -205,6 +218,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
 				  struct init_nig_lb_rl_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
  *
@@ -216,6 +230,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt,
 					   struct init_nig_pri_tc_map_req *req);
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
@@ -229,6 +244,7 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_ets_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
@@ -242,6 +258,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						struct init_brb_ram_req *req);
 #endif /* UNUSED_HSI_FUNC */
+
 #ifndef UNUSED_HSI_FUNC
 /**
  * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
@@ -250,22 +267,24 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
  *                                             if engine
  *  is in BD mode.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
+
 /**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
 *                                           input ethType. Should be called
  *                                           once per port.
  *
- * @param p_ptt    - ptt window used for writing the registers.
+ * @param p_ptt   - ptt window used for writing the registers.
  * @param ethType - etherType to configure
  */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, u32 ethType);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
  *                                    port
@@ -276,15 +295,17 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       u16 dest_port);
+
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param vxlan_enable - vxlan enable flag.
+ * @param p_ptt		- ptt window used for writing the registers.
+ * @param vxlan_enable	- vxlan enable flag.
  */
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool vxlan_enable);
+
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
@@ -296,6 +317,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
 			  bool eth_gre_enable,
 			  bool ip_gre_enable);
+
 /**
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
@@ -306,6 +328,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
 				u16 dest_port);
+
 /**
 * @brief ecore_set_geneve_enable - enable or disable GENEVE tunnel in HW
  *
@@ -318,6 +341,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool eth_geneve_enable,
 			     bool ip_geneve_enable);
 #ifndef UNUSED_HSI_FUNC
+
 /**
 * @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
 *
@@ -325,16 +349,16 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
+
 /**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
-*
-* @param p_ptt             - ptt window used for writing the registers.
-* @param pf_id - pf on which to enable RFS.
-* @param tcp -  set profile tcp packets.
-* @param udp -  set profile udp  packet.
-* @param ipv4 - set profile ipv4 packet.
-* @param ipv6 - set profile ipv6 packet.
+* @param p_ptt	- ptt window used for writing the registers.
+* @param pf_id	- pf on which to enable RFS.
+* @param tcp	- set profile tcp packets.
+* @param udp	- set profile udp packets.
+* @param ipv4	- set profile ipv4 packets.
+* @param ipv6	- set profile ipv6 packets.
 */
 void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt,
@@ -344,6 +368,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 	bool ipv4,
 	bool ipv6);
 #endif /* UNUSED_HSI_FUNC */
+
 /**
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
@@ -357,18 +382,20 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
+
 /**
-* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF
-*                                             zone size mode.
+ * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
+ * VF zone size mode.
 *
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 				       u16 stat_cnt_id, u16 vf_zone_size_mode);
+
 /**
-* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
-*                                               size mode.
+ * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
+ * size mode.
 *
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
@@ -376,4 +403,58 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
 */
 u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
 					 vf_queue_id, u16 vf_zone_size_mode);
+/**
+ * @brief ecore_enable_context_validation - Enable and configure context
+ *                                          validation.
+ *
+ * @param p_ptt - ptt window used for writing the registers.
+ */
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt);
+/**
+ * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
+ *                                            session context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param cid                 -  context cid.
+ */
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid);
+/**
+ * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
+ *                                         context.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  context size.
+ * @param ctx_type            -  context type.
+ * @param tid                 -  context tid.
+ */
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid);
+/**
+ * @brief ecore_memset_session_ctx - Memset session context to 0 while
+ *                                   preserving validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size,
+			      u8 ctx_type);
+/**
+ * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
+ *                                validation bytes.
+ *
+ *
+ * @param p_ctx_mem           -  pointer to context memory.
+ * @param ctx_size            -  size to initialize.
+ * @param ctx_type            -  context type.
+ */
+void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size,
+			   u8 ctx_type);
 #endif
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index aad9012..b4bfe89 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -185,5 +185,13 @@
 #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
 	((rdma_stat_counter_id) * IRO[46].m1))
 #define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+/* Xstorm iWARP rxmit stats */
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \
+	((pf_id) * IRO[47].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size)
+/* Tstorm RoCE Event Statistics */
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \
+	((roce_pf_id) * IRO[48].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size)
 
 #endif /* __IRO_H__ */
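
Expanding the new macro against the iro_arr entry added below
({0xc9b0, 0x30, 0x0, 0x0, 0x10} at index 47) gives, for example:

/* XSTORM_IWARP_RXMIT_STATS_OFFSET(2) = 0xc9b0 + 2 * 0x30 = 0xca10,
 * with XSTORM_IWARP_RXMIT_STATS_SIZE = 0x10 bytes per PF entry.
 */
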
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 4ff7e95..6764bfa 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -9,13 +9,13 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[47] = {
+static const struct iro iro_arr[49] = {
 /* YSTORM_FLOW_CONTROL_MODE_OFFSET */
 	{      0x0,      0x0,      0x0,      0x0,      0x8},
 /* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb0,     0x78,      0x0,      0x0,     0x78},
+	{   0x4cb0,     0x80,      0x0,      0x0,     0x80},
 /* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6318,     0x20,      0x0,      0x0,     0x20},
+	{   0x6518,     0x20,      0x0,      0x0,     0x20},
 /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
 	{    0xb00,      0x8,      0x0,      0x0,      0x4},
 /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
@@ -41,7 +41,7 @@ static const struct iro iro_arr[47] = {
 /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
 	{    0xa28,      0x8,      0x0,      0x0,      0x8},
 /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x60f8,     0x10,      0x0,      0x0,     0x10},
+	{   0x61f8,     0x10,      0x0,      0x0,     0x10},
 /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
 	{   0xb820,     0x30,      0x0,      0x0,     0x30},
 /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
@@ -53,7 +53,7 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
 	{   0x53a0,     0x80,      0x4,      0x0,      0x4},
 /* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc8f0,      0x0,      0x0,      0x0,      0x4},
+	{   0xc7c8,      0x0,      0x0,      0x0,      0x4},
 /* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
 	{   0x4ba0,     0x80,      0x0,      0x0,     0x20},
 /* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
@@ -63,13 +63,13 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
 	{   0x2b48,     0x80,      0x0,      0x0,     0x38},
 /* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xf188,     0x78,      0x0,      0x0,     0x78},
+	{   0xf1b0,     0x78,      0x0,      0x0,     0x78},
 /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
 	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
 /* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xacf0,      0x0,      0x0,      0x0,     0xf0},
+	{   0xaef8,      0x0,      0x0,      0x0,     0xf0},
 /* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xade0,      0x8,      0x0,      0x0,      0x8},
+	{   0xafe8,      0x8,      0x0,      0x0,      0x8},
 /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
 	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
 /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
@@ -85,9 +85,9 @@ static const struct iro iro_arr[47] = {
 /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
 	{    0xb78,     0x10,      0x8,      0x0,      0x2},
 /* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd888,     0x38,      0x0,      0x0,     0x24},
+	{   0xd9a8,     0x38,      0x0,      0x0,     0x24},
 /* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12c38,     0x10,      0x0,      0x0,      0x8},
+	{  0x12988,     0x10,      0x0,      0x0,      0x8},
 /* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
 	{  0x11aa0,     0x38,      0x0,      0x0,     0x18},
 /* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
@@ -97,13 +97,17 @@ static const struct iro iro_arr[47] = {
 /* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
 	{  0x101f8,     0x10,      0x0,      0x0,     0x10},
 /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xdd08,     0x48,      0x0,      0x0,     0x38},
+	{   0xde28,     0x48,      0x0,      0x0,     0x38},
 /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
 	{  0x10660,     0x20,      0x0,      0x0,     0x20},
 /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
 	{   0x2b80,     0x80,      0x0,      0x0,     0x10},
 /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5000,     0x10,      0x0,      0x0,     0x10},
+	{   0x5020,     0x10,      0x0,      0x0,     0x10},
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
+	{   0xc9b0,     0x30,      0x0,      0x0,     0x10},
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
+	{   0xeec0,     0x10,      0x0,      0x0,     0x10},
 };
 
 #endif /* __IRO_VALUES_H__ */
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 01a29e3..846dc6d 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -115,339 +115,338 @@
 #define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            28716
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
 #define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            29132
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29644
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29645
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29646
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29647
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29648
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29649
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29650
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29651
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29652
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29653
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29654
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29655
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29656
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29657
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29658
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29659
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29660
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29661
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29662
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29663
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29664
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29665
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29666
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29667
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29668
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29669
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29670
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29671
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29672
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29673
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29674
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29675
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29676
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29677
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29678
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29679
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29680
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29681
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29682
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29683
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29684
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29685
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29686
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29687
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29688
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29689
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29690
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29691
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29692
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29693
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29694
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29695
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29696
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29697
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29698
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29699
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29700
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29701
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29702
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29703
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29704
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29705
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29706
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29707
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29708
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29709
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29710
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29711
-#define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29839
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29859
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29879
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29880
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29881
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29882
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29883
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29884
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29885
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29886
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29887
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29888
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29889
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29890
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29891
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29892
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29893
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29894
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29895
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29896
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29897
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29898
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29899
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29900
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29901
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29902
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29903
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29904
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29905
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29906
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29907
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29908
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29909
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29910
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29911
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29912
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29913
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29914
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29915
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29916
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29917
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29918
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29919
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29920
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29921
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29922
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29923
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29924
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29925
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29926
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29927
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29928
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29929
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29930
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29931
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29932
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29933
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29934
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29935
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29936
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29937
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29938
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29939
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29940
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29941
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29942
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29943
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29944
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29945
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29946
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29947
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29948
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29949
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29950
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29951
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29952
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29953
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29954
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29955
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29956
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29957
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29958
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29959
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29960
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29961
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29962
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29963
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29964
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29965
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29966
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29967
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29968
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29969
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29970
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29971
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29972
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29973
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29974
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29975
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29976
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29977
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29978
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29979
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29980
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29981
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29982
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29983
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29984
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29985
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29986
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29987
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29988
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29989
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29990
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29991
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29992
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29993
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29994
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29995
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29996
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29997
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29998
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                29740
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                29741
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                29742
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           29743
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           29744
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           29745
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           29746
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           29747
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           29748
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           29749
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           29750
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           29751
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           29752
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          29753
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          29754
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          29755
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          29756
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          29757
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          29758
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          29759
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          29760
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          29761
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          29762
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          29763
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          29764
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          29765
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          29766
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          29767
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          29768
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          29769
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          29770
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          29771
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          29772
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          29773
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          29774
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          29775
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          29776
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          29777
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          29778
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          29779
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          29780
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          29781
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          29782
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          29783
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          29784
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          29785
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          29786
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          29787
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          29788
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          29789
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          29790
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          29791
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          29792
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          29793
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          29794
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          29795
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          29796
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          29797
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          29798
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          29799
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          29800
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          29801
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          29802
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          29803
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          29804
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          29805
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          29806
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            29807
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29935
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29936
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29937
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29938
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29939
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29940
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29941
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29942
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29943
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29944
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29945
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29946
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29947
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29948
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29949
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29950
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29951
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29952
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29953
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29954
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29955
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29956
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29957
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29958
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29959
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29960
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29961
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29962
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29963
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29964
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29965
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29966
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29967
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29968
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29969
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29970
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29971
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29972
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29973
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29974
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29975
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29976
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29977
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29978
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29979
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29980
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29981
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29982
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29983
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29984
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29985
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29986
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29987
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29988
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29989
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29990
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29991
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29992
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29993
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29994
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29995
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29996
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29997
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29998
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29999
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 30000
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 30001
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 30002
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 30003
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 30004
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 30005
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 30006
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 30007
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 30008
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 30009
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 30010
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 30011
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 30012
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 30013
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 30014
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 30015
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 30016
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 30017
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 30018
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 30019
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 30020
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 30021
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 30022
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 30023
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 30024
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 30025
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               30026
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               30027
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               30028
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               30029
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               30030
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               30031
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               30032
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               30033
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               30034
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               30035
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              30036
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              30037
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              30038
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              30039
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              30040
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              30041
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             30042
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             30043
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        30044
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        30045
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          30046
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          30047
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          30048
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          30049
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          30050
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          30051
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          30052
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          30053
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               30054
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30254
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           30310
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30510
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30566
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30766
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30767
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30768
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30769
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30822
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30823
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30824
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30825
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30785
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30841
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    30801
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30857
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 30817
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30818
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30819
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30873
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30874
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30875
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30835
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30891
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   30851
-#define QM_REG_WFQPFCRD_RT_SIZE                                     160
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                31011
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                31012
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31013
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30907
+#define QM_REG_WFQPFCRD_RT_SIZE                                     256
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                31163
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                31164
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               31165
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    31525
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31677
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32037
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                32189
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   32549
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32701
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   33061
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   33213
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33573
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33733
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     33734
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     33735
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     33736
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     33737
-#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      33738
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  33739
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           33740
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               33725
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 34045
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             34081
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34117
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34118
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34119
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34120
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34121
+#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET                      34122
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34123
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34124
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      33744
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET                      34128
 #define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE                        4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        33748
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34132
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           33752
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     33753
+#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET                           34136
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34137
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        33785
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34169
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      33801
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34185
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             33817
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34201
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   33833
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34217
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              33849
-#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    33850
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           33851
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           33852
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           33853
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       33854
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       33855
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       33856
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       33857
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    33858
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    33859
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    33860
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    33861
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        33862
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     33863
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           33864
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      33865
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    33866
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       33867
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                33868
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    33869
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       33870
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                33871
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    33872
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       33873
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                33874
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    33875
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       33876
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                33877
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    33878
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       33879
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                33880
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    33881
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       33882
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                33883
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    33884
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       33885
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                33886
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    33887
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       33888
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                33889
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    33890
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       33891
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                33892
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    33893
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       33894
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                33895
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   33896
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      33897
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               33898
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   33899
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      33900
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               33901
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   33902
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      33903
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               33904
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   33905
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      33906
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               33907
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   33908
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      33909
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               33910
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   33911
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      33912
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               33913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   33914
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      33915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               33916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   33917
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      33918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               33919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   33920
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      33921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               33922
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   33923
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      33924
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               33925
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                33926
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34233
+#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET                    34234
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34235
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34236
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34237
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34238
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34239
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34240
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34241
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34242
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34243
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34244
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34245
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34246
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34247
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34248
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34249
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34250
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34251
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34252
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34253
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34254
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34255
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34256
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34257
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34258
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34259
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34260
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34261
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34262
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34263
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34264
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34265
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34266
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34267
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34268
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34269
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34270
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34271
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34272
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34273
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34274
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34275
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34276
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34277
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34278
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34279
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34280
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34281
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34282
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34283
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34284
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34285
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34286
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34287
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34288
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34289
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34290
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34291
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34292
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34293
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34294
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34295
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34296
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34297
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34298
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34299
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34300
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34301
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34302
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34303
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34304
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34305
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34306
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34307
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34308
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34309
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34310
 
-#define RUNTIME_ARRAY_SIZE 33927
+#define RUNTIME_ARRAY_SIZE 34311
 
 #endif /* __RT_DEFS_H__ */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index d2ebce8..6dc969b 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -182,7 +182,7 @@ struct eth_tx_1st_bd_flags {
 struct eth_tx_data_1st_bd {
 /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */
 	__le16 vlan;
-/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least
+/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least
  * 3 in LSO (or Tunnel with IPv6+ext) packet.
  */
 	u8 nbds;
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3cc7fd4..f9920f3 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1147,3 +1147,56 @@
 
 #define IGU_REG_PRODUCER_MEMORY 0x182000UL
 #define IGU_REG_CONSUMER_MEM 0x183000UL
+
+#define CDU_REG_CCFC_CTX_VALID0 0x580400UL
+#define CDU_REG_CCFC_CTX_VALID1 0x580404UL
+#define CDU_REG_TCFC_CTX_VALID0 0x580408UL
+
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
+#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
+#define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
+#define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
+#define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
+#define NWM_REG_MAC0_K2_E5 0x800400UL
+#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
+#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
+#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
+#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
+#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
+#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
+#define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
+#define XMAC_REG_MODE_BB 0x210008UL
+#define XMAC_REG_RX_MAX_SIZE_BB  0x210040UL
+#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL
+#define XMAC_REG_CTRL_BB 0x210000UL
+#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0)
+#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1)
+#define XMAC_REG_RX_CTRL_BB 0x210030UL
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12)
+
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
+#define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
+#define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
+#define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
+#define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
+#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+
+#define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a604a5b..332b1f8 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -21,7 +21,7 @@ static uint8_t npar_tx_switching = 1;
 char fw_file[PATH_MAX];
 
 const char *QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.14.6.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.18.9.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
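
Regarding the eth_tx_data_1st_bd.nbds comment change above: a hedged
sketch of how a Tx path could satisfy the relaxed bound (the helper
and BD counting are illustrative assumptions; only the 1/3 minimums
come from the updated header comment):

  #include <stdint.h>

  /* Minimums from the updated eth_common.h comment. */
  #define MIN_BDS_NON_LSO 1
  #define MIN_BDS_LSO     3

  /* Clamp the BD count written into a packet's first BD. */
  static uint8_t first_bd_nbds_sketch(uint8_t data_bds, int is_lso)
  {
          uint8_t min_bds = is_lso ? MIN_BDS_LSO : MIN_BDS_NON_LSO;

          return data_bds < min_bds ? min_bds : data_bds;
  }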

* [PATCH v5 07/62] net/qede/base: decrease maximum HW func per device
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (6 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 06/62] net/qede: upgrade the FW to 8.18.9.0 Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 08/62] net/qede/base: move mask constants defining NIC type Rasesh Mody
                               ` (54 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Decrease MAX_HWFNS_PER_DEVICE from 4 to 2
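
As a hedged illustration of what the tighter bound protects (the
struct layout and iterator are simplified assumptions; only the
constant comes from this patch):

  #define MAX_HWFNS_PER_DEVICE 2

  struct hwfn_sketch { int abs_pf_id; };

  struct dev_sketch {
          int num_hwfns; /* 1 on single-engine parts, at most 2 now */
          struct hwfn_sketch hwfns[MAX_HWFNS_PER_DEVICE];
  };

  /* Visit every HW function; num_hwfns never exceeds the bound. */
  static void for_each_hwfn_sketch(struct dev_sketch *dev,
                                   void (*fn)(struct hwfn_sketch *))
  {
          int i;

          for (i = 0; i < dev->num_hwfns; i++)
                  fn(&dev->hwfns[i]);
  }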

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b2f4910..d14f99c 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,7 +28,7 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
-#define MAX_HWFNS_PER_DEVICE	(4)
+#define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 08/62] net/qede/base: move mask constants defining NIC type
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (7 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 07/62] net/qede/base: decrease maximum HW func per device Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 09/62] net/qede/base: remove attribute from update current config Rasesh Mody
                               ` (53 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move mask constants defining NIC type to ecore.h
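
For illustration, a minimal sketch of how these masks classify the
NIC from the PCI device ID (the helper and enum names are
assumptions; only the three defines come from this patch):

  #include <stdint.h>

  #define ECORE_DEV_ID_MASK     0xff00
  #define ECORE_DEV_ID_MASK_BB  0x1600
  #define ECORE_DEV_ID_MASK_AH  0x8000

  enum nic_type_sketch { NIC_BB, NIC_AH, NIC_UNKNOWN };

  /* The high byte of the PCI device ID encodes the NIC family. */
  static enum nic_type_sketch classify_nic(uint16_t device_id)
  {
          if ((device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_BB)
                  return NIC_BB;
          if ((device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH)
                  return NIC_AH;
          return NIC_UNKNOWN;
  }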

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    4 ++++
 drivers/net/qede/base/ecore_dev.c |    4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d14f99c..a6cf52e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -625,6 +625,10 @@ struct ecore_dev {
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
 
+#define ECORE_DEV_ID_MASK	0xff00
+#define ECORE_DEV_ID_MASK_BB	0x1600
+#define ECORE_DEV_ID_MASK_AH	0x8000
+
 	u16 vendor_id;
 	u16 device_id;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f82f5e6..ee50090 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2888,10 +2888,6 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
 }
 
-#define ECORE_DEV_ID_MASK	0xff00
-#define ECORE_DEV_ID_MASK_BB	0x1600
-#define ECORE_DEV_ID_MASK_AH	0x8000
-
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 09/62] net/qede/base: remove attribute from update current config
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (8 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 08/62] net/qede/base: move mask constants defining NIC type Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 10/62] net/qede/base: add nvram options Rasesh Mody
                               ` (52 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove the attribute field from the update_current_config() API; the
Management FW needs to know only the last entity that configured the
device.
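
A hedged usage sketch of the slimmed-down call (assumes the
surrounding ecore context and types; the error message is
illustrative):

  static enum _ecore_status_t
  report_config_owner_sketch(struct ecore_hwfn *p_hwfn,
                             struct ecore_ptt *p_ptt)
  {
          enum _ecore_status_t rc;

          /* The driver reports itself as the last configuring
           * entity; no separate 'config' argument is needed.
           */
          rc = ecore_mcp_ov_update_current_config(p_hwfn, p_ptt,
                                                  ECORE_OV_CLIENT_DRV);
          if (rc != ECORE_SUCCESS)
                  DP_NOTICE(p_hwfn, false,
                            "Failed to update current config owner\n");
          return rc;
  }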

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    5 ++---
 drivers/net/qede/base/ecore_mcp_api.h |    8 --------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index e236f39..245d478 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1709,14 +1709,13 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client)
 {
 	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
 
-	switch (config) {
+	switch (client) {
 	case ECORE_OV_CLIENT_DRV:
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
 		break;
@@ -1727,7 +1726,7 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", config);
+		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 614cf67..72a58e4 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -173,12 +173,6 @@ union ecore_mcp_protocol_stats {
 };
 #endif
 
-enum ecore_ov_config_method {
-	ECORE_OV_CONFIG_MTU,
-	ECORE_OV_CONFIG_MAC,
-	ECORE_OV_CONFIG_WOL
-};
-
 enum ecore_ov_client {
 	ECORE_OV_CLIENT_DRV,
 	ECORE_OV_CLIENT_USER,
@@ -453,7 +447,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param config - Configuation that has been updated
  *  @param client - ecore client type
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
@@ -461,7 +454,6 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t
 ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_config_method config,
 				   enum ecore_ov_client client);
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 10/62] net/qede/base: add nvram options
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (9 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 09/62] net/qede/base: remove attribute from update current config Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 11/62] net/qede/base: add comment Rasesh Mody
                               ` (51 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several NVRAM options, such as MCOT, FEC selection, temperature
threshold, Reset on LAN, etc.
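
Each option is encoded as a MASK/OFFSET pair packed into a 32-bit
NVRAM word; a hedged decode sketch for the new Reset-on-LAN option
(the helper is illustrative and assumes the driver's u32 typedef;
only the defines come from this patch):

  /* Extract the ROL option from the config word that carries it. */
  static int rol_enabled_sketch(u32 cfg_word)
  {
          u32 rol = (cfg_word & NVM_CFG1_GLOB_RESET_ON_LAN_MASK) >>
                    NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET;

          return rol == NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED;
  }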

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |  465 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 461 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 68abc2d..4202337 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,13 +13,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     9/6/2016
+ * Created:     12/15/2016
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
+#define NVM_CFG_version 0x81805
+
+#define NVM_CFG_new_option_seq 15
+
+#define NVM_CFG_removed_option_seq 0
+
+#define NVM_CFG_updated_value_seq 1
+
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
 		#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
@@ -242,6 +250,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
 		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+	/*  ROL enable */
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1
 	u32 f_lane_cfg1; /* 0x38 */
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
@@ -470,6 +483,15 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
 		#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
 		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+	/*  Select package id method */
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1
 	u32 manufacture_time; /* 0x70 */
 		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
 		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
@@ -480,6 +502,11 @@ struct nvm_cfg1_glob {
 	/*  Max MSIX for Ethernet in default mode */
 		#define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000
 		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18
+	/*  PF Mapping */
+		#define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000
+		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
+		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
+		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -489,6 +516,47 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
 		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
 		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+	/*  Max. continuous operating temperature */
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16
+	/*  GPIO which triggers run-time port swap according to the map
+	 *  specified in option 205
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20
 	u32 generic_cont1; /* 0x78 */
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
 		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
@@ -508,6 +576,17 @@ struct nvm_cfg1_glob {
 	/*  PCIe Preset value - applies only if option 194 is enabled */
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000
 		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19
+	/*  Port mapping to be used when the run-time GPIO for port-swap is
+	 *  defined and set.
+	 */
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -515,6 +594,44 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
 		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
+	/*  If set to other than NA, 0 - Normal operation, 1 - Thermal event
+	 *  occurred
+	 */
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20
 	u32 mbi_date; /* 0x80 */
 	u32 misc_sig; /* 0x84 */
 	/*  Define the GPIO mapping to switch i2c mux */
@@ -555,6 +672,81 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
 		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+	/*  Interrupt signal used for SMBus/I2C management interface
+	 *  0 = Interrupt event occurred
+	 *  1 = Normal
+	 */
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20
+	/*  Set aLOM FAN on GPIO */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20
 	u32 device_capabilities; /* 0x88 */
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
 		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
@@ -591,11 +783,262 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
 			0x80
 		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100
-	u32 reserved[41]; /* 0x9C */
+	/* @DPDK */
+	u32 reserved1[12]; /* 0x9C */
+	u32 oem1_number[8]; /* 0xCC */
+	u32 oem2_number[8]; /* 0xEC */
+	u32 mps25_active_txfir_pre; /* 0x10C */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
+	u32 mps25_active_txfir_main; /* 0x110 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
+	u32 mps25_active_txfir_post; /* 0x114 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
+	u32 features; /* 0x118 */
+	/*  Set the Aux Fan on temperature  */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+	/*  Set NC-SI package ID */
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+	/*  PMBUS Clock GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+	/*  PMBUS Data GPIO */
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
+	u32 tx_rx_eq_25g_hlpc; /* 0x11C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
+	u32 tx_rx_eq_25g_llpc; /* 0x120 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
+	u32 tx_rx_eq_25g_ac; /* 0x124 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
+	u32 tx_rx_eq_10g_pc; /* 0x128 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
+	u32 tx_rx_eq_10g_ac; /* 0x12C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
+	u32 tx_rx_eq_1g; /* 0x130 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
+	u32 tx_rx_eq_25g_bt; /* 0x134 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
+	u32 tx_rx_eq_10g_bt; /* 0x138 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
+	u32 generic_cont4; /* 0x13C */
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	u32 reserved[58]; /* 0x140 */
 };
 
 struct nvm_cfg1_path {
-	u32 reserved[30]; /* 0x0 */
+	u32 reserved[1]; /* 0x0 */
 };
 
 struct nvm_cfg1_port {
@@ -749,6 +1192,15 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
 		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
+		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
+		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
+		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
+		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1451,12 +1903,17 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
 		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
 	u32 reserved[8]; /* 0x30 */
 };
 
 struct nvm_cfg1 {
 	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
 	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
 	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 11/62] net/qede/base: add comment
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (10 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 10/62] net/qede/base: add nvram options Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 12/62] net/qede/base: use default MTU from shared memory Rasesh Mody
                               ` (50 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a comment for the endianness manipulation in
ecore_mcp_send_drv_version().
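
For reference, a minimal standalone sketch of the conversion the new
comment documents (the names and the htonl() stand-in for
OSAL_CPU_TO_BE32() are illustrative, not the driver's):

  #include <stdint.h>
  #include <string.h>
  #include <arpa/inet.h>

  static void pack_name_be32(const char *name, uint32_t *dst, int num_words)
  {
      int i;

      for (i = 0; i < num_words; i++) {
          uint32_t word;

          /* Load 4 characters, then store them in big-endian order,
           * which is the layout the MFW expects for the driver name.
           */
          memcpy(&word, name + i * sizeof(uint32_t), sizeof(word));
          dst[i] = htonl(word);
      }
  }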

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 245d478..df6ebd2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1662,6 +1662,7 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	p_drv_version->version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
+		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
 		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 12/62] net/qede/base: use default MTU from shared memory
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (11 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 11/62] net/qede/base: add comment Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 13/62] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
                               ` (49 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Read and use the default MTU value from shared memory.
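
A standalone sketch of the read-with-fallback this boils down to (the
function and macro names here are illustrative):

  #include <stdint.h>

  #define DEFAULT_MTU 1500u

  static uint16_t mtu_from_shmem(uint32_t shmem_mtu_size)
  {
      uint16_t mtu = (uint16_t)shmem_mtu_size;

      /* older MFW images may leave the shared-memory field at zero */
      return mtu ? mtu : DEFAULT_MTU;
  }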

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    2 ++
 drivers/net/qede/base/ecore_dev.c     |    3 +++
 drivers/net/qede/base/ecore_mcp.c     |    5 +++++
 drivers/net/qede/base/ecore_mcp_api.h |    2 ++
 drivers/net/qede/qede_if.h            |    1 +
 drivers/net/qede/qede_main.c          |    2 ++
 6 files changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index a6cf52e..25c96f8 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -377,6 +377,8 @@ struct ecore_hw_info {
 
 	/* Default DCBX mode */
 	u8 dcbx_mode;
+
+	u16 mtu;
 };
 
 struct ecore_hw_cid_data {
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index ee50090..87c1c23 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2879,6 +2879,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_get_num_funcs(p_hwfn, p_ptt);
 
+	if (ecore_mcp_is_init(p_hwfn))
+		p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu;
+
 	/* In case of forcing the driver's default resource allocation, calling
 	 * ecore_hw_get_resc() should come after initializing the personality
 	 * and after getting the number of functions, since the calculation of
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index df6ebd2..8720ae7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1431,6 +1431,11 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
+	info->mtu = (u16)shmem_info.mtu_size;
+
+	if (info->mtu == 0)
+		info->mtu = 1500;
+
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 72a58e4..1be22dd 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -84,6 +84,8 @@ struct ecore_mcp_function_info {
 
 #define ECORE_MCP_VLAN_UNSET		(0xffff)
 	u16 ovlan;
+
+	u16 mtu;
 };
 
 struct ecore_mcp_nvm_common {
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 4b23bb9..18404fb 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -34,6 +34,7 @@ struct qed_dev_info {
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
+	u16 mtu;
 	/* To be added... */
 };
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 332b1f8..e76346e 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -365,6 +365,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 				      &dev_info->mfw_rev, NULL);
 	}
 
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	return 0;
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 13/62] net/qede/base: change queue/sb-id from 8 bit to 16 bit
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (12 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 12/62] net/qede/base: use default MTU from shared memory Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 14/62] net/qede/base: update MFW when default MTU is changed Rasesh Mody
                               ` (48 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the queue/SB-id values from 8-bit fields to 16-bit fields so that
ids above 255 are no longer truncated.
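
The widening matters once more than 256 queues or status blocks are in
play; a tiny standalone illustration of the truncation an 8-bit field
allows (values chosen for illustration):

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
      uint16_t fw_qid = 300;           /* legal in a 16-bit id space */
      uint8_t old_qid = (uint8_t)fw_qid;

      assert(old_qid == 44);           /* 300 % 256: silently the wrong queue */
      return 0;
  }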

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |    8 ++++----
 drivers/net/qede/base/ecore_dev_api.h |    4 ++--
 drivers/net/qede/base/ecore_l2.c      |    2 +-
 drivers/net/qede/base/ecore_l2_api.h  |    2 +-
 drivers/net/qede/base/ecore_sriov.c   |    4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 87c1c23..7a501bb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3876,7 +3876,7 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3897,7 +3897,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3919,7 +3919,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id)
+					    u16 coalesce, u16 qid, u16 sb_id)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
@@ -3941,7 +3941,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 0dee68a..e7332ac 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -535,7 +535,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
  */
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 /**
  * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
@@ -553,6 +553,6 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
  */
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u8 qid, u16 sb_id);
+					    u16 coalesce, u16 qid, u16 sb_id);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 22bb43d..1379a1b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -212,7 +212,7 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
 		rc = ecore_fw_l2_queue(p_hwfn,
-				       (u8)p_rss->rss_ind_table[i],
+				       p_rss->rss_ind_table[i],
 				       &abs_l2_queue);
 		if (rc != ECORE_SUCCESS)
 			return rc;
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 247316b..8f7b614 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -37,7 +37,7 @@ struct ecore_queue_start_common_params {
 	/* q_zone_id is relative, may be different from queue id
 	 * currently used by Tx-only, upper-bounded by number of FW-queues
 	 */
-	u8 qzone_id;
+	u16 qzone_id;
 
 	/* stats_id is relative or absolute depends on function */
 	u8 stats_id;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b051678..6e86966 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2118,8 +2118,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 14/62] net/qede/base: update MFW when default MTU is changed
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (13 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 13/62] net/qede/base: change queue/sb-id from 8 bit to 16 bit Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 15/62] net/qede/base: prevent device init failure Rasesh Mody
                               ` (47 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Send a mailbox command to the management FW when the default MTU is
changed.
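
The flag in ecore_hw_init() is easy to misread because it is inverted;
schematically the flow is (API names are from this patch, the
surrounding logic is condensed):

  bool b_default_mtu = true;

  if (!p_hwfn->hw_info.mtu) {
      p_hwfn->hw_info.mtu = 1500;     /* MFW provided none; pick our own */
      b_default_mtu = false;
  }
  /* ... load request / init ... */
  if (!b_default_mtu)                 /* only if we picked our own value */
      ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
                              p_hwfn->hw_info.mtu);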

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   11 +++++++++++
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a501bb..13e13ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1629,6 +1629,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	u32 load_code, param, drv_mb_param;
+	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
 	int i;
 
@@ -1648,6 +1649,12 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		/* If management didn't provide a default, set one of our own */
+		if (!p_hwfn->hw_info.mtu) {
+			p_hwfn->hw_info.mtu = 1500;
+			b_default_mtu = false;
+		}
+
 		if (IS_VF(p_dev)) {
 			p_hwfn->b_int_enabled = 1;
 			continue;
@@ -1776,6 +1783,10 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			return rc;
 		}
 
+		if (!b_default_mtu)
+			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						p_hwfn->hw_info.mtu);
+
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 8720ae7..0338576 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1438,9 +1438,6 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 
 	info->mtu = (u16)shmem_info.mtu_size;
 
-	if (info->mtu == 0)
-		info->mtu = 1500;
-
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
 		   "Read configuration from shmem: pause_on_host %02x"
 		    " protocol %02x BW [%02x - %02x]"
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 15/62] net/qede/base: prevent device init failure
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (14 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 14/62] net/qede/base: update MFW when default MTU is changed Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 16/62] net/qede/base: read card personality via MFW commands Rasesh Mody
                               ` (46 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

The device initialization flow should not fail just because an FW
interface command is not available; log the problem and continue
instead.
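
A standalone sketch of the before/after error policy (status codes and
the message are illustrative):

  #include <stdio.h>

  enum status { OK, NOSUPP };

  static enum status hw_init_tail(void)
  {
      enum status rc = NOSUPP;        /* optional MFW command failed */

      /* before: if (rc != OK) return rc;  -- aborted the entire init */
      if (rc != OK)
          fprintf(stderr, "Failed to update firmware version\n");

      /* init continues; only mandatory steps may still abort */
      return OK;
  }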

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 13e13ba..7494f93 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1778,18 +1778,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(p_hwfn, "Failed to send firmware version\n");
-			return rc;
-		}
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update firmware version\n");
 
 		if (!b_default_mtu)
-			ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
-						p_hwfn->hw_info.mtu);
+			rc = ecore_mcp_ov_update_mtu(p_hwfn, p_hwfn->p_main_ptt,
+						      p_hwfn->hw_info.mtu);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update default mtu\n");
 
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
 						ECORE_OV_DRIVER_STATE_DISABLED);
+		if (rc != ECORE_SUCCESS)
+			DP_INFO(p_hwfn, "Failed to update driver state\n");
 	}
 
 	return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 16/62] net/qede/base: read card personality via MFW commands
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (15 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 15/62] net/qede/base: prevent device init failure Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 17/62] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
                               ` (45 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support to read NIC personality via management FW for non-L2
protocols.
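
The new ECORE_IS_*_PERSONALITY() macros below fold the repeated checks
into one place; their logic, distilled into a standalone sketch (the
helper functions stand in for the macros):

  #include <stdbool.h>

  enum pers { ETH, FCOE, ISCSI, ETH_ROCE, ETH_IWARP, ETH_RDMA };

  static bool is_rdma(enum pers p)
  {
      return p == ETH_ROCE || p == ETH_IWARP || p == ETH_RDMA;
  }

  static bool is_l2(enum pers p)
  {
      return p == ETH || is_rdma(p);  /* RDMA personalities include L2 */
  }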

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h       |   16 +++++++++++++-
 drivers/net/qede/base/ecore_dev.c   |   17 +++++----------
 drivers/net/qede/base/ecore_mcp.c   |   41 +++++++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_sriov.c |    1 +
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25c96f8..842a3b5 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -243,7 +243,8 @@ enum ecore_pci_personality {
 	ECORE_PCI_FCOE,
 	ECORE_PCI_ISCSI,
 	ECORE_PCI_ETH_ROCE,
-	ECORE_PCI_IWARP,
+	ECORE_PCI_ETH_IWARP,
+	ECORE_PCI_ETH_RDMA,
 	ECORE_PCI_DEFAULT /* default in shmem */
 };
 
@@ -328,6 +329,19 @@ enum ecore_hw_err_type {
 struct ecore_hw_info {
 	/* PCI personality */
 	enum ecore_pci_personality personality;
+#define ECORE_IS_RDMA_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE ||  \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_ROCE_PERSONALITY(dev)			   \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_ROCE || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_IWARP_PERSONALITY(dev)			    \
+	((dev)->hw_info.personality == ECORE_PCI_ETH_IWARP || \
+	 (dev)->hw_info.personality == ECORE_PCI_ETH_RDMA)
+#define ECORE_IS_L2_PERSONALITY(dev)		      \
+	((dev)->hw_info.personality == ECORE_PCI_ETH || \
+	 ECORE_IS_RDMA_PERSONALITY(dev))
 
 	/* Resource Allocation scheme results */
 	u32 resc_start[ECORE_MAX_RESC];
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7494f93..1b033b7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -219,9 +219,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	 * don't have a good recycle flow. Non ethernet PFs require only a
 	 * single physical queue.
 	 */
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-	    p_hwfn->hw_info.personality == ECORE_PCI_ETH)
+	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
 		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
 	else
 		protocol_pqs = 1;
@@ -229,7 +227,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
 	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 		num_pqs++;	/* for RoCE queue */
 		init_rdma_offload_pq = true;
 		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
@@ -259,7 +257,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		qm_info->num_pf_rls = (u8)num_pf_rls;
 	}
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_IWARP) {
+	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
 		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
 		init_rdma_offload_pq = true;
 		init_pure_ack_pq = true;
@@ -335,9 +333,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
 		struct init_qm_pq_params *params =
 		    &qm_info->qm_pq_params[curr_queue++];
 
-		if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
-		    p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
+		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
 			params->vport_id = vport_id;
 			params->tc_id = i;
 			/* Note: this assumes that if we had a configuration
@@ -612,8 +608,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 		/* EQ */
 		n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
-		if ((p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) ||
-		    (p_hwfn->hw_info.personality == ECORE_PCI_IWARP)) {
+		if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
 			/* Calculate the EQ size
 			 * ---------------------
 			 * Each ICID may generate up to one event at a time i.e.
@@ -636,7 +631,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 *          smaller than RoCE's so we avoid exact
 			 *          calculation.
 			 */
-			if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
 				    ecore_cxt_get_proto_cid_count(
 						p_hwfn,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0338576..9f897b5 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1373,16 +1373,47 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+/* @DPDK */
+/* Old MFW has a global configuration for all PFs regarding RDMA support */
+static void
+ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
+				 enum ecore_pci_personality *p_proto)
+{
+	*p_proto = ECORE_PCI_ETH;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to Legacy capabilities, L2 personality is %08x\n",
+		   (u32)*p_proto);
+}
+
+/* @DPDK */
+static enum _ecore_status_t
+ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      enum ecore_pci_personality *p_proto)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
+		   (u32)*p_proto, resp, param);
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 			  struct public_func *p_info,
+			  struct ecore_ptt *p_ptt,
 			  enum ecore_pci_personality *p_proto)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	switch (p_info->config & FUNC_MF_CFG_PROTOCOL_MASK) {
 	case FUNC_MF_CFG_PROTOCOL_ETHERNET:
-		*p_proto = ECORE_PCI_ETH;
+		if (ecore_mcp_get_shmem_proto_mfw(p_hwfn, p_ptt, p_proto) !=
+		    ECORE_SUCCESS)
+			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
 	default:
 		rc = ECORE_INVAL;
@@ -1403,7 +1434,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	info->pause_on_host = (shmem_info.config &
 			       FUNC_MF_CFG_PAUSE_ON_HOST_RING) ? 1 : 0;
 
-	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, &info->protocol)) {
+	if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+				      &info->protocol)) {
 		DP_ERR(p_hwfn, "Unknown personality %08x\n",
 		       (u32)(shmem_info.config & FUNC_MF_CFG_PROTOCOL_MASK));
 		return ECORE_INVAL;
@@ -1559,8 +1591,9 @@ int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 		if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
 			continue;
 
-		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info,
-					      &protocol) != ECORE_SUCCESS)
+		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
+					      &protocol) !=
+		    ECORE_SUCCESS)
 			continue;
 
 		if ((1 << ((u32)protocol)) & personalities)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6e86966..578899c 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -86,6 +86,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 17/62] net/qede/base: allow probe to succeed with minor HW-issues
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (16 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 16/62] net/qede/base: read card personality via MFW commands Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 18/62] net/qede/base: remove unneeded step in HW init Rasesh Mody
                               ` (44 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow probe to succeed even when various 'minor' HW issues are
detected, if the caller requests this relaxed behavior.
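
Caller-side usage would look roughly like this fragment (the params
struct and result enum are from this patch; the caller context and the
log call are hypothetical):

  struct ecore_hw_prepare_params params = { 0 };
  enum _ecore_status_t rc;

  params.b_relaxed_probe = true;      /* tolerate 'minor' issues */
  rc = ecore_hw_prepare(p_dev, &params);
  if (rc == ECORE_SUCCESS &&
      params.p_relaxed_res != ECORE_HW_PREPARE_SUCCESS)
      DP_NOTICE(p_dev, false, "probe passed with issue %d\n",
                params.p_relaxed_res);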

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   71 +++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_dev_api.h |   40 ++++++++++++++++---
 2 files changed, 94 insertions(+), 17 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1b033b7..907566c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2445,12 +2445,15 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
+static enum _ecore_status_t
+ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      struct ecore_hw_prepare_params *p_params)
 {
 	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
 	struct ecore_mcp_link_params *link;
+	enum _ecore_status_t rc;
 
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
@@ -2458,6 +2461,8 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	/* Verify MCP has initialized it */
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_NVM;
 		return ECORE_INVAL;
 	}
 
@@ -2643,7 +2648,13 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
 			     &p_hwfn->hw_info.device_capabilities);
 
-	return ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
@@ -2797,15 +2808,22 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		  enum ecore_pci_personality personality, bool drv_resc_alloc)
+		  enum ecore_pci_personality personality,
+		  struct ecore_hw_prepare_params *p_params)
 {
+	bool drv_resc_alloc = p_params->drv_resc_alloc;
 	enum _ecore_status_t rc;
 
 	/* Since all information is common, only first hwfns should do this */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_iov_hw_info(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_BAD_IOV;
+			else
+				return rc;
+		}
 	}
 
 	/* TODO In get_hw_info, amoungst others:
@@ -2820,7 +2838,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
 #endif
-	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt);
+	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 #ifndef ASIC_ONLY
@@ -2828,8 +2846,12 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 #endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	if (rc != ECORE_SUCCESS) {
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_IGU;
+		else
+			return rc;
+	}
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) {
@@ -2896,7 +2918,13 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	 * the resources/features depends on them.
 	 * This order is not harmful if not forcing.
 	 */
-	return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	rc = ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
+		rc = ECORE_SUCCESS;
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
+	}
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
@@ -3028,6 +3056,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
 		DP_ERR(p_hwfn,
 		       "Reading the ME register returns all Fs; Preventing further chip access\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME;
 		return ECORE_INVAL;
 	}
 
@@ -3037,6 +3067,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_ptt_pool_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to prepare hwfn's hw\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err0;
 	}
 
@@ -3046,8 +3078,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	/* First hwfn learns basic information, e.g., number of hwfns */
 	if (!p_hwfn->my_id) {
 		rc = ecore_get_dev_info(p_dev);
-		if (rc != ECORE_SUCCESS)
+		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+					ECORE_HW_PREPARE_FAILED_DEV;
 			goto err1;
+		}
 	}
 
 	ecore_hw_hwfn_prepare(p_hwfn);
@@ -3056,12 +3092,14 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_mcp_cmd_init(p_hwfn, p_hwfn->p_main_ptt);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed initializing mcp command\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err1;
 	}
 
 	/* Read the device configuration information from the HW and SHMEM */
 	rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
-			       p_params->personality, p_params->drv_resc_alloc);
+			       p_params->personality, p_params);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to get HW information\n");
 		goto err2;
@@ -3094,6 +3132,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 	rc = ecore_init_alloc(p_hwfn);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate the init array\n");
+		if (p_params->b_relaxed_probe)
+			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
 		goto err2;
 	}
 #ifndef ASIC_ONLY
@@ -3129,6 +3169,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
 
+	if (p_params->b_relaxed_probe)
+		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
+
 	/* Store the precompiled init data ptrs */
 	if (IS_PF(p_dev))
 		ecore_init_iro_array(p_dev);
@@ -3164,6 +3207,10 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 		 * initiliazed hwfn 0.
 		 */
 		if (rc != ECORE_SUCCESS) {
+			if (p_params->b_relaxed_probe)
+				p_params->p_relaxed_res =
+						ECORE_HW_PREPARE_FAILED_ENG2;
+
 			if (IS_PF(p_dev)) {
 				ecore_init_free(p_hwfn);
 				ecore_mcp_free(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index e7332ac..74a15ef 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -138,17 +138,47 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
  */
 enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
 
+enum ecore_hw_prepare_result {
+	ECORE_HW_PREPARE_SUCCESS,
+
+	/* FAILED results indicate probe has failed & cleaned up */
+	ECORE_HW_PREPARE_FAILED_ENG2,
+	ECORE_HW_PREPARE_FAILED_ME,
+	ECORE_HW_PREPARE_FAILED_MEM,
+	ECORE_HW_PREPARE_FAILED_DEV,
+	ECORE_HW_PREPARE_FAILED_NVM,
+
+	/* BAD results indicate probe is passed even though some wrongness
+	 * has occurred; Trying to actually use [I.e., hw_init()] might have
+	 * dire repercussions.
+	 */
+	ECORE_HW_PREPARE_BAD_IOV,
+	ECORE_HW_PREPARE_BAD_MCP,
+	ECORE_HW_PREPARE_BAD_IGU,
+};
+
 struct ecore_hw_prepare_params {
-	/* personality to initialize */
+	/* Personality to initialize */
 	int personality;
-	/* force the driver's default resource allocation */
+
+	/* Force the driver's default resource allocation */
 	bool drv_resc_alloc;
-	/* check the reg_fifo after any register access */
+
+	/* Check the reg_fifo after any register access */
 	bool chk_reg_fifo;
-	/* request the MFW to initiate PF FLR */
+
+	/* Request the MFW to initiate PF FLR */
 	bool initiate_pf_flr;
-	/* the OS Epoch time in seconds */
+
+	/* The OS Epoch time in seconds */
 	u32 epoch;
+
+	/* Allow prepare to pass even if some initializations are failing.
+	 * If set, the `p_prepare_res' field would be set with the return,
+	 * and might allow probe to pass even if there are certain issues.
+	 */
+	bool b_relaxed_probe;
+	enum ecore_hw_prepare_result p_relaxed_res;
 };
 
 /**
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 18/62] net/qede/base: remove unneeded step in HW init
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (17 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 17/62] net/qede/base: allow probe to succeed with minor HW-issues Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 19/62] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
                               ` (43 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

There is no need to close the NIG OUT_EN registers during HW init, so
remove that step.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 907566c..e2d4132 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -999,18 +999,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
-	/* Close gate from NIG to BRB/Storm; By default they are open, but
-	 * we close them to prevent NIG from passing data to reset blocks.
-	 * Should have been done in the ENGINE phase, but init-tool lacks
-	 * proper port-pretend capabilities.
-	 */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_pretend(p_hwfn, p_ptt, p_hwfn->port_id ^ 1);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_BRB_OUT_EN, 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_STORM_OUT_EN, 0);
-	ecore_port_unpretend(p_hwfn, p_ptt);
-
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_ENGINE, ANY_PHASE_ID, hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 19/62] net/qede/base: allow only trusted VFs to be promisc
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (18 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 18/62] net/qede/base: remove unneeded step in HW init Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 20/62] net/qede/base: qm initialization revamp Rasesh Mody
                               ` (42 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow only trusted VFs to become promisc/multi-promisc. The reasonable
thing is to key this off the VF's 'trusted' attribute instead of simply
allowing every VF to become promiscuous.
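
Distilled to a standalone predicate, the policy is simply (illustrative
helper; the real enforcement happens in the PF's vport-update handling
through OSAL_IOV_VF_VPORT_UPDATE()):

  #include <stdbool.h>

  static bool vf_may_be_promisc(bool vf_trusted, bool vf_requested)
  {
      /* untrusted VFs have promisc/multi-promisc requests refused */
      return vf_requested && vf_trusted;
  }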

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c    |    8 ++++----
 drivers/net/qede/base/ecore_sriov.c |    2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 1379a1b..d2e1719 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -274,8 +274,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->rx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->rx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 
 	/* Set Tx mode accept flags */
@@ -298,8 +298,8 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		p_ramrod->tx_mode.state = OSAL_CPU_TO_LE16(state);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "p_ramrod->tx_mode.state = 0x%x\n",
-			   state);
+			   "vport[%02x] p_ramrod->tx_mode.state = 0x%x\n",
+			   p_ramrod->common.vport_id, state);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 578899c..a302e9e 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2626,7 +2626,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	 */
 	tlvs_accepted = tlvs_mask;
 
-#ifndef LINUX_REMOVE
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2634,7 +2633,6 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_NOT_SUPPORTED;
 		goto out;
 	}
-#endif
 
 	if (!tlvs_accepted) {
 		if (tlvs_mask)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 20/62] net/qede/base: qm initialization revamp
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (19 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 19/62] net/qede/base: allow only trusted VFs to be promisc Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 21/62] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
                               ` (41 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

This patch revamps QM (queue manager) initialization: the set of
required physical queues is now derived from per-personality PQ flags
via dedicated helper functions, replacing the previous ad-hoc
accounting.
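
The heart of the revamp is the flag-driven PQ accounting; a standalone
worked example using the patch's flag values (the counts are chosen for
illustration):

  #include <stdint.h>

  #define PQ_FLAGS_RLS   (1 << 0)
  #define PQ_FLAGS_MCOS  (1 << 1)
  #define PQ_FLAGS_LB    (1 << 2)
  #define PQ_FLAGS_OOO   (1 << 3)
  #define PQ_FLAGS_ACK   (1 << 4)
  #define PQ_FLAGS_OFLD  (1 << 5)
  #define PQ_FLAGS_VFS   (1 << 6)

  static uint16_t num_pqs(uint32_t flags, uint16_t tcs, uint16_t vfs,
                          uint16_t rls)
  {
      return (!!(flags & PQ_FLAGS_RLS)) * rls +
             (!!(flags & PQ_FLAGS_MCOS)) * tcs +
             !!(flags & PQ_FLAGS_LB) +
             !!(flags & PQ_FLAGS_OOO) +
             !!(flags & PQ_FLAGS_ACK) +
             !!(flags & PQ_FLAGS_OFLD) +
             (!!(flags & PQ_FLAGS_VFS)) * vfs;
  }

  /* e.g. an L2 PF (LB | MCOS | VFS) with 4 TCs and 2 VFs needs
   * 1 + 4 + 2 = 7 physical queues.
   */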

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h    |    2 +
 drivers/net/qede/base/ecore.h       |   34 +-
 drivers/net/qede/base/ecore_cxt.c   |   14 +-
 drivers/net/qede/base/ecore_dev.c   |  869 ++++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_hw.c    |   38 --
 drivers/net/qede/base/ecore_l2.c    |   12 +-
 drivers/net/qede/base/ecore_l2.h    |    2 +-
 drivers/net/qede/base/ecore_spq.c   |    9 +-
 drivers/net/qede/base/ecore_sriov.c |   13 +-
 9 files changed, 645 insertions(+), 348 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 0d239c9..63ee6d5 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -320,6 +320,8 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
 #define OSAL_BUILD_BUG_ON(cond)		nothing
 #define ETH_ALEN			ETHER_ADDR_LEN
 
+#define OSAL_BITMAP_WEIGHT(bitmap, count) 0
+
 #define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 842a3b5..58c97a3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -445,11 +445,13 @@ struct ecore_qm_info {
 	struct init_qm_port_params  *qm_port_params;
 	u16			start_pq;
 	u8			start_vport;
-	u8			pure_lb_pq;
-	u8			offload_pq;
-	u8			pure_ack_pq;
-	u8			ooo_pq;
-	u8			vf_queues_offset;
+	u16			pure_lb_pq;
+	u16			offload_pq;
+	u16			pure_ack_pq;
+	u16			ooo_pq;
+	u16			first_vf_pq;
+	u16			first_mcos_pq;
+	u16			first_rl_pq;
 	u16			num_pqs;
 	u16			num_vf_pqs;
 	u8			num_vports;
@@ -828,6 +830,28 @@ int ecore_device_num_ports(struct ecore_dev *p_dev);
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 			   u8 *mac);
 
+/* Flags for indication of required queues */
+#define PQ_FLAGS_RLS	(1 << 0)
+#define PQ_FLAGS_MCOS	(1 << 1)
+#define PQ_FLAGS_LB	(1 << 2)
+#define PQ_FLAGS_OOO	(1 << 3)
+#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_OFLD	(1 << 5)
+#define PQ_FLAGS_VFS	(1 << 6)
+
+/* physical queue index for cm context initialization */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags);
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc);
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 qpid);
+
+/* amount of resources used in qm init */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
+
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 2635030..aeeabf1 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1372,18 +1372,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
 }
 
 /* CM PF */
-static enum _ecore_status_t ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
 {
-	union ecore_qm_pq_params pq_params;
-	u16 pq;
-
-	/* XCM pure-LB queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, pq);
-
-	return ECORE_SUCCESS;
+	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET,
+		     ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB));
 }
 
 /* DQ PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e2d4132..380c5ba 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -178,282 +178,575 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	}
 }
 
-static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
-					       bool b_sleepable)
+/******************** QM initialization *******************/
+
+/* bitmaps for indicating active traffic classes.
+ * Special case for Arrowhead 4 port
+ */
+/* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
+#define ACTIVE_TCS_BMAP 0x9f
+/* 0..3 actually used, OOO and high priority stuff all use 3 */
+#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
+
+/* determines the physical queue flags for a given PF. */
+static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 {
-	u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue;
-	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct init_qm_port_params *p_qm_port;
-	bool init_rdma_offload_pq = false;
-	bool init_pure_ack_pq = false;
-	bool init_ooo_pq = false;
-	u16 num_pqs, protocol_pqs;
-	u16 num_pf_rls = 0;
-	u16 num_vfs = 0;
-	u32 pf_rl;
-	u8 pf_wfq;
-
-	/* @TMP - saving the existing min/max bw config before resetting the
-	 * qm_info to restore them.
-	 */
-	pf_rl = qm_info->pf_rl;
-	pf_wfq = qm_info->pf_wfq;
+	u32 flags;
 
-#ifdef CONFIG_ECORE_SRIOV
-	if (p_hwfn->p_dev->p_iov_info)
-		num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
-#endif
-	OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
+	/* common flags */
+	flags = PQ_FLAGS_LB;
 
-#ifndef ASIC_ONLY
-	/* @TMP - Don't allocate QM queues for VFs on emulation */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "Emulation - skip configuring QM queues for VFs\n");
-		num_vfs = 0;
+	/* feature flags */
+	if (IS_ECORE_SRIOV(p_hwfn->p_dev))
+		flags |= PQ_FLAGS_VFS;
+
+	/* protocol flags */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_ETH:
+		flags |= PQ_FLAGS_MCOS;
+		break;
+	case ECORE_PCI_FCOE:
+		flags |= PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ISCSI:
+		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_ROCE:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_OFLD;
+		break;
+	case ECORE_PCI_ETH_IWARP:
+		flags |= PQ_FLAGS_MCOS | PQ_FLAGS_ACK | PQ_FLAGS_OOO |
+			 PQ_FLAGS_OFLD;
+		break;
+	default:
+		DP_ERR(p_hwfn, "unknown personality %d\n",
+		       p_hwfn->hw_info.personality);
+		return 0;
 	}
-#endif
+	return flags;
+}
 
-	/* ethernet PFs require a pq per tc. Even if only a subset of the TCs
-	 * active, we want physical queues allocated for all of them, since we
-	 * don't have a good recycle flow. Non ethernet PFs require only a
-	 * single physical queue.
-	 */
-	if (ECORE_IS_L2_PERSONALITY(p_hwfn))
-		protocol_pqs = p_hwfn->hw_info.num_hw_tc;
-	else
-		protocol_pqs = 1;
-
-	num_pqs = protocol_pqs + num_vfs + 1;	/* The '1' is for pure-LB */
-	num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
-
-	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
-		num_pqs++;	/* for RoCE queue */
-		init_rdma_offload_pq = true;
-		if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
-			/* Due to FW assumption that rl==vport, we limit the
-			 * number of rate limiters by the minimum between its
-			 * allocated number and the allocated number of vports.
-			 * Another limitation is the number of supported qps
-			 * with rate limiters in FW.
-			 */
-			num_pf_rls =
-			    (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
-					     RESC_NUM(p_hwfn, ECORE_VPORT));
+/* Getters for resource amounts necessary for qm initialization */
+u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->hw_info.num_hw_tc;
+}
 
-			/* we subtract num_vfs because each one requires a rate
-			 * limiter, and one default rate limiter.
-			 */
-			if (num_pf_rls < num_vfs + 1) {
-				DP_ERR(p_hwfn, "No RL for DCQCN");
-				DP_ERR(p_hwfn, "[num_pf_rls %d num_vfs %d]\n",
-				       num_pf_rls, num_vfs);
-				return ECORE_INVAL;
-			}
-			num_pf_rls -= num_vfs + 1;
-		}
+u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn)
+{
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
+			p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+}
 
-		num_pqs += num_pf_rls;
-		qm_info->num_pf_rls = (u8)num_pf_rls;
-	}
+#define NUM_DEFAULT_RLS 1
 
-	if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
-		num_pqs += 3;	/* for iwarp queue / pure-ack / ooo */
-		init_rdma_offload_pq = true;
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
 
-	if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
-		num_pqs += 2;	/* for iSCSI pure-ACK / OOO queue */
-		init_pure_ack_pq = true;
-		init_ooo_pq = true;
-	}
+	/* @DPDK */
+	/* num RLs can't exceed resource amount of rls or vports or the
+	 * dcqcn qps
+	 */
+	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
+				     (u16)RESC_NUM(p_hwfn, ECORE_VPORT));
 
-	/* Sanity checking that setup requires legal number of resources */
-	if (num_pqs > RESC_NUM(p_hwfn, ECORE_PQ)) {
-		DP_ERR(p_hwfn,
-		       "Need too many Physical queues - 0x%04x avail %04x",
-		       num_pqs, RESC_NUM(p_hwfn, ECORE_PQ));
-		return ECORE_INVAL;
+	/* make sure after we reserve the default and VF rls we'll have
+	 * something left
+	 */
+	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) {
+		DP_NOTICE(p_hwfn, false,
+			  "no rate limiters left for PF rate limiting"
+			  " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs);
+		return 0;
 	}
 
-	/* PQs will be arranged as follows: First per-TC PQ, then pure-LB queue,
-	 * then special queues (iSCSI pure-ACK / RoCE), then per-VF PQ.
+	/* subtract rls necessary for VFs and one default one for the PF */
+	num_pf_rls -= num_vfs + NUM_DEFAULT_RLS;
+
+	return num_pf_rls;
+}
+
+u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	/* all pqs share the same vport (hence the 1 below), except for vfs
+	 * and pf_rl pqs
 	 */
-	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					    b_sleepable ? GFP_KERNEL :
-					    GFP_ATOMIC,
-					    sizeof(struct init_qm_pq_params) *
-					    num_pqs);
-	if (!qm_info->qm_pq_params)
-		goto alloc_err;
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+}
 
-	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					       b_sleepable ? GFP_KERNEL :
-					       GFP_ATOMIC,
-					       sizeof(struct
-						      init_qm_vport_params) *
-					       num_vports);
-	if (!qm_info->qm_vport_params)
-		goto alloc_err;
+/* calc amount of PQs according to the requested flags */
+u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	return (!!(PQ_FLAGS_RLS & pq_flags)) *
+		ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_MCOS & pq_flags)) *
+		ecore_init_qm_get_num_tcs(p_hwfn) +
+	       (!!(PQ_FLAGS_LB & pq_flags)) +
+	       (!!(PQ_FLAGS_OOO & pq_flags)) +
+	       (!!(PQ_FLAGS_ACK & pq_flags)) +
+	       (!!(PQ_FLAGS_OFLD & pq_flags)) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) *
+		ecore_init_qm_get_num_vfs(p_hwfn);
+}
 
-	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev,
-					      b_sleepable ? GFP_KERNEL :
-					      GFP_ATOMIC,
-					      sizeof(struct init_qm_port_params)
-					      * MAX_NUM_PORTS);
-	if (!qm_info->qm_port_params)
-		goto alloc_err;
+/* initialize the top level QM params */
+static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev,
-					b_sleepable ? GFP_KERNEL :
-					GFP_ATOMIC,
-					sizeof(struct ecore_wfq_data) *
-					num_vports);
+	/* pq and vport bases for this PF */
+	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
 
-	if (!qm_info->wfq_data)
-		goto alloc_err;
+	/* rate limiting and weighted fair queueing are always enabled */
+	qm_info->vport_rl_en = 1;
+	qm_info->vport_wfq_en = 1;
 
-	vport_id = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	/* in AH 4 port we have fewer TCs per port */
+	qm_info->max_phys_tcs_per_port =
+		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
+			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+}
 
-	/* First init rate limited queues ( Due to RoCE assumption of
-	 * qpid=rlid )
-	 */
-	for (curr_queue = 0; curr_queue < num_pf_rls; curr_queue++) {
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id++;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-	};
-
-	/* Protocol PQs */
-	for (i = 0; i < protocol_pqs; i++) {
-		struct init_qm_pq_params *params =
-		    &qm_info->qm_pq_params[curr_queue++];
-
-		if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
-			params->vport_id = vport_id;
-			params->tc_id = i;
-			/* Note: this assumes that if we had a configuration
-			 * with N tcs and subsequently another configuration
-			 * With Fewer TCs, the in flight traffic (in QM queues,
-			 * in FW, from driver to FW) will still trickle out and
-			 * not get "stuck" in the QM. This is determined by the
-			 * NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ. Unused TCs are
-			 * supposed to be cleared in this map, allowing traffic
-			 * to flush out. If this is not the case, we would need
-			 * to set the TC of unused queues to 0, and reconfigure
-			 * QM every time num of TCs changes. Unused queues in
-			 * this context would mean those intended for TCs where
-			 * tc_id > hw_info.num_active_tcs.
-			 */
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		} else {
-			params->vport_id = vport_id;
-			params->tc_id = p_hwfn->hw_info.offload_tc;
-			params->wrr_group = 1;	/* @@@TBD ECORE_WRR_MEDIUM */
-		}
-	}
+/* initialize qm vport params */
+static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 i;
 
-	/* Then init pure-LB PQ */
-	qm_info->pure_lb_pq = curr_queue;
-	qm_info->qm_pq_params[curr_queue].vport_id =
-	    (u8)RESC_START(p_hwfn, ECORE_VPORT);
-	qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
-	qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-	curr_queue++;
-
-	qm_info->offload_pq = 0;	/* Already initialized for iSCSI/FCoE */
-	if (init_rdma_offload_pq) {
-		qm_info->offload_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_pure_ack_pq) {
-		qm_info->pure_ack_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id =
-		    p_hwfn->hw_info.offload_tc;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	if (init_ooo_pq) {
-		qm_info->ooo_pq = curr_queue;
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
-		qm_info->qm_pq_params[curr_queue].tc_id = DCBX_ISCSI_OOO_TC;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		curr_queue++;
-	}
-
-	/* Then init per-VF PQs */
-	vf_offset = curr_queue;
-	for (i = 0; i < num_vfs; i++) {
-		/* First vport is used by the PF */
-		qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
-		/* @@@TBD VF Multi-cos */
-		qm_info->qm_pq_params[curr_queue].tc_id = 0;
-		qm_info->qm_pq_params[curr_queue].wrr_group = 1;
-		qm_info->qm_pq_params[curr_queue].rl_valid = 1;
-		curr_queue++;
-	};
-
-	qm_info->vf_queues_offset = vf_offset;
-	qm_info->num_pqs = num_pqs;
-	qm_info->num_vports = num_vports;
+	/* all vports participate in weighted fair queueing */
+	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
+		qm_info->qm_vport_params[i].vport_wfq = 1;
+}
 
+/* initialize qm port params */
+static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
+{
 	/* Initialize qm port parameters */
-	num_ports = p_hwfn->p_dev->num_ports_in_engines;
+	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engines;
+
+	/* indicate how ooo and high pri traffic is dealt with */
+	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
+		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP;
+
 	for (i = 0; i < num_ports; i++) {
-		p_qm_port = &qm_info->qm_port_params[i];
+		struct init_qm_port_params *p_qm_port =
+			&p_hwfn->qm_info.qm_port_params[i];
+
 		p_qm_port->active = 1;
-		/* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
-		 * be in place
-		 */
-		if (num_ports == 4)
-			p_qm_port->active_phys_tcs = 0xf;
-		else
-			p_qm_port->active_phys_tcs = 0x9f;
+		p_qm_port->active_phys_tcs = active_phys_tcs;
 		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
+}
 
-	if (ECORE_IS_AH(p_hwfn->p_dev) && (num_ports == 4))
-		qm_info->max_phys_tcs_per_port = NUM_PHYS_TCS_4PORT_K2;
-	else
-		qm_info->max_phys_tcs_per_port = NUM_OF_PHYS_TCS;
+/* Reset the params which must be reset for qm init. QM init may be called as
+ * a result of flows other than driver load (e.g. dcbx renegotiation). Other
+ * params may be affected by the init but would simply recalculate to the same
+ * values. The allocations made for QM init, ports, vports, pqs and vfqs are not
+ * affected as these amounts stay the same.
+ */
+static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
+	qm_info->num_pqs = 0;
+	qm_info->num_vports = 0;
+	qm_info->num_pf_rls = 0;
+	qm_info->num_vf_pqs = 0;
+	qm_info->first_vf_pq = 0;
+	qm_info->first_mcos_pq = 0;
+	qm_info->first_rl_pq = 0;
+}
+
+static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	qm_info->num_vports++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+}
+
+/* initialize a single pq and manage qm_info resources accounting.
+ * The pq_init_flags param determines whether the PQ is rate limited
+ * (for VF or PF)
+ * and whether a new vport is allocated to the pq or not (i.e. vport will be
+ * shared)
+ */
+
+/* flags for pq init */
+#define PQ_INIT_SHARE_VPORT	(1 << 0)
+#define PQ_INIT_PF_RL		(1 << 1)
+#define PQ_INIT_VF_RL		(1 << 2)
+
+/* defines for pq init */
+#define PQ_INIT_DEFAULT_WRR_GROUP	1
+#define PQ_INIT_DEFAULT_TC		0
+#define PQ_INIT_OFLD_TC			(p_hwfn->hw_info.offload_tc)
+
+static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
+			     struct ecore_qm_info *qm_info,
+			     u8 tc, u32 pq_init_flags)
+{
+	u16 pq_idx = qm_info->num_pqs, max_pq =
+					ecore_init_qm_get_num_pqs(p_hwfn);
+
+	if (pq_idx > max_pq)
+		DP_ERR(p_hwfn,
+		       "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+
+	/* init pq params */
+	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport +
+						 qm_info->num_vports;
+	qm_info->qm_pq_params[pq_idx].tc_id = tc;
+	qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP;
+	qm_info->qm_pq_params[pq_idx].rl_valid =
+		(pq_init_flags & PQ_INIT_PF_RL ||
+		 pq_init_flags & PQ_INIT_VF_RL);
+
+	/* qm params accounting */
+	qm_info->num_pqs++;
+	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
+		qm_info->num_vports++;
+
+	if (pq_init_flags & PQ_INIT_PF_RL)
+		qm_info->num_pf_rls++;
+
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
+		DP_ERR(p_hwfn,
+		       "vport overflow! qm_info->num_vports %d,"
+		       " qm_init_get_num_vports() %d\n",
+		       qm_info->num_vports,
+		       ecore_init_qm_get_num_vports(p_hwfn));
+
+	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn))
+		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d,"
+		       " qm_init_get_num_pf_rls() %d\n",
+		       qm_info->num_pf_rls,
+		       ecore_init_qm_get_num_pf_rls(p_hwfn));
+}
+
+/* get pq index according to PQ_FLAGS */
+static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
+					     u32 pq_flags)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	/* Can't have multiple flags set here */
+	if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags,
+				sizeof(pq_flags)) > 1)
+		goto err;
+
+	switch (pq_flags) {
+	case PQ_FLAGS_RLS:
+		return &qm_info->first_rl_pq;
+	case PQ_FLAGS_MCOS:
+		return &qm_info->first_mcos_pq;
+	case PQ_FLAGS_LB:
+		return &qm_info->pure_lb_pq;
+	case PQ_FLAGS_OOO:
+		return &qm_info->ooo_pq;
+	case PQ_FLAGS_ACK:
+		return &qm_info->pure_ack_pq;
+	case PQ_FLAGS_OFLD:
+		return &qm_info->offload_pq;
+	case PQ_FLAGS_VFS:
+		return &qm_info->first_vf_pq;
+	default:
+		goto err;
+	}
+
+err:
+	DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
+	return OSAL_NULL;
+}
+
+/* save pq index in qm info */
+static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn,
+				  u32 pq_flags, u16 pq_val)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	*base_pq_idx = p_hwfn->qm_info.start_pq + pq_val;
+}
+
+/* get tx pq index, with the PQ TX base already set (ready for context init) */
+u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags)
+{
+	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+
+	return *base_pq_idx + CM_TX_PQ_BASE;
+}
+
+u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
+{
+	u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn);
+
+	if (tc > max_tc)
+		DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+}
+
+u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (vf > max_vf)
+		DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+}
+
+u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 rl)
+{
+	u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
+
+	if (rl > max_rl)
+		DP_ERR(p_hwfn, "rl %d must be smaller than %d\n", rl, max_rl);
+
+	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + rl;
+}
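/* Worked example (illustrative values, not part of this patch): the
 * ecore_init_qm_set_idx() calls during init store absolute PQ ids
 * (start_pq + relative index), so with start_pq = 8 and first_vf_pq
 * recorded at relative index 6, ecore_get_cm_pq_idx_vf(p_hwfn, 3)
 * resolves to CM_TX_PQ_BASE + 8 + 6 + 3.
 */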
+
+/* Functions for creating specific types of pqs */
+static void ecore_init_qm_lb_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LB))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LB, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PURE_LB_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OOO))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
+			 PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_ACK))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_OFLD))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 tc_idx;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_MCOS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_MCOS, qm_info->num_pqs);
+	for (tc_idx = 0; tc_idx < ecore_init_qm_get_num_tcs(p_hwfn); tc_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs);
 
 	qm_info->num_vf_pqs = num_vfs;
-	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC,
+				 PQ_INIT_VF_RL);
+}
 
-	for (i = 0; i < qm_info->num_vports; i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn);
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
-	qm_info->vport_rl_en = 1;
-	qm_info->vport_wfq_en = 1;
-	qm_info->pf_rl = pf_rl;
-	qm_info->pf_wfq = pf_wfq;
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs);
+	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++)
+		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC,
+				 PQ_INIT_PF_RL);
+}
+
+static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
+{
+	/* rate limited pqs, must come first (FW assumption) */
+	ecore_init_qm_rl_pqs(p_hwfn);
+
+	/* pqs for multi cos */
+	ecore_init_qm_mcos_pqs(p_hwfn);
+
+	/* pure loopback pq */
+	ecore_init_qm_lb_pq(p_hwfn);
+
+	/* out of order pq */
+	ecore_init_qm_ooo_pq(p_hwfn);
+
+	/* pure ack pq */
+	ecore_init_qm_pure_ack_pq(p_hwfn);
+
+	/* pq for offloaded protocol */
+	ecore_init_qm_offload_pq(p_hwfn);
+
+	/* done sharing vports */
+	ecore_init_qm_advance_vport(p_hwfn);
+
+	/* pqs for vfs */
+	ecore_init_qm_vf_pqs(p_hwfn);
+}
+
+/* compare values of getters against resources amounts */
+static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn)
+{
+	if (ecore_init_qm_get_num_vports(p_hwfn) >
+	    RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n");
+		return ECORE_INVAL;
+	}
+
+	if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) {
+		DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n");
+		return ECORE_INVAL;
+	}
 
 	return ECORE_SUCCESS;
+}
 
- alloc_err:
-	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
-	ecore_qm_info_free(p_hwfn);
-	return ECORE_NOMEM;
+/*
+ * Function for verbose printing of the qm initialization results
+ */
+static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	struct init_qm_vport_params *vport;
+	struct init_qm_port_params *port;
+	struct init_qm_pq_params *pq;
+	int i, tc;
+
+	/* top level params */
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "qm init top level params: start_pq %d, start_vport %d,"
+		   " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n",
+		   qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq,
+		   qm_info->offload_pq, qm_info->pure_ack_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d,"
+		   " num_vports %d, max_phys_tcs_per_port %d\n",
+		   qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs,
+		   qm_info->num_vf_pqs, qm_info->num_vports,
+		   qm_info->max_phys_tcs_per_port);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d,"
+		   " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n",
+		   qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en,
+		   qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl,
+		   qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn));
+
+	/* port table */
+	for (i = 0; i < p_hwfn->p_dev->num_ports_in_engines; i++) {
+		port = &qm_info->qm_port_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "port idx %d, active %d, active_phys_tcs %d,"
+			   " num_pbf_cmd_lines %d, num_btb_blocks %d,"
+			   " reserved %d\n",
+			   i, port->active, port->active_phys_tcs,
+			   port->num_pbf_cmd_lines, port->num_btb_blocks,
+			   port->reserved);
+	}
+
+	/* vport table */
+	for (i = 0; i < qm_info->num_vports; i++) {
+		vport = &qm_info->qm_vport_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "vport idx %d, vport_rl %d, wfq %d,"
+			   " first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->vport_rl,
+			   vport->vport_wfq);
+		for (tc = 0; tc < NUM_OF_TCS; tc++)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
+				   vport->first_tx_pq_id[tc]);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n");
+	}
+
+	/* pq table */
+	for (i = 0; i < qm_info->num_pqs; i++) {
+		pq = &qm_info->qm_pq_params[i];
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "pq idx %d, vport_id %d, tc %d, wrr_grp %d,"
+			   " rl_valid %d\n",
+			   qm_info->start_pq + i, pq->vport_id, pq->tc_id,
+			   pq->wrr_group, pq->rl_valid);
+	}
+}
+
+static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn)
+{
+	/* reset params required for init run */
+	ecore_init_qm_reset_params(p_hwfn);
+
+	/* init QM top level params */
+	ecore_init_qm_params(p_hwfn);
+
+	/* init QM port params */
+	ecore_init_qm_port_params(p_hwfn);
+
+	/* init QM vport params */
+	ecore_init_qm_vport_params(p_hwfn);
+
+	/* init QM physical queue params */
+	ecore_init_qm_pq_params(p_hwfn);
+
+	/* display all that init */
+	ecore_dp_init_qm_params(p_hwfn);
 }
 
 /* This function reconfigures the QM pf on the fly.
  * For this purpose we:
  * 1. reconfigure the QM database
- * 2. set new values to runtime arrat
+ * 2. set new values to runtime array
  * 3. send an sdm_qm_cmd through the rbc interface to stop the QM
  * 4. activate init tool in QM_PF stage
  * 5. send an sdm_qm_cmd through rbc interface to release the QM
@@ -462,20 +755,11 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	bool b_rc;
 	enum _ecore_status_t rc;
-
-	/* qm_info is allocated in ecore_init_qm_info() which is already called
-	 * from ecore_resc_alloc() or previous call of ecore_qm_reconf().
-	 * The allocated size may change each init, so we free it before next
-	 * allocation.
-	 */
-	ecore_qm_info_free(p_hwfn);
+	bool b_rc;
 
 	/* initialize ecore's qm data structure */
-	rc = ecore_init_qm_info(p_hwfn, false);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
 	OSAL_SPIN_LOCK(&qm_lock);
@@ -508,6 +792,48 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	enum _ecore_status_t rc;
+
+	rc = ecore_init_qm_sanity(p_hwfn);
+	if (rc != ECORE_SUCCESS)
+		goto alloc_err;
+
+	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					    sizeof(struct init_qm_pq_params) *
+					    ecore_init_qm_get_num_pqs(p_hwfn));
+	if (!qm_info->qm_pq_params)
+		goto alloc_err;
+
+	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				       sizeof(struct init_qm_vport_params) *
+				       ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->qm_vport_params)
+		goto alloc_err;
+
+	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				      sizeof(struct init_qm_port_params) *
+				      p_hwfn->p_dev->num_ports_in_engines);
+	if (!qm_info->qm_port_params)
+		goto alloc_err;
+
+	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					sizeof(struct ecore_wfq_data) *
+					ecore_init_qm_get_num_vports(p_hwfn));
+	if (!qm_info->wfq_data)
+		goto alloc_err;
+
+	return ECORE_SUCCESS;
+
+alloc_err:
+	DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
+	ecore_qm_info_free(p_hwfn);
+	return ECORE_NOMEM;
+}
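/* Note (not part of this patch): the sizes above come from the
 * ecore_init_qm_get_num_*() getters and num_ports_in_engines, which are
 * deterministic for a given configuration. That is what allows
 * ecore_qm_reconf() above to drop the old free-and-reallocate step and
 * simply re-run ecore_init_qm_info() on the existing buffers.
 */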
+/******************** End QM initialization ***************/
+
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
 	struct ecore_consq *p_consq;
@@ -572,11 +898,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		/* Prepare and process QM requirements */
-		rc = ecore_init_qm_info(p_hwfn, true);
+		rc = ecore_alloc_qm_data(p_hwfn);
 		if (rc)
 			goto alloc_err;
 
+		/* init qm info */
+		ecore_init_qm_info(p_hwfn);
+
 		/* Compute the ILT client partition */
 		rc = ecore_cxt_cfg_ilt_compute(p_hwfn);
 		if (rc)
@@ -618,18 +946,18 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			 * worst case:
 			 * - Core - according to SPQ.
 			 * - RoCE - per QP there are a couple of ICIDs, one
-			 *          responder and one requester, each can
-			 *          generate an EQE => n_eqes_qp = 2 * n_qp.
-			 *          Each CQ can generate an EQE. There are 2 CQs
-			 *          per QP => n_eqes_cq = 2 * n_qp.
-			 *          Hence the RoCE total is 4 * n_qp or
-			 *          2 * num_cons.
+			 *	  responder and one requester, each can
+			 *	  generate an EQE => n_eqes_qp = 2 * n_qp.
+			 *	  Each CQ can generate an EQE. There are 2 CQs
+			 *	  per QP => n_eqes_cq = 2 * n_qp.
+			 *	  Hence the RoCE total is 4 * n_qp or
+			 *	  2 * num_cons.
 			 * - ENet - There can be up to two events per VF. One
-			 *          for VF-PF channel and another for VF FLR
-			 *          initial cleanup. The number of VFs is
-			 *          bounded by MAX_NUM_VFS_BB, and is much
-			 *          smaller than RoCE's so we avoid exact
-			 *          calculation.
+			 *	  for VF-PF channel and another for VF FLR
+			 *	  initial cleanup. The number of VFs is
+			 *	  bounded by MAX_NUM_VFS_BB, and is much
+			 *	  smaller than RoCE's so we avoid exact
+			 *	  calculation.
 			 */
 			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
 				num_cons =
@@ -683,7 +1011,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for dmae_info structure\n");
+				  "Failed to allocate memory for dmae_info"
+				  " structure\n");
 			goto alloc_err;
 		}
 
@@ -705,9 +1034,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 
 	return ECORE_SUCCESS;
 
- alloc_no_mem:
+alloc_no_mem:
 	rc = ECORE_NOMEM;
- alloc_err:
+alloc_err:
 	ecore_resc_free(p_dev);
 	return rc;
 }
@@ -2353,7 +2682,7 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 			*p_resc_start = dflt_resc_start;
 		}
 	}
- out:
+out:
 	return ECORE_SUCCESS;
 }
 
@@ -3139,13 +3468,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
 #endif
 
 	return rc;
- err2:
+err2:
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_iov_free_hw_info(p_dev);
 	ecore_mcp_free(p_hwfn);
- err1:
+err1:
 	ecore_hw_hwfn_free(p_hwfn);
- err0:
+err0:
 	return rc;
 }
 
@@ -3309,7 +3638,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 	if (!p_chain->pbl.external)
 		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
 				       p_chain->pbl.p_phys_table, pbl_size);
- out:
+out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3521,7 +3850,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 
 	return ECORE_SUCCESS;
 
- nomem:
+nomem:
 	ecore_chain_free(p_dev, p_chain);
 	return rc;
 }
@@ -3956,7 +4285,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
@@ -4000,7 +4329,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 		goto out;
 
 	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
- out:
+out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 49d52c0..396edc2 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -905,44 +905,6 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
-		    enum protocol_type proto,
-		    union ecore_qm_pq_params *p_params)
-{
-	u16 pq_id = 0;
-
-	if ((proto == PROTOCOLID_CORE ||
-	     proto == PROTOCOLID_ETH) && !p_params) {
-		DP_NOTICE(p_hwfn, true,
-			  "Protocol %d received NULL PQ params\n", proto);
-		return 0;
-	}
-
-	switch (proto) {
-	case PROTOCOLID_CORE:
-		if (p_params->core.tc == LB_TC)
-			pq_id = p_hwfn->qm_info.pure_lb_pq;
-		else if (p_params->core.tc == PKT_LB_TC)
-			pq_id = p_hwfn->qm_info.ooo_pq;
-		else
-			pq_id = p_hwfn->qm_info.offload_pq;
-		break;
-	case PROTOCOLID_ETH:
-		pq_id = p_params->eth.tc;
-		/* TODO - multi-CoS for VFs? */
-		if (p_params->eth.is_vf)
-			pq_id += p_hwfn->qm_info.vf_queues_offset +
-			    p_params->eth.vf_id;
-		break;
-	default:
-		pq_id = 0;
-	}
-
-	pq_id = CM_TX_PQ_BASE + pq_id + RESC_START(p_hwfn, ECORE_PQ);
-
-	return pq_id;
-}
-
 void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
 			 enum ecore_hw_err_type err_type)
 {
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index d2e1719..0220d19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -834,13 +834,13 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params)
+			      u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
 	struct ecore_hw_cid_data *p_tx_cid;
-	u16 pq_id, abs_tx_qzone_id = 0;
+	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 abs_vport_id;
 
@@ -882,7 +882,6 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
 
-	pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
 	p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
@@ -898,7 +897,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
 	struct ecore_hw_cid_data *p_tx_cid;
-	union ecore_qm_pq_params pq_params;
 	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
@@ -918,9 +916,6 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 
 	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
 	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-
-	pq_params.eth.tc = tc;
 
 	/* Allocate a CID for the queue */
 	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
@@ -944,7 +939,8 @@ ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
 					   p_params,
 					   pbl_addr,
 					   pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_mcos(p_hwfn,
+								    tc));
 
 	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
 	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 9c1bd38..b598eda 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -81,7 +81,7 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
 			      struct ecore_queue_start_common_params *p_params,
 			      dma_addr_t pbl_addr,
 			      u16 pbl_size,
-			      union ecore_qm_pq_params *p_pq_params);
+			      u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 9035d3b..ba26d45 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -173,11 +173,10 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	u16 pq;
 	struct ecore_cxt_info cxt_info;
 	struct core_conn_context *p_cxt;
-	union ecore_qm_pq_params pq_params;
 	enum _ecore_status_t rc;
+	u16 physical_q;
 
 	cxt_info.iid = p_spq->cid;
 
@@ -206,10 +205,8 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.core.tc = LB_TC;
-	pq = ecore_get_qm_pq(p_hwfn, PROTOCOLID_CORE, &pq_params);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(pq);
+	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
+	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
 	p_cxt->xstorm_st_context.spq_base_lo =
 	    DMA_LO_LE(p_spq->chain.p_phys_addr);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index a302e9e..365be25 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -632,8 +632,8 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
-				bool b_fail_malicious)
+static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
+				       bool b_fail_malicious)
 {
 	/* Check PF supports sriov */
 	if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
@@ -2103,15 +2103,9 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	union ecore_qm_pq_params pq_params;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
 
-	/* Prepare the parameters which would choose the right PQ */
-	OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
-	pq_params.eth.is_vf = 1;
-	pq_params.eth.vf_id = vf->relative_vf_id;
-
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
@@ -2132,7 +2126,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 					   &params,
 					   req->pbl_addr,
 					   req->pbl_size,
-					   &pq_params);
+					   ecore_get_cm_pq_idx_vf(p_hwfn,
+							vf->relative_vf_id));
 
 	if (rc)
 		status = PFVF_STATUS_FAILURE;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 21/62] net/qede/base: print firmware MFW and MBI versions
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (20 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 20/62] net/qede/base: qm initialization revamp Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 22/62] net/qede/base: check active VF queues before stopping Rasesh Mody
                               ` (40 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a printout of the FW, Management FW and MBI versions.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_if.h   |    9 ++++++++-
 drivers/net/qede/qede_main.c |   14 ++++++--------
 2 files changed, 14 insertions(+), 9 deletions(-)
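
For reference, a minimal sketch of how a caller might render
dev_info->mfw_rev as a dotted version string with the QED_MFW_VERSION_*
masks added below; the helper name is hypothetical and the macros come
from qede_if.h:

	#include <stdint.h>
	#include <stdio.h>

	static void qed_mfw_ver_to_str(uint32_t mfw_rev, char *buf, size_t len)
	{
		/* One byte per component, major version in the top byte. */
		snprintf(buf, len, "%u.%u.%u.%u",
			 (mfw_rev & QED_MFW_VERSION_3_MASK) >> QED_MFW_VERSION_3_OFFSET,
			 (mfw_rev & QED_MFW_VERSION_2_MASK) >> QED_MFW_VERSION_2_OFFSET,
			 (mfw_rev & QED_MFW_VERSION_1_MASK) >> QED_MFW_VERSION_1_OFFSET,
			 (mfw_rev & QED_MFW_VERSION_0_MASK) >> QED_MFW_VERSION_0_OFFSET);
	}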

diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 18404fb..1e27428 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -30,12 +30,19 @@ struct qed_dev_info {
 
 	/* MFW version */
 	uint32_t mfw_rev;
+#define QED_MFW_VERSION_0_MASK		0x000000FF
+#define QED_MFW_VERSION_0_OFFSET	0
+#define QED_MFW_VERSION_1_MASK		0x0000FF00
+#define QED_MFW_VERSION_1_OFFSET	8
+#define QED_MFW_VERSION_2_MASK		0x00FF0000
+#define QED_MFW_VERSION_2_OFFSET	16
+#define QED_MFW_VERSION_3_MASK		0xFF000000
+#define QED_MFW_VERSION_3_OFFSET	24
 
 	uint32_t flash_size;
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
-	/* To be added... */
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e76346e..1d4f336 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -327,6 +327,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
+	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
 
@@ -337,13 +339,7 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 		dev_info->fw_eng = FW_ENGINEERING_VERSION;
 		dev_info->mf_mode = edev->mf_mode;
 		dev_info->tx_switching = false;
-	} else {
-		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
-					&dev_info->fw_minor, &dev_info->fw_rev,
-					&dev_info->fw_eng);
-	}
 
-	if (IS_PF(edev)) {
 		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
 		if (ptt) {
 			ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
@@ -361,12 +357,14 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 			ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
 		}
 	} else {
+		ecore_vf_get_fw_version(&edev->hwfns[0], &dev_info->fw_major,
+					&dev_info->fw_minor, &dev_info->fw_rev,
+					&dev_info->fw_eng);
+
 		ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
 				      &dev_info->mfw_rev, NULL);
 	}
 
-	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-
 	return 0;
 }
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 22/62] net/qede/base: check active VF queues before stopping
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (21 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 21/62] net/qede/base: print firmware MFW and MBI versions Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 23/62] net/qede/base: set driver type before sending load request Rasesh Mody
                               ` (39 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make sure VF queues are closed before stopping the vport.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   37 ++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 365be25..73c4015 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -232,6 +232,30 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].rxq_active)
+			return true;
+
+	return false;
+}
+
+static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u8 i;
+
+	for (i = 0; i < p_vf->num_rxqs; i++)
+		if (p_vf->vf_queues[i].txq_active)
+			return true;
+
+	return false;
+}
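/* Note (not part of this patch): the tx-queue check above also loops over
 * p_vf->num_rxqs. That appears safe only because the driver assigns the
 * VF rx and tx queue counts from the same value; num_txqs would be the
 * clearer bound if the two can ever diverge.
 */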
+
 /* TODO - this is linux crc32; Need a way to ifdef it out for linux */
 u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
 {
@@ -1365,8 +1389,10 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 
 	p_vf->num_active_rxqs = 0;
 
-	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++)
+	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
 		p_vf->vf_queues[i].rxq_active = 0;
+		p_vf->vf_queues[i].txq_active = 0;
+	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
 	OSAL_MEMSET(&p_vf->acquire, 0, sizeof(p_vf->acquire));
@@ -1943,6 +1969,15 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->vport_instance--;
 	vf->spoof_chk = false;
 
+	if ((ecore_iov_validate_active_rxq(p_hwfn, vf)) ||
+	    (ecore_iov_validate_active_txq(p_hwfn, vf))) {
+		vf->b_malicious = true;
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] - considered malicious;"
+			  " Unable to stop RX/TX queues\n",
+			  vf->abs_vf_id);
+	}
+
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 23/62] net/qede/base: set driver type before sending load request
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (22 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 22/62] net/qede/base: check active VF queues before stopping Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 24/62] net/qede/base: prevent driver load with invalid resources Rasesh Mody
                               ` (38 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Set the drv_type before sending LOAD_REQ, and remove ver_str,
which is not used by the MFW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    3 +--
 drivers/net/qede/base/ecore_mcp.c |    3 ---
 drivers/net/qede/qede_ethdev.c    |    2 +-
 drivers/net/qede/qede_if.h        |    3 +--
 drivers/net/qede/qede_main.c      |   10 ++++------
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 58c97a3..b8c8bfd 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -30,7 +30,6 @@
 
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
-#define VER_SIZE 16
 #define ECORE_WFQ_UNIT	100
 #include "../qede_logs.h" /* @DPDK */
 
@@ -706,7 +705,7 @@ struct ecore_dev {
 
 	int				pcie_width;
 	int				pcie_speed;
-	u8				ver_str[NAME_SIZE]; /* @DPDK */
+
 	/* Add MF related configuration */
 	u8				mcp_rev;
 	u8				boot_mode;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 9f897b5..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -524,7 +524,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
@@ -538,8 +537,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
 			  p_dev->drv_type;
-	OSAL_MEMCPY(&union_data.ver_str, p_dev->ver_str, MCP_DRV_VER_STR_SIZE);
-	mb_params.p_data_src = &union_data;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c372181..d52e1be 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2175,7 +2175,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
-	adapter->ops->common->set_id(edev, edev->name, QEDE_PMD_VERSION);
+	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
 		adapter->dev_info.num_mac_filters =
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1e27428..0a1f7db 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -116,8 +116,7 @@ struct qed_common_ops {
 		     struct rte_pci_device *pci_dev,
 		     enum qed_protocol protocol,
 		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
-	void (*set_id)(struct ecore_dev *edev,
-		char name[], const char ver_str[]);
+	void (*set_name)(struct ecore_dev *edev, char name[]);
 	enum _ecore_status_t
 		(*chain_alloc)(struct ecore_dev *edev,
 			       enum ecore_chain_use_mode
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 1d4f336..a932c5f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -50,7 +50,9 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
 	int rc;
 
 	ecore_init_struct(edev);
+	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 	qdev->protocol = protocol;
+
 	if (is_vf)
 		edev->b_is_vf = true;
 
@@ -420,9 +422,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 	return 0;
 }
 
-static void
-qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
-	   const char ver_str[NAME_SIZE])
+static void qed_set_name(struct ecore_dev *edev, char name[NAME_SIZE])
 {
 	int i;
 
@@ -430,8 +430,6 @@ qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
 	for_each_hwfn(edev, i) {
 		snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
 	}
-	memcpy(edev->ver_str, ver_str, NAME_SIZE);
-	edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
 }
 
 static uint32_t
@@ -714,7 +712,7 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(probe, &qed_probe),
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
 	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
-	INIT_STRUCT_FIELD(set_id, &qed_set_id),
+	INIT_STRUCT_FIELD(set_name, &qed_set_name),
 	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
 	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
 	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 24/62] net/qede/base: prevent driver load with invalid resources
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (23 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 23/62] net/qede/base: set driver type before sending load request Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 25/62] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
                               ` (37 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent storage drivers from attempting to load with invalid resources.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
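
The guard is the OSAL_MIN_T() clamp below: the FCoE/iSCSI CQ feature
counts can never exceed the available status blocks or command-queue
CQs, so a storage personality sees a sane (possibly zero) count rather
than loading against resources that do not exist. Equivalent scalar
logic, shown only for illustration:

	static uint32_t storage_cq_feat(uint32_t num_sb, uint32_t num_cmdqs_cqs)
	{
		/* A CQ needs both a status block and a command queue. */
		return num_sb < num_cmdqs_cqs ? num_sb : num_cmdqs_cqs;
	}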

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 380c5ba..7fce4fd 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2437,13 +2437,19 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 			   FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 			   sb_cnt_info.sb_iov_cnt);
 
+	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
+					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #SBS=%d num_features=%d\n",
+		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
 		   (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
-		   RESC_NUM(p_hwfn, ECORE_SB),
-		   num_features);
+		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
+		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
+		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
 static enum resource_id_enum
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 25/62] net/qede/base: add interfaces for MFW TLV request processing
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (24 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 24/62] net/qede/base: prevent driver load with invalid resources Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 26/62] net/qede/base: code refactoring of SP queues Rasesh Mody
                               ` (36 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new base driver interfaces for Management FW TLV request processing.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c     |    6 +
 drivers/net/qede/base/ecore_mcp_api.h |  301 +++++++++++++++++++++++++++++++++
 2 files changed, 307 insertions(+)
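
Every field in the new TLV structs is paired with a *_set flag so the
TLV handler only reports values the ecore client actually filled in. A
minimal sketch of a client filling one generic TLV; the callback name
and the value are hypothetical:

	static void qede_fill_generic_tlv(union ecore_mfw_tlv_data *p_data)
	{
		/* The paired *_set flag marks the value as valid for MFW. */
		p_data->generic.tx_descr_qdepth = 4096;
		p_data->generic.tx_descr_qdepth_set = true;
	}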

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..79a907b 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,9 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 1be22dd..8cad43d 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -232,6 +232,295 @@ struct ecore_mba_vers {
 	u32 mba_vers[ECORE_MAX_NUM_OF_ROMIMG];
 };
 
+enum ecore_mfw_tlv_type {
+	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x4,	/* iSCSI protocol TLVs */
+};
+
+struct ecore_mfw_tlv_generic {
+	u16 feat_flags;
+	bool feat_flags_set;
+	u64 local_mac;
+	bool local_mac_set;
+	u64 additional_mac1;
+	bool additional_mac1_set;
+	u64 additional_mac2;
+	bool additional_mac2_set;
+	u16 lso_maxoff_size;
+	bool lso_maxoff_size_set;
+	u16 lso_minseg_size;
+	bool lso_minseg_size_set;
+	u8 prom_mode;
+	bool prom_mode_set;
+	u16 tx_descr_size;
+	bool tx_descr_size_set;
+	u16 rx_descr_size;
+	bool rx_descr_size_set;
+	u16 netq_count;
+	bool netq_count_set;
+	u16 flex_vlan;
+	bool flex_vlan_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u32 tcp4_offloads;
+	bool tcp4_offloads_set;
+	u32 tcp6_offloads;
+	bool tcp6_offloads_set;
+	u16 tx_descr_qdepth;
+	bool tx_descr_qdepth_set;
+	u16 rx_descr_qdepth;
+	bool rx_descr_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u8 iov_offload;
+	bool iov_offload_set;
+	u8 txqs_empty;
+	bool txqs_empty_set;
+	u8 rxqs_empty;
+	bool rxqs_empty_set;
+	u8 num_txqs_full;
+	bool num_txqs_full_set;
+	u8 num_rxqs_full;
+	bool num_rxqs_full_set;
+};
+
+struct ecore_mfw_tlv_fcoe {
+	u8 scsi_timeout;
+	bool scsi_timeout_set;
+	u32 rt_tov;
+	bool rt_tov_set;
+	u32 ra_tov;
+	bool ra_tov_set;
+	u32 ed_tov;
+	bool ed_tov_set;
+	u32 cr_tov;
+	bool cr_tov_set;
+	u8 boot_type;
+	bool boot_type_set;
+	u8 npiv_state;
+	bool npiv_state_set;
+	u32 num_npiv_ids;
+	bool num_npiv_ids_set;
+	u8 switch_name[8];
+	bool switch_name_set;
+	u16 switch_portnum;
+	bool switch_portnum_set;
+	u8 switch_portid[3];
+	bool switch_portid_set;
+	u8 vendor_name[8];
+	bool vendor_name_set;
+	u8 switch_model[8];
+	bool switch_model_set;
+	u8 switch_fw_version[8];
+	bool switch_fw_version_set;
+	u8 qos_pri;
+	bool qos_pri_set;
+	u8 port_alias[3];
+	bool port_alias_set;
+	u8 port_state;
+	bool port_state_set;
+	u16 fip_tx_descr_size;
+	bool fip_tx_descr_size_set;
+	u16 fip_rx_descr_size;
+	bool fip_rx_descr_size_set;
+	u16 link_failures;
+	bool link_failures_set;
+	u8 fcoe_boot_progress;
+	bool fcoe_boot_progress_set;
+	u64 rx_bcast;
+	bool rx_bcast_set;
+	u64 tx_bcast;
+	bool tx_bcast_set;
+	u16 fcoe_txq_depth;
+	bool fcoe_txq_depth_set;
+	u16 fcoe_rxq_depth;
+	bool fcoe_rxq_depth_set;
+	u64 fcoe_rx_frames;
+	bool fcoe_rx_frames_set;
+	u64 fcoe_rx_bytes;
+	bool fcoe_rx_bytes_set;
+	u64 fcoe_tx_frames;
+	bool fcoe_tx_frames_set;
+	u64 fcoe_tx_bytes;
+	bool fcoe_tx_bytes_set;
+	u16 crc_count;
+	bool crc_count_set;
+	u32 crc_err_src_fcid[5];
+	bool crc_err_src_fcid_set[5];
+	u8 crc_err_tstamp[5][14];
+	bool crc_err_tstamp_set[5];
+	u16 losync_err;
+	bool losync_err_set;
+	u16 losig_err;
+	bool losig_err_set;
+	u16 primtive_err;
+	bool primtive_err_set;
+	u16 disparity_err;
+	bool disparity_err_set;
+	u16 code_violation_err;
+	bool code_violation_err_set;
+	u32 flogi_param[4];
+	bool flogi_param_set[4];
+	u8 flogi_tstamp[14];
+	bool flogi_tstamp_set;
+	u32 flogi_acc_param[4];
+	bool flogi_acc_param_set[4];
+	u8 flogi_acc_tstamp[14];
+	bool flogi_acc_tstamp_set;
+	u32 flogi_rjt;
+	bool flogi_rjt_set;
+	u8 flogi_rjt_tstamp[14];
+	bool flogi_rjt_tstamp_set;
+	u32 fdiscs;
+	bool fdiscs_set;
+	u8 fdisc_acc;
+	bool fdisc_acc_set;
+	u8 fdisc_rjt;
+	bool fdisc_rjt_set;
+	u8 plogi;
+	bool plogi_set;
+	u8 plogi_acc;
+	bool plogi_acc_set;
+	u8 plogi_rjt;
+	bool plogi_rjt_set;
+	u32 plogi_dst_fcid[5];
+	bool plogi_dst_fcid_set[5];
+	u8 plogi_tstamp[5][14];
+	bool plogi_tstamp_set[5];
+	u32 plogi_acc_src_fcid[5];
+	bool plogi_acc_src_fcid_set[5];
+	u8 plogi_acc_tstamp[5][14];
+	bool plogi_acc_tstamp_set[5];
+	u8 tx_plogos;
+	bool tx_plogos_set;
+	u8 plogo_acc;
+	bool plogo_acc_set;
+	u8 plogo_rjt;
+	bool plogo_rjt_set;
+	u32 plogo_src_fcid[5];
+	bool plogo_src_fcid_set[5];
+	u8 plogo_tstamp[5][14];
+	bool plogo_tstamp_set[5];
+	u8 rx_logos;
+	bool rx_logos_set;
+	u8 tx_accs;
+	bool tx_accs_set;
+	u8 tx_prlis;
+	bool tx_prlis_set;
+	u8 rx_accs;
+	bool rx_accs_set;
+	u8 tx_abts;
+	bool tx_abts_set;
+	u8 rx_abts_acc;
+	bool rx_abts_acc_set;
+	u8 rx_abts_rjt;
+	bool rx_abts_rjt_set;
+	u32 abts_dst_fcid[5];
+	bool abts_dst_fcid_set[5];
+	u8 abts_tstamp[5][14];
+	bool abts_tstamp_set[5];
+	u8 rx_rscn;
+	bool rx_rscn_set;
+	u32 rx_rscn_nport[4];
+	bool rx_rscn_nport_set[4];
+	u8 tx_lun_rst;
+	bool tx_lun_rst_set;
+	u8 abort_task_sets;
+	bool abort_task_sets_set;
+	u8 tx_tprlos;
+	bool tx_tprlos_set;
+	u8 tx_nos;
+	bool tx_nos_set;
+	u8 rx_nos;
+	bool rx_nos_set;
+	u8 ols;
+	bool ols_set;
+	u8 lr;
+	bool lr_set;
+	u8 llr;
+	bool llrt;
+	u8 tx_lip;
+	bool tx_lip_set;
+	u8 rx_lip;
+	bool rx_lip_set;
+	u8 eofa;
+	bool eofa_set;
+	u8 eofni;
+	bool eofni_set;
+	u8 scsi_chks;
+	bool scsi_chks_set;
+	u8 scsi_cond_met;
+	bool scsi_cond_met_set;
+	u8 scsi_busy;
+	bool scsi_busy_set;
+	u8 scsi_inter;
+	bool scsi_inter_set;
+	u8 scsi_inter_cond_met;
+	bool scsi_inter_cond_met_set;
+	u8 scsi_rsv_conflicts;
+	bool scsi_rsv_conflicts_set;
+	u8 scsi_tsk_full;
+	bool scsi_tsk_full_set;
+	u8 scsi_aca_active;
+	bool scsi_aca_active_set;
+	u8 scsi_tsk_abort;
+	bool scsi_tsk_abort_set;
+	u32 scsi_rx_chk[5];
+	bool scsi_rx_chk_set[5];
+	u8 scsi_chk_tstamp[5][14];
+	bool scsi_chk_tstamp_set[5];
+};
+
+struct ecore_mfw_tlv_iscsi {
+	u8 target_llmnr;
+	bool target_llmnr_set;
+	u8 header_digest;
+	bool header_digest_set;
+	u8 data_digest;
+	bool data_digest_set;
+	u8 auth_method;
+	bool auth_method_set;
+	u16 boot_taget_portal;
+	bool boot_taget_portal_set;
+	u16 frame_size;
+	bool frame_size_set;
+	u16 tx_desc_size;
+	bool tx_desc_size_set;
+	u16 rx_desc_size;
+	bool rx_desc_size_set;
+	u8 boot_progress;
+	bool boot_progress_set;
+	u16 tx_desc_qdepth;
+	bool tx_desc_qdepth_set;
+	u16 rx_desc_qdepth;
+	bool rx_desc_qdepth_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+	u32 cpcp_spcp_map;
+	bool cpcp_spcp_map_set;
+};
+
+union ecore_mfw_tlv_data {
+	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_fcoe fcoe;
+	struct ecore_mfw_tlv_iscsi iscsi;
+};
+
 /**
  * @brief - returns the link params of the hw function
  *
@@ -820,4 +1109,16 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Processes the TLV request from MFW i.e., get the required TLV info
+ *          from the ecore client and send it to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt);
+
 #endif
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 26/62] net/qede/base: code refactoring of SP queues
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (25 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 25/62] net/qede/base: add interfaces for MFW TLV request processing Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 27/62] net/qede/base: make L2 queues handle based Rasesh Mody
                               ` (35 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Maintain the slowpath event queue and consumer queue within the HW
function structure, and update the corresponding alloc and free APIs
accordingly. Clean up unused code under the CONFIG_ECORE_LL2 ifdef.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   43 +++++++----------------------
 drivers/net/qede/base/ecore_spq.c |   54 ++++++++++++++++++++-----------------
 drivers/net/qede/base/ecore_spq.h |   35 +++++++++---------------
 3 files changed, 52 insertions(+), 80 deletions(-)
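
After this refactor the event and consumer queues hang off the hwfn and
the alloc routines return a status instead of a pointer. The resulting
call pattern, condensed from the hunks below:

	rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);  /* stores p_hwfn->p_eq */
	if (rc)
		goto alloc_err;
	rc = ecore_consq_alloc(p_hwfn);            /* stores p_hwfn->p_consq */
	if (rc)
		goto alloc_err;

	ecore_eq_setup(p_hwfn);      /* reset the chains at resc_setup time */
	ecore_consq_setup(p_hwfn);

	ecore_eq_free(p_hwfn);       /* free the hwfn-owned queues */
	ecore_consq_free(p_hwfn);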

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7fce4fd..1ce7d8e 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -165,12 +165,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
-		ecore_eq_free(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_free(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_free(p_hwfn);
+		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
-#ifdef CONFIG_ECORE_LL2
-		ecore_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 		ecore_iov_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -836,11 +833,6 @@ alloc_err:
 
 enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 {
-	struct ecore_consq *p_consq;
-	struct ecore_eq *p_eq;
-#ifdef	CONFIG_ECORE_LL2
-	struct ecore_ll2_info *p_ll2_info;
-#endif
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
@@ -988,24 +980,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			goto alloc_no_mem;
 		}
 
-		p_eq = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
-		if (!p_eq)
-			goto alloc_no_mem;
-		p_hwfn->p_eq = p_eq;
+		rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
+		if (rc)
+			goto alloc_err;
 
-		p_consq = ecore_consq_alloc(p_hwfn);
-		if (!p_consq)
-			goto alloc_no_mem;
-		p_hwfn->p_consq = p_consq;
-
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2) {
-			p_ll2_info = ecore_ll2_alloc(p_hwfn);
-			if (!p_ll2_info)
-				goto alloc_no_mem;
-			p_hwfn->p_ll2_info = p_ll2_info;
-		}
-#endif
+		rc = ecore_consq_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
 
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
@@ -1053,8 +1034,8 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_cxt_mngr_setup(p_hwfn);
 		ecore_spq_setup(p_hwfn);
-		ecore_eq_setup(p_hwfn, p_hwfn->p_eq);
-		ecore_consq_setup(p_hwfn, p_hwfn->p_consq);
+		ecore_eq_setup(p_hwfn);
+		ecore_consq_setup(p_hwfn);
 
 		/* Read shadow of current MFW mailbox */
 		ecore_mcp_read_mb(p_hwfn, p_hwfn->p_main_ptt);
@@ -1065,10 +1046,6 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
-#ifdef CONFIG_ECORE_LL2
-		if (p_hwfn->using_ll2)
-			ecore_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
-#endif
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index ba26d45..016de74 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -355,7 +355,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 {
 	struct ecore_eq *p_eq;
 
@@ -364,7 +364,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	if (!p_eq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_eq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain*/
@@ -374,7 +374,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      num_elem,
 			      sizeof(union event_ring_element),
-			      &p_eq->chain, OSAL_NULL)) {
+			      &p_eq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain\n");
 		goto eq_allocate_fail;
 	}
@@ -383,24 +383,28 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	ecore_int_register_cb(p_hwfn, ecore_eq_completion,
 			      p_eq, &p_eq->eq_sb_index, &p_eq->p_fw_cons);
 
-	return p_eq;
+	p_hwfn->p_eq = p_eq;
+	return ECORE_SUCCESS;
 
 eq_allocate_fail:
-	ecore_eq_free(p_hwfn, p_eq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_eq);
+	return ECORE_NOMEM;
 }
 
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_eq->chain);
+	ecore_chain_reset(&p_hwfn->p_eq->chain);
 }
 
-void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq)
+void ecore_eq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_eq)
+	if (!p_hwfn->p_eq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_eq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_eq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_eq->chain);
+
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_eq);
+	p_hwfn->p_eq = OSAL_NULL;
 }
 
 /***************************************************************************
@@ -943,7 +947,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_consq *p_consq;
 
@@ -953,7 +957,7 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_consq) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_consq'\n");
-		return OSAL_NULL;
+		return ECORE_NOMEM;
 	}
 
 	/* Allocate and initialize EQ chain */
@@ -963,27 +967,29 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 			      ECORE_CHAIN_CNT_TYPE_U16,
 			      ECORE_CHAIN_PAGE_SIZE / 0x80,
 			      0x80,
-			      &p_consq->chain, OSAL_NULL)) {
+			      &p_consq->chain, OSAL_NULL) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Failed to allocate consq chain");
 		goto consq_allocate_fail;
 	}
 
-	return p_consq;
+	p_hwfn->p_consq = p_consq;
+	return ECORE_SUCCESS;
 
 consq_allocate_fail:
-	ecore_consq_free(p_hwfn, p_consq);
-	return OSAL_NULL;
+	OSAL_FREE(p_hwfn->p_dev, p_consq);
+	return ECORE_NOMEM;
 }
 
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn)
 {
-	ecore_chain_reset(&p_consq->chain);
+	ecore_chain_reset(&p_hwfn->p_consq->chain);
 }
 
-void ecore_consq_free(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq)
+void ecore_consq_free(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_consq)
+	if (!p_hwfn->p_consq)
 		return;
-	ecore_chain_free(p_hwfn->p_dev, &p_consq->chain);
-	OSAL_FREE(p_hwfn->p_dev, p_consq);
+
+	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain);
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq);
 }
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 717ede3..e2468b7 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -194,28 +194,23 @@ void ecore_spq_return_entry(struct ecore_hwfn		*p_hwfn,
  * @param p_hwfn
  * @param num_elem number of elements in the eq
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn	*p_hwfn,
-				 u16			num_elem);
+enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn	*p_hwfn, u16 num_elem);
 
 /**
- * @brief ecore_eq_setup - Reset the SPQ to its start state.
+ * @brief ecore_eq_setup - Reset the EQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_eq   *p_eq);
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_eq_deallocate - deallocates the given EQ struct.
+ * @brief ecore_eq_free - deallocates the given EQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_eq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_eq   *p_eq);
+void ecore_eq_free(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_eq_prod_update - update the FW with default EQ producer
@@ -261,32 +256,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn	*p_hwfn,
 u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_alloc - Allocates & initializes an ConsQ
- *        struct
+ * @brief ecore_consq_alloc - Allocates & initializes a ConsQ struct
  *
  * @param p_hwfn
  *
- * @return struct ecore_eq* - a newly allocated structure; NULL upon error.
+ * @return enum _ecore_status_t
  */
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn	*p_hwfn);
+enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_consq_setup - Reset the ConsQ to its start
- *        state.
+ * @brief ecore_consq_setup - Reset the ConsQ to its start state.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn,
-		    struct ecore_consq   *p_consq);
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_consq_free - deallocates the given ConsQ struct.
  *
  * @param p_hwfn
- * @param p_eq
  */
-void ecore_consq_free(struct ecore_hwfn *p_hwfn,
-		   struct ecore_consq   *p_consq);
+void ecore_consq_free(struct ecore_hwfn *p_hwfn);
 
 #endif /* __ECORE_SPQ_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 27/62] net/qede/base: make L2 queues handle based
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (26 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 26/62] net/qede/base: code refactoring of SP queues Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 28/62] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
                               ` (34 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

L2 handler changes:

This change removes the queue-id/qzone difference for Tx queues.

It does this mainly by:

a. No longer deriving VF queues from the SBs they're using.
Instead, the ecore-client needs to maintain those and choose the values
to be used by the VF when initializing it.

b. Eliminating the HW-cid array in the hw-function.
To do that, all the rx/tx functionality becomes 'handle' based -
when a queue is started the caller receives an opaque (void *) handle,
which it later passes back to ecore for the various queue-related
operations [update, stop/close].
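
To illustrate, a minimal caller-side sketch of the resulting Rx flow,
built on the API introduced by this patch (the caller-side names -
rxq, vport_id, rxq_index, stats_index, sb_id, sb_pi_index - are
hypothetical and not part of the patch):

	struct ecore_queue_start_common_params params = { 0 };
	struct ecore_rxq_start_ret_params ret = { 0 };
	enum _ecore_status_t rc;

	params.vport_id = vport_id;	/* relative to the sending entity */
	params.queue_id = rxq_index;	/* PF-relative queue index */
	params.stats_id = stats_index;
	params.sb = sb_id;		/* absolute SB index */
	params.sb_idx = sb_pi_index;

	rc = ecore_eth_rx_queue_start(p_hwfn, opaque_fid, &params,
				      bd_max_bytes, bd_chain_phys_addr,
				      cqe_pbl_addr, cqe_pbl_size, &ret);
	if (rc != ECORE_SUCCESS)
		return rc;

	/* Keep the opaque queue handle and the producer address */
	rxq->handle = ret.p_handle;
	rxq->hw_rxq_prod_addr = ret.p_prod;

	/* Later, the same handle is all that is needed to close the queue */
	rc = ecore_eth_rx_queue_stop(p_hwfn, rxq->handle,
				     false /* eq_completion_only */,
				     false /* cqe_completion */);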

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 -
 drivers/net/qede/base/ecore_dev.c     |   37 ---
 drivers/net/qede/base/ecore_int.c     |   24 --
 drivers/net/qede/base/ecore_int.h     |   10 -
 drivers/net/qede/base/ecore_iov_api.h |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  526 ++++++++++++++++++---------------
 drivers/net/qede/base/ecore_l2.h      |   84 +++---
 drivers/net/qede/base/ecore_l2_api.h  |  108 ++++---
 drivers/net/qede/base/ecore_sriov.c   |  262 ++++++++++------
 drivers/net/qede/base/ecore_sriov.h   |    4 +-
 drivers/net/qede/base/ecore_vf.c      |  119 +++++---
 drivers/net/qede/base/ecore_vf.h      |   55 ++--
 drivers/net/qede/qede_eth_if.c        |   50 ++--
 drivers/net/qede/qede_eth_if.h        |   22 +-
 drivers/net/qede/qede_rxtx.c          |   42 +--
 drivers/net/qede/qede_rxtx.h          |    2 +
 16 files changed, 723 insertions(+), 659 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b8c8bfd..de0f49a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -394,16 +394,6 @@ struct ecore_hw_info {
 	u16 mtu;
 };
 
-struct ecore_hw_cid_data {
-	u32	cid;
-	bool	b_cid_allocated;
-	u8	vfid; /* 1-based; 0 signals this is for a PF */
-
-	/* Additional identifiers */
-	u16	opaque_fid;
-	u8	vport_id;
-};
-
 /* maximun size of read/write commands (HW limit) */
 #define DMAE_MAX_RW_SIZE	0x2000
 
@@ -566,9 +556,6 @@ struct ecore_hwfn {
 	struct ecore_mcp_info		*mcp_info;
 	struct ecore_dcbx_info		*p_dcbx_info;
 
-	struct ecore_hw_cid_data	*p_tx_cids;
-	struct ecore_hw_cid_data	*p_rx_cids;
-
 	struct ecore_dmae_info		dmae_info;
 
 	/* QM init */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1ce7d8e..c895656 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -155,13 +155,6 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
-		OSAL_FREE(p_dev, p_hwfn->p_tx_cids);
-		OSAL_FREE(p_dev, p_hwfn->p_rx_cids);
-	}
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
 		ecore_spq_free(p_hwfn);
@@ -844,36 +837,6 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	if (!p_dev->fw_data)
 		return ECORE_NOMEM;
 
-	/* Allocate Memory for the Queue->CID mapping */
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		u32 num_tx_conns = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-		int tx_size, rx_size;
-
-		/* @@@TMP - resc management, change to actual required size */
-		if (p_hwfn->pf_params.eth_pf_params.num_cons > num_tx_conns)
-			num_tx_conns = p_hwfn->pf_params.eth_pf_params.num_cons;
-		tx_size = sizeof(struct ecore_hw_cid_data) * num_tx_conns;
-		rx_size = sizeof(struct ecore_hw_cid_data) *
-		    RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
-
-		p_hwfn->p_tx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						tx_size);
-		if (!p_hwfn->p_tx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Tx Cids\n");
-			goto alloc_no_mem;
-		}
-
-		p_hwfn->p_rx_cids = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-						rx_size);
-		if (!p_hwfn->p_rx_cids) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to allocate memory for Rx Cids\n");
-			goto alloc_no_mem;
-		}
-	}
-
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		u32 n_eqes, num_cons;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e5a4359..8dc4d15 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -2182,30 +2182,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 	p_sb_cnt_info->sb_free_blk = info->free_blks;
 }
 
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
-{
-	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
-
-	/* Determine origin of SB id */
-	if ((sb_id >= p_info->igu_base_sb) &&
-	    (sb_id < p_info->igu_base_sb + p_info->igu_sb_cnt)) {
-		return sb_id - p_info->igu_base_sb;
-	} else if ((sb_id >= p_info->igu_base_sb_iov) &&
-		   (sb_id < p_info->igu_base_sb_iov +
-			    p_info->igu_sb_cnt_iov)) {
-		/* We want the first VF queue to be adjacent to the
-		 * last PF queue. Since L2 queues can be partial to
-		 * SBs, we'll use the feature instead.
-		 */
-		return sb_id - p_info->igu_base_sb_iov +
-		       FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
-	} else {
-		DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
-			  sb_id);
-		return 0;
-	}
-}
-
 void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
 {
 	int i;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 45358b9..0c8929e 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -172,16 +172,6 @@ void ecore_int_free(struct ecore_hwfn *p_hwfn);
 void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
 /**
- * @brief - Returns an Rx queue index appropriate for usage with given SB.
- *
- * @param p_hwfn
- * @param sb_id - absolute index of SB
- *
- * @return index of Rx queue
- */
-u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
-
-/**
  * @brief - Enable Interrupt & Attention for hw function
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 9775360..b8dc47b 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -88,6 +88,23 @@ struct ecore_public_vf_info {
 	u16 forced_vlan;
 };
 
+struct ecore_iov_vf_init_params {
+	u16 rel_vf_id;
+
+	/* Number of requested Queues; Currently we don't support different
+	 * numbers of Rx/Tx queues.
+	 */
+	/* TODO - remove this limitation */
+	u16 num_queues;
+
+	/* Allow the client to choose which qzones to use for Rx/Tx,
+	 * and which queue_base to use for Tx queues on a per-queue basis.
+	 * Notice values should be relative to the PF resources.
+	 */
+	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+};
+
 #ifdef CONFIG_ECORE_SW_CHANNEL
 /* This is SW channel related only... */
 enum mbx_state {
@@ -175,15 +192,14 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
  *
  * @param p_hwfn
  * @param p_ptt
- * @param rel_vf_id
- * @param num_rx_queues
+ * @param p_params
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id,
-					      u16 num_rx_queues);
+					      struct ecore_iov_vf_init_params
+						     *p_params);
 
 /**
  * @brief ecore_iov_process_mbx_req - process a request received
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 0220d19..352620a 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,6 +29,120 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid)
+{
+	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
+	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+}
+
+/* The internal variant is only meant to be directly called by PFs initializing CIDs
+ * for their VFs.
+ */
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params)
+{
+	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
+	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	if (p_cid == OSAL_NULL)
+		return OSAL_NULL;
+	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
+
+	p_cid->opaque_fid = opaque_fid;
+	p_cid->cid = cid;
+	p_cid->vf_qid = vf_qid;
+	p_cid->rel = *p_params;
+
+	/* Don't try calculating the absolute indices for VFs */
+	if (IS_VF(p_hwfn->p_dev)) {
+		p_cid->abs = p_cid->rel;
+		goto out;
+	}
+
+	/* Calculate the engine-absolute indices of the resources.
+	 * This would guarantee they're valid later on.
+	 * In some cases [SBs] we already have the right values.
+	 */
+	rc = ecore_fw_vport(p_hwfn, p_cid->rel.vport_id, &p_cid->abs.vport_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	rc = ecore_fw_l2_queue(p_hwfn, p_cid->rel.queue_id,
+			       &p_cid->abs.queue_id);
+	if (rc != ECORE_SUCCESS)
+		goto fail;
+
+	/* In case of a PF configuring its VF's queues, the stats-id is already
+	 * absolute [since there's a single index that's suitable per-VF].
+	 */
+	if (b_is_same) {
+		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
+				    &p_cid->abs.stats_id);
+		if (rc != ECORE_SUCCESS)
+			goto fail;
+	} else {
+		p_cid->abs.stats_id = p_cid->rel.stats_id;
+	}
+
+	/* SBs relevant information was already provided as absolute */
+	p_cid->abs.sb = p_cid->rel.sb;
+	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
+
+	/* This is tricky - we're actually interested in whether this is a PF
+	 * entry meant for the VF.
+	 */
+	if (!b_is_same)
+		p_cid->is_vf = true;
+out:
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   p_cid->opaque_fid, p_cid->cid,
+		   p_cid->rel.vport_id, p_cid->abs.vport_id,
+		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.stats_id, p_cid->abs.stats_id,
+		   p_cid->abs.sb, p_cid->abs.sb_idx);
+
+	return p_cid;
+
+fail:
+	OSAL_VFREE(p_hwfn->p_dev, p_cid);
+	return OSAL_NULL;
+}
+
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+		       u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params)
+{
+	struct ecore_queue_cid *p_cid;
+	u32 cid = 0;
+
+	/* Get a unique firmware CID for this queue, in case it's a PF.
+	 * VFs don't need a CID as the queue configuration will be done
+	 * by PF.
+	 */
+	if (IS_PF(p_hwfn->p_dev)) {
+		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					  &cid) != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
+			return OSAL_NULL;
+		}
+	}
+
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
+		ecore_cxt_release_cid(p_hwfn, cid);
+
+	return p_cid;
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -558,57 +672,28 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 	return 0;
 }
 
-static void ecore_sp_release_queue_cid(struct ecore_hwfn *p_hwfn,
-				       struct ecore_hw_cid_data *p_cid_data)
-{
-	if (!p_cid_data->b_cid_allocated)
-		return;
-
-	ecore_cxt_release_cid(p_hwfn, p_cid_data->cid);
-	p_cid_data->b_cid_allocated = false;
-}
-
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod)
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size)
 {
 	struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 abs_rx_q_id = 0;
-	u8 abs_vport_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
-	/* Store information for the stop */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	p_rx_cid->cid = cid;
-	p_rx_cid->opaque_fid = opaque_fid;
-	p_rx_cid->vport_id = p_params->vport_id;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_rx_q_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		   opaque_fid, cid, p_params->queue_id,
-		   p_params->vport_id, p_params->sb);
+		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
+		   p_cid->abs.vport_id, p_cid->abs.sb);
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -619,11 +704,11 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 	p_ramrod = &p_ent->ramrod.rx_queue_start;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->vport_id = abs_vport_id;
-	p_ramrod->stats_counter_id = p_params->stats_id;
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 	p_ramrod->complete_cqe_flg = 0;
 	p_ramrod->complete_event_flg = 1;
 
@@ -633,92 +718,88 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_params->vf_qid || b_use_zone_a_prod) {
-		p_ramrod->vf_rx_prod_index = (u8)p_params->vf_qid;
+	if (p_cid->is_vf) {
+		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
-			   b_use_zone_a_prod ? " [legacy]" : "",
-			   p_params->vf_qid);
-		p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
+			   !!p_cid->b_legacy_vf ? " [legacy]" : "",
+			   p_cid->vf_qid);
+		p_ramrod->vf_rx_prod_use_zone_a = !!p_cid->b_legacy_vf;
 	}
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u16 bd_max_bytes,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod)
+			    void OSAL_IOMEM * *pp_producer)
 {
-	struct ecore_hw_cid_data *p_rx_cid;
 	u32 init_prod_val = 0;
-	u16 abs_l2_queue = 0;
-	u8 abs_stats_id = 0;
-	enum _ecore_status_t rc;
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_rxq_start(p_hwfn,
-					     (u8)p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     bd_max_bytes,
-					     bd_chain_phys_addr,
-					     cqe_pbl_addr,
-					     cqe_pbl_size, pp_prod);
-	}
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_l2_queue);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	    GTT_BAR0_MAP_REG_MSDM_RAM +
-	    MSTORM_ETH_PF_PRODS_OFFSET(abs_l2_queue);
+	*pp_producer = (u8 OSAL_IOMEM *)
+		       p_hwfn->regview +
+		       GTT_BAR0_MAP_REG_MSDM_RAM +
+		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
+	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
+					  bd_max_bytes,
+					  bd_chain_phys_addr,
+					  cqe_pbl_addr, cqe_pbl_size);
+}
+
+enum _ecore_status_t
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
+
 	/* Allocate a CID for the queue */
-	p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-				   &p_rx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_rx_cid->b_cid_allocated = true;
-	p_params->stats_id = abs_stats_id;
-	p_params->vf_qid = 0;
-
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_rx_cid->cid,
-					   p_params,
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_start(p_hwfn, p_cid,
+						 bd_max_bytes,
+						 bd_chain_phys_addr,
+						 cqe_pbl_addr, cqe_pbl_size,
+						 &p_ret_params->p_prod);
+	else
+		rc = ecore_vf_pf_rxq_start(p_hwfn, p_cid,
 					   bd_max_bytes,
 					   bd_chain_phys_addr,
 					   cqe_pbl_addr,
 					   cqe_pbl_size,
-					   false);
+					   &p_ret_params->p_prod);
 
+	/* Provide the caller with a reference to use as a handle */
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handles,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
@@ -728,14 +809,14 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_rx_cid;
-	u16 qid, abs_rx_q_id = 0;
+	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 	u8 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_rxqs_update(p_hwfn,
-					       rx_queue_id,
+					       (struct ecore_queue_cid **)
+					       pp_rxq_handles,
 					       num_rxqs,
 					       complete_cqe_flg,
 					       complete_event_flg);
@@ -745,12 +826,11 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	init_data.p_comp_data = p_comp_data;
 
 	for (i = 0; i < num_rxqs; i++) {
-		qid = rx_queue_id + i;
-		p_rx_cid = &p_hwfn->p_rx_cids[qid];
+		p_cid = ((struct ecore_queue_cid **)pp_rxq_handles)[i];
 
 		/* Get SPQ entry */
-		init_data.cid = p_rx_cid->cid;
-		init_data.opaque_fid = p_rx_cid->opaque_fid;
+		init_data.cid = p_cid->cid;
+		init_data.opaque_fid = p_cid->opaque_fid;
 
 		rc = ecore_sp_init_request(p_hwfn, &p_ent,
 					   ETH_RAMROD_RX_QUEUE_UPDATE,
@@ -759,41 +839,34 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 			return rc;
 
 		p_ramrod = &p_ent->ramrod.rx_queue_update;
+		p_ramrod->vport_id = p_cid->abs.vport_id;
 
-		ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-		ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
-		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+		p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 		p_ramrod->complete_cqe_flg = complete_cqe_flg;
 		p_ramrod->complete_event_flg = complete_event_flg;
 
 		rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-		if (rc)
+		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
 
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only, bool cqe_completion)
+static enum _ecore_status_t
+ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   bool b_eq_completion_only,
+			   bool b_cqe_completion)
 {
-	struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
 	struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	u16 abs_rx_q_id = 0;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
-					    cqe_completion);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_rx_cid->cid;
-	init_data.opaque_fid = p_rx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -803,64 +876,54 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.rx_queue_stop;
-
-	ecore_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
-	ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
-	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) &&
-				      !eq_completion_only) || cqe_completion;
-	p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
-					 p_hwfn->hw_info.opaque_fid) ||
-	    eq_completion_only;
+	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+				     b_cqe_completion;
+	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
 
-	ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
+enum _ecore_status_t ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_rxq,
+					     bool eq_completion_only,
+					     bool cqe_completion)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_rxq;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_rx_queue_stop(p_hwfn, p_cid,
+						eq_completion_only,
+						cqe_completion);
+	else
+		rc = ecore_vf_pf_rxq_stop(p_hwfn, p_cid, cqe_completion);
 
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id)
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id)
 {
 	struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	struct ecore_hw_cid_data *p_tx_cid;
-	u16 abs_tx_qzone_id = 0;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
-	u8 abs_vport_id;
-
-	/* Store information for the stop */
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	p_tx_cid->cid = cid;
-	p_tx_cid->opaque_fid = opaque_fid;
-
-	rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_fw_l2_queue(p_hwfn, p_params->qzone_id, &abs_tx_qzone_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = cid;
-	init_data.opaque_fid = opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -870,14 +933,14 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	p_ramrod = &p_ent->ramrod.tx_queue_start;
-	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
-	p_ramrod->sb_index = (u8)p_params->sb_idx;
-	p_ramrod->stats_counter_id = p_params->stats_id;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
+	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 
-	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
-	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(abs_tx_qzone_id);
+	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
+	p_ramrod->same_as_last_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
 
 	p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
@@ -887,90 +950,72 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			    struct ecore_queue_cid *p_cid,
 			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
+			    dma_addr_t pbl_addr, u16 pbl_size,
 			    void OSAL_IOMEM * *pp_doorbell)
 {
-	struct ecore_hw_cid_data *p_tx_cid;
-	u8 abs_stats_id = 0;
 	enum _ecore_status_t rc;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		return ecore_vf_pf_txq_start(p_hwfn,
-					     p_params->queue_id,
-					     p_params->sb,
-					     (u8)p_params->sb_idx,
-					     pbl_addr,
-					     pbl_size,
-					     pp_doorbell);
-	}
-
-	rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
+	/* TODO - set tc in the pq_params for multi-cos */
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
+					pbl_addr, pbl_size,
+					ecore_get_cm_pq_idx_mcos(p_hwfn, tc));
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
-	OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
+	/* Provide the caller with the necessary return values */
+	*pp_doorbell = (u8 OSAL_IOMEM *)
+		       p_hwfn->doorbells +
+		       DB_ADDR(p_cid->cid, DQ_DEMS_LEGACY);
 
-	/* Allocate a CID for the queue */
-	rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
-		return rc;
-	}
-	p_tx_cid->b_cid_allocated = true;
+	return ECORE_SUCCESS;
+}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
-		    opaque_fid, p_tx_cid->cid, p_params->queue_id,
-		    p_params->vport_id, p_params->sb);
+enum _ecore_status_t
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr, u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params)
+{
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc;
 
-	p_params->stats_id = abs_stats_id;
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	if (p_cid == OSAL_NULL)
+		return ECORE_INVAL;
 
-	/* TODO - set tc in the pq_params for multi-cos */
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   opaque_fid,
-					   p_tx_cid->cid,
-					   p_params,
-					   pbl_addr,
-					   pbl_size,
-					   ecore_get_cm_pq_idx_mcos(p_hwfn,
-								    tc));
-
-	*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-	    DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_start(p_hwfn, p_cid, tc,
+						 pbl_addr, pbl_size,
+						 &p_ret_params->p_doorbell);
+	else
+		rc = ecore_vf_pf_txq_start(p_hwfn, p_cid,
+					   pbl_addr, pbl_size,
+					   &p_ret_params->p_doorbell);
 
 	if (rc != ECORE_SUCCESS)
-		ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
+	else
+		p_ret_params->p_handle = (void *)p_cid;
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn)
-{
-	return ECORE_NOTIMPL;
-}
-
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id)
+static enum _ecore_status_t
+ecore_eth_pf_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid)
 {
-	struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+	enum _ecore_status_t rc;
 
-	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = p_tx_cid->cid;
-	init_data.opaque_fid = p_tx_cid->opaque_fid;
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
 	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
@@ -979,11 +1024,22 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc;
+
+	if (IS_PF(p_hwfn->p_dev))
+		rc = ecore_eth_pf_tx_queue_stop(p_hwfn, p_cid);
+	else
+		rc = ecore_vf_pf_txq_stop(p_hwfn, p_cid);
 
-	ecore_sp_release_queue_cid(p_hwfn, p_tx_cid);
+	if (rc == ECORE_SUCCESS)
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index b598eda..c136389 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,59 +15,66 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
-/**
- * @brief ecore_sp_eth_tx_queue_update -
- *
- * This ramrod updates a TX queue. It is used for setting the active
- * state of the queue.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_eth_tx_queue_update(struct ecore_hwfn *p_hwfn);
+struct ecore_queue_cid {
+	/* 'Relative' is a relative term ;-). Usually the indices [not counting
+	 * SBs] would be PF-relative, but there are some cases where that isn't
+	 * the case - specifically for a PF configuring its VF indices it's
+	 * possible some fields [E.g., stats-id] in 'rel' would already be abs.
+	 */
+	struct ecore_queue_start_common_params rel;
+	struct ecore_queue_start_common_params abs;
+	u32 cid;
+	u16 opaque_fid;
+
+	/* VF queues are mapped differently, so we need to know the
+	 * relative queue associated with them [0-based].
+	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
+	 * and not on the VF itself.
+	 */
+	bool is_vf;
+	u8 vf_qid;
+
+	/* Legacy VFs might have Rx producer located elsewhere */
+	bool b_legacy_vf;
+};
+
+void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
+				 struct ecore_queue_cid *p_cid);
+
+struct ecore_queue_cid *
+_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
+			u16 opaque_fid, u32 cid, u8 vf_qid,
+			struct ecore_queue_start_common_params *p_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params);
 
 /**
- * @brief - Starts an Rx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts an Rx queue, when queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id, stats_id, sb, sb_idx, vf_qid]
-	  stats_id is absolute packed in p_params.
+ * @param p_cid
  * @param bd_max_bytes
  * @param bd_chain_phys_addr
  * @param cqe_pbl_addr
  * @param cqe_pbl_size
- * @param b_use_zone_a_prod - support legacy VF producers
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      u16 bd_max_bytes,
-			      dma_addr_t bd_chain_phys_addr,
-			      dma_addr_t cqe_pbl_addr,
-			      u16 cqe_pbl_size, bool b_use_zone_a_prod);
+ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   u16 bd_max_bytes,
+			   dma_addr_t bd_chain_phys_addr,
+			   dma_addr_t cqe_pbl_addr,
+			   u16 cqe_pbl_size);
 
 /**
- * @brief - Starts a Tx queue; Should be used where contexts are handled
- * outside of the ramrod area [specifically iov scenarios]
+ * @brief - Starts a Tx queue, where queue_cid is already prepared
  *
  * @param p_hwfn
- * @param opaque_fid
- * @param cid
- * @param p_params [queue_id, vport_id,stats_id, sb, sb_idx, vf_qid]
+ * @param p_cid
  * @param pbl_addr
  * @param pbl_size
  * @param p_pq_params - parameters for choosing the PQ for this Tx queue
@@ -75,13 +82,10 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn	*p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn	*p_hwfn,
-			      u16 opaque_fid,
-			      u32 cid,
-			      struct ecore_queue_start_common_params *p_params,
-			      dma_addr_t pbl_addr,
-			      u16 pbl_size,
-			      u16 pq_id);
+ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+			   struct ecore_queue_cid *p_cid,
+			   dma_addr_t pbl_addr, u16 pbl_size,
+			   u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 8f7b614..af316d3 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -28,22 +28,26 @@ enum ecore_rss_caps {
 #endif
 
 struct ecore_queue_start_common_params {
-	/* Rx/Tx queue relative id to keep obtained cid in corresponding array
-	 * RX - upper-bounded by number of FW-queues
-	 */
-	u16 queue_id;
+	/* Should always be relative to entity sending this. */
 	u8 vport_id;
+	u16 queue_id;
 
-	/* q_zone_id is relative, may be different from queue id
-	 * currently used by Tx-only, upper-bounded by number of FW-queues
-	 */
-	u16 qzone_id;
-
-	/* stats_id is relative or absolute depends on function */
+	/* Relative, but relevant only for PFs */
 	u8 stats_id;
+
+	/* These are always absolute */
 	u16 sb;
-	u16 sb_idx;
-	u16 vf_qid;
+	u8 sb_idx;
+};
+
+struct ecore_rxq_start_ret_params {
+	void OSAL_IOMEM *p_prod;
+	void *p_handle;
+};
+
+struct ecore_txq_start_ret_params {
+	void OSAL_IOMEM *p_doorbell;
+	void *p_handle;
 };
 
 struct ecore_rss_params {
@@ -167,42 +171,37 @@ ecore_filter_accept_cmd(
 	struct ecore_spq_comp_cb	 *p_comp_data);
 
 /**
- * @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
+ * @brief ecore_eth_rx_queue_start - RX Queue Start Ramrod
  *
  * This ramrod initializes an RX Queue for a VPort. An Assert is generated if
  * the VPort ID is not currently initialized.
  *
  * @param p_hwfn
  * @param opaque_fid
- * @p_params			[stats_id is relative, packed in p_params]
+ * @param p_params		Inputs; Relative for PF [SB being an exception]
  * @param bd_max_bytes		Maximum bytes that can be placed on a BD
  * @param bd_chain_phys_addr	Physical address of BDs for receive.
  * @param cqe_pbl_addr		Physical address of the CQE PBL Table.
  * @param cqe_pbl_size		Size of the CQE PBL Table
- * @param pp_prod		Pointer to place producer's
- *                              address for the Rx Q (May be
- *				NULL).
+ * @param p_ret_params		Pointer to struct to be filled with outputs.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u16 bd_max_bytes,
-			    dma_addr_t bd_chain_phys_addr,
-			    dma_addr_t cqe_pbl_addr,
-			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod);
+ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u16 bd_max_bytes,
+			 dma_addr_t bd_chain_phys_addr,
+			 dma_addr_t cqe_pbl_addr,
+			 u16 cqe_pbl_size,
+			 struct ecore_rxq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_rx_queue_stop -
- *
- * This ramrod closes an RX queue. It sends RX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_rx_queue_stop - This ramrod closes an Rx queue
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
+ * @param p_rxq			Handle of the queue to close
  * @param eq_completion_only	If True completion will be on
  *				EQe, if False completion will be
  *				on EQe if p_hwfn opaque
@@ -213,13 +212,13 @@ ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
-			   u16 rx_queue_id,
-			   bool eq_completion_only,
-			   bool cqe_completion);
+ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
+			void *p_rxq,
+			bool eq_completion_only,
+			bool cqe_completion);
 
 /**
- * @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
+ * @brief - TX Queue Start Ramrod
  *
  * This ramrod initializes a TX Queue for a VPort. An Assert is generated if
  * the VPort is not currently initialized.
@@ -230,34 +229,29 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
  * @param tc			traffic class to use with this L2 txq
  * @param pbl_addr		address of the pbl array
  * @param pbl_size		number of entries in pbl
- * @param pp_doorbell		Pointer to place doorbell pointer (May be NULL).
- *				This address should be used with the
- *				DIRECT_REG_WR macro.
+ * @param p_ret_params		Pointer to fill the return parameters in.
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
-ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
-			    u16 opaque_fid,
-			    struct ecore_queue_start_common_params *p_params,
-			    u8 tc,
-			    dma_addr_t pbl_addr,
-			    u16 pbl_size,
-			    void OSAL_IOMEM * *pp_doorbell);
+ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+			 u16 opaque_fid,
+			 struct ecore_queue_start_common_params *p_params,
+			 u8 tc,
+			 dma_addr_t pbl_addr,
+			 u16 pbl_size,
+			 struct ecore_txq_start_ret_params *p_ret_params);
 
 /**
- * @brief ecore_sp_eth_tx_queue_stop -
- *
- * This ramrod closes a TX queue. It sends TX queue stop ramrod
- * + CFC delete ramrod
+ * @brief ecore_eth_tx_queue_stop - closes a Tx queue
  *
  * @param p_hwfn
- * @param tx_queue_id		TX Queue ID
+ * @param p_txq - handle of the Tx queue to be closed
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
-						u16 tx_queue_id);
+enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
+					     void *p_txq);
 
 enum ecore_tpa_mode	{
 	ECORE_TPA_MODE_NONE,
@@ -389,19 +383,19 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
  * @note Final phase API.
  *
  * @param p_hwfn
- * @param rx_queue_id		RX Queue ID
- * @param num_rxqs              Allow to update multiple rx
- *				queues, from rx_queue_id to
- *				(rx_queue_id + num_rxqs)
+ * @param pp_rxq_handlers	An array of queue handles to be updated.
+ * @param num_rxqs              Number of queues to update.
  * @param complete_cqe_flg	Post completion to the CQE Ring if set
  * @param complete_event_flg	Post completion to the Event Ring if set
+ * @param comp_mode
+ * @param p_comp_data
  *
  * @return enum _ecore_status_t
  */
 
 enum _ecore_status_t
 ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      u16 rx_queue_id,
+			      void **pp_rxq_handlers,
 			      u8 num_rxqs,
 			      u8 complete_cqe_flg,
 			      u8 complete_event_flg,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 73c4015..7378420 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -238,7 +238,7 @@ static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].rxq_active)
+		if (p_vf->vf_queues[i].p_rx_cid)
 			return true;
 
 	return false;
@@ -250,7 +250,7 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].txq_active)
+		if (p_vf->vf_queues[i].p_tx_cid)
 			return true;
 
 	return false;
@@ -953,17 +953,19 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
-enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u16 rel_vf_id, u16 num_rx_queues)
+enum _ecore_status_t
+ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 struct ecore_iov_vf_init_params *p_params)
 {
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
+	u16 qid, num_irqs;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cids;
 	u8 i;
 
-	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+	vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false);
 	if (!vf) {
 		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
 		return ECORE_UNKNOWN_ERROR;
@@ -971,22 +973,52 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	if (vf->b_init) {
 		DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
-			  rel_vf_id);
+			  p_params->rel_vf_id);
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested queue_id */
+	for (i = 0; i < p_params->num_queues; i++) {
+		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
+		u16 max_vf_qzone = min_vf_qzone +
+				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;
+
+		qid = p_params->req_rx_queue[i];
+		if (qid < min_vf_qzone || qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0x%04x,...,0x%04x] available\n",
+				  qid, p_params->rel_vf_id,
+				  min_vf_qzone, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		qid = p_params->req_tx_queue[i];
+		if (qid > max_vf_qzone) {
+			DP_NOTICE(p_hwfn, true,
+				  "Can't enable Tx qid [%04x] for VF[%d]: max qid 0x%04x\n",
+				  qid, p_params->rel_vf_id, max_vf_qzone);
+			return ECORE_INVAL;
+		}
+
+		/* If client *really* wants, Tx qid can be shared with PF */
+		if (qid < min_vf_qzone)
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d] is using PF qid [0x%04x] for Txq[0x%02x]\n",
+				   p_params->rel_vf_id, qid, i);
+	}
+
 	/* Limit number of queues according to number of CIDs */
 	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d] - requesting to initialize for 0x%04x queues"
 		   " [0x%04x CIDs available]\n",
-		   vf->relative_vf_id, num_rx_queues, (u16)cids);
-	num_rx_queues = OSAL_MIN_T(u16, num_rx_queues, ((u16)cids));
+		   vf->relative_vf_id, p_params->num_queues, (u16)cids);
+	num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids));
 
 	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
 							       p_ptt,
 							       vf,
-							       num_rx_queues);
+							       num_irqs);
 	if (num_of_vf_available_chains == 0) {
 		DP_ERR(p_hwfn, "no available igu sbs\n");
 		return ECORE_NOMEM;
@@ -997,26 +1029,19 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		u16 queue_id = ecore_int_queue_id_from_sb_id(p_hwfn,
-							     vf->igu_sbs[i]);
+		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
 
-		if (queue_id > RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF[%d] will require utilizing of"
-				  " out-of-bounds queues - %04x\n",
-				  vf->relative_vf_id, queue_id);
-			/* TODO - cleanup the already allocate SBs */
-			return ECORE_INVAL;
-		}
+		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
+		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
 		/* CIDs are per-VF, so no problem having them 0-based. */
-		vf->vf_queues[i].fw_rx_qid = queue_id;
-		vf->vf_queues[i].fw_tx_qid = queue_id;
-		vf->vf_queues[i].fw_cid = i;
+		p_queue->fw_cid = i;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
-			   vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   vf->relative_vf_id, i, vf->igu_sbs[i],
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
+			   p_queue->fw_cid);
 	}
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
@@ -1390,8 +1415,19 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		p_vf->vf_queues[i].rxq_active = 0;
-		p_vf->vf_queues[i].txq_active = 0;
+		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+
+		if (p_queue->p_rx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_rx_cid);
+			p_queue->p_rx_cid = OSAL_NULL;
+		}
+
+		if (p_queue->p_tx_cid) {
+			ecore_eth_queue_cid_release(p_hwfn,
+						    p_queue->p_tx_cid);
+			p_queue->p_tx_cid = OSAL_NULL;
+		}
 	}
 
 	OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
@@ -1829,14 +1865,14 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			u16 qid;
+			struct ecore_queue_cid *p_cid;
 
-			if (!p_vf->vf_queues[i].rxq_active)
+			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			if (p_cid == OSAL_NULL)
 				continue;
 
-			qid = p_vf->vf_queues[i].fw_rx_qid;
-
-			rc = ecore_sp_eth_rx_queues_update(p_hwfn, qid,
+			rc = ecore_sp_eth_rx_queues_update(p_hwfn,
+							   (void **)&p_cid,
 						   1, 0, 1,
 						   ECORE_SPQ_MODE_EBLOCK,
 						   OSAL_NULL);
@@ -1844,7 +1880,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 				DP_NOTICE(p_hwfn, true,
 					  "Failed to send Rx update"
 					  " fo queue[0x%04x]\n",
-					  qid);
+					  p_cid->rel.queue_id);
 				return rc;
 			}
 		}
@@ -2038,6 +2074,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	bool b_legacy_vf = false;
 	enum _ecore_status_t rc;
@@ -2048,14 +2085,24 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->rx_qid];
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.queue_id = (u8)vf->vf_queues[req->rx_qid].fw_rx_qid;
-	params.vf_qid = req->rx_qid;
+	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
+	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->rx_qid,
+						    &params);
+	if (p_queue->p_rx_cid == OSAL_NULL)
+		goto out;
+
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
@@ -2067,27 +2114,27 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
+	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-	rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
-					   vf->vf_queues[req->rx_qid].fw_cid,
-					   &params,
-					   req->bd_max_bytes,
-					   req->rxq_addr,
-					   req->cqe_pbl_addr,
-					   req->cqe_pbl_size,
-					   b_legacy_vf);
 
-	if (rc) {
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
+					p_queue->p_rx_cid,
+					req->bd_max_bytes,
+					req->rxq_addr,
+					req->cqe_pbl_addr,
+					req->cqe_pbl_size);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
+		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
+		p_queue->p_rx_cid = OSAL_NULL;
 	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->rx_qid].rxq_active = true;
 		vf->num_active_rxqs++;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf,
-					status, b_legacy_vf);
+	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status,
+					b_legacy_vf);
 }
 
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
@@ -2138,8 +2185,10 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_start_common_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	struct ecore_vf_q_info *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	enum _ecore_status_t rc;
+	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
@@ -2148,27 +2197,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
-	params.qzone_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+	/* Acquire a new queue-cid */
+	p_queue = &vf->vf_queues[req->tx_qid];
+
+	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
-					   vf->opaque_fid,
-					   vf->vf_queues[req->tx_qid].fw_cid,
-					   &params,
-					   req->pbl_addr,
-					   req->pbl_size,
-					   ecore_get_cm_pq_idx_vf(p_hwfn,
-							vf->relative_vf_id));
+	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
+						    vf->opaque_fid,
+						    p_queue->fw_cid,
+						    (u8)req->tx_qid,
+						    &params);
+	if (p_queue->p_tx_cid == OSAL_NULL)
+		goto out;
 
-	if (rc)
+	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
+				    vf->relative_vf_id);
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+					req->pbl_addr, req->pbl_size, pq);
+	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-	else {
+		ecore_eth_queue_cid_release(p_hwfn,
+					    p_queue->p_tx_cid);
+		p_queue->p_tx_cid = OSAL_NULL;
+	} else {
 		status = PFVF_STATUS_SUCCESS;
-		vf->vf_queues[req->tx_qid].txq_active = true;
 	}
 
 out:
@@ -2181,6 +2237,7 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
+	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int qid;
 
@@ -2188,16 +2245,18 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		if (vf->vf_queues[qid].rxq_active) {
-			rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_rx_qid, false,
-							cqe_completion);
+		p_queue = &vf->vf_queues[qid];
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].rxq_active = false;
+		if (!p_queue->p_rx_cid)
+			continue;
+
+		rc = ecore_eth_rx_queue_stop(p_hwfn,
+					     p_queue->p_rx_cid,
+					     false, cqe_completion);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2209,21 +2268,23 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_vf_q_info *p_queue;
 	int qid;
 
 	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
-		if (vf->vf_queues[qid].txq_active) {
-			rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-							vf->vf_queues[qid].
-							fw_tx_qid);
+		p_queue = &vf->vf_queues[qid];
+		if (!p_queue->p_tx_cid)
+			continue;
 
-			if (rc)
-				return rc;
-		}
-		vf->vf_queues[qid].txq_active = false;
+		rc = ecore_eth_tx_queue_stop(p_hwfn,
+					     p_queue->p_tx_cid);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		p_queue->p_tx_cid = OSAL_NULL;
 	}
 	return rc;
 }
@@ -2279,10 +2340,11 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
 {
+	struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct vfpf_update_rxq_tlv *req;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
 	u16 qid;
@@ -2293,30 +2355,38 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
+	/* Validate inputs */
+	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
+	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
+		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
+		goto out;
+	}
+
 	for (i = 0; i < req->num_rxqs; i++) {
 		qid = req->rx_qid + i;
 
-		if (!vf->vf_queues[qid].rxq_active) {
-			DP_NOTICE(p_hwfn, true,
-				  "VF rx_qid = %d isn`t active!\n", qid);
-			status = PFVF_STATUS_FAILURE;
-			break;
+		if (!vf->vf_queues[qid].p_rx_cid) {
+			DP_INFO(p_hwfn,
+				"VF[%d] rx_qid = %d isn't active!\n",
+				vf->relative_vf_id, qid);
+			goto out;
 		}
 
-		rc = ecore_sp_eth_rx_queues_update(p_hwfn,
-						   vf->vf_queues[qid].fw_rx_qid,
-						   1,
-						   complete_cqe_flg,
-						   complete_event_flg,
-						   ECORE_SPQ_MODE_EBLOCK,
-						   OSAL_NULL);
-
-		if (rc) {
-			status = PFVF_STATUS_FAILURE;
-			break;
-		}
+		handlers[i] = vf->vf_queues[qid].p_rx_cid;
 	}
 
+	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
+					   req->num_rxqs,
+					   complete_cqe_flg,
+					   complete_event_flg,
+					   ECORE_SPQ_MODE_EBLOCK,
+					   OSAL_NULL);
+	if (rc)
+		goto out;
+
+	status = PFVF_STATUS_SUCCESS;
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
 			       length, status);
 }
@@ -2545,7 +2615,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				  "rss_ind_table[%d] = %d,"
 				  " rxq is out of range\n",
 				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].rxq_active)
+		else if (!vf->vf_queues[q_idx].p_rx_cid)
 			DP_NOTICE(p_hwfn, true,
 				  "rss_ind_table[%d] = %d, rxq is not active\n",
 				  i, q_idx);
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e9ccc79..d32f931 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -64,10 +64,10 @@ struct ecore_iov_vf_mbx {
 
 struct ecore_vf_q_info {
 	u16 fw_rx_qid;
+	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
+	struct ecore_queue_cid *p_tx_cid;
 	u8 fw_cid;
-	u8 rxq_active;
-	u8 txq_active;
 };
 
 enum vf_state {
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 05ceefd..60ecd16 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,19 +451,19 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
-enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_qid,
-					   u16 sb,
-					   u8 sb_index,
-					   u16 bd_max_bytes,
-					   dma_addr_t bd_chain_phys_addr,
-					   dma_addr_t cqe_pbl_addr,
-					   u16 cqe_pbl_size,
-					   void OSAL_IOMEM **pp_prod)
+enum _ecore_status_t
+ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      u16 bd_max_bytes,
+		      dma_addr_t bd_chain_phys_addr,
+		      dma_addr_t cqe_pbl_addr,
+		      u16 cqe_pbl_size,
+		      void OSAL_IOMEM **pp_prod)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_rxq_tlv *req;
+	u16 rx_qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -473,19 +473,20 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	req->cqe_pbl_addr = cqe_pbl_addr;
 	req->cqe_pbl_size = cqe_pbl_size;
 	req->rxq_addr = bd_chain_phys_addr;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 	req->bd_max_bytes = bd_max_bytes;
 	req->stat_id = -1; /* Keep initialized, for future compatibility */
 
 	/* If PF is legacy, we'll need to calculate producers ourselves
 	 * as well as clean them.
 	 */
-	if (pp_prod && p_iov->b_pre_fp_hsi) {
+	if (p_iov->b_pre_fp_hsi) {
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)
+			   p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
@@ -510,7 +511,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Learn the address of the producer from the response */
-	if (pp_prod && !p_iov->b_pre_fp_hsi) {
+	if (!p_iov->b_pre_fp_hsi) {
 		u32 init_prod_val = 0;
 
 		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview + resp->offset;
@@ -534,7 +535,8 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
-					  u16 rx_qid, bool cqe_completion)
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_rxqs_tlv *req;
@@ -544,7 +546,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
 
-	req->rx_qid = rx_qid;
+	req->rx_qid = p_cid->rel.queue_id;
 	req->num_rxqs = 1;
 	req->cqe_completion = cqe_completion;
 
@@ -569,29 +571,28 @@ exit:
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell)
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_start_queue_resp_tlv *resp;
 	struct vfpf_start_txq_tlv *req;
+	u16 qid = p_cid->rel.queue_id;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
 
-	req->tx_qid = tx_queue_id;
+	req->tx_qid = qid;
 
 	/* Tx */
 	req->pbl_addr = pbl_addr;
 	req->pbl_size = pbl_size;
-	req->hw_sb = sb;
-	req->sb_index = sb_index;
+	req->hw_sb = p_cid->rel.sb;
+	req->sb_index = p_cid->rel.sb_idx;
 
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -608,32 +609,30 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
-	if (pp_doorbell) {
-		/* Modern PFs provide the actual offsets, while legacy
-		 * provided only the queue id.
-		 */
-		if (!p_iov->b_pre_fp_hsi) {
-			*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						       resp->offset;
-		} else {
-			u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
-
+	/* Modern PFs provide the actual offsets, while legacy
+	 * provided only the queue id.
+	 */
+	if (!p_iov->b_pre_fp_hsi) {
 		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-				DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
-		}
+						resp->offset;
+	} else {
+		u8 cid = p_iov->acquire_resp.resc.cid[qid];
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
-			   tx_queue_id, *pp_doorbell, resp->offset);
+		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
+						DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 	}
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
+		   qid, *pp_doorbell, resp->offset);
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_stop_txqs_tlv *req;
@@ -643,7 +642,7 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
 
-	req->tx_qid = tx_qid;
+	req->tx_qid = p_cid->rel.queue_id;
 	req->num_txqs = 1;
 
 	/* add list termination tlv */
@@ -668,20 +667,36 @@ exit:
 }
 
 enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
-					     u16 rx_queue_id,
+					     struct ecore_queue_cid **pp_cid,
 					     u8 num_rxqs,
-					     u8 comp_cqe_flg, u8 comp_event_flg)
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
 	struct vfpf_update_rxq_tlv *req;
 	enum _ecore_status_t rc;
 
+	/* TODO - API is limited to assuming contiguous regions of queues,
+	 * but VF queues might not fulfill this requirement.
+	 * Need to consider whether we need new TLVs for this, or whether
+	 * simply doing it iteratively is good enough.
+	 */
+	if (!num_rxqs)
+		return ECORE_INVAL;
+
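+	/* Split the cid list into contiguous runs of queue-ids and send one
+	 * UPDATE_RXQ request per run; 'again' restarts for the next run.
+	 */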
+again:
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
 
-	req->rx_qid = rx_queue_id;
-	req->num_rxqs = num_rxqs;
+	/* Find the length of the current contiguous range of queues,
+	 * beginning at the first queue's index.
+	 */
+	req->rx_qid = (*pp_cid)->rel.queue_id;
+	for (req->num_rxqs = 1; req->num_rxqs < num_rxqs; req->num_rxqs++)
+		if (pp_cid[req->num_rxqs]->rel.queue_id !=
+		    req->rx_qid + req->num_rxqs)
+			break;
 
 	if (comp_cqe_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
@@ -702,9 +717,17 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
+	/* Make sure we're done with all the queues */
+	if (req->num_rxqs < num_rxqs) {
+		num_rxqs -= req->num_rxqs;
+		pp_cid += req->num_rxqs;
+		/* TODO - should we give a non-locked variant instead? */
+		ecore_vf_pf_req_end(p_hwfn, rc);
+		goto again;
+	}
+
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
-
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 6077d60..1afd667 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -53,10 +53,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param cid			- zero based within the VF
- * @param rx_queue_id		- zero based within the VF
- * @param sb			- VF status block for this queue
- * @param sb_index		- Index within the status block
+ * @param p_cid			- Only relative fields are relevant
  * @param bd_max_bytes		- maximum number of bytes per bd
  * @param bd_chain_phys_addr	- physical address of bd chain
  * @param cqe_pbl_addr		- physical address of pbl
@@ -67,9 +64,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
-					   u8 rx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
+					   struct ecore_queue_cid *p_cid,
 					   u16 bd_max_bytes,
 					   dma_addr_t bd_chain_phys_addr,
 					   dma_addr_t cqe_pbl_addr,
@@ -81,46 +76,44 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
  *        PF.
  *
  * @param p_hwfn
- * @param tx_queue_id		- zero based within the VF
- * @param sb			- status block for this queue
- * @param sb_index		- index within the status block
+ * @param p_cid
  * @param bd_chain_phys_addr	- physical address of tx chain
  * @param pp_doorbell		- pointer to address to which to
 *				write the doorbell
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
-					   u16 tx_queue_id,
-					   u16 sb,
-					   u8 sb_index,
-					   dma_addr_t pbl_addr,
-					   u16 pbl_size,
-					   void OSAL_IOMEM **pp_doorbell);
+enum _ecore_status_t
+ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
+		      struct ecore_queue_cid *p_cid,
+		      dma_addr_t pbl_addr, u16 pbl_size,
+		      void OSAL_IOMEM **pp_doorbell);
 
 /**
  * @brief VF - stop the RX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param rx_qid
+ * @param p_cid
  * @param cqe_completion
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			rx_qid,
-					  bool			cqe_completion);
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid,
+					  bool cqe_completion);
 
 /**
  * @brief VF - stop the TX queue by sending a message to the PF
  *
  * @param p_hwfn
- * @param tx_qid
+ * @param p_cid
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
-					  u16			tx_qid);
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid);
+
+/* TODO - fix all the !SRIOV prototypes */
 
 #ifndef LINUX_REMOVE
 /**
@@ -128,20 +121,18 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn	*p_hwfn,
  *        PF
  *
  * @param p_hwfn
- * @param rx_queue_id
+ * @param pp_cid - list of queue-cids which we want to update
  * @param num_rxqs
- * @param init_sge_ring
  * @param comp_cqe_flg
  * @param comp_event_flg
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_vf_pf_rxqs_update(
-			struct ecore_hwfn	*p_hwfn,
-			u16			rx_queue_id,
-			u8			num_rxqs,
-			u8			comp_cqe_flg,
-			u8			comp_event_flg);
+enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+					     struct ecore_queue_cid **pp_cid,
+					     u8 num_rxqs,
+					     u8 comp_cqe_flg,
+					     u8 comp_event_flg);
 #endif
 
 /**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index d0f6e87..8e4290c 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -148,7 +148,8 @@ qed_start_rxq(struct ecore_dev *edev,
 	      uint16_t bd_max_bytes,
 	      dma_addr_t bd_chain_phys_addr,
 	      dma_addr_t cqe_pbl_addr,
-	      uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod)
+	      uint16_t cqe_pbl_size,
+	      struct ecore_rxq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -159,12 +160,14 @@ qed_start_rxq(struct ecore_dev *edev,
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_rx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 bd_max_bytes,
-					 bd_chain_phys_addr,
-					 cqe_pbl_addr, cqe_pbl_size, pp_prod);
+	rc = ecore_eth_rx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params,
+				      bd_max_bytes,
+				      bd_chain_phys_addr,
+				      cqe_pbl_addr,
+				      cqe_pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start RXQ#%d\n", p_params->queue_id);
@@ -180,19 +183,17 @@ qed_start_rxq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
+qed_stop_rxq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	int rc, hwfn_index;
 	struct ecore_hwfn *p_hwfn;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_rx_queue_stop(p_hwfn,
-					params->rx_queue_id / edev->num_hwfns,
-					params->eq_completion_only, false);
+	rc = ecore_eth_rx_queue_stop(p_hwfn, handle, true, false);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop RXQ#%d\n", params->rx_queue_id);
+		DP_ERR(edev, "Failed to stop RXQ#%02x\n", rss_id);
 		return rc;
 	}
 
@@ -204,7 +205,8 @@ qed_start_txq(struct ecore_dev *edev,
 	      uint8_t rss_num,
 	      struct ecore_queue_start_common_params *p_params,
 	      dma_addr_t pbl_addr,
-	      uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell)
+	      uint16_t pbl_size,
+	      struct ecore_txq_start_ret_params *ret_params)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
@@ -213,14 +215,13 @@ qed_start_txq(struct ecore_dev *edev,
 	p_hwfn = &edev->hwfns[hwfn_index];
 
 	p_params->queue_id = p_params->queue_id / edev->num_hwfns;
-	p_params->qzone_id = p_params->queue_id;
 	p_params->stats_id = p_params->vport_id;
 
-	rc = ecore_sp_eth_tx_queue_start(p_hwfn,
-					 p_hwfn->hw_info.opaque_fid,
-					 p_params,
-					 0 /* tc */,
-					 pbl_addr, pbl_size, pp_doorbell);
+	rc = ecore_eth_tx_queue_start(p_hwfn,
+				      p_hwfn->hw_info.opaque_fid,
+				      p_params, 0 /* tc */,
+				      pbl_addr, pbl_size,
+				      ret_params);
 
 	if (rc) {
 		DP_ERR(edev, "Failed to start TXQ#%d\n", p_params->queue_id);
@@ -236,18 +237,17 @@ qed_start_txq(struct ecore_dev *edev,
 }
 
 static int
-qed_stop_txq(struct ecore_dev *edev, struct qed_stop_txq_params *params)
+qed_stop_txq(struct ecore_dev *edev, uint8_t rss_id, void *handle)
 {
 	struct ecore_hwfn *p_hwfn;
 	int rc, hwfn_index;
 
-	hwfn_index = params->rss_id % edev->num_hwfns;
+	hwfn_index = rss_id % edev->num_hwfns;
 	p_hwfn = &edev->hwfns[hwfn_index];
 
-	rc = ecore_sp_eth_tx_queue_stop(p_hwfn,
-					params->tx_queue_id / edev->num_hwfns);
+	rc = ecore_eth_tx_queue_stop(p_hwfn, handle);
 	if (rc) {
-		DP_ERR(edev, "Failed to stop TXQ#%d\n", params->tx_queue_id);
+		DP_ERR(edev, "Failed to stop TXQ#%02x\n", rss_id);
 		return rc;
 	}
 
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 37b1b74..12dd828 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -47,13 +47,6 @@ struct qed_dev_eth_info {
 	bool is_legacy;
 };
 
-struct qed_stop_rxq_params {
-	uint8_t rss_id;
-	uint8_t rx_queue_id;
-	uint8_t vport_id;
-	bool eq_completion_only;
-};
-
 struct qed_update_vport_params {
 	uint8_t vport_id;
 	uint8_t update_vport_active_flg;
@@ -78,11 +71,6 @@ struct qed_start_vport_params {
 	bool clear_stats;
 };
 
-struct qed_stop_txq_params {
-	uint8_t rss_id;
-	uint8_t tx_queue_id;
-};
-
 struct qed_eth_ops {
 	const struct qed_common_ops *common;
 
@@ -103,19 +91,21 @@ struct qed_eth_ops {
 			  uint16_t bd_max_bytes,
 			  dma_addr_t bd_chain_phys_addr,
 			  dma_addr_t cqe_pbl_addr,
-			  uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod);
+			  uint16_t cqe_pbl_size,
+			  struct ecore_rxq_start_ret_params *ret_params);
 
 	int (*q_rx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_rxq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*q_tx_start)(struct ecore_dev *edev,
 			  uint8_t rss_num,
 			  struct ecore_queue_start_common_params *p_params,
 			  dma_addr_t pbl_addr,
-			  uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell);
+			  uint16_t pbl_size,
+			  struct ecore_txq_start_ret_params *ret_params);
 
 	int (*q_tx_stop)(struct ecore_dev *edev,
-			 struct qed_stop_txq_params *params);
+			 uint8_t rss_id, void *handle);
 
 	int (*eth_cqe_completion)(struct ecore_dev *edev,
 				  uint8_t rss_id,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01ea9b4..85134fb 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -527,11 +527,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	for_each_queue(i) {
 		fp = &qdev->fp_array[i];
 		if (fp->type & QEDE_FASTPATH_RX) {
+			struct ecore_rxq_start_ret_params ret_params;
+
 			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
 								rx_comp_ring);
 			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
 								rx_comp_ring);
 
+			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
 			q_params.queue_id = i;
 			q_params.vport_id = 0;
@@ -545,13 +548,17 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 					   fp->rxq->rx_bd_ring.p_phys_addr,
 					   p_phys_table,
 					   page_cnt,
-					   &fp->rxq->hw_rxq_prod_addr);
+					   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start rxq #%d failed %d\n",
 				       fp->rxq->queue_id, rc);
 				return rc;
 			}
 
+			/* Use the return parameters */
+			fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
+			fp->rxq->handle = ret_params.p_handle;
+
 			fp->rxq->hw_cons_ptr =
 					&fp->sb_info->sb_virt->pi_array[RX_PI];
 
@@ -561,6 +568,8 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (!(fp->type & QEDE_FASTPATH_TX))
 			continue;
 		for (tc = 0; tc < qdev->num_tc; tc++) {
+			struct ecore_txq_start_ret_params ret_params;
+
 			txq = fp->txqs[tc];
 			txq_index = tc * QEDE_RSS_COUNT(qdev) + i;
 
@@ -568,6 +577,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
 
 			memset(&q_params, 0, sizeof(q_params));
+			memset(&ret_params, 0, sizeof(ret_params));
 			q_params.queue_id = txq->queue_id;
 			q_params.vport_id = 0;
 			q_params.sb = fp->sb_info->igu_sb_id;
@@ -576,13 +586,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 			rc = qdev->ops->q_tx_start(edev, i, &q_params,
 						   p_phys_table,
 						   page_cnt, /* **pp_doorbell */
-						   &txq->doorbell_addr);
+						   &ret_params);
 			if (rc) {
 				DP_ERR(edev, "Start txq %u failed %d\n",
 				       txq_index, rc);
 				return rc;
 			}
 
+			txq->doorbell_addr = ret_params.p_doorbell;
+			txq->handle = ret_params.p_handle;
+
 			txq->hw_cons_ptr =
 			    &fp->sb_info->sb_virt->pi_array[TX_PI(tc)];
 			SET_FIELD(txq->tx_db.data.params,
@@ -1399,6 +1412,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct qede_fastpath *fp;
 	int rc, tc, i;
 
 	/* Disable the vport */
@@ -1420,7 +1434,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Flush Tx queues. If needed, request drain from MCP */
 	for_each_queue(i) {
-		struct qede_fastpath *fp = &qdev->fp_array[i];
+		fp = &qdev->fp_array[i];
 
 		if (fp->type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
@@ -1435,23 +1449,17 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 	/* Stop all Queues in reverse order */
 	for (i = QEDE_QUEUE_CNT(qdev) - 1; i >= 0; i--) {
-		struct qed_stop_rxq_params rx_params;
+		fp = &qdev->fp_array[i];
 
 		/* Stop the Tx Queue(s) */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_TX) {
 			for (tc = 0; tc < qdev->num_tc; tc++) {
-				struct qed_stop_txq_params tx_params;
-				u8 val;
-
-				tx_params.rss_id = i;
-				val = qdev->fp_array[i].txqs[tc]->queue_id;
-				tx_params.tx_queue_id = val;
-
+				struct qede_tx_queue *txq = fp->txqs[tc];
 				DP_INFO(edev, "Stopping tx queues\n");
-				rc = qdev->ops->q_tx_stop(edev, &tx_params);
+				rc = qdev->ops->q_tx_stop(edev, i, txq->handle);
 				if (rc) {
 					DP_ERR(edev, "Failed to stop TXQ #%d\n",
-					       tx_params.tx_queue_id);
+					       i);
 					return rc;
 				}
 			}
@@ -1459,14 +1467,8 @@ static int qede_stop_queues(struct qede_dev *qdev)
 
 		/* Stop the Rx Queue */
 		if (qdev->fp_array[i].type & QEDE_FASTPATH_RX) {
-			memset(&rx_params, 0, sizeof(rx_params));
-			rx_params.rss_id = i;
-			rx_params.rx_queue_id = qdev->fp_array[i].rxq->queue_id;
-			rx_params.eq_completion_only = 1;
-
 			DP_INFO(edev, "Stopping rx queues\n");
-
-			rc = qdev->ops->q_rx_stop(edev, &rx_params);
+			rc = qdev->ops->q_rx_stop(edev, i, fp->rxq->handle);
 			if (rc) {
 				DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
 				return rc;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 9a393e9..17a2f0c 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -156,6 +156,7 @@ struct qede_rx_queue {
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 /*
@@ -187,6 +188,7 @@ struct qede_tx_queue {
 	uint64_t xmit_pkts;
 	bool is_legacy;
 	struct qede_dev *qdev;
+	void *handle;
 };
 
 struct qede_fastpath {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 28/62] net/qede/base: add support for handling TLV request from MFW
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (27 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 27/62] net/qede/base: make L2 queues handle based Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 29/62] net/qede/base: optimize cache-line access Rasesh Mody
                               ` (33 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for handling the TLV request from Management FW.
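
As a rough illustration only (not part of this patch), a consumer of the
new ecore_drv_tlv_hdr layout added below could walk a raw TLV buffer as
in the following sketch. The helper name example_walk_tlvs is
hypothetical, 'size' is assumed to be the buffer length in bytes, and
tlv_length counts dwords and excludes the one-dword header:

	static void example_walk_tlvs(u8 *p_buf, u32 size)
	{
		u32 offset = 0;

		while (offset + sizeof(struct ecore_drv_tlv_hdr) <= size) {
			u8 *p = p_buf + offset;
			u8 type = TLV_TYPE(p);   /* p[0] */
			u8 len = TLV_LENGTH(p);  /* p[1]: value length in dwords */

			/* ... dispatch on 'type'; the value starts at p + 4 ... */

			/* advance past one header dword plus 'len' value dwords */
			offset += sizeof(u32) * (1 + len);
		}
	}

Note that tlv_flags occupies the last byte of the header dword, which is
why the TLV_FLAGS() accessor below reads p[3] while byte 2 is reserved.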

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    3 +
 drivers/net/qede/base/ecore_mcp.c     |    6 -
 drivers/net/qede/base/ecore_mcp.h     |    8 +
 drivers/net/qede/base/ecore_mcp_api.h |   44 +-
 drivers/net/qede/base/ecore_mng_tlv.c | 1536 +++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_if.h            |   21 +
 6 files changed, 1591 insertions(+), 27 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_mng_tlv.c

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 63ee6d5..82e3ebd 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -419,5 +419,8 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 	qede_get_mcp_proto_stats(dev, type, stats)
 
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
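+/* MFW TLV hooks are no-op stubs in this OSAL; both expand to (0), success. */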
+#define OSAL_MFW_TLV_REQ(p_hwfn) (0)
+#define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
+
 
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 79a907b..2b9c819 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,9 +2502,3 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
-
-enum _ecore_status_t
-ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	return ECORE_SUCCESS;
-}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index d77b5df..0708923 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -70,6 +70,14 @@ struct ecore_mcp_mb_params {
 	u32 mcp_param;
 };
 
+struct ecore_drv_tlv_hdr {
+	u8 tlv_type;	/* According to the enum below */
+	u8 tlv_length;	/* In dwords - not including this header */
+	u8 tlv_reserved;
+#define ECORE_DRV_TLV_FLAGS_CHANGED 0x01
+	u8 tlv_flags;
+};
+
 /**
  * @brief Initialize the interface with the MCP
  *
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 8cad43d..190c135 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -233,9 +233,11 @@ struct ecore_mba_vers {
 };
 
 enum ecore_mfw_tlv_type {
-	ECORE_MFW_TLV_GENERIC = 0x1,	/* Core driver TLVs */
-	ECORE_MFW_TLV_FCOE = 0x2,	/* FCoE protocol TLVs */
-	ECORE_MFW_TLV_ISCSI = 0x4,	/* SCSI protocol TLVs */
+	ECORE_MFW_TLV_GENERIC = 0x1, /* Core driver TLVs */
+	ECORE_MFW_TLV_ETH = 0x2, /* L2 driver TLVs */
+	ECORE_MFW_TLV_FCOE = 0x4, /* FCoE protocol TLVs */
+	ECORE_MFW_TLV_ISCSI = 0x8, /* iSCSI protocol TLVs */
+	ECORE_MFW_TLV_MAX = 0x16,
 };
 
 struct ecore_mfw_tlv_generic {
@@ -247,6 +249,21 @@ struct ecore_mfw_tlv_generic {
 	bool additional_mac1_set;
 	u64 additional_mac2;
 	bool additional_mac2_set;
+	u8 drv_state;
+	bool drv_state_set;
+	u8 pxe_progress;
+	bool pxe_progress_set;
+	u64 rx_frames;
+	bool rx_frames_set;
+	u64 rx_bytes;
+	bool rx_bytes_set;
+	u64 tx_frames;
+	bool tx_frames_set;
+	u64 tx_bytes;
+	bool tx_bytes_set;
+};
+
+struct ecore_mfw_tlv_eth {
 	u16 lso_maxoff_size;
 	bool lso_maxoff_size_set;
 	u16 lso_minseg_size;
@@ -259,12 +276,6 @@ struct ecore_mfw_tlv_generic {
 	bool rx_descr_size_set;
 	u16 netq_count;
 	bool netq_count_set;
-	u16 flex_vlan;
-	bool flex_vlan_set;
-	u8 drv_state;
-	bool drv_state_set;
-	u8 pxe_progress;
-	bool pxe_progress_set;
 	u32 tcp4_offloads;
 	bool tcp4_offloads_set;
 	u32 tcp6_offloads;
@@ -273,14 +284,6 @@ struct ecore_mfw_tlv_generic {
 	bool tx_descr_qdepth_set;
 	u16 rx_descr_qdepth;
 	bool rx_descr_qdepth_set;
-	u64 rx_frames;
-	bool rx_frames_set;
-	u64 rx_bytes;
-	bool rx_bytes_set;
-	u64 tx_frames;
-	bool tx_frames_set;
-	u64 tx_bytes;
-	bool tx_bytes_set;
 	u8 iov_offload;
 	bool iov_offload_set;
 	u8 txqs_empty;
@@ -446,8 +449,8 @@ struct ecore_mfw_tlv_fcoe {
 	bool ols_set;
 	u8 lr;
 	bool lr_set;
-	u8 llr;
-	bool llrt;
+	u8 lrr;
+	bool lrr_set;
 	u8 tx_lip;
 	bool tx_lip_set;
 	u8 rx_lip;
@@ -511,12 +514,11 @@ struct ecore_mfw_tlv_iscsi {
 	bool tx_frames_set;
 	u64 tx_bytes;
 	bool tx_bytes_set;
-	u32 cpcp_spcp_map;
-	bool cpcp_spcp_map_set;
 };
 
 union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_generic generic;
+	struct ecore_mfw_tlv_eth eth;
 	struct ecore_mfw_tlv_fcoe fcoe;
 	struct ecore_mfw_tlv_iscsi iscsi;
 };
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
new file mode 100644
index 0000000..0065d12
--- /dev/null
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -0,0 +1,1536 @@
+#include "bcm_osal.h"
+#include "ecore.h"
+#include "ecore_status.h"
+#include "ecore_mcp.h"
+#include "ecore_hw.h"
+#include "reg_addr.h"
+
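+/* Accessors for a raw TLV header dword: byte 0 is the type, byte 1 the
+ * value length in dwords (excluding the header itself) and byte 3 the
+ * flags; byte 2 is reserved.
+ */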
+#define TLV_TYPE(p)	(p[0])
+#define TLV_LENGTH(p)	(p[1])
+#define TLV_FLAGS(p)	(p[3])
+
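+/* Map a TLV type onto the feature group(s) whose driver data can supply it. */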
+static enum _ecore_status_t
+ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
+{
+	switch (tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+	case DRV_TLV_OS_DRIVER_STATES:
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+	case DRV_TLV_RX_BYTES_RECEIVED:
+	case DRV_TLV_TX_FRAMES_SENT:
+	case DRV_TLV_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_GENERIC;
+		break;
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+	case DRV_TLV_PROMISCUOUS_MODE:
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_IOV_OFFLOAD:
+	case DRV_TLV_TX_QUEUES_EMPTY:
+	case DRV_TLV_RX_QUEUES_EMPTY:
+	case DRV_TLV_TX_QUEUES_FULL:
+	case DRV_TLV_RX_QUEUES_FULL:
+		*tlv_group |= ECORE_MFW_TLV_ETH;
+		break;
+	case DRV_TLV_SCSI_TO:
+	case DRV_TLV_R_T_TOV:
+	case DRV_TLV_R_A_TOV:
+	case DRV_TLV_E_D_TOV:
+	case DRV_TLV_CR_TOV:
+	case DRV_TLV_BOOT_TYPE:
+	case DRV_TLV_NPIV_STATE:
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+	case DRV_TLV_SWITCH_NAME:
+	case DRV_TLV_SWITCH_PORT_NUM:
+	case DRV_TLV_SWITCH_PORT_ID:
+	case DRV_TLV_VENDOR_NAME:
+	case DRV_TLV_SWITCH_MODEL:
+	case DRV_TLV_SWITCH_FW_VER:
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+	case DRV_TLV_PORT_ALIAS:
+	case DRV_TLV_PORT_STATE:
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_LINK_FAILURE_COUNT:
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+	case DRV_TLV_CRC_ERROR_COUNT:
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+	case DRV_TLV_LAST_FLOGI_RJT:
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+	case DRV_TLV_FDISCS_SENT_COUNT:
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_SENT_COUNT:
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+	case DRV_TLV_LOGOS_ISSUED:
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+	case DRV_TLV_LOGOS_RECEIVED:
+	case DRV_TLV_ACCS_ISSUED:
+	case DRV_TLV_PRLIS_ISSUED:
+	case DRV_TLV_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_SENT_COUNT:
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+	case DRV_TLV_RSCNS_RECEIVED:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+	case DRV_TLV_LUN_RESETS_ISSUED:
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+	case DRV_TLV_TPRLOS_SENT:
+	case DRV_TLV_NOS_SENT_COUNT:
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+	case DRV_TLV_OLS_COUNT:
+	case DRV_TLV_LR_COUNT:
+	case DRV_TLV_LRR_COUNT:
+	case DRV_TLV_LIP_SENT_COUNT:
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+	case DRV_TLV_EOFA_COUNT:
+	case DRV_TLV_EOFNI_COUNT:
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		*tlv_group = ECORE_MFW_TLV_FCOE;
+		break;
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+	case DRV_TLV_AUTHENTICATION_METHOD:
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+	case DRV_TLV_MAX_FRAME_SIZE:
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		*tlv_group |= ECORE_MFW_TLV_ISCSI;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
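+/* Return the length in bytes of the requested generic TLV value and point
+ * *p_tlv_buf at it, or -1 if the driver did not provide that value.
+ */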
+static int
+ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_generic *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_FEATURE_FLAGS:
+		if (p_drv_buf->feat_flags_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->feat_flags;
+			return sizeof(p_drv_buf->feat_flags);
+		}
+		break;
+	case DRV_TLV_LOCAL_ADMIN_ADDR:
+		if (p_drv_buf->local_mac_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->local_mac;
+			return sizeof(p_drv_buf->local_mac);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
+		if (p_drv_buf->additional_mac1_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1;
+			return sizeof(p_drv_buf->additional_mac1);
+		}
+		break;
+	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
+		if (p_drv_buf->additional_mac2_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2;
+			return sizeof(p_drv_buf->additional_mac2);
+		}
+		break;
+	case DRV_TLV_OS_DRIVER_STATES:
+		if (p_drv_buf->drv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->drv_state;
+			return sizeof(p_drv_buf->drv_state);
+		}
+		break;
+	case DRV_TLV_PXE_BOOT_PROGRESS:
+		if (p_drv_buf->pxe_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress;
+			return sizeof(p_drv_buf->pxe_progress);
+		}
+		break;
+	case DRV_TLV_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			    struct ecore_mfw_tlv_eth *p_drv_buf,
+			    u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
+		if (p_drv_buf->lso_maxoff_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size;
+			return sizeof(p_drv_buf->lso_maxoff_size);
+		}
+		break;
+	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
+		if (p_drv_buf->lso_minseg_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size;
+			return sizeof(p_drv_buf->lso_minseg_size);
+		}
+		break;
+	case DRV_TLV_PROMISCUOUS_MODE:
+		if (p_drv_buf->prom_mode_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->prom_mode;
+			return sizeof(p_drv_buf->prom_mode);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size;
+			return sizeof(p_drv_buf->tx_descr_size);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size;
+			return sizeof(p_drv_buf->rx_descr_size);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
+		if (p_drv_buf->netq_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->netq_count;
+			return sizeof(p_drv_buf->netq_count);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
+		if (p_drv_buf->tcp4_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads;
+			return sizeof(p_drv_buf->tcp4_offloads);
+		}
+		break;
+	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
+		if (p_drv_buf->tcp6_offloads_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads;
+			return sizeof(p_drv_buf->tcp6_offloads);
+		}
+		break;
+	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth;
+			return sizeof(p_drv_buf->tx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_descr_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth;
+			return sizeof(p_drv_buf->rx_descr_qdepth);
+		}
+		break;
+	case DRV_TLV_IOV_OFFLOAD:
+		if (p_drv_buf->iov_offload_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->iov_offload;
+			return sizeof(p_drv_buf->iov_offload);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_EMPTY:
+		if (p_drv_buf->txqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty;
+			return sizeof(p_drv_buf->txqs_empty);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_EMPTY:
+		if (p_drv_buf->rxqs_empty_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty;
+			return sizeof(p_drv_buf->rxqs_empty);
+		}
+		break;
+	case DRV_TLV_TX_QUEUES_FULL:
+		if (p_drv_buf->num_txqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full;
+			return sizeof(p_drv_buf->num_txqs_full);
+		}
+		break;
+	case DRV_TLV_RX_QUEUES_FULL:
+		if (p_drv_buf->num_rxqs_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full;
+			return sizeof(p_drv_buf->num_rxqs_full);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			     struct ecore_mfw_tlv_fcoe *p_drv_buf,
+			     u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_SCSI_TO:
+		if (p_drv_buf->scsi_timeout_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout;
+			return sizeof(p_drv_buf->scsi_timeout);
+		}
+		break;
+	case DRV_TLV_R_T_TOV:
+		if (p_drv_buf->rt_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rt_tov;
+			return sizeof(p_drv_buf->rt_tov);
+		}
+		break;
+	case DRV_TLV_R_A_TOV:
+		if (p_drv_buf->ra_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ra_tov;
+			return sizeof(p_drv_buf->ra_tov);
+		}
+		break;
+	case DRV_TLV_E_D_TOV:
+		if (p_drv_buf->ed_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ed_tov;
+			return sizeof(p_drv_buf->ed_tov);
+		}
+		break;
+	case DRV_TLV_CR_TOV:
+		if (p_drv_buf->cr_tov_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->cr_tov;
+			return sizeof(p_drv_buf->cr_tov);
+		}
+		break;
+	case DRV_TLV_BOOT_TYPE:
+		if (p_drv_buf->boot_type_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_type;
+			return sizeof(p_drv_buf->boot_type);
+		}
+		break;
+	case DRV_TLV_NPIV_STATE:
+		if (p_drv_buf->npiv_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->npiv_state;
+			return sizeof(p_drv_buf->npiv_state);
+		}
+		break;
+	case DRV_TLV_NUM_OF_NPIV_IDS:
+		if (p_drv_buf->num_npiv_ids_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids;
+			return sizeof(p_drv_buf->num_npiv_ids);
+		}
+		break;
+	case DRV_TLV_SWITCH_NAME:
+		if (p_drv_buf->switch_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_name;
+			return sizeof(p_drv_buf->switch_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_NUM:
+		if (p_drv_buf->switch_portnum_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum;
+			return sizeof(p_drv_buf->switch_portnum);
+		}
+		break;
+	case DRV_TLV_SWITCH_PORT_ID:
+		if (p_drv_buf->switch_portid_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portid;
+			return sizeof(p_drv_buf->switch_portid);
+		}
+		break;
+	case DRV_TLV_VENDOR_NAME:
+		if (p_drv_buf->vendor_name_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->vendor_name;
+			return sizeof(p_drv_buf->vendor_name);
+		}
+		break;
+	case DRV_TLV_SWITCH_MODEL:
+		if (p_drv_buf->switch_model_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_model;
+			return sizeof(p_drv_buf->switch_model);
+		}
+		break;
+	case DRV_TLV_SWITCH_FW_VER:
+		if (p_drv_buf->switch_fw_version_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version;
+			return sizeof(p_drv_buf->switch_fw_version);
+		}
+		break;
+	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
+		if (p_drv_buf->qos_pri_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->qos_pri;
+			return sizeof(p_drv_buf->qos_pri);
+		}
+		break;
+	case DRV_TLV_PORT_ALIAS:
+		if (p_drv_buf->port_alias_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_alias;
+			return sizeof(p_drv_buf->port_alias);
+		}
+		break;
+	case DRV_TLV_PORT_STATE:
+		if (p_drv_buf->port_state_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->port_state;
+			return sizeof(p_drv_buf->port_state);
+		}
+		break;
+	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_tx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size;
+			return sizeof(p_drv_buf->fip_tx_descr_size);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->fip_rx_descr_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size;
+			return sizeof(p_drv_buf->fip_rx_descr_size);
+		}
+		break;
+	case DRV_TLV_LINK_FAILURE_COUNT:
+		if (p_drv_buf->link_failures_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->link_failures;
+			return sizeof(p_drv_buf->link_failures);
+		}
+		break;
+	case DRV_TLV_FCOE_BOOT_PROGRESS:
+		if (p_drv_buf->fcoe_boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress;
+			return sizeof(p_drv_buf->fcoe_boot_progress);
+		}
+		break;
+	case DRV_TLV_RX_BROADCAST_PACKETS:
+		if (p_drv_buf->rx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast;
+			return sizeof(p_drv_buf->rx_bcast);
+		}
+		break;
+	case DRV_TLV_TX_BROADCAST_PACKETS:
+		if (p_drv_buf->tx_bcast_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast;
+			return sizeof(p_drv_buf->tx_bcast);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_txq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth;
+			return sizeof(p_drv_buf->fcoe_txq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->fcoe_rxq_depth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth;
+			return sizeof(p_drv_buf->fcoe_rxq_depth);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames;
+			return sizeof(p_drv_buf->fcoe_rx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
+		if (p_drv_buf->fcoe_rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes;
+			return sizeof(p_drv_buf->fcoe_rx_bytes);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_FRAMES_SENT:
+		if (p_drv_buf->fcoe_tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames;
+			return sizeof(p_drv_buf->fcoe_tx_frames);
+		}
+		break;
+	case DRV_TLV_FCOE_TX_BYTES_SENT:
+		if (p_drv_buf->fcoe_tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes;
+			return sizeof(p_drv_buf->fcoe_tx_bytes);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_COUNT:
+		if (p_drv_buf->crc_count_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_count;
+			return sizeof(p_drv_buf->crc_count);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0];
+			return sizeof(p_drv_buf->crc_err_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1];
+			return sizeof(p_drv_buf->crc_err_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2];
+			return sizeof(p_drv_buf->crc_err_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3];
+			return sizeof(p_drv_buf->crc_err_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->crc_err_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4];
+			return sizeof(p_drv_buf->crc_err_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0];
+			return sizeof(p_drv_buf->crc_err_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1];
+			return sizeof(p_drv_buf->crc_err_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2];
+			return sizeof(p_drv_buf->crc_err_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3];
+			return sizeof(p_drv_buf->crc_err_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
+		if (p_drv_buf->crc_err_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4];
+			return sizeof(p_drv_buf->crc_err_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
+		if (p_drv_buf->losync_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losync_err;
+			return sizeof(p_drv_buf->losync_err);
+		}
+		break;
+	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
+		if (p_drv_buf->losig_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->losig_err;
+			return sizeof(p_drv_buf->losig_err);
+		}
+		break;
+	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
+		if (p_drv_buf->primtive_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->primtive_err;
+			return sizeof(p_drv_buf->primtive_err);
+		}
+		break;
+	case DRV_TLV_DISPARITY_ERROR_COUNT:
+		if (p_drv_buf->disparity_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->disparity_err;
+			return sizeof(p_drv_buf->disparity_err);
+		}
+		break;
+	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
+		if (p_drv_buf->code_violation_err_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err;
+			return sizeof(p_drv_buf->code_violation_err);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0];
+			return sizeof(p_drv_buf->flogi_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1];
+			return sizeof(p_drv_buf->flogi_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2];
+			return sizeof(p_drv_buf->flogi_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3];
+			return sizeof(p_drv_buf->flogi_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
+		if (p_drv_buf->flogi_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp;
+			return sizeof(p_drv_buf->flogi_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
+		if (p_drv_buf->flogi_acc_param_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0];
+			return sizeof(p_drv_buf->flogi_acc_param[0]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
+		if (p_drv_buf->flogi_acc_param_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1];
+			return sizeof(p_drv_buf->flogi_acc_param[1]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
+		if (p_drv_buf->flogi_acc_param_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2];
+			return sizeof(p_drv_buf->flogi_acc_param[2]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
+		if (p_drv_buf->flogi_acc_param_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3];
+			return sizeof(p_drv_buf->flogi_acc_param[3]);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
+		if (p_drv_buf->flogi_acc_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp;
+			return sizeof(p_drv_buf->flogi_acc_tstamp);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT:
+		if (p_drv_buf->flogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt;
+			return sizeof(p_drv_buf->flogi_rjt);
+		}
+		break;
+	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
+		if (p_drv_buf->flogi_rjt_tstamp_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp;
+			return sizeof(p_drv_buf->flogi_rjt_tstamp);
+		}
+		break;
+	case DRV_TLV_FDISCS_SENT_COUNT:
+		if (p_drv_buf->fdiscs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdiscs;
+			return sizeof(p_drv_buf->fdiscs);
+		}
+		break;
+	case DRV_TLV_FDISC_ACCS_RECEIVED:
+		if (p_drv_buf->fdisc_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc;
+			return sizeof(p_drv_buf->fdisc_acc);
+		}
+		break;
+	case DRV_TLV_FDISC_RJTS_RECEIVED:
+		if (p_drv_buf->fdisc_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt;
+			return sizeof(p_drv_buf->fdisc_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_SENT_COUNT:
+		if (p_drv_buf->plogi_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi;
+			return sizeof(p_drv_buf->plogi);
+		}
+		break;
+	case DRV_TLV_PLOGI_ACCS_RECEIVED:
+		if (p_drv_buf->plogi_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc;
+			return sizeof(p_drv_buf->plogi_acc);
+		}
+		break;
+	case DRV_TLV_PLOGI_RJTS_RECEIVED:
+		if (p_drv_buf->plogi_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt;
+			return sizeof(p_drv_buf->plogi_rjt);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0];
+			return sizeof(p_drv_buf->plogi_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1];
+			return sizeof(p_drv_buf->plogi_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2];
+			return sizeof(p_drv_buf->plogi_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3];
+			return sizeof(p_drv_buf->plogi_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->plogi_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4];
+			return sizeof(p_drv_buf->plogi_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0];
+			return sizeof(p_drv_buf->plogi_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1];
+			return sizeof(p_drv_buf->plogi_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2];
+			return sizeof(p_drv_buf->plogi_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3];
+			return sizeof(p_drv_buf->plogi_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_TIMESTAMP:
+		if (p_drv_buf->plogi_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4];
+			return sizeof(p_drv_buf->plogi_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogi_acc_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
+		if (p_drv_buf->plogi_acc_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4];
+			return sizeof(p_drv_buf->plogi_acc_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_ISSUED:
+		if (p_drv_buf->tx_plogos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos;
+			return sizeof(p_drv_buf->tx_plogos);
+		}
+		break;
+	case DRV_TLV_LOGO_ACCS_RECEIVED:
+		if (p_drv_buf->plogo_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc;
+			return sizeof(p_drv_buf->plogo_acc);
+		}
+		break;
+	case DRV_TLV_LOGO_RJTS_RECEIVED:
+		if (p_drv_buf->plogo_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt;
+			return sizeof(p_drv_buf->plogo_rjt);
+		}
+		break;
+	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0];
+			return sizeof(p_drv_buf->plogo_src_fcid[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1];
+			return sizeof(p_drv_buf->plogo_src_fcid[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2];
+			return sizeof(p_drv_buf->plogo_src_fcid[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3];
+			return sizeof(p_drv_buf->plogo_src_fcid[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
+		if (p_drv_buf->plogo_src_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4];
+			return sizeof(p_drv_buf->plogo_src_fcid[4]);
+		}
+		break;
+	case DRV_TLV_LOGO_1_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0];
+			return sizeof(p_drv_buf->plogo_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_LOGO_2_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1];
+			return sizeof(p_drv_buf->plogo_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_LOGO_3_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2];
+			return sizeof(p_drv_buf->plogo_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_LOGO_4_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3];
+			return sizeof(p_drv_buf->plogo_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_LOGO_5_TIMESTAMP:
+		if (p_drv_buf->plogo_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4];
+			return sizeof(p_drv_buf->plogo_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_LOGOS_RECEIVED:
+		if (p_drv_buf->rx_logos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_logos;
+			return sizeof(p_drv_buf->rx_logos);
+		}
+		break;
+	case DRV_TLV_ACCS_ISSUED:
+		if (p_drv_buf->tx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_accs;
+			return sizeof(p_drv_buf->tx_accs);
+		}
+		break;
+	case DRV_TLV_PRLIS_ISSUED:
+		if (p_drv_buf->tx_prlis_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis;
+			return sizeof(p_drv_buf->tx_prlis);
+		}
+		break;
+	case DRV_TLV_ACCS_RECEIVED:
+		if (p_drv_buf->rx_accs_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_accs;
+			return sizeof(p_drv_buf->rx_accs);
+		}
+		break;
+	case DRV_TLV_ABTS_SENT_COUNT:
+		if (p_drv_buf->tx_abts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_abts;
+			return sizeof(p_drv_buf->tx_abts);
+		}
+		break;
+	case DRV_TLV_ABTS_ACCS_RECEIVED:
+		if (p_drv_buf->rx_abts_acc_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc;
+			return sizeof(p_drv_buf->rx_abts_acc);
+		}
+		break;
+	case DRV_TLV_ABTS_RJTS_RECEIVED:
+		if (p_drv_buf->rx_abts_rjt_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt;
+			return sizeof(p_drv_buf->rx_abts_rjt);
+		}
+		break;
+	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0];
+			return sizeof(p_drv_buf->abts_dst_fcid[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1];
+			return sizeof(p_drv_buf->abts_dst_fcid[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2];
+			return sizeof(p_drv_buf->abts_dst_fcid[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3];
+			return sizeof(p_drv_buf->abts_dst_fcid[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
+		if (p_drv_buf->abts_dst_fcid_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4];
+			return sizeof(p_drv_buf->abts_dst_fcid[4]);
+		}
+		break;
+	case DRV_TLV_ABTS_1_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0];
+			return sizeof(p_drv_buf->abts_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_ABTS_2_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1];
+			return sizeof(p_drv_buf->abts_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_ABTS_3_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2];
+			return sizeof(p_drv_buf->abts_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_ABTS_4_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3];
+			return sizeof(p_drv_buf->abts_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_ABTS_5_TIMESTAMP:
+		if (p_drv_buf->abts_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4];
+			return sizeof(p_drv_buf->abts_tstamp[4]);
+		}
+		break;
+	case DRV_TLV_RSCNS_RECEIVED:
+		if (p_drv_buf->rx_rscn_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn;
+			return sizeof(p_drv_buf->rx_rscn);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
+		if (p_drv_buf->rx_rscn_nport_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0];
+			return sizeof(p_drv_buf->rx_rscn_nport[0]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
+		if (p_drv_buf->rx_rscn_nport_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1];
+			return sizeof(p_drv_buf->rx_rscn_nport[1]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
+		if (p_drv_buf->rx_rscn_nport_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2];
+			return sizeof(p_drv_buf->rx_rscn_nport[2]);
+		}
+		break;
+	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
+		if (p_drv_buf->rx_rscn_nport_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3];
+			return sizeof(p_drv_buf->rx_rscn_nport[3]);
+		}
+		break;
+	case DRV_TLV_LUN_RESETS_ISSUED:
+		if (p_drv_buf->tx_lun_rst_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst;
+			return sizeof(p_drv_buf->tx_lun_rst);
+		}
+		break;
+	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
+		if (p_drv_buf->abort_task_sets_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets;
+			return sizeof(p_drv_buf->abort_task_sets);
+		}
+		break;
+	case DRV_TLV_TPRLOS_SENT:
+		if (p_drv_buf->tx_tprlos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos;
+			return sizeof(p_drv_buf->tx_tprlos);
+		}
+		break;
+	case DRV_TLV_NOS_SENT_COUNT:
+		if (p_drv_buf->tx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_nos;
+			return sizeof(p_drv_buf->tx_nos);
+		}
+		break;
+	case DRV_TLV_NOS_RECEIVED_COUNT:
+		if (p_drv_buf->rx_nos_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_nos;
+			return sizeof(p_drv_buf->rx_nos);
+		}
+		break;
+	case DRV_TLV_OLS_COUNT:
+		if (p_drv_buf->ols_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->ols;
+			return sizeof(p_drv_buf->ols);
+		}
+		break;
+	case DRV_TLV_LR_COUNT:
+		if (p_drv_buf->lr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lr;
+			return sizeof(p_drv_buf->lr);
+		}
+		break;
+	case DRV_TLV_LRR_COUNT:
+		if (p_drv_buf->lrr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->lrr;
+			return sizeof(p_drv_buf->lrr);
+		}
+		break;
+	case DRV_TLV_LIP_SENT_COUNT:
+		if (p_drv_buf->tx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lip;
+			return sizeof(p_drv_buf->tx_lip);
+		}
+		break;
+	case DRV_TLV_LIP_RECEIVED_COUNT:
+		if (p_drv_buf->rx_lip_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_lip;
+			return sizeof(p_drv_buf->rx_lip);
+		}
+		break;
+	case DRV_TLV_EOFA_COUNT:
+		if (p_drv_buf->eofa_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofa;
+			return sizeof(p_drv_buf->eofa);
+		}
+		break;
+	case DRV_TLV_EOFNI_COUNT:
+		if (p_drv_buf->eofni_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->eofni;
+			return sizeof(p_drv_buf->eofni);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
+		if (p_drv_buf->scsi_chks_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks;
+			return sizeof(p_drv_buf->scsi_chks);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met;
+			return sizeof(p_drv_buf->scsi_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
+		if (p_drv_buf->scsi_busy_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy;
+			return sizeof(p_drv_buf->scsi_busy);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
+		if (p_drv_buf->scsi_inter_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter;
+			return sizeof(p_drv_buf->scsi_inter);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
+		if (p_drv_buf->scsi_inter_cond_met_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met;
+			return sizeof(p_drv_buf->scsi_inter_cond_met);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
+		if (p_drv_buf->scsi_rsv_conflicts_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
+			return sizeof(p_drv_buf->scsi_rsv_conflicts);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
+		if (p_drv_buf->scsi_tsk_full_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full;
+			return sizeof(p_drv_buf->scsi_tsk_full);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
+		if (p_drv_buf->scsi_aca_active_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active;
+			return sizeof(p_drv_buf->scsi_aca_active);
+		}
+		break;
+	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
+		if (p_drv_buf->scsi_tsk_abort_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort;
+			return sizeof(p_drv_buf->scsi_tsk_abort);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0];
+			return sizeof(p_drv_buf->scsi_rx_chk[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1];
+			return sizeof(p_drv_buf->scsi_rx_chk[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2];
+			return sizeof(p_drv_buf->scsi_rx_chk[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3];
+			return sizeof(p_drv_buf->scsi_rx_chk[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
+		if (p_drv_buf->scsi_rx_chk_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4];
+			return sizeof(p_drv_buf->scsi_rx_chk[4]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[0]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[0]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[1]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[1]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[2]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[2]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[3]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[3]);
+		}
+		break;
+	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
+		if (p_drv_buf->scsi_chk_tstamp_set[4]) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4];
+			return sizeof(p_drv_buf->scsi_chk_tstamp[4]);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static int
+ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+			      struct ecore_mfw_tlv_iscsi *p_drv_buf,
+			      u8 **p_tlv_buf)
+{
+	switch (p_tlv->tlv_type) {
+	case DRV_TLV_TARGET_LLMNR_ENABLED:
+		if (p_drv_buf->target_llmnr_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr;
+			return sizeof(p_drv_buf->target_llmnr);
+		}
+		break;
+	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->header_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->header_digest;
+			return sizeof(p_drv_buf->header_digest);
+		}
+		break;
+	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
+		if (p_drv_buf->data_digest_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->data_digest;
+			return sizeof(p_drv_buf->data_digest);
+		}
+		break;
+	case DRV_TLV_AUTHENTICATION_METHOD:
+		if (p_drv_buf->auth_method_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->auth_method;
+			return sizeof(p_drv_buf->auth_method);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
+		if (p_drv_buf->boot_taget_portal_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal;
+			return sizeof(p_drv_buf->boot_taget_portal);
+		}
+		break;
+	case DRV_TLV_MAX_FRAME_SIZE:
+		if (p_drv_buf->frame_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->frame_size;
+			return sizeof(p_drv_buf->frame_size);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->tx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size;
+			return sizeof(p_drv_buf->tx_desc_size);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
+		if (p_drv_buf->rx_desc_size_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size;
+			return sizeof(p_drv_buf->rx_desc_size);
+		}
+		break;
+	case DRV_TLV_ISCSI_BOOT_PROGRESS:
+		if (p_drv_buf->boot_progress_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->boot_progress;
+			return sizeof(p_drv_buf->boot_progress);
+		}
+		break;
+	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->tx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth;
+			return sizeof(p_drv_buf->tx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
+		if (p_drv_buf->rx_desc_qdepth_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth;
+			return sizeof(p_drv_buf->rx_desc_qdepth);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
+		if (p_drv_buf->rx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			return sizeof(p_drv_buf->rx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
+		if (p_drv_buf->rx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			return sizeof(p_drv_buf->rx_bytes);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
+		if (p_drv_buf->tx_frames_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			return sizeof(p_drv_buf->tx_frames);
+		}
+		break;
+	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
+		if (p_drv_buf->tx_bytes_set) {
+			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			return sizeof(p_drv_buf->tx_bytes);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return -1;
+}
+
+static enum _ecore_status_t
+ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt, u8 *p_mfw_buf, u32 size)
+{
+	union ecore_mfw_tlv_data *p_tlv_data;
+	struct ecore_drv_tlv_hdr tlv;
+	u8 *p_tlv_ptr = OSAL_NULL, *p_temp;
+	u32 offset;
+	int len;
+
+	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	if (!p_tlv_data)
+		return ECORE_NOMEM;
+
+	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
+	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
+		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+		return ECORE_INVAL;
+	}
+
+	offset = 0;
+	OSAL_MEMSET(&tlv, 0, sizeof(tlv));
+	while (offset < size) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		tlv.tlv_flags = TLV_FLAGS(p_temp);
+		DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n",
+			tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags);
+
+		offset += sizeof(tlv);
+		if (tlv_group == ECORE_MFW_TLV_GENERIC)
+			len = ecore_mfw_get_gen_tlv_value(&tlv,
+					&p_tlv_data->generic, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_ETH)
+			len = ecore_mfw_get_eth_tlv_value(&tlv,
+					&p_tlv_data->eth, &p_tlv_ptr);
+		else if (tlv_group == ECORE_MFW_TLV_FCOE)
+			len = ecore_mfw_get_fcoe_tlv_value(&tlv,
+					&p_tlv_data->fcoe, &p_tlv_ptr);
+		else
+			len = ecore_mfw_get_iscsi_tlv_value(&tlv,
+					&p_tlv_data->iscsi, &p_tlv_ptr);
+
+		if (len > 0) {
+			OSAL_WARN(len > 4 * tlv.tlv_length,
+				  "Incorrect MFW TLV length");
+			len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length);
+			tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
+			/* TODO: Endianness handling? */
+			OSAL_MEMCPY(p_mfw_buf + offset - sizeof(tlv),
+				    &tlv, sizeof(tlv));
+			OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len);
+		}
+
+		offset += sizeof(u32) * tlv.tlv_length;
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	u32 addr, size, offset, resp, param, val;
+	u8 tlv_group = 0, id, *p_mfw_buf = OSAL_NULL, *p_temp;
+	u32 global_offsize, global_addr;
+	enum _ecore_status_t rc;
+	struct ecore_drv_tlv_hdr tlv;
+
+	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	addr = global_addr + OFFSETOF(struct public_global, data_ptr);
+	size = ecore_rd(p_hwfn, p_ptt, global_addr +
+			OFFSETOF(struct public_global, data_size));
+
+	if (!size) {
+		DP_NOTICE(p_hwfn, false, "Invalid TLV req size = %u\n", size);
+		goto drv_done;
+	}
+
+	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	if (!p_mfw_buf) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate memory for p_mfw_buf\n");
+		goto drv_done;
+	}
+
+	/* Read the TLV request to local buffer */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		val = ecore_rd(p_hwfn, p_ptt, addr + offset);
+		OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32));
+	}
+
+	/* Parse the headers to enumerate the requested TLV groups */
+	for (offset = 0; offset < size;
+	     offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) {
+		p_temp = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_temp);
+		tlv.tlv_length = TLV_LENGTH(p_temp);
+		if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group))
+			goto drv_done;
+	}
+
+	/* Update the TLV values in the local buffer */
+	for (id = ECORE_MFW_TLV_GENERIC; id < ECORE_MFW_TLV_MAX; id <<= 1) {
+		if (tlv_group & id) {
+			if (ecore_mfw_update_tlvs(id, p_hwfn, p_ptt, p_mfw_buf,
+						  size))
+				goto drv_done;
+		}
+	}
+
+	/* Write the TLV data to shared memory */
+	for (offset = 0; offset < size; offset += sizeof(u32)) {
+		OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32));
+		ecore_wr(p_hwfn, p_ptt, addr + offset, val);
+	}
+
+drv_done:
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_TLV_DONE, 0, &resp,
+			   &param);
+
+	OSAL_VFREE(p_hwfn->p_dev, p_mfw_buf);
+
+	return rc;
+}
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 0a1f7db..bfd96d6 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -96,8 +96,29 @@ struct qed_slowpath_params {
 
 #define ILT_PAGE_SIZE_TCFC 0x8000	/* 32KB */
 
+struct qed_eth_tlvs {
+	u16 feat_flags;
+	u8 mac[3][ETH_ALEN];
+	u16 lso_maxoff;
+	u16 lso_minseg;
+	bool prom_mode;
+	u16 num_txqs;
+	u16 num_rxqs;
+	u16 num_netqs;
+	u16 flex_vlan;
+	u32 tcp4_offloads;
+	u32 tcp6_offloads;
+	u16 tx_avg_qdepth;
+	u16 rx_avg_qdepth;
+	u8 txqs_empty;
+	u8 rxqs_empty;
+	u8 num_txqs_full;
+	u8 num_rxqs_full;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
+	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
 };
 
 struct qed_selftest_ops {
-- 
1.7.10.3
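
An illustrative PMD-side implementation of the new get_tlv_data hook
(a sketch only; the qede fields read below are assumptions, while
"struct qed_eth_tlvs" and the callback signature come from this
patch). It would be registered through qed_common_cb_ops alongside
the existing link_update callback:

	static void qede_get_generic_tlv_data(void *dev,
					      struct qed_eth_tlvs *data)
	{
		struct qede_dev *qdev = dev;

		memset(data, 0, sizeof(*data));
		data->num_txqs = qdev->num_tx_queues;	/* assumed field */
		data->num_rxqs = qdev->num_rx_queues;	/* assumed field */
		data->prom_mode = qdev->promisc_enabled; /* assumed field */
	}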

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 29/62] net/qede/base: optimize cache-line access
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (28 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 28/62] net/qede/base: add support for handling TLV request from MFW Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 30/62] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
                               ` (32 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Optimize cache-line access in ecore_chain -
rearrange the fields so that those needed on the fastpath
[mostly produce/consume and their derivatives] sit in the first cache
line, and the rest in the second.

This holds for both the PBL and NEXT_PTR chain modes.
Advancing a page in a SINGLE_PAGE chain still touches the second
cache line as well, but only the SPQ uses that mode, so it is not
considered fastpath.
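
As an aside (not part of the patch), the intended layout can be
checked at build time. A minimal sketch, assuming 64-byte cache lines
and C11's _Static_assert, placed after the struct definition:

	#include <stddef.h>

	/* Fastpath members - p_prod_elem/p_cons_elem, the pbl indices
	 * and the u.chain16/u.chain32 counters - should resolve to
	 * offsets below 64.
	 */
	_Static_assert(offsetof(struct ecore_chain, u) < 64,
		       "prod/cons indices must start in the first cache line");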

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_chain.h       |  143 ++++++++++++++++-------------
 drivers/net/qede/base/ecore_dev.c         |   14 +--
 drivers/net/qede/base/ecore_sp_commands.c |    4 +-
 3 files changed, 89 insertions(+), 72 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 61e39b5..ba272a9 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -59,25 +59,6 @@ struct ecore_chain_ext_pbl {
 	void *p_pbl_virt;
 };
 
-struct ecore_chain_pbl {
-	/* Base address of a pre-allocated buffer for pbl */
-	dma_addr_t p_phys_table;
-	void *p_virt_table;
-
-	/* Table for keeping the virtual addresses of the chain pages,
-	 * respectively to the physical addresses in the pbl table.
-	 */
-	void **pp_virt_addr_tbl;
-
-	/* Index to current used page by producer/consumer */
-	union {
-		struct ecore_chain_pbl_u16 pbl16;
-		struct ecore_chain_pbl_u32 pbl32;
-	} u;
-
-	bool external;
-};
-
 struct ecore_chain_u16 {
 	/* Cyclic index of next element to produce/consme */
 	u16 prod_idx;
@@ -91,40 +72,75 @@ struct ecore_chain_u32 {
 };
 
 struct ecore_chain {
-	/* Address of first page of the chain */
-	void *p_virt_addr;
-	dma_addr_t p_phys_addr;
-
+	/* fastpath portion of the chain - required for commands such
+	 * as produce / consume.
+	 */
 	/* Point to next element to produce/consume */
 	void *p_prod_elem;
 	void *p_cons_elem;
 
-	enum ecore_chain_mode mode;
-	enum ecore_chain_use_mode intended_use;
+	/* Fastpath portions of the PBL [if exists] */
+
+	struct {
+		/* Table for keeping the virtual addresses of the chain pages,
+		 * respectively to the physical addresses in the pbl table.
+		 */
+		void		**pp_virt_addr_tbl;
+
+		union {
+			struct ecore_chain_pbl_u16	u16;
+			struct ecore_chain_pbl_u32	u32;
+		} c;
+	} pbl;
 
-	enum ecore_chain_cnt_type cnt_type;
 	union {
 		struct ecore_chain_u16 chain16;
 		struct ecore_chain_u32 chain32;
 	} u;
 
-	u32 page_cnt;
+	/* Capacity counts only usable elements */
+	u32				capacity;
+	u32				page_cnt;
 
-	/* Number of elements - capacity is for usable elements only,
-	 * while size will contain total number of elements [for entire chain].
+	/* A u8 would suffice for mode, but keeping the enum saves us a lot
+	 * of headaches on castings & defaults.
 	 */
-	u32 capacity;
-	u32 size;
+	enum ecore_chain_mode		mode;
 
 	/* Elements information for fast calculations */
 	u16 elem_per_page;
 	u16 elem_per_page_mask;
-	u16 elem_unusable;
-	u16 usable_per_page;
 	u16 elem_size;
 	u16 next_page_mask;
+	u16 usable_per_page;
+	u8 elem_unusable;
 
-	struct ecore_chain_pbl pbl;
+	u8				cnt_type;
+
+	/* Slowpath of the chain - required for initialization and destruction,
+	 * but isn't involved in regular functionality.
+	 */
+
+	/* Base address of a pre-allocated buffer for pbl */
+	struct {
+		dma_addr_t		p_phys_table;
+		void			*p_virt_table;
+	} pbl_sp;
+
+	/* Address of first page of the chain - the address is required
+	 * for fastpath operation [consume/produce] but only for the SINGLE
+	 * flavour which isn't considered fastpath [== SPQ].
+	 */
+	void				*p_virt_addr;
+	dma_addr_t			p_phys_addr;
+
+	/* Total number of elements [for entire chain] */
+	u32				size;
+
+	u8				intended_use;
+
+	/* TBD - do we really need this? Couldn't find usage for it */
+	bool				b_external_pbl;
 
 	void *dp_ctx;
 };
@@ -135,8 +151,8 @@ struct ecore_chain {
 
 #define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	  ((mode == ECORE_CHAIN_MODE_NEXT_PTR) ?		\
-	   (1 + ((sizeof(struct ecore_chain_next) - 1) /		\
-	   (elem_size))) : 0)
+	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /	\
+		     (elem_size))) : 0)
 
 #define USABLE_ELEMS_PER_PAGE(elem_size, mode)		\
 	((u32)(ELEMS_PER_PAGE(elem_size) -			\
@@ -245,7 +261,7 @@ u16 ecore_chain_get_usable_per_page(struct ecore_chain *p_chain)
 }
 
 static OSAL_INLINE
-u16 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
+u8 ecore_chain_get_unusable_per_page(struct ecore_chain *p_chain)
 {
 	return p_chain->elem_unusable;
 }
@@ -263,7 +279,7 @@ static OSAL_INLINE u32 ecore_chain_get_page_cnt(struct ecore_chain *p_chain)
 static OSAL_INLINE
 dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
 {
-	return p_chain->pbl.p_phys_table;
+	return p_chain->pbl_sp.p_phys_table;
 }
 
 /**
@@ -288,9 +304,9 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
 		p_next = (struct ecore_chain_next *)(*p_next_elem);
 		*p_next_elem = p_next->next_virt;
 		if (is_chain_u16(p_chain))
-			*(u16 *)idx_to_inc += p_chain->elem_unusable;
+			*(u16 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		else
-			*(u32 *)idx_to_inc += p_chain->elem_unusable;
+			*(u32 *)idx_to_inc += (u16)p_chain->elem_unusable;
 		break;
 	case ECORE_CHAIN_MODE_SINGLE:
 		*p_next_elem = p_chain->p_virt_addr;
@@ -391,7 +407,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -400,7 +416,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.u.pbl32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -465,7 +481,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -474,7 +490,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.u.pbl32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -518,25 +534,26 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.u.pbl16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.u.pbl16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.u.pbl32.prod_page_idx = reset_val;
-			p_chain->pbl.u.pbl32.cons_page_idx = reset_val;
+			p_chain->pbl.c.u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.u32.cons_page_idx = reset_val;
 		}
 	}
 
 	switch (p_chain->intended_use) {
-	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
-	case ECORE_CHAIN_USE_TO_PRODUCE:
-			/* Do nothing */
-			break;
-
 	case ECORE_CHAIN_USE_TO_CONSUME:
-			/* produce empty elements */
-			for (i = 0; i < p_chain->capacity; i++)
+		/* produce empty elements */
+		for (i = 0; i < p_chain->capacity; i++)
 			ecore_chain_recycle_consumed(p_chain);
-			break;
+		break;
+
+	case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
+	case ECORE_CHAIN_USE_TO_PRODUCE:
+	default:
+		/* Do nothing */
+		break;
 	}
 }
 
@@ -563,9 +580,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->p_virt_addr = OSAL_NULL;
 	p_chain->p_phys_addr = 0;
 	p_chain->elem_size = elem_size;
-	p_chain->intended_use = intended_use;
+	p_chain->intended_use = (u8)intended_use;
 	p_chain->mode = mode;
-	p_chain->cnt_type = cnt_type;
+	p_chain->cnt_type = (u8)cnt_type;
 
 	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
 	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
@@ -577,9 +594,9 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->page_cnt = page_cnt;
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;
-	p_chain->pbl.external = false;
-	p_chain->pbl.p_phys_table = 0;
-	p_chain->pbl.p_virt_table = OSAL_NULL;
+	p_chain->b_external_pbl = false;
+	p_chain->pbl_sp.p_phys_table = 0;
+	p_chain->pbl_sp.p_virt_table = OSAL_NULL;
 	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
 
 	p_chain->dp_ctx = dp_ctx;
@@ -623,8 +640,8 @@ static OSAL_INLINE void ecore_chain_init_pbl_mem(struct ecore_chain *p_chain,
 						 dma_addr_t p_phys_pbl,
 						 void **pp_virt_addr_tbl)
 {
-	p_chain->pbl.p_phys_table = p_phys_pbl;
-	p_chain->pbl.p_virt_table = p_virt_pbl;
+	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
+	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
 	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c895656..1c08d4a 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3559,13 +3559,13 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				 struct ecore_chain *p_chain)
 {
 	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
-	u8 *p_pbl_virt = (u8 *)p_chain->pbl.p_virt_table;
+	u8 *p_pbl_virt = (u8 *)p_chain->pbl_sp.p_virt_table;
 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
 
 	if (!pp_virt_addr_tbl)
 		return;
 
-	if (!p_chain->pbl.p_virt_table)
+	if (!p_pbl_virt)
 		goto out;
 
 	for (i = 0; i < page_cnt; i++) {
@@ -3581,10 +3581,10 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 
 	pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
 
-	if (!p_chain->pbl.external)
-		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
-				       p_chain->pbl.p_phys_table, pbl_size);
-out:
+	if (!p_chain->b_external_pbl)
+		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl_sp.p_virt_table,
+				       p_chain->pbl_sp.p_phys_table, pbl_size);
+ out:
 	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
 }
 
@@ -3716,7 +3716,7 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	} else {
 		p_pbl_virt = ext_pbl->p_pbl_virt;
 		p_pbl_phys = ext_pbl->p_pbl_phys;
-		p_chain->pbl.external = true;
+		p_chain->b_external_pbl = true;
 	}
 
 	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 23ebab7..b831970 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -379,11 +379,11 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	/* Place EQ address in RAMROD */
 	DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
-		       p_hwfn->p_eq->chain.pbl.p_phys_table);
+		       p_hwfn->p_eq->chain.pbl_sp.p_phys_table);
 	page_cnt = (u8)ecore_chain_get_page_cnt(&p_hwfn->p_eq->chain);
 	p_ramrod->event_ring_num_pages = page_cnt;
 	DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
-		       p_hwfn->p_consq->chain.pbl.p_phys_table);
+		       p_hwfn->p_consq->chain.pbl_sp.p_phys_table);
 
 	ecore_tunn_set_pf_start_params(p_hwfn, p_tunn,
 				       &p_ramrod->tunnel_config);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 30/62] net/qede/base: infrastructure changes for VF tunnelling
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (29 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 29/62] net/qede/base: optimize cache-line access Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 31/62] net/qede/base: revise tunnel APIs/structs Rasesh Mody
                               ` (31 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Infrastructure changes for VF tunnelling: cache the tunnel
configuration in a new per-device "struct ecore_tunnel_info" and
report the resulting VXLAN/GRE/GENEVE enablement to the PMD through
new out-params in "struct qed_dev_info".
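
A minimal consumer sketch (illustrative only, not part of the patch;
"edev" and the direct call are assumptions - the PMD actually reaches
qed_fill_dev_info() through the common ops):

	struct qed_dev_info info;

	if (qed_fill_dev_info(edev, &info) == 0 && info.vxlan_enable) {
		/* VXLAN MAC/VLAN classification is enabled by default */
	}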

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore.h             |   14 ++++-
 drivers/net/qede/base/ecore_sp_commands.c |   87 +++++++++++++++++++----------
 drivers/net/qede/qede_if.h                |    5 ++
 drivers/net/qede/qede_main.c              |   18 ++++++
 5 files changed, 93 insertions(+), 34 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 82e3ebd..513d542 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -292,7 +292,8 @@ typedef struct osal_list_t {
 #define OSAL_WMB(dev)			rte_wmb()
 #define OSAL_DMA_SYNC(dev, addr, length, is_post) nothing
 
-#define OSAL_BITS_PER_BYTE		(8)
+#define OSAL_BIT(nr)            (1UL << (nr))
+#define OSAL_BITS_PER_BYTE	(8)
 #define OSAL_BITS_PER_UL	(sizeof(unsigned long) * OSAL_BITS_PER_BYTE)
 #define OSAL_BITS_PER_UL_MASK		(OSAL_BITS_PER_UL - 1)
 
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index de0f49a..5c12c1e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -470,6 +470,17 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
+struct ecore_tunnel_info {
+	u8		tunn_clss_vxlan;
+	u8		tunn_clss_l2geneve;
+	u8		tunn_clss_ipgeneve;
+	u8		tunn_clss_l2gre;
+	u8		tunn_clss_ipgre;
+	unsigned long	tunn_mode;
+	u16		port_vxlan_udp_port;
+	u16		port_geneve_udp_port;
+};
+
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
@@ -724,8 +735,7 @@ struct ecore_dev {
 	/* SRIOV */
 	struct ecore_hw_sriov_info	*p_iov_info;
 #define IS_ECORE_SRIOV(p_dev)		(!!(p_dev)->p_iov_info)
-	unsigned long			tunn_mode;
-
+	struct ecore_tunnel_info	tunnel;
 	bool				b_is_vf;
 
 	u32				drv_type;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index b831970..f5860a0 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -111,8 +111,9 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long cached_tunn_mode = p_hwfn->p_dev->tunn_mode;
 	unsigned long update_mask = p_src->tunn_mode_update_mask;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	unsigned long cached_tunn_mode = p_tun->tunn_mode;
 	unsigned long tunn_mode = p_src->tunn_mode;
 	unsigned long new_tunn_mode = 0;
 
@@ -149,9 +150,10 @@ ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
 	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
@@ -178,33 +180,39 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunn_update_params *p_src,
 				struct pf_update_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode = p_src->tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
+	p_tun->tunn_mode = p_src->tunn_mode;
+
 	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
 	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -215,21 +223,24 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
@@ -269,33 +280,37 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 			       struct ecore_tunn_start_params *p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
-	unsigned long tunn_mode;
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
 	if (!p_src)
 		return;
 
-	tunn_mode = p_src->tunn_mode;
+	p_tun->tunn_mode = p_src->tunn_mode;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tunn_cfg->tunnel_clss_vxlan = type;
+	p_tun->tunn_clss_vxlan = type;
+	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tunn_cfg->tunnel_clss_l2gre = type;
+	p_tun->tunn_clss_l2gre = type;
+	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tunn_cfg->tunnel_clss_ipgre = type;
+	p_tun->tunn_clss_ipgre = type;
+	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
 
 	if (p_src->update_vxlan_udp_port) {
+		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
 		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
 		p_tunn_cfg->vxlan_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->vxlan_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2gre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgre = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -306,21 +321,24 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_src->update_geneve_udp_port) {
+		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
 		p_tunn_cfg->set_geneve_udp_port_flg = 1;
 		p_tunn_cfg->geneve_udp_port =
-		    OSAL_CPU_TO_LE16(p_src->geneve_udp_port);
+				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
 	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_l2geneve = 1;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
+	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
 		p_tunn_cfg->tx_enable_ipgeneve = 1;
 
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tunn_cfg->tunnel_clss_l2geneve = type;
+	p_tun->tunn_clss_l2geneve = type;
+	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
 	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tunn_cfg->tunnel_clss_ipgeneve = type;
+	p_tun->tunn_clss_ipgeneve = type;
+	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
@@ -420,9 +438,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
 	if (p_tunn) {
+		if (p_tunn->update_vxlan_udp_port)
+			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						  p_tunn->vxlan_udp_port);
+
+		if (p_tunn->update_geneve_udp_port)
+			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+						   p_tunn->geneve_udp_port);
+
 		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
 				       p_tunn->tunn_mode);
-		p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 	}
 
 	return rc;
@@ -529,12 +554,12 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (p_tunn->update_vxlan_udp_port)
 		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					  p_tunn->vxlan_udp_port);
+
 	if (p_tunn->update_geneve_udp_port)
 		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
 					   p_tunn->geneve_udp_port);
 
 	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
-	p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
 
 	return rc;
 }
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index bfd96d6..baa8476 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -43,6 +43,11 @@ struct qed_dev_info {
 	uint8_t mf_mode;
 	bool tx_switching;
 	u16 mtu;
+
+	/* Out param for qede */
+	bool vxlan_enable;
+	bool gre_enable;
+	bool geneve_enable;
 };
 
 enum qed_sb_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a932c5f..e7195b4 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -325,8 +325,26 @@ static int
 qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 {
 	struct ecore_ptt *ptt = NULL;
+	struct ecore_tunnel_info *tun = &edev->tunnel;
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
+	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->vxlan_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
+	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->gre_enable = true;
+
+	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
+	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
+	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+		dev_info->geneve_enable = true;
+
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
 	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 31/62] net/qede/base: revise tunnel APIs/structs
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (30 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 30/62] net/qede/base: infrastructure changes for VF tunnelling Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 32/62] net/qede/base: add tunnelling support for VFs Rasesh Mody
                               ` (30 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Revise tunnel APIs/structs.
 - Unite the tunnel start and update params in a single struct,
   "ecore_tunnel_info".
 - Remove A0 chip tunnelling support.
 - Add per-tunnel info and remove the tunnel-mode bitmasks (see the
   usage sketch below).
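
A minimal usage sketch under the revised API (illustrative, not part
of the patch; error handling omitted):

	struct ecore_tunnel_info tunn;

	OSAL_MEMSET(&tunn, 0, sizeof(tunn));
	/* Enable VXLAN classification by MAC/VLAN and set its UDP port */
	tunn.vxlan.b_update_mode = true;
	tunn.vxlan.b_mode_enabled = true;
	tunn.vxlan.tun_cls = ECORE_TUNN_CLSS_MAC_VLAN;
	tunn.vxlan_port.b_update_port = true;
	tunn.vxlan_port.port = 4789;	/* IANA-assigned VXLAN port */

	rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
					 ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);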

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h             |   57 ++---
 drivers/net/qede/base/ecore_dev.c         |    2 +-
 drivers/net/qede/base/ecore_dev_api.h     |    2 +-
 drivers/net/qede/base/ecore_sp_api.h      |   19 ++
 drivers/net/qede/base/ecore_sp_commands.c |  385 +++++++++++++----------------
 drivers/net/qede/base/ecore_sp_commands.h |   23 +-
 drivers/net/qede/qede_ethdev.c            |   22 +-
 drivers/net/qede/qede_if.h                |   16 ++
 drivers/net/qede/qede_main.c              |   18 +-
 9 files changed, 251 insertions(+), 293 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 5c12c1e..f86f7ca 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -204,33 +204,29 @@ enum ecore_tunn_clss {
 	MAX_ECORE_TUNN_CLSS,
 };
 
-struct ecore_tunn_start_params {
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_type {
+	bool b_update_mode;
+	bool b_mode_enabled;
+	enum ecore_tunn_clss tun_cls;
 };
 
-struct ecore_tunn_update_params {
-	unsigned long tunn_mode_update_mask;
-	unsigned long tunn_mode;
-	u16	vxlan_udp_port;
-	u16	geneve_udp_port;
-	u8	update_rx_pf_clss;
-	u8	update_tx_pf_clss;
-	u8	update_vxlan_udp_port;
-	u8	update_geneve_udp_port;
-	u8	tunn_clss_vxlan;
-	u8	tunn_clss_l2geneve;
-	u8	tunn_clss_ipgeneve;
-	u8	tunn_clss_l2gre;
-	u8	tunn_clss_ipgre;
+struct ecore_tunn_update_udp_port {
+	bool b_update_port;
+	u16 port;
+};
+
+struct ecore_tunnel_info {
+	struct ecore_tunn_update_type vxlan;
+	struct ecore_tunn_update_type l2_geneve;
+	struct ecore_tunn_update_type ip_geneve;
+	struct ecore_tunn_update_type l2_gre;
+	struct ecore_tunn_update_type ip_gre;
+
+	struct ecore_tunn_update_udp_port vxlan_port;
+	struct ecore_tunn_update_udp_port geneve_port;
+
+	bool b_update_rx_cls;
+	bool b_update_tx_cls;
 };
 
 /* The PCI personality is not quite synonymous to protocol ID:
@@ -470,17 +466,6 @@ struct ecore_fw_data {
 	u32 init_ops_size;
 };
 
-struct ecore_tunnel_info {
-	u8		tunn_clss_vxlan;
-	u8		tunn_clss_l2geneve;
-	u8		tunn_clss_ipgeneve;
-	u8		tunn_clss_l2gre;
-	u8		tunn_clss_ipgre;
-	unsigned long	tunn_mode;
-	u16		port_vxlan_udp_port;
-	u16		port_geneve_udp_port;
-};
-
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1c08d4a..0d3971c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1696,7 +1696,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
-		 struct ecore_tunn_start_params *p_tunn,
+		 struct ecore_tunnel_info *p_tunn,
 		 int hw_mode,
 		 bool b_hw_start,
 		 enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 74a15ef..356c5e4 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -59,7 +59,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
 	/* tunnelling parameters */
-	struct ecore_tunn_start_params *p_tunn;
+	struct ecore_tunnel_info *p_tunn;
 	bool b_hw_start;
 	/* interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index a4cb507..c8e564f 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -41,5 +41,24 @@ struct ecore_spq_comp_cb {
  */
 enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
 					      struct eth_slow_path_rx_cqe *cqe);
+/**
+ * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
+ *					update  Ramrod
+ *
+ * This ramrod is sent to update a tunneling configuration
+ * for a physical function (PF).
+ *
+ * @param p_hwfn
+ * @param p_tunn - pf update tunneling parameters
+ * @param comp_mode - completion mode
+ * @param p_comp_data - callback function
+ *
+ * @return enum _ecore_status_t
+ */
 
+enum _ecore_status_t
+ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_tunnel_info *p_tunn,
+			    enum spq_mode comp_mode,
+			    struct ecore_spq_comp_cb *p_comp_data);
 #endif
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index f5860a0..fc47fc4 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -88,7 +88,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
+static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type)
 {
 	switch (type) {
 	case ECORE_TUNN_CLSS_MAC_VLAN:
@@ -107,242 +107,208 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
 }
 
 static void
-ecore_tunn_set_pf_fix_tunn_mode(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun,
+			      struct ecore_tunnel_info *p_src,
+			      bool b_pf_start)
 {
-	unsigned long update_mask = p_src->tunn_mode_update_mask;
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	unsigned long cached_tunn_mode = p_tun->tunn_mode;
-	unsigned long tunn_mode = p_src->tunn_mode;
-	unsigned long new_tunn_mode = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGRE_TUNN, &new_tunn_mode);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_VXLAN_TUNN, &new_tunn_mode);
-	}
-
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		p_src->tunn_mode = new_tunn_mode;
-		return;
-	}
+	if (p_src->vxlan.b_update_mode || b_pf_start)
+		p_tun->vxlan.b_mode_enabled = p_src->vxlan.b_mode_enabled;
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
+	if (p_src->l2_gre.b_update_mode || b_pf_start)
+		p_tun->l2_gre.b_mode_enabled = p_src->l2_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_L2GENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->ip_gre.b_update_mode || b_pf_start)
+		p_tun->ip_gre.b_mode_enabled = p_src->ip_gre.b_mode_enabled;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &update_mask)) {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	} else {
-		if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &cached_tunn_mode))
-			OSAL_SET_BIT(ECORE_MODE_IPGENEVE_TUNN, &new_tunn_mode);
-	}
+	if (p_src->l2_geneve.b_update_mode || b_pf_start)
+		p_tun->l2_geneve.b_mode_enabled =
+				p_src->l2_geneve.b_mode_enabled;
 
-	p_src->tunn_mode = new_tunn_mode;
+	if (p_src->ip_geneve.b_update_mode || b_pf_start)
+		p_tun->ip_geneve.b_mode_enabled =
+				p_src->ip_geneve.b_mode_enabled;
 }
 
-static void
-ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_tunn_update_params *p_src,
-				struct pf_update_tunnel_config *p_tunn_cfg)
+static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
+				    struct ecore_tunnel_info *p_src)
 {
-	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 	enum tunnel_clss type;
 
-	ecore_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
-	p_tun->tunn_mode = p_src->tunn_mode;
-
-	p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
-	p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
-
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
+	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+
+	/* @DPDK - typecast tunnel class */
+	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
+	p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
+	p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
+	p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
+	p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type;
+	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
+	p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type;
+}
+
+static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
+				 struct ecore_tunnel_info *p_src)
+{
+	p_tun->geneve_port.b_update_port = p_src->geneve_port.b_update_port;
+	p_tun->vxlan_port.b_update_port = p_src->vxlan_port.b_update_port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
+	if (p_src->geneve_port.b_update_port)
+		p_tun->geneve_port.port = p_src->geneve_port.port;
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
+	if (p_src->vxlan_port.b_update_port)
+		p_tun->vxlan_port.port = p_src->vxlan_port.port;
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
+static void
+__ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+				struct ecore_tunn_update_type *tun_type)
+{
+	*p_tunn_cls = tun_type->tun_cls;
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
-		return;
-	}
+	if (tun_type->b_mode_enabled)
+		*p_enable_tx_clas = 1;
+}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
+static void
+ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, u8 *p_enable_tx_clas,
+			      struct ecore_tunn_update_type *tun_type,
+			      u8 *p_update_port, __le16 *p_port,
+			      struct ecore_tunn_update_udp_port *p_udp_port)
+{
+	__ecore_set_ramrod_tunnel_param(p_tunn_cls, p_enable_tx_clas,
+					tun_type);
+	if (p_udp_port->b_update_port) {
+		*p_update_port = 1;
+		*p_port = OSAL_CPU_TO_LE16(p_udp_port->port);
 	}
+}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+static void
+ecore_tunn_set_pf_update_params(struct ecore_hwfn		*p_hwfn,
+				struct ecore_tunnel_info *p_src,
+				struct pf_update_tunnel_config	*p_tunn_cfg)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, false);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
+
+	p_tunn_cfg->update_rx_pf_clss = p_tun->b_update_rx_cls;
+	p_tunn_cfg->update_tx_pf_clss = p_tun->b_update_tx_cls;
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   unsigned long tunn_mode)
+				   struct ecore_tunnel_info *p_tun)
 {
-	u8 l2gre_enable = 0, ipgre_enable = 0, vxlan_enable = 0;
-	u8 l2geneve_enable = 0, ipgeneve_enable = 0;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &tunn_mode))
-		l2gre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &tunn_mode))
-		ipgre_enable = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &tunn_mode))
-		vxlan_enable = 1;
+	ecore_set_gre_enable(p_hwfn, p_ptt, p_tun->l2_gre.b_mode_enabled,
+			     p_tun->ip_gre.b_mode_enabled);
+	ecore_set_vxlan_enable(p_hwfn, p_ptt, p_tun->vxlan.b_mode_enabled);
 
-	ecore_set_gre_enable(p_hwfn, p_ptt, l2gre_enable, ipgre_enable);
-	ecore_set_vxlan_enable(p_hwfn, p_ptt, vxlan_enable);
+	ecore_set_geneve_enable(p_hwfn, p_ptt, p_tun->l2_geneve.b_mode_enabled,
+				p_tun->ip_geneve.b_mode_enabled);
+}
 
-	if (ECORE_IS_BB_A0(p_hwfn->p_dev))
+static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
+					struct ecore_tunnel_info *p_tunn)
+{
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel hw config is not supported\n");
 		return;
+	}
 
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &tunn_mode))
-		l2geneve_enable = 1;
+	if (p_tunn->vxlan_port.b_update_port)
+		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					  p_tunn->vxlan_port.port);
 
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &tunn_mode))
-		ipgeneve_enable = 1;
+	if (p_tunn->geneve_port.b_update_port)
+		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+					   p_tunn->geneve_port.port);
 
-	ecore_set_geneve_enable(p_hwfn, p_ptt, l2geneve_enable,
-				ipgeneve_enable);
+	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn);
 }
 
 static void
 ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
-			       struct ecore_tunn_start_params *p_src,
+			       struct ecore_tunnel_info		*p_src,
 			       struct pf_start_tunnel_config *p_tunn_cfg)
 {
 	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
-	enum tunnel_clss type;
-
-	if (!p_src)
-		return;
-
-	p_tun->tunn_mode = p_src->tunn_mode;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_vxlan);
-	p_tun->tunn_clss_vxlan = type;
-	p_tunn_cfg->tunnel_clss_vxlan = p_tun->tunn_clss_vxlan;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2gre);
-	p_tun->tunn_clss_l2gre = type;
-	p_tunn_cfg->tunnel_clss_l2gre = p_tun->tunn_clss_l2gre;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgre);
-	p_tun->tunn_clss_ipgre = type;
-	p_tunn_cfg->tunnel_clss_ipgre = p_tun->tunn_clss_ipgre;
-
-	if (p_src->update_vxlan_udp_port) {
-		p_tun->port_vxlan_udp_port = p_src->vxlan_udp_port;
-		p_tunn_cfg->set_vxlan_udp_port_flg = 1;
-		p_tunn_cfg->vxlan_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_vxlan_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2gre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGRE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgre = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_VXLAN_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_vxlan = 1;
 
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
-		if (p_src->update_geneve_udp_port)
-			DP_NOTICE(p_hwfn, true, "Geneve not supported\n");
-		p_src->update_geneve_udp_port = 0;
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf start config is not supported\n");
 		return;
 	}
 
-	if (p_src->update_geneve_udp_port) {
-		p_tun->port_geneve_udp_port = p_src->geneve_udp_port;
-		p_tunn_cfg->set_geneve_udp_port_flg = 1;
-		p_tunn_cfg->geneve_udp_port =
-				OSAL_CPU_TO_LE16(p_tun->port_geneve_udp_port);
-	}
-
-	if (OSAL_TEST_BIT(ECORE_MODE_L2GENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_l2geneve = 1;
-
-	if (OSAL_TEST_BIT(ECORE_MODE_IPGENEVE_TUNN, &p_tun->tunn_mode))
-		p_tunn_cfg->tx_enable_ipgeneve = 1;
+	if (!p_src)
+		return;
 
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
-	p_tun->tunn_clss_l2geneve = type;
-	p_tunn_cfg->tunnel_clss_l2geneve = p_tun->tunn_clss_l2geneve;
-	type = ecore_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
-	p_tun->tunn_clss_ipgeneve = type;
-	p_tunn_cfg->tunnel_clss_ipgeneve = p_tun->tunn_clss_ipgeneve;
+	ecore_set_pf_update_tunn_mode(p_tun, p_src, true);
+	ecore_set_tunn_cls_info(p_tun, p_src);
+	ecore_set_tunn_ports(p_tun, p_src);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_vxlan,
+				      &p_tunn_cfg->tx_enable_vxlan,
+				      &p_tun->vxlan,
+				      &p_tunn_cfg->set_vxlan_udp_port_flg,
+				      &p_tunn_cfg->vxlan_udp_port,
+				      &p_tun->vxlan_port);
+
+	ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2geneve,
+				      &p_tunn_cfg->tx_enable_l2geneve,
+				      &p_tun->l2_geneve,
+				      &p_tunn_cfg->set_geneve_udp_port_flg,
+				      &p_tunn_cfg->geneve_udp_port,
+				      &p_tun->geneve_port);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgeneve,
+					&p_tunn_cfg->tx_enable_ipgeneve,
+					&p_tun->ip_geneve);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_l2gre,
+					&p_tunn_cfg->tx_enable_l2gre,
+					&p_tun->l2_gre);
+
+	__ecore_set_ramrod_tunnel_param(&p_tunn_cfg->tunnel_clss_ipgre,
+					&p_tunn_cfg->tx_enable_ipgre,
+					&p_tun->ip_gre);
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch)
 {
@@ -437,18 +403,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
-	if (p_tunn) {
-		if (p_tunn->update_vxlan_udp_port)
-			ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						  p_tunn->vxlan_udp_port);
-
-		if (p_tunn->update_geneve_udp_port)
-			ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-						   p_tunn->geneve_udp_port);
-
-		ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
-				       p_tunn->tunn_mode);
-	}
+	if (p_tunn)
+		ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
@@ -523,7 +479,7 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
+			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data)
 {
@@ -531,6 +487,15 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
+		DP_NOTICE(p_hwfn, true,
+			  "A0 chip: tunnel pf update config is not supported\n");
+		return rc;
+	}
+
+	if (!p_tunn)
+		return ECORE_INVAL;
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = ecore_spq_get_cid(p_hwfn);
@@ -551,15 +516,7 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_tunn->update_vxlan_udp_port)
-		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					  p_tunn->vxlan_udp_port);
-
-	if (p_tunn->update_geneve_udp_port)
-		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
-					   p_tunn->geneve_udp_port);
-
-	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+	ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 66c9a69..33e31e4 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -68,32 +68,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  */
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
-				       struct ecore_tunn_start_params *p_tunn,
+				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch);
 
 /**
- * @brief ecore_sp_pf_update_tunn_cfg - PF Function Tunnel configuration
- *					update  Ramrod
- *
- * This ramrod is sent to update a tunneling configuration
- * for a physical function (PF).
- *
- * @param p_hwfn
- * @param p_tunn - pf update tunneling parameters
- * @param comp_mode - completion mode
- * @param p_comp_data - callback function
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t
-ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
-			    struct ecore_tunn_update_params *p_tunn,
-			    enum spq_mode comp_mode,
-			    struct ecore_spq_comp_cb *p_comp_data);
-
-/**
  * @brief ecore_sp_pf_update - PF Function Update Ramrod
  *
  * This ramrod updates function-related parameters. Every parameter can be
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index d52e1be..0c05d2d 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,10 +335,10 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct ecore_tunn_update_params *params,
-				     uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
+				    uint8_t clss, uint64_t mode, uint64_t mask)
 {
-	memset(params, 0, sizeof(struct ecore_tunn_update_params));
+	memset(params, 0, sizeof(struct qed_tunn_update_params));
 	params->tunn_mode = mode;
 	params->tunn_mode_update_mask = mask;
 	params->update_tx_pf_clss = 1;
@@ -1707,20 +1707,22 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info tunn; /* @DPDK */
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
 	memset(&params, 0, sizeof(params));
+	memset(&tunn, 0, sizeof(tunn));
 	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
 		params.update_vxlan_udp_port = 1;
 		params.vxlan_udp_port = (add) ? tunnel_udp->udp_port :
 					QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &params,
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
@@ -1817,7 +1819,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct ecore_tunn_update_params params;
+	struct qed_tunn_update_params params;
+	struct ecore_tunnel_info tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
 	enum ecore_tunn_clss clss;
@@ -1826,6 +1829,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 	uint16_t filter_type;
 	int rc, i;
 
+	memset(&tunn, 0, sizeof(tunn));
 	filter_type = conf->filter_type | qdev->vxlan_filter_type;
 	/* First determine if the given filter classification is supported */
 	qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
@@ -1872,7 +1876,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-				&params, ECORE_SPQ_MODE_CB, NULL);
+				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
 					params.tunn_clss_vxlan);
@@ -1906,8 +1910,8 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 						(1 << ECORE_MODE_VXLAN_TUNN));
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
-					&params, ECORE_SPQ_MODE_CB, NULL);
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+					ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index baa8476..09b6912 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -121,6 +121,22 @@ struct qed_eth_tlvs {
 	u8 num_rxqs_full;
 };
 
+struct qed_tunn_update_params {
+	unsigned long   tunn_mode_update_mask;
+	unsigned long   tunn_mode;
+	u16             vxlan_udp_port;
+	u16             geneve_udp_port;
+	u8              update_rx_pf_clss;
+	u8              update_tx_pf_clss;
+	u8              update_vxlan_udp_port;
+	u8              update_geneve_udp_port;
+	u8              tunn_clss_vxlan;
+	u8              tunn_clss_l2geneve;
+	u8              tunn_clss_ipgeneve;
+	u8              tunn_clss_l2gre;
+	u8              tunn_clss_ipgre;
+};
+
 struct qed_common_cb_ops {
 	void (*link_update)(void *dev, struct qed_link_output *link);
 	void (*get_tlv_data)(void *dev, struct qed_eth_tlvs *data);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e7195b4..5c79055 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -329,20 +329,18 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 
 	memset(dev_info, 0, sizeof(struct qed_dev_info));
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_VXLAN_TUNN) &&
-	    tun->tunn_clss_vxlan == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->vxlan.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->vxlan.b_mode_enabled)
 		dev_info->vxlan_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GRE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGRE_TUNN) &&
-	    tun->tunn_clss_l2gre == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgre == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_gre.b_mode_enabled && tun->ip_gre.b_mode_enabled &&
+	    tun->l2_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_gre.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->gre_enable = true;
 
-	if (tun->tunn_mode & OSAL_BIT(ECORE_MODE_L2GENEVE_TUNN) &&
-	    tun->tunn_mode & OSAL_BIT(ECORE_MODE_IPGENEVE_TUNN) &&
-	    tun->tunn_clss_l2geneve == ECORE_TUNN_CLSS_MAC_VLAN &&
-	    tun->tunn_clss_ipgeneve == ECORE_TUNN_CLSS_MAC_VLAN)
+	if (tun->l2_geneve.b_mode_enabled && tun->ip_geneve.b_mode_enabled &&
+	    tun->l2_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN &&
+	    tun->ip_geneve.tun_cls == ECORE_TUNN_CLSS_MAC_VLAN)
 		dev_info->geneve_enable = true;
 
 	dev_info->num_hwfns = edev->num_hwfns;
-- 
1.7.10.3
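
For reference, a minimal sketch of how a PMD path can drive the revised
API. It assumes only the types and the ecore_sp_pf_update_tunn_cfg()
signature introduced above; the function name and error handling are
illustrative, not part of the series. The struct is zeroed first because
the mode and UDP-port fields take effect only when their b_update_*
flags are set, so stale flags would request unintended changes.

/* Illustrative sketch, not driver code: enable VXLAN classification and
 * set its UDP destination port, then send the PF-update ramrod per hwfn.
 */
static int sketch_enable_vxlan(struct ecore_dev *edev, u16 udp_port)
{
	struct ecore_tunnel_info tunn;
	struct ecore_hwfn *p_hwfn;
	int rc, i;

	memset(&tunn, 0, sizeof(tunn));

	tunn.vxlan.b_update_mode = true;	/* change the mode... */
	tunn.vxlan.b_mode_enabled = true;	/* ...to enabled */
	tunn.vxlan.tun_cls = ECORE_TUNN_CLSS_MAC_VLAN;
	tunn.b_update_rx_cls = true;
	tunn.b_update_tx_cls = true;

	tunn.vxlan_port.b_update_port = true;	/* also set the UDP port */
	tunn.vxlan_port.port = udp_port;

	for_each_hwfn(edev, i) {
		p_hwfn = &edev->hwfns[i];
		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
						 ECORE_SPQ_MODE_CB, NULL);
		if (rc != ECORE_SUCCESS)
			return rc;
	}

	return 0;
}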

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 32/62] net/qede/base: add tunnelling support for VFs
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (31 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 31/62] net/qede/base: revise tunnel APIs/structs Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 33/62] net/qede/base: formatting changes Rasesh Mody
                               ` (29 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add tunnelling support for VFs. A VF can now request tunnel
configuration updates from the PF over the VF-PF channel, using the
new CHANNEL_TLV_UPDATE_TUNN_PARAM request/response TLVs.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
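A rough sketch of the VF-side request encoding this patch introduces,
with names taken from the hunks below (the TLV allocation and send
plumbing are elided). It assumes only that ECORE_MODE_* values are used
as bit positions, exactly as in __ecore_vf_prep_tunn_req_tlv():

/* Illustrative only: ask the PF to switch VXLAN to enabled, and to
 * update the Rx/Tx classification to 'clss'. */
static void sketch_request_vxlan(struct vfpf_update_tunn_param_tlv *p_req,
				 u8 clss)
{
	/* "change this mode"... */
	p_req->tun_mode_update_mask |= 1 << ECORE_MODE_VXLAN_TUNN;
	/* ..."to enabled" */
	p_req->tunn_mode |= 1 << ECORE_MODE_VXLAN_TUNN;
	p_req->vxlan_clss = clss;
	p_req->update_tun_cls = 1;
}

The PF decodes the same bits in ecore_iov_vf_mbx_update_tunn_param() and
replies with the resulting device-global state, which the VF mirrors
into p_dev->tunnel in ecore_vf_update_tunn_param().
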
 drivers/net/qede/base/bcm_osal.h          |    3 +-
 drivers/net/qede/base/ecore_dev.c         |   15 ++-
 drivers/net/qede/base/ecore_sp_commands.c |    4 +
 drivers/net/qede/base/ecore_sriov.c       |  144 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c          |  154 +++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.h          |    5 +
 drivers/net/qede/base/ecore_vfpf_if.h     |   40 ++++++++
 drivers/net/qede/qede_ethdev.c            |   39 +++-----
 8 files changed, 378 insertions(+), 26 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 513d542..4c91dc0 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -422,6 +422,5 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
 #define OSAL_MFW_TLV_REQ(p_hwfn) (0)
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
-
-
+#define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0d3971c..21fec58 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1876,6 +1876,19 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 		    p_hwfn->mcp_info->mfw_mb_length);
 }
 
+enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
+				    struct ecore_hw_init_params *p_params)
+{
+	if (p_params->p_tunn) {
+		ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn);
+		ecore_vf_pf_tunnel_param_update(p_hwfn, p_params->p_tunn);
+	}
+
+	p_hwfn->b_int_enabled = 1;
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
@@ -1908,7 +1921,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		}
 
 		if (IS_VF(p_dev)) {
-			p_hwfn->b_int_enabled = 1;
+			ecore_vf_start(p_hwfn, p_params);
 			continue;
 		}
 
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index fc47fc4..8fd64d7 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -22,6 +22,7 @@
 #include "ecore_hw.h"
 #include "ecore_dcbx.h"
 #include "ecore_sriov.h"
+#include "ecore_vf.h"
 
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
@@ -487,6 +488,9 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_tunnel_param_update(p_hwfn, p_tunn);
+
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
 		DP_NOTICE(p_hwfn, true,
 			  "A0 chip: tunnel pf update config is not supported\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7378420..6cec7b2 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -51,6 +51,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_RSS",
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
+	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -2137,6 +2138,146 @@ out:
 					b_legacy_vf);
 }
 
+static void
+ecore_iov_pf_update_tun_response(struct pfvf_update_tunn_param_tlv *p_resp,
+				 struct ecore_tunnel_info *p_tun,
+				 u16 tunn_feature_mask)
+{
+	p_resp->tunn_feature_mask = tunn_feature_mask;
+	p_resp->vxlan_mode = p_tun->vxlan.b_mode_enabled;
+	p_resp->l2geneve_mode = p_tun->l2_geneve.b_mode_enabled;
+	p_resp->ipgeneve_mode = p_tun->ip_geneve.b_mode_enabled;
+	p_resp->l2gre_mode = p_tun->l2_gre.b_mode_enabled;
+	p_resp->ipgre_mode = p_tun->ip_gre.b_mode_enabled;
+	p_resp->vxlan_clss = p_tun->vxlan.tun_cls;
+	p_resp->l2gre_clss = p_tun->l2_gre.tun_cls;
+	p_resp->ipgre_clss = p_tun->ip_gre.tun_cls;
+	p_resp->l2geneve_clss = p_tun->l2_geneve.tun_cls;
+	p_resp->ipgeneve_clss = p_tun->ip_geneve.tun_cls;
+	p_resp->geneve_udp_port = p_tun->geneve_port.port;
+	p_resp->vxlan_udp_port = p_tun->vxlan_port.port;
+}
+
+static void
+__ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+				struct ecore_tunn_update_type *p_tun,
+				enum ecore_tunn_mode mask, u8 tun_cls)
+{
+	if (p_req->tun_mode_update_mask & (1 << mask)) {
+		p_tun->b_update_mode = true;
+
+		if (p_req->tunn_mode & (1 << mask))
+			p_tun->b_mode_enabled = true;
+	}
+
+	p_tun->tun_cls = tun_cls;
+}
+
+static void
+ecore_iov_pf_update_tun_param(struct vfpf_update_tunn_param_tlv *p_req,
+			      struct ecore_tunn_update_type *p_tun,
+			      struct ecore_tunn_update_udp_port *p_port,
+			      enum ecore_tunn_mode mask,
+			      u8 tun_cls, u8 update_port, u16 port)
+{
+	if (update_port) {
+		p_port->b_update_port = true;
+		p_port->port = port;
+	}
+
+	__ecore_iov_pf_update_tun_param(p_req, p_tun, mask, tun_cls);
+}
+
+static bool
+ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req)
+{
+	bool b_update_requested = false;
+
+	if (p_req->tun_mode_update_mask || p_req->update_tun_cls ||
+	    p_req->update_geneve_port || p_req->update_vxlan_port)
+		b_update_requested = true;
+
+	return b_update_requested;
+}
+
+static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       struct ecore_vf_info *p_vf)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u8 status = PFVF_STATUS_SUCCESS;
+	bool b_update_required = false;
+	struct ecore_tunnel_info tunn;
+	u16 tunn_feature_mask = 0;
+
+	mbx->offset = (u8 *)mbx->reply_virt;
+
+	OSAL_MEM_ZERO(&tunn, sizeof(tunn));
+	p_req = &mbx->req_virt->tunn_param_update;
+
+	if (!ecore_iov_pf_validate_tunn_param(p_req)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "No tunnel update requested by VF\n");
+		status = PFVF_STATUS_FAILURE;
+		goto send_resp;
+	}
+
+	tunn.b_update_rx_cls = p_req->update_tun_cls;
+	tunn.b_update_tx_cls = p_req->update_tun_cls;
+
+	ecore_iov_pf_update_tun_param(p_req, &tunn.vxlan, &tunn.vxlan_port,
+				      ECORE_MODE_VXLAN_TUNN, p_req->vxlan_clss,
+				      p_req->update_vxlan_port,
+				      p_req->vxlan_port);
+	ecore_iov_pf_update_tun_param(p_req, &tunn.l2_geneve, &tunn.geneve_port,
+				      ECORE_MODE_L2GENEVE_TUNN,
+				      p_req->l2geneve_clss,
+				      p_req->update_geneve_port,
+				      p_req->geneve_port);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_geneve,
+					ECORE_MODE_IPGENEVE_TUNN,
+					p_req->ipgeneve_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.l2_gre,
+					ECORE_MODE_L2GRE_TUNN,
+					p_req->l2gre_clss);
+	__ecore_iov_pf_update_tun_param(p_req, &tunn.ip_gre,
+					ECORE_MODE_IPGRE_TUNN,
+					p_req->ipgre_clss);
+
+	/* If the PF modifies the VF's request, it should still return an
+	 * error for a partial or modified configuration, as opposed to
+	 * the requested one.
+	 */
+	rc = OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, &tunn_feature_mask,
+						 &b_update_required, &tunn);
+
+	if (rc != ECORE_SUCCESS)
+		status = PFVF_STATUS_FAILURE;
+
+	/* Does the ECORE client want to update anything? */
+	if (b_update_required) {
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+						 ECORE_SPQ_MODE_EBLOCK,
+						 OSAL_NULL);
+		if (rc != ECORE_SUCCESS)
+			status = PFVF_STATUS_FAILURE;
+	}
+
+send_resp:
+	p_resp = ecore_add_tlv(p_hwfn, &mbx->offset,
+			       CHANNEL_TLV_UPDATE_TUNN_PARAM, sizeof(*p_resp));
+
+	ecore_iov_pf_update_tun_response(p_resp, p_tun, tunn_feature_mask);
+	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
+}
+
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
@@ -3405,6 +3546,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_RELEASE:
 			ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
+			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 60ecd16..3182621 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -451,6 +451,160 @@ free_p_iov:
 #define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
 				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
 
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+__ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			     struct ecore_tunn_update_type *p_src,
+			     enum ecore_tunn_mode mask, u8 *p_cls)
+{
+	if (p_src->b_update_mode) {
+		p_req->tun_mode_update_mask |= (1 << mask);
+
+		if (p_src->b_mode_enabled)
+			p_req->tunn_mode |= (1 << mask);
+	}
+
+	*p_cls = p_src->tun_cls;
+}
+
+/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
+static void
+ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+			   struct ecore_tunn_update_type *p_src,
+			   enum ecore_tunn_mode mask, u8 *p_cls,
+			   struct ecore_tunn_update_udp_port *p_port,
+			   u8 *p_update_port, u16 *p_udp_port)
+{
+	if (p_port->b_update_port) {
+		*p_update_port = 1;
+		*p_udp_port = p_port->port;
+	}
+
+	__ecore_vf_prep_tunn_req_tlv(p_req, p_src, mask, p_cls);
+}
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun)
+{
+	if (p_tun->vxlan.b_mode_enabled)
+		p_tun->vxlan.b_update_mode = true;
+	if (p_tun->l2_geneve.b_mode_enabled)
+		p_tun->l2_geneve.b_update_mode = true;
+	if (p_tun->ip_geneve.b_mode_enabled)
+		p_tun->ip_geneve.b_update_mode = true;
+	if (p_tun->l2_gre.b_mode_enabled)
+		p_tun->l2_gre.b_update_mode = true;
+	if (p_tun->ip_gre.b_mode_enabled)
+		p_tun->ip_gre.b_update_mode = true;
+
+	p_tun->b_update_rx_cls = true;
+	p_tun->b_update_tx_cls = true;
+}
+
+static void
+__ecore_vf_update_tunn_param(struct ecore_tunn_update_type *p_tun,
+			     u16 feature_mask, u8 tunn_mode, u8 tunn_cls,
+			     enum ecore_tunn_mode val)
+{
+	if (feature_mask & (1 << val)) {
+		p_tun->b_mode_enabled = tunn_mode;
+		p_tun->tun_cls = tunn_cls;
+	} else {
+		p_tun->b_mode_enabled = false;
+	}
+}
+
+static void
+ecore_vf_update_tunn_param(struct ecore_hwfn *p_hwfn,
+			   struct ecore_tunnel_info *p_tun,
+			   struct pfvf_update_tunn_param_tlv *p_resp)
+{
+	/* Update mode and classes provided by PF */
+	u16 feat_mask = p_resp->tunn_feature_mask;
+
+	__ecore_vf_update_tunn_param(&p_tun->vxlan, feat_mask,
+				     p_resp->vxlan_mode, p_resp->vxlan_clss,
+				     ECORE_MODE_VXLAN_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_geneve, feat_mask,
+				     p_resp->l2geneve_mode,
+				     p_resp->l2geneve_clss,
+				     ECORE_MODE_L2GENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_geneve, feat_mask,
+				     p_resp->ipgeneve_mode,
+				     p_resp->ipgeneve_clss,
+				     ECORE_MODE_IPGENEVE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->l2_gre, feat_mask,
+				     p_resp->l2gre_mode, p_resp->l2gre_clss,
+				     ECORE_MODE_L2GRE_TUNN);
+	__ecore_vf_update_tunn_param(&p_tun->ip_gre, feat_mask,
+				     p_resp->ipgre_mode, p_resp->ipgre_clss,
+				     ECORE_MODE_IPGRE_TUNN);
+	p_tun->geneve_port.port = p_resp->geneve_udp_port;
+	p_tun->vxlan_port.port = p_resp->vxlan_udp_port;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "tunn mode: vxlan=0x%x, l2geneve=0x%x, ipgeneve=0x%x, l2gre=0x%x, ipgre=0x%x",
+		   p_tun->vxlan.b_mode_enabled, p_tun->l2_geneve.b_mode_enabled,
+		   p_tun->ip_geneve.b_mode_enabled,
+		   p_tun->l2_gre.b_mode_enabled,
+		   p_tun->ip_gre.b_mode_enabled);
+}
+
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_src)
+{
+	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_update_tunn_param_tlv *p_resp;
+	struct vfpf_update_tunn_param_tlv *p_req;
+	enum _ecore_status_t rc;
+
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_TUNN_PARAM,
+				 sizeof(*p_req));
+
+	if (p_src->b_update_rx_cls && p_src->b_update_tx_cls)
+		p_req->update_tun_cls = 1;
+
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->vxlan, ECORE_MODE_VXLAN_TUNN,
+				   &p_req->vxlan_clss, &p_src->vxlan_port,
+				   &p_req->update_vxlan_port,
+				   &p_req->vxlan_port);
+	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve,
+				   ECORE_MODE_L2GENEVE_TUNN,
+				   &p_req->l2geneve_clss, &p_src->geneve_port,
+				   &p_req->update_geneve_port,
+				   &p_req->geneve_port);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_geneve,
+				     ECORE_MODE_IPGENEVE_TUNN,
+				     &p_req->ipgeneve_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_gre,
+				     ECORE_MODE_L2GRE_TUNN, &p_req->l2gre_clss);
+	__ecore_vf_prep_tunn_req_tlv(p_req, &p_src->ip_gre,
+				     ECORE_MODE_IPGRE_TUNN, &p_req->ipgre_clss);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->tunn_param_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+
+	if (rc)
+		goto exit;
+
+	if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to update tunnel parameters\n");
+		rc = ECORE_INVAL;
+	}
+
+	ecore_vf_update_tunn_param(p_hwfn, p_tun, p_resp);
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 enum _ecore_status_t
 ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		      struct ecore_queue_cid *p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 1afd667..0d67054 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -258,5 +258,10 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			      struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
 
+enum _ecore_status_t
+ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
+				struct ecore_tunnel_info *p_tunn);
+
+void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 149d092..82ed4f5 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -416,6 +416,43 @@ struct vfpf_ucast_filter_tlv {
 	u16			padding[3];
 };
 
+/* tunnel update param tlv */
+struct vfpf_update_tunn_param_tlv {
+	struct vfpf_first_tlv   first_tlv;
+
+	u8			tun_mode_update_mask;
+	u8			tunn_mode;
+	u8			update_tun_cls;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u8			update_geneve_port;
+	u8			update_vxlan_port;
+	u16			geneve_port;
+	u16			vxlan_port;
+	u8			padding[2];
+};
+
+struct pfvf_update_tunn_param_tlv {
+	struct pfvf_tlv hdr;
+
+	u16			tunn_feature_mask;
+	u8			vxlan_mode;
+	u8			l2geneve_mode;
+	u8			ipgeneve_mode;
+	u8			l2gre_mode;
+	u8			ipgre_mode;
+	u8			vxlan_clss;
+	u8			l2gre_clss;
+	u8			ipgre_clss;
+	u8			l2geneve_clss;
+	u8			ipgeneve_clss;
+	u16			vxlan_udp_port;
+	u16			geneve_udp_port;
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
@@ -431,6 +468,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_start_tlv		start_vport;
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
+	struct vfpf_update_tunn_param_tlv	tunn_param_update;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -439,6 +477,7 @@ union pfvf_tlvs {
 	struct pfvf_acquire_resp_tlv		acquire_resp;
 	struct tlv_buffer_size			tlv_buf_size;
 	struct pfvf_start_queue_resp_tlv	queue_start;
+	struct pfvf_update_tunn_param_tlv	tunn_param_resp;
 };
 
 /* This is a structure which is allocated in the VF, which the PF may update
@@ -552,6 +591,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_RSS,
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+	CHANNEL_TLV_UPDATE_TUNN_PARAM,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0c05d2d..257e5b2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -335,15 +335,15 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 	/* ucast->assert_on_error = true; - For debug */
 }
 
-static void qede_set_cmn_tunn_param(struct qed_tunn_update_params *params,
-				    uint8_t clss, uint64_t mode, uint64_t mask)
+static void qede_set_cmn_tunn_param(struct ecore_tunnel_info *p_tunn,
+				    uint8_t clss, bool mode, bool mask)
 {
-	memset(params, 0, sizeof(struct qed_tunn_update_params));
-	params->tunn_mode = mode;
-	params->tunn_mode_update_mask = mask;
-	params->update_tx_pf_clss = 1;
-	params->update_rx_pf_clss = 1;
-	params->tunn_clss_vxlan = clss;
+	memset(p_tunn, 0, sizeof(struct ecore_tunnel_info));
+	p_tunn->vxlan.b_update_mode = mode;
+	p_tunn->vxlan.b_mode_enabled = mask;
+	p_tunn->b_update_rx_cls = true;
+	p_tunn->b_update_tx_cls = true;
+	p_tunn->vxlan.tun_cls = clss;
 }
 
 static int
@@ -1707,26 +1707,24 @@ qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
 	struct ecore_tunnel_info tunn; /* @DPDK */
 	struct ecore_hwfn *p_hwfn;
 	int rc, i;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	memset(&params, 0, sizeof(params));
 	memset(&tunn, 0, sizeof(tunn));
 	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
-		params.update_vxlan_udp_port = 1;
-		params.vxlan_udp_port = (add) ? tunnel_udp->udp_port :
-					QEDE_VXLAN_DEF_PORT;
+		tunn.vxlan_port.b_update_port = true;
+		tunn.vxlan_port.port = (add) ? tunnel_udp->udp_port :
+						  QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
-					params.vxlan_udp_port);
+				       tunn.vxlan_port.port);
 				return rc;
 			}
 		}
@@ -1819,7 +1817,6 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct qed_tunn_update_params params;
 	struct ecore_tunnel_info tunn;
 	struct ecore_hwfn *p_hwfn;
 	enum ecore_filter_ucast_type type;
@@ -1829,7 +1826,6 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 	uint16_t filter_type;
 	int rc, i;
 
-	memset(&tunn, 0, sizeof(tunn));
 	filter_type = conf->filter_type | qdev->vxlan_filter_type;
 	/* First determine if the given filter classification is supported */
 	qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
@@ -1870,16 +1866,14 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		qdev->vxlan_filter_type = filter_type;
 
 		DP_INFO(edev, "Enabling VXLAN tunneling\n");
-		qede_set_cmn_tunn_param(&params, clss,
-					(1 << ECORE_MODE_VXLAN_TUNN),
-					(1 << ECORE_MODE_VXLAN_TUNN));
+		qede_set_cmn_tunn_param(&tunn, clss, true, true);
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
 			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
 				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
-					params.tunn_clss_vxlan);
+				       tunn.vxlan.tun_cls);
 			}
 		}
 		qdev->num_tunn_filters++; /* Filter added successfully */
@@ -1906,8 +1900,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 			DP_INFO(edev, "Disabling VXLAN tunneling\n");
 
 			/* Use 0 as tunnel mode */
-			qede_set_cmn_tunn_param(&params, clss, 0,
-						(1 << ECORE_MODE_VXLAN_TUNN));
+			qede_set_cmn_tunn_param(&tunn, clss, false, true);
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
 				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
@@ -1915,7 +1908,7 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
-						params.tunn_clss_vxlan);
+						tunn.vxlan.tun_cls);
 					break;
 				}
 			}
-- 
1.7.10.3
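
A note on the PF-side policy hook used above: the base driver delegates
the accept/modify decision to OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(),
which bcm_osal.h stubs out to 0 (accept, nothing extra to apply). Purely
as an illustration of the contract, a non-stub implementation could look
like the sketch below; the policy shown and the helper name are
assumptions, not part of this series.

/* Hypothetical OSAL hook (assumption): advertise every tunnel type as
 * supported, leave the VF's request unmodified, and ask ecore to apply
 * it whenever any mode, port or classification change was requested. */
static enum _ecore_status_t
sketch_pf_validate_modify_tunn_config(struct ecore_hwfn *p_hwfn,
				      u16 *p_feature_mask,
				      bool *b_update_required,
				      struct ecore_tunnel_info *p_tunn)
{
	*p_feature_mask = (1 << ECORE_MODE_VXLAN_TUNN) |
			  (1 << ECORE_MODE_L2GENEVE_TUNN) |
			  (1 << ECORE_MODE_IPGENEVE_TUNN) |
			  (1 << ECORE_MODE_L2GRE_TUNN) |
			  (1 << ECORE_MODE_IPGRE_TUNN);

	*b_update_required = p_tunn->vxlan.b_update_mode ||
			     p_tunn->l2_geneve.b_update_mode ||
			     p_tunn->ip_geneve.b_update_mode ||
			     p_tunn->l2_gre.b_update_mode ||
			     p_tunn->ip_gre.b_update_mode ||
			     p_tunn->vxlan_port.b_update_port ||
			     p_tunn->geneve_port.b_update_port ||
			     p_tunn->b_update_rx_cls ||
			     p_tunn->b_update_tx_cls;

	return ECORE_SUCCESS;
}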

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 33/62] net/qede/base: formatting changes
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (32 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 32/62] net/qede/base: add tunnelling support for VFs Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 34/62] net/qede/base: prevent transmitter stuck condition Rasesh Mody
                               ` (28 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |   14 +--
 drivers/net/qede/base/mcp_public.h |  176 ++++++++++++++++++------------------
 2 files changed, 96 insertions(+), 94 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index f86f7ca..479a991 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -157,8 +157,8 @@ enum DP_MODULE {
 	ECORE_MSG_CXT		= 0x800000,
 	ECORE_MSG_LL2		= 0x1000000,
 	ECORE_MSG_ILT		= 0x2000000,
-	ECORE_MSG_RDMA          = 0x4000000,
-	ECORE_MSG_DEBUG         = 0x8000000,
+	ECORE_MSG_RDMA		= 0x4000000,
+	ECORE_MSG_DEBUG		= 0x8000000,
 	/* to be added...up to 0x8000000 */
 };
 #endif
@@ -480,7 +480,7 @@ struct ecore_hwfn {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	bool				first_on_engine;
 	bool				hw_init_done;
@@ -535,8 +535,8 @@ struct ecore_hwfn {
 	u32				rdma_prs_search_reg;
 
 	/* Array of sb_info of all status blocks */
-	struct ecore_sb_info            *sbs_info[MAX_SB_PER_PF_MIMD];
-	u16                             num_sbs;
+	struct ecore_sb_info		*sbs_info[MAX_SB_PER_PF_MIMD];
+	u16				num_sbs;
 
 	struct ecore_cxt_mngr		*p_cxt_mngr;
 
@@ -608,7 +608,7 @@ struct ecore_dev {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
-	void                            *dp_ctx;
+	void				*dp_ctx;
 
 	u8				type;
 #define ECORE_DEV_TYPE_BB	(0 << 0)
@@ -816,7 +816,7 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 #define PQ_FLAGS_MCOS	(1 << 1)
 #define PQ_FLAGS_LB	(1 << 2)
 #define PQ_FLAGS_OOO	(1 << 3)
-#define PQ_FLAGS_ACK    (1 << 4)
+#define PQ_FLAGS_ACK	(1 << 4)
 #define PQ_FLAGS_OFLD	(1 << 5)
 #define PQ_FLAGS_VFS	(1 << 6)
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 969dd5a..28909fb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -586,14 +586,14 @@ struct public_port {
 	u32 link_status;
 #define LINK_STATUS_LINK_UP				0x00000001
 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001e
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD			(1 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD			(2 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_10G			(3 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_20G			(4 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_40G			(5 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_50G			(6 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_100G			(7 << 1)
-#define LINK_STATUS_SPEED_AND_DUPLEX_25G			(8 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(1 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(2 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_10G		(3 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_20G		(4 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_40G		(5 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_50G		(6 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_100G		(7 << 1)
+#define LINK_STATUS_SPEED_AND_DUPLEX_25G		(8 << 1)
 #define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
 #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
 #define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
@@ -607,10 +607,10 @@ struct public_port {
 #define LINK_STATUS_LINK_PARTNER_100G_CAPABLE		0x00008000
 #define LINK_STATUS_LINK_PARTNER_25G_CAPABLE		0x00010000
 #define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
-#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE		(0 << 18)
-#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE		(1 << 18)
-#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE		(2 << 18)
-#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE			(3 << 18)
+#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0 << 18)
+#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1 << 18)
+#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2 << 18)
+#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3 << 18)
 #define LINK_STATUS_SFP_TX_FAULT			0x00100000
 #define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00200000
 #define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00400000
@@ -619,9 +619,9 @@ struct public_port {
 #define LINK_STATUS_MAC_REMOTE_FAULT			0x02000000
 #define LINK_STATUS_UNSUPPORTED_SPD_REQ			0x04000000
 #define LINK_STATUS_FEC_MODE_MASK			0x38000000
-#define LINK_STATUS_FEC_MODE_NONE				(0 << 27)
-#define LINK_STATUS_FEC_MODE_FIRECODE_CL74			(1 << 27)
-#define LINK_STATUS_FEC_MODE_RS_CL91				(2 << 27)
+#define LINK_STATUS_FEC_MODE_NONE			(0 << 27)
+#define LINK_STATUS_FEC_MODE_FIRECODE_CL74		(1 << 27)
+#define LINK_STATUS_FEC_MODE_RS_CL91			(2 << 27)
 #define LINK_STATUS_EXT_PHY_LINK_UP			0x40000000
 
 	u32 link_status1;
@@ -762,23 +762,23 @@ struct public_port {
 	 *          When 1'b1 those bits contains a value times 16 microseconds.
 	 */
 	u32 eee_status;
-	#define EEE_TIMER_MASK		0x000fffff
-	#define EEE_ADV_STATUS_MASK	0x00f00000
-		#define EEE_1G_ADV	(1 << 1)
-		#define EEE_10G_ADV	(1 << 2)
-	#define EEE_ADV_STATUS_SHIFT	20
-	#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
-	#define EEE_LP_ADV_STATUS_SHIFT	24
-	#define EEE_REQUESTED_BIT	0x10000000
-	#define EEE_LPI_REQUESTED_BIT	0x20000000
-	#define EEE_ACTIVE_BIT		0x40000000
-	#define EEE_TIME_OUTPUT_BIT	0x80000000
+#define EEE_TIMER_MASK		0x000fffff
+#define EEE_ADV_STATUS_MASK	0x00f00000
+#define EEE_1G_ADV	(1 << 1)
+#define EEE_10G_ADV	(1 << 2)
+#define EEE_ADV_STATUS_SHIFT	20
+#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
+#define EEE_LP_ADV_STATUS_SHIFT	24
+#define EEE_REQUESTED_BIT	0x10000000
+#define EEE_LPI_REQUESTED_BIT	0x20000000
+#define EEE_ACTIVE_BIT		0x40000000
+#define EEE_TIME_OUTPUT_BIT	0x80000000
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
-	#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
-	#define EEE_REMOTE_TW_TX_SHIFT	0
-	#define EEE_REMOTE_TW_RX_MASK	0xffff0000
-	#define EEE_REMOTE_TW_RX_SHIFT	16
+#define EEE_REMOTE_TW_TX_MASK	0x0000ffff
+#define EEE_REMOTE_TW_TX_SHIFT	0
+#define EEE_REMOTE_TW_RX_MASK	0xffff0000
+#define EEE_REMOTE_TW_RX_SHIFT	16
 };
 
 /**************************************/
@@ -1157,15 +1157,17 @@ struct public_drv_mb {
  * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
  */
 #define DRV_MSG_CODE_GET_VMAC                   0x00120000
-	#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
-	#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
-	#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
+#define DRV_MSG_CODE_VMAC_TYPE_SHIFT            4
+#define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
+#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
+#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
+#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
 /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
 #define DRV_MSG_CODE_GET_STATS                  0x00130000
-	#define DRV_MSG_CODE_STATS_TYPE_LAN             1
-	#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
-	#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
-	#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
+#define DRV_MSG_CODE_STATS_TYPE_LAN             1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
 /* Host shall provide buffer and size for MFW  */
 #define DRV_MSG_CODE_PMD_DIAG_DUMP		0x00140000
 /* Host shall provide buffer and size for MFW  */
@@ -1193,8 +1195,8 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_MASK_PARITIES		0x001a0000
 /* param[0] - Simulate fan failure,  param[1] - simulate over temp. */
 #define DRV_MSG_CODE_INDUCE_FAILURE		0x001b0000
-	#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
-	#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
+#define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
+#define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_READ			0x001c0000
 /* Param: [0:15] - gpio number, [16:31] - gpio value */
@@ -1215,50 +1217,50 @@ struct public_drv_mb {
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
-	/* request resource ownership with default aging */
-	#define RESOURCE_OPCODE_REQ			1
-	/* request resource ownership without aging */
-	#define RESOURCE_OPCODE_REQ_WO_AGING		2
-	/* request resource ownership with specific aging timer (in seconds) */
-	#define RESOURCE_OPCODE_REQ_W_AGING		3
-	#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
-	/* force resource release */
-	#define RESOURCE_OPCODE_FORCE_RELEASE		5
-	/* resource is free and granted to requester */
-	#define RESOURCE_OPCODE_GNT			1
-	/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
-	 * 16 = MFW, 17 = diag over serial
-	 */
-	#define RESOURCE_OPCODE_BUSY			2
-	/* indicate release request was acknowledged */
-	#define RESOURCE_OPCODE_RELEASED		3
-	/* indicate release request was previously received by other owner */
-	#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
-	/* indicate wrong owner during release */
-	#define RESOURCE_OPCODE_WRONG_OWNER		5
-	#define RESOURCE_OPCODE_UNKNOWN_CMD		255
-	/* dedicate resource 0 for dump */
-	#define RESOURCE_DUMP				0
+/* request resource ownership with default aging */
+#define RESOURCE_OPCODE_REQ			1
+/* request resource ownership without aging */
+#define RESOURCE_OPCODE_REQ_WO_AGING		2
+/* request resource ownership with specific aging timer (in seconds) */
+#define RESOURCE_OPCODE_REQ_W_AGING		3
+#define RESOURCE_OPCODE_RELEASE			4 /* release resource */
+/* force resource release */
+#define RESOURCE_OPCODE_FORCE_RELEASE		5
+/* resource is free and granted to requester */
+#define RESOURCE_OPCODE_GNT			1
+/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
+ * 16 = MFW, 17 = diag over serial
+ */
+#define RESOURCE_OPCODE_BUSY			2
+/* indicate release request was acknowledged */
+#define RESOURCE_OPCODE_RELEASED		3
+/* indicate release request was previously received by other owner */
+#define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
+/* indicate wrong owner during release */
+#define RESOURCE_OPCODE_WRONG_OWNER		5
+#define RESOURCE_OPCODE_UNKNOWN_CMD		255
+/* dedicate resource 0 for dump */
+#define RESOURCE_DUMP				0
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-	#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
-	/* acknowledge reception of error indication */
-	#define DRV_MSG_CODE_MDUMP_ACK			0x01
-	/* set epoc and personality as follow: drv_data[3:0] - epoch,
-	 * drv_data[7:4] - personality
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
-	/* trigger crash dump procedure */
-	#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
-	/* Request valid logs and config words */
-	#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
-	/* Set triggers mask. drv_mb_param should indicate (bitwise) which
-	 * trigger enabled
-	 */
-	#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
-	/* Clear all logs */
-	#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
+/* acknowledge reception of error indication */
+#define DRV_MSG_CODE_MDUMP_ACK			0x01
+/* set epoch and personality as follows: drv_data[3:0] - epoch,
+ * drv_data[7:4] - personality
+ */
+#define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
+/* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
+/* Request valid logs and config words */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
+/* Set the triggers mask. drv_mb_param should indicate (bitwise) which
+ * triggers are enabled
+ */
+#define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
+/* Clear all logs */
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
 #define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_INFO			0x00270000
@@ -1266,12 +1268,12 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
 /* Value should be placed in union */
 #define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-	#define DRV_MB_PARAM_ADDR_SHIFT			0
-	#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
-	#define DRV_MB_PARAM_DEVAD_SHIFT		16
-	#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-	#define DRV_MB_PARAM_PORT_SHIFT			21
-	#define DRV_MB_PARAM_PORT_MASK			0x00600000
+#define DRV_MB_PARAM_ADDR_SHIFT			0
+#define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
+#define DRV_MB_PARAM_DEVAD_SHIFT		16
+#define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
+#define DRV_MB_PARAM_PORT_SHIFT			21
+#define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
@@ -1510,7 +1512,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
 
-/* mdump related response codes */
+	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
 #define FW_MSG_CODE_MDUMP_ALLOC_FAILED		0x00020000
 #define FW_MSG_CODE_MDUMP_INVALID_CMD		0x00030000
-- 
1.7.10.3


* [PATCH v5 34/62] net/qede/base: prevent transmitter stuck condition
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (33 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 33/62] net/qede/base: formatting changes Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 35/62] net/qede/base: add mask/shift defines for resource command Rasesh Mody
                               ` (27 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Configure the out-of-order (OOO) TC properly to prevent a transmitter
stuck condition due to credit underruns.

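A condensed sketch of the resulting selection logic (qm_pick_ooo_tc is a
hypothetical helper for illustration; the names are taken from the
ecore_dev.c hunk below): the OOO TC is taken from DCBX when the MFW
provides one, with a chip-dependent fallback otherwise.

    /* Sketch: default selection of the TCP OOO traffic class */
    static void qm_pick_ooo_tc(struct ecore_qm_info *qm_info, bool four_port)
    {
            /* qm_info->ooo_tc is set from the DCBX_OOO_TC field when the
             * MFW indicates one; otherwise fall back to the fixed defaults.
             */
            if (!qm_info->ooo_tc)
                    qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
                                                  DCBX_TCP_OOO_TC;
    }
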
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    4 +---
 drivers/net/qede/base/ecore_dcbx.c |    6 ++----
 drivers/net/qede/base/ecore_dev.c  |   19 ++++++++++++++-----
 drivers/net/qede/base/mcp_public.h |   12 ++++++++----
 4 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 479a991..c9b1b5a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -358,9 +358,6 @@ struct ecore_hw_info {
 
 	u8 num_active_tc;
 
-	/* Traffic class used for tcp out of order traffic */
-	u8 ooo_tc;
-
 	/* The traffic class used by PF for it's offloaded protocol */
 	u8 offload_tc;
 
@@ -441,6 +438,7 @@ struct ecore_qm_info {
 	u16			num_vf_pqs;
 	u8			num_vports;
 	u8			max_phys_tcs_per_port;
+	u8			ooo_tc;
 	bool			pf_rl_en;
 	bool			pf_wfq_en;
 	bool			vport_rl_en;
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 102774d..0e11927 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -129,11 +129,8 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality) {
+	if (p_hwfn->hw_info.personality == personality)
 		p_hwfn->hw_info.offload_tc = tc;
-		if (personality == ECORE_PCI_ISCSI)
-			p_hwfn->hw_info.ooo_tc = DCBX_ISCSI_OOO_TC;
-	}
 }
 
 /* Update app protocol data and hw_info fields with the TLV info */
@@ -317,6 +314,7 @@ ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
 
 	p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
 						    DCBX_ETS_MAX_TCS);
+	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
 	data.pf_id = p_hwfn->rel_pf_id;
 	data.dcbx_enabled = !!dcbx_version;
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 21fec58..0840d49 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -291,6 +291,7 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
 static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	bool four_port;
 
 	/* pq and vport bases for this PF */
 	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
@@ -300,10 +301,19 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 	qm_info->vport_rl_en = 1;
 	qm_info->vport_wfq_en = 1;
 
+	/* TC config is different for AH 4 port */
+	four_port = p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2;
+
 	/* in AH 4 port we have fewer TCs per port */
-	qm_info->max_phys_tcs_per_port =
-		p_hwfn->p_dev->num_ports_in_engines == MAX_NUM_PORTS_K2 ?
-			NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
+	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 :
+						     NUM_OF_PHYS_TCS;
+
+	/* unless the MFW indicated a value, ooo_tc defaults to 3 for AH in
+	 * 4-port mode and to 4 otherwise
+	 */
+	if (!qm_info->ooo_tc)
+		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
+					      DCBX_TCP_OOO_TC;
 }
 
 /* initialize qm vport params */
@@ -532,8 +542,7 @@ static void ecore_init_qm_ooo_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OOO, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, DCBX_ISCSI_OOO_TC,
-			 PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_pq(p_hwfn, qm_info, qm_info->ooo_tc, PQ_INIT_SHARE_VPORT);
 }
 
 static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 28909fb..bd34557 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -294,16 +294,20 @@ struct dcbx_ets_feature {
 #define DCBX_ETS_CBS_SHIFT                      3
 #define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
 #define DCBX_ETS_MAX_TCS_SHIFT                  4
-#define DCBX_ISCSI_OOO_TC_MASK			0x00000f00
-#define DCBX_ISCSI_OOO_TC_SHIFT                 8
+#define DCBX_OOO_TC_MASK                        0x00000f00
+#define DCBX_OOO_TC_SHIFT                       8
 /* Entries in tc table are orginized that the left most is pri 0, right most is
  * prio 7
  */
 
 	u32  pri_tc_tbl[1];
-#define DCBX_ISCSI_OOO_TC			(4)
+/* Fixed TCP OOO TC usage is deprecated and used only for driver backward
+ * compatibility
+ */
+#define DCBX_TCP_OOO_TC				(4)
+#define DCBX_TCP_OOO_K2_4PORT_TC		(3)
 
-#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_ISCSI_OOO_TC + 1)
+#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_TCP_OOO_TC + 1)
 #define DCBX_CEE_STRICT_PRIORITY		0xf
 /* Entries in tc table are orginized that the left most is pri 0, right most is
  * prio 7
-- 
1.7.10.3


* [PATCH v5 35/62] net/qede/base: add mask/shift defines for resource command
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (34 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 34/62] net/qede/base: prevent transmitter stuck condition Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 36/62] net/qede/base: add API for using MFW resource lock Rasesh Mody
                               ` (26 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add several mask/shift defines for the resource command

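For illustration only (resc_num and age_sec are hypothetical locals here;
actual callers land in the next patch), a request param for
DRV_MSG_CODE_RESOURCE_CMD could be composed from these defines as:

    u32 param = 0;

    /* resource number in bits [4:0], opcode in [7:5], age in [15:8] */
    param |= (resc_num << RESOURCE_CMD_REQ_RESC_SHIFT) &
             RESOURCE_CMD_REQ_RESC_MASK;
    param |= (RESOURCE_OPCODE_REQ_W_AGING << RESOURCE_CMD_REQ_OPCODE_SHIFT) &
             RESOURCE_CMD_REQ_OPCODE_MASK;
    param |= (age_sec << RESOURCE_CMD_REQ_AGE_SHIFT) &
             RESOURCE_CMD_REQ_AGE_MASK;
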
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index bd34557..1b1ecd2 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1217,10 +1217,16 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_TIMESTAMP                  0x00210000
 /* This is an empty mailbox just return OK*/
 #define DRV_MSG_CODE_EMPTY_MB			0x00220000
+
 /* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
  * param[15:8] - age
  */
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
+
+#define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
+#define RESOURCE_CMD_REQ_RESC_SHIFT		0
+#define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
+#define RESOURCE_CMD_REQ_OPCODE_SHIFT		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
 /* request resource ownership without aging */
@@ -1230,6 +1236,13 @@ struct public_drv_mb {
 #define RESOURCE_OPCODE_RELEASE			4 /* release resource */
 /* force resource release */
 #define RESOURCE_OPCODE_FORCE_RELEASE		5
+#define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
+#define RESOURCE_CMD_REQ_AGE_SHIFT		8
+
+#define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
+#define RESOURCE_CMD_RSP_OWNER_SHIFT		0
+#define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
+#define RESOURCE_CMD_RSP_OPCODE_SHIFT		8
 /* resource is free and granted to requester */
 #define RESOURCE_OPCODE_GNT			1
 /* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
@@ -1243,8 +1256,10 @@ struct public_drv_mb {
 /* indicate wrong owner during release */
 #define RESOURCE_OPCODE_WRONG_OWNER		5
 #define RESOURCE_OPCODE_UNKNOWN_CMD		255
+
 /* dedicate resource 0 for dump */
 #define RESOURCE_DUMP				0
+
 #define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
 /* Send crash dump commands with param[3:0] - opcode */
 #define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-- 
1.7.10.3


* [PATCH v5 36/62] net/qede/base: add API for using MFW resource lock
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (35 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 35/62] net/qede/base: add mask/shift defines for resource command Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 37/62] net/qede/base: remove clock slowdown option Rasesh Mody
                               ` (25 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add base driver API for using the Management FW resource lock

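A hypothetical caller (resc_num is an assumed resource number in the
0..31 range; error handling trimmed) would use the new API roughly as:

    bool granted = false, released = false;
    u8 owner;
    enum _ecore_status_t rc;

    rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, resc_num,
                             ECORE_MCP_RESC_LOCK_TO_DEFAULT,
                             &granted, &owner);
    if (rc != ECORE_SUCCESS || !granted)
            return rc;      /* resource is held by `owner' */

    /* ... section serialized across PFs by the MFW ... */

    rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, resc_num,
                               false /* don't force */, &released);
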
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    9 +++
 drivers/net/qede/base/ecore_dcbx.h |    3 -
 drivers/net/qede/base/ecore_mcp.c  |  143 ++++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_mcp.h  |   41 +++++++++++
 4 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index c9b1b5a..acf2244 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -86,6 +86,15 @@ do {									\
 	(((value) >> (name##_SHIFT)) & name##_MASK)
 #endif
 
+#define ECORE_MFW_GET_FIELD(name, field)				\
+	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
+#define ECORE_MFW_SET_FIELD(name, field, value)				\
+do {									\
+	(name) &= ~(field ## _MASK);					\
+	(name) |= (((value) << (field ## _SHIFT)) & (field ## _MASK));	\
+} while (0)
+
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 {
 	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 2ce4465..0830014 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -17,9 +17,6 @@
 #include "ecore_hsi_common.h"
 #include "ecore_dcbx_api.h"
 
-#define ECORE_MFW_GET_FIELD(name, field) \
-	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
-
 struct ecore_dcbx_info {
 	struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2b9c819..30cb76e 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2502,3 +2502,146 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR, 0,
 			     &mcp_resp, &mcp_param);
 }
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+			   p_mcp_resp, p_mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* A zero response implies that the resource command is not supported */
+	if (!*p_mcp_resp)
+		return ECORE_NOTIMPL;
+
+	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
+		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
+
+		DP_NOTICE(p_hwfn, false,
+			  "The resource command is unknown to the MFW [param 0x%08x, opcode %d]\n",
+			  param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	switch (timeout) {
+	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
+		opcode = RESOURCE_OPCODE_REQ;
+		timeout = 0;
+		break;
+	case ECORE_MCP_RESC_LOCK_TO_NONE:
+		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
+		timeout = 0;
+		break;
+	default:
+		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		break;
+	}
+
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
+		   param, timeout, opcode, resource_num);
+
+	/* Attempt to acquire the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
+		   mcp_param, opcode, *p_owner);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_GNT:
+		*p_granted = true;
+		break;
+	case RESOURCE_OPCODE_BUSY:
+		*p_granted = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource lock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released)
+{
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode;
+	enum _ecore_status_t rc;
+
+	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
+		       : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
+		   param, opcode, resource_num);
+
+	/* Attempt to release the resource */
+	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
+				    &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Analyze the response */
+	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource unlock response: mcp_param 0x%08x [opcode %d]\n",
+		   mcp_param, opcode);
+
+	switch (opcode) {
+	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
+		DP_INFO(p_hwfn,
+			"Resource unlock request for an already released resource [resc_num %d]\n",
+			resource_num);
+		/* Fallthrough */
+	case RESOURCE_OPCODE_RELEASED:
+		*p_released = true;
+		break;
+	case RESOURCE_OPCODE_WRONG_OWNER:
+		*p_released = false;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected opcode in resource unlock response [mcp_param 0x%08x, opcode %d]\n",
+			  mcp_param, opcode);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 0708923..7a81516 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -361,4 +361,45 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
+#define ECORE_MCP_RESC_LOCK_TO_NONE	255
+
+/**
+ * @brief Acquires MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num - valid values are 0..31
+ *  @param timeout - lock timeout value in seconds
+ *                   (1..254, '0' - default value, '255' - no timeout).
+ *  @param p_granted - will be filled as true if the resource is free and
+ *                     granted, or false if it is busy.
+ *  @param p_owner - A pointer to a variable to be filled with the resource
+ *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u8 resource_num, u8 timeout,
+					 bool *p_granted, u8 *p_owner);
+
+/**
+ * @brief Releases MFW generic resource lock
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param resource_num
+ *  @param force - allows releasing a resource even if it belongs to another PF
+ *  @param p_released - will be filled as true if the resource is released (or
+ *			has been already released), and false if the resource is
+ *			acquired by another PF and the `force' flag was not set.
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u8 resource_num, bool force,
+					   bool *p_released);
+
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3


* [PATCH v5 37/62] net/qede/base: remove clock slowdown option
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (36 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 36/62] net/qede/base: add API for using MFW resource lock Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 38/62] net/qede/base: add new image types Rasesh Mody
                               ` (24 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Remove clock slowdown NVM config option as this is not supported
for current chipsets.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 4202337..4e58835 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -72,10 +72,12 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
 		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
 		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
-		#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \
+								0x80000000
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \
+								0x0
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1
 	u32 engineering_change[3]; /* 0x4 */
 	u32 manufacturing_id; /* 0x10 */
 	u32 serial_number[4]; /* 0x14 */
-- 
1.7.10.3


* [PATCH v5 38/62] net/qede/base: add new image types
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (37 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 37/62] net/qede/base: remove clock slowdown option Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 39/62] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
                               ` (23 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new image types - RECOVERY and PK (Public Key) - as part of the
second phase of NVRAM security support.

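A sketch of how a mailbox caller might react to the new response codes
(assuming the FW_MSG_CODE_MASK filtering already used in the mailbox
path):

    switch (mcp_resp & FW_MSG_CODE_MASK) {
    case FW_MSG_CODE_NVM_FAILED_CALC_HASH:
    case FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING:
    case FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY:
            /* the NVM image fails the security checks - reject it */
            return ECORE_INVAL;
    case FW_MSG_CODE_RECOVERY_MODE:
            /* MFW is in recovery mode; continue with reduced features */
            break;
    default:
            break;
    }
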
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 1b1ecd2..d3cbc96 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1502,6 +1502,10 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK	0x00400000
 /* MFW reject "mcp reset" command if one of the drivers is up */
 #define FW_MSG_CODE_MCP_RESET_REJECT		0x00600000
+#define FW_MSG_CODE_NVM_FAILED_CALC_HASH	0x00310000
+#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING	0x00320000
+#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY	0x00330000
+
 #define FW_MSG_CODE_PHY_OK			0x00110000
 #define FW_MSG_CODE_PHY_ERROR			0x00120000
 #define FW_MSG_CODE_SET_SECURE_MODE_ERROR	0x00130000
@@ -1530,6 +1534,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE	0x00710000
 #define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
 #define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
+#define FW_MSG_CODE_RECOVERY_MODE		0x00740000
 
 	/* mdump related response codes */
 #define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
-- 
1.7.10.3


* [PATCH v5 39/62] net/qede/base: use L2-handles for RSS configuration
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (38 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 38/62] net/qede/base: add new image types Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 40/62] net/qede/base: change valloc to vzalloc Rasesh Mody
                               ` (22 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the RSS configuration to use L2 handles instead of queue IDs.

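In short, callers now pass opaque Rx queue handles and ecore resolves
the absolute queue IDs internally; a condensed sketch of the PMD side
(mirroring the qede_ethdev.c hunk below):

    /* rss_ind_table entries are now L2 handles, not queue ids */
    for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
            idx = qdev->rss_ind_table[i];
            rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
    }
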
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_l2.c     |   48 ++++++++++++++++++-------
 drivers/net/qede/base/ecore_l2.h     |    2 ++
 drivers/net/qede/base/ecore_l2_api.h |    4 ++-
 drivers/net/qede/base/ecore_sriov.c  |   66 +++++++++++++++++++++-------------
 drivers/net/qede/base/ecore_vf.c     |   13 +++++--
 drivers/net/qede/qede_ethdev.c       |   19 ++++++----
 6 files changed, 105 insertions(+), 47 deletions(-)

diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 352620a..2635213 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -59,6 +59,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->cid = cid;
 	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
+	p_cid->p_owner = p_hwfn;
 
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
@@ -267,10 +268,9 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 			  struct vport_update_ramrod_data *p_ramrod,
 			  struct ecore_rss_params *p_rss)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct eth_vport_rss_config *p_config;
-	u16 abs_l2_queue = 0;
-	int i;
+	int i, table_size;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (!p_rss) {
 		p_ramrod->common.update_rss_flg = 0;
@@ -324,16 +324,40 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 		   p_config->capabilities,
 		   p_config->update_rss_ind_table, p_config->update_rss_key);
 
-	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		rc = ecore_fw_l2_queue(p_hwfn,
-				       p_rss->rss_ind_table[i],
-				       &abs_l2_queue);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+	table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE,
+				1 << p_config->tbl_size);
+	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_queue = p_rss->rss_ind_table[i];
 
-		p_config->indirection_table[i] = OSAL_CPU_TO_LE16(abs_l2_queue);
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "i= %d, queue = %d\n",
-			   i, p_config->indirection_table[i]);
+		if (!p_queue)
+			return ECORE_INVAL;
+
+		p_config->indirection_table[i] =
+				OSAL_CPU_TO_LE16(p_queue->abs.queue_id);
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+		   "Configured RSS indirection table [%d entries]:\n",
+		   table_size);
+	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i += 0x10) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "%04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x %04x\n",
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 1]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 2]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 3]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 4]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 5]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 6]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
+			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
+			 OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
 	}
 
 	for (i = 0; i < 10; i++)
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index c136389..4b0ccb4 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -36,6 +36,8 @@ struct ecore_queue_cid {
 
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
+
+	struct ecore_hwfn *p_owner;
 };
 
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index af316d3..5a7db76 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -59,7 +59,9 @@ struct ecore_rss_params {
 	u8 update_rss_key;
 	u8 rss_caps;
 	u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
-	u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
+
+	/* Indirection table consists of Rx queue handles */
+	void *rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	u32 rss_key[ECORE_RSS_KEY_SIZE];
 };
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 6cec7b2..280c992 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2704,12 +2704,14 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			      struct ecore_vf_info *vf,
 			      struct ecore_sp_vport_update_params *p_data,
 			      struct ecore_rss_params *p_rss,
-			      struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+			      struct ecore_iov_vf_mbx *p_mbx,
+			      u16 *tlvs_mask, u16 *tlvs_accepted)
 {
 	struct vfpf_vport_update_rss_tlv *p_rss_tlv;
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
-	u16 i, q_idx, max_q_idx;
+	bool b_reject = false;
 	u16 table_size;
+	u16 i, q_idx;
 
 	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
 	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
@@ -2737,36 +2739,38 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 	p_rss->rss_eng_id = vf->relative_vf_id + 1;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
-	OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
-		    sizeof(p_rss->rss_ind_table));
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
 		    sizeof(p_rss->rss_key));
 
 	table_size = OSAL_MIN_T(u16, OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
 				(1 << p_rss_tlv->rss_table_size_log));
 
-	max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
-
 	for (i = 0; i < table_size; i++) {
-		u16 index = vf->vf_queues[0].fw_rx_qid;
+		q_idx = p_rss_tlv->rss_ind_table[i];
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
 
-		q_idx = p_rss->rss_ind_table[i];
-		if (q_idx >= max_q_idx)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d,"
-				  " rxq is out of range\n",
-				  i, q_idx);
-		else if (!vf->vf_queues[q_idx].p_rx_cid)
-			DP_NOTICE(p_hwfn, true,
-				  "rss_ind_table[%d] = %d, rxq is not active\n",
-				  i, q_idx);
-		else
-			index = vf->vf_queues[q_idx].fw_rx_qid;
-		p_rss->rss_ind_table[i] = index;
+		if (!vf->vf_queues[q_idx].p_rx_cid) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
+				   vf->relative_vf_id, q_idx);
+			b_reject = true;
+			goto out;
+		}
+
+		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
 	}
 
 	p_data->rss_params = p_rss;
+out:
 	*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+	if (!b_reject)
+		*tlvs_accepted |= 1 << ECORE_IOV_VP_UPDATE_RSS;
 }
 
 static void
@@ -2822,11 +2826,11 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  struct ecore_vf_info *vf)
 {
+	struct ecore_rss_params *p_rss_params = OSAL_NULL;
 	struct ecore_sp_vport_update_params params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct ecore_sge_tpa_params sge_tpa_params;
 	u16 tlvs_mask = 0, tlvs_accepted = 0;
-	struct ecore_rss_params rss_params;
 	u8 status = PFVF_STATUS_SUCCESS;
 	u16 length;
 	enum _ecore_status_t rc;
@@ -2841,6 +2845,12 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
+	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	if (p_rss_params == OSAL_NULL) {
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.opaque_fid = vf->opaque_fid;
 	params.vport_id = vf->vport_id;
@@ -2854,19 +2864,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
-	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
-				      mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
 					  &sge_tpa_params, mbx, &tlvs_mask);
 
+	tlvs_accepted = tlvs_mask;
+
+	/* Some of the extended TLVs need to be validated first; in that case,
+	 * they can update the mask without updating the accepted bitmap, so
+	 * that the PF can tell the VF it has rejected the request.
+	 */
+	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, p_rss_params,
+				      mbx, &tlvs_mask, &tlvs_accepted);
+
 	/* Just log a message if there is no single extended tlv in buffer.
 	 * When all features of vport update ramrod would be requested by VF
 	 * as extended TLVs in buffer then an error can be returned in response
 	 * if there is no extended TLV present in buffer.
 	 */
-	tlvs_accepted = tlvs_mask;
-
 	if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
 				     &params, &tlvs_accepted) !=
 	    ECORE_SUCCESS) {
@@ -2894,6 +2909,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		status = PFVF_STATUS_FAILURE;
 
 out:
+	OSAL_VFREE(p_hwfn->p_dev, p_rss_params);
 	length = ecore_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
 						    tlvs_mask, tlvs_accepted);
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 3182621..a072a81 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1132,6 +1132,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 	if (p_params->rss_params) {
 		struct ecore_rss_params *rss_params = p_params->rss_params;
 		struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+		int i, table_size;
 
 		size = sizeof(struct vfpf_vport_update_rss_tlv);
 		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -1153,8 +1154,16 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		p_rss_tlv->rss_enable = rss_params->rss_enable;
 		p_rss_tlv->rss_caps = rss_params->rss_caps;
 		p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
-		OSAL_MEMCPY(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
-			    sizeof(rss_params->rss_ind_table));
+
+		table_size = OSAL_MIN_T(int, T_ETH_INDIRECTION_TABLE_SIZE,
+					1 << p_rss_tlv->rss_table_size_log);
+		for (i = 0; i < table_size; i++) {
+			struct ecore_queue_cid *p_queue;
+
+			p_queue = rss_params->rss_ind_table[i];
+			p_rss_tlv->rss_ind_table[i] = p_queue->rel.queue_id;
+		}
+
 		OSAL_MEMCPY(p_rss_tlv->rss_key, rss_params->rss_key,
 			    sizeof(rss_params->rss_key));
 	}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 257e5b2..bd190d0 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1487,11 +1487,11 @@ static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct ecore_sp_vport_update_params vport_update_params;
 	struct ecore_rss_params rss_params;
-	struct ecore_rss_params params;
 	struct ecore_hwfn *p_hwfn;
 	uint32_t *key = (uint32_t *)rss_conf->rss_key;
 	uint64_t hf = rss_conf->rss_hf;
 	uint8_t len = rss_conf->rss_key_len;
+	uint8_t idx;
 	uint8_t i;
 	int rc;
 
@@ -1526,6 +1526,11 @@ static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
 	/* tbl_size has to be set with capabilities */
 	rss_params.rss_table_size_log = 7;
 	vport_update_params.vport_id = 0;
+	/* pass the L2 handles instead of qids */
+	for (i = 0 ; i < ECORE_RSS_IND_TABLE_SIZE ; i++) {
+		idx = qdev->rss_ind_table[i];
+		rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
+	}
 	vport_update_params.rss_params = &rss_params;
 
 	for_each_hwfn(edev, i) {
@@ -1607,14 +1612,18 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = reta_conf[idx].reta[shift];
-			params.rss_ind_table[i] = entry;
+			/* Pass rxq handles to ecore */
+			params.rss_ind_table[i] =
+					qdev->fp_array[entry].rxq->handle;
+			/* Update the local copy for RETA query command */
+			qdev->rss_ind_table[i] = entry;
 		}
 	}
 
 	/* Fix up RETA for CMT mode device */
 	if (edev->num_hwfns > 1)
 		qdev->rss_enable = qed_update_rss_parm_cmt(edev,
-					&params.rss_ind_table[0]);
+					params.rss_ind_table[0]);
 	params.update_rss_ind_table = 1;
 	params.rss_table_size_log = 7;
 	params.update_rss_config = 1;
@@ -1634,10 +1643,6 @@ static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 		}
 	}
 
-	/* Update the local copy for RETA query command */
-	memcpy(qdev->rss_ind_table, params.rss_ind_table,
-	       sizeof(params.rss_ind_table));
-
 	return 0;
 }
 
-- 
1.7.10.3


* [PATCH v5 40/62] net/qede/base: change valloc to vzalloc
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (39 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 39/62] net/qede/base: use L2-handles for RSS configuration Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 41/62] net/qede/base: add support for previous driver unload Rasesh Mody
                               ` (21 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change OSAL_VALLOC() into OSAL_VZALLOC(), which also zeroes the allocated
memory.

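This collapses the common allocate-then-zero pattern at call sites; a
before/after sketch (taken from the ecore_l2.c hunk below):

    /* before */
    p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
    OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));

    /* after */
    p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
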
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    2 +-
 drivers/net/qede/base/ecore_dev.c     |    3 +--
 drivers/net/qede/base/ecore_l2.c      |    3 +--
 drivers/net/qede/base/ecore_mng_tlv.c |    5 ++---
 drivers/net/qede/base/ecore_sriov.c   |    2 +-
 5 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 4c91dc0..052a0cf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -89,7 +89,7 @@ typedef int bool;
 #define OSAL_ALLOC(dev, GFP, size) rte_malloc("qede", size, 0)
 #define OSAL_ZALLOC(dev, GFP, size) rte_zmalloc("qede", size, 0)
 #define OSAL_CALLOC(dev, GFP, num, size) rte_calloc("qede", num, size, 0)
-#define OSAL_VALLOC(dev, size) rte_malloc("qede", size, 0)
+#define OSAL_VZALLOC(dev, size) rte_zmalloc("qede", size, 0)
 #define OSAL_FREE(dev, memory)		  \
 	do {				  \
 		rte_free((void *)memory); \
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0840d49..6d75e60 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3717,13 +3717,12 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	u32 page_cnt = p_chain->page_cnt, size, i;
 
 	size = page_cnt * sizeof(*pp_virt_addr_tbl);
-	pp_virt_addr_tbl = (void **)OSAL_VALLOC(p_dev, size);
+	pp_virt_addr_tbl = (void **)OSAL_VZALLOC(p_dev, size);
 	if (!pp_virt_addr_tbl) {
 		DP_NOTICE(p_dev, true,
 			  "Failed to allocate memory for the chain virtual addresses table\n");
 		return ECORE_NOMEM;
 	}
-	OSAL_MEM_ZERO(pp_virt_addr_tbl, size);
 
 	/* The allocation of the PBL table is done with its full size, since it
 	 * is expected to be successive.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 2635213..4d26e19 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -50,10 +50,9 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_cid));
+	p_cid = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_cid));
 	if (p_cid == OSAL_NULL)
 		return OSAL_NULL;
-	OSAL_MEM_ZERO(p_cid, sizeof(*p_cid));
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index 0065d12..0bf1be8 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1413,11 +1413,10 @@ ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
 	u32 offset;
 	int len;
 
-	p_tlv_data = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
+	p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
 	if (!p_tlv_data)
 		return ECORE_NOMEM;
 
-	OSAL_MEMSET(p_tlv_data, 0, sizeof(*p_tlv_data));
 	if (OSAL_MFW_FILL_TLV_DATA(p_hwfn, tlv_group, p_tlv_data)) {
 		OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
 		return ECORE_INVAL;
@@ -1487,7 +1486,7 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		goto drv_done;
 	}
 
-	p_mfw_buf = (void *)OSAL_VALLOC(p_hwfn->p_dev, size);
+	p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size);
 	if (!p_mfw_buf) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed allocate memory for p_mfw_buf\n");
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 280c992..aab9925 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2845,7 +2845,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	p_rss_params = OSAL_VALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
+	p_rss_params = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_rss_params));
 	if (p_rss_params == OSAL_NULL) {
 		status = PFVF_STATUS_FAILURE;
 		goto out;
-- 
1.7.10.3


* [PATCH v5 41/62] net/qede/base: add support for previous driver unload
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (40 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 40/62] net/qede/base: change valloc to vzalloc Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 42/62] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
                               ` (20 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a new driver/management FW load request sequence to handle the
unload of a previous driver.

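A condensed sketch of the resulting init-path flow (names as in the
ecore_dev.c hunk below):

    struct ecore_load_req_params load_req_params;

    OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
    load_req_params.drv_role = ECORE_DRV_ROLE_OS;
    load_req_params.timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
    load_req_params.avoid_eng_reset = false;

    rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_req_params);
    if (rc != ECORE_SUCCESS)
            return rc;

    /* the load code tells this PF whether it initializes the engine,
     * the port, or only its own function
     */
    load_code = load_req_params.load_code;
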
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 ++
 drivers/net/qede/base/ecore_dev.c     |   43 ++--
 drivers/net/qede/base/ecore_dev_api.h |   30 ++-
 drivers/net/qede/base/ecore_mcp.c     |  369 ++++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.h     |   40 ++--
 drivers/net/qede/base/mcp_public.h    |   56 ++++-
 drivers/net/qede/qede_main.c          |    2 +
 7 files changed, 482 insertions(+), 71 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index acf2244..60a8a6b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,6 +28,19 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
+#define ECORE_MAJOR_VERSION		8
+#define ECORE_MINOR_VERSION		18
+#define ECORE_REVISION_VERSION		7
+#define ECORE_ENGINEERING_VERSION	0
+
+#define ECORE_VERSION							\
+	((ECORE_MAJOR_VERSION << 24) | (ECORE_MINOR_VERSION << 16) |	\
+	 (ECORE_REVISION_VERSION << 8) | ECORE_ENGINEERING_VERSION)
+
+#define STORM_FW_VERSION						\
+	((FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) |	\
+	 (FW_REVISION_VERSION << 8) | FW_ENGINEERING_VERSION)
+
 #define MAX_HWFNS_PER_DEVICE	2
 #define NAME_SIZE 128 /* @DPDK */
 #define ECORE_WFQ_UNIT	100
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 6d75e60..29dd292 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1901,10 +1901,11 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
+	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
+	bool b_default_mtu = true;
+	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
 	if ((p_params->int_mode == ECORE_INT_MODE_MSI) &&
@@ -1943,17 +1944,25 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		/* @@@TBD need to add here:
-		 * Check for fan failure
-		 * Prev_unload
-		 */
-		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt, &load_code);
-		if (rc) {
+		OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
+		load_req_params.drv_role = p_params->is_crash_kernel ?
+					   ECORE_DRV_ROLE_KDUMP :
+					   ECORE_DRV_ROLE_OS;
+		load_req_params.timeout_val = p_params->mfw_timeout_val;
+		load_req_params.avoid_eng_reset = p_params->avoid_eng_reset;
+		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt,
+					&load_req_params);
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_REQ command\n");
+				  "Failed sending a LOAD_REQ command\n");
 			return rc;
 		}
 
+		load_code = load_req_params.load_code;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load request was sent. Load code: 0x%x\n",
+			   load_code);
+
 		/* CQ75580:
 		 * When coming back from hiberbate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
@@ -1966,10 +1975,6 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 */
 		ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "Load request was sent. Resp:0x%x, Load code: 0x%x\n",
-			   rc, load_code);
-
 		/* Only relevant for recovery:
 		 * Clear the indication after the LOAD_REQ command is responded
 		 * by the MFW.
@@ -1988,13 +1993,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		case FW_MSG_CODE_DRV_LOAD_ENGINE:
 			rc = ecore_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
 						  p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_PORT:
 			rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt,
 						p_hwfn->hw_info.hw_mode);
-			if (rc)
+			if (rc != ECORE_SUCCESS)
 				break;
 			/* Fall into */
 		case FW_MSG_CODE_DRV_LOAD_FUNCTION:
@@ -2006,6 +2011,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 					      p_params->allow_npar_tx_switch);
 			break;
 		default:
+			DP_NOTICE(p_hwfn, false,
+				  "Unexpected load code [0x%08x]", load_code);
 			rc = ECORE_NOTIMPL;
 			break;
 		}
@@ -2021,6 +2028,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				       0, &load_code, &param);
 		if (rc != ECORE_SUCCESS)
 			return rc;
+
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
 				  "Failed sending LOAD_DONE command\n");
@@ -2045,10 +2053,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 	if (IS_PF(p_dev)) {
 		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-		drv_mb_param = (FW_MAJOR_VERSION << 24) |
-			       (FW_MINOR_VERSION << 16) |
-			       (FW_REVISION_VERSION << 8) |
-			       (FW_ENGINEERING_VERSION);
+		drv_mb_param = STORM_FW_VERSION;
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
 				   drv_mb_param, &load_code, &param);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 356c5e4..7e90778 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -58,16 +58,38 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev);
 void ecore_resc_setup(struct ecore_dev *p_dev);
 
 struct ecore_hw_init_params {
-	/* tunnelling parameters */
+	/* Tunnelling parameters */
 	struct ecore_tunnel_info *p_tunn;
+
 	bool b_hw_start;
-	/* interrupt mode [msix, inta, etc.] to use */
+
+	/* Interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
-/* npar tx switching to be used for vports configured for tx-switching */
 
+	/* NPAR tx switching to be used for vports configured for tx-switching
+	 */
 	bool allow_npar_tx_switch;
-	/* binary fw data pointer in binary fw file */
+
+	/* Binary fw data pointer in binary fw file */
 	const u8 *bin_fw_data;
+
+	/* Indicates whether the driver is running over a crash kernel.
+	 * As part of the load request, this will be used for providing the
+	 * driver role to the MFW.
+	 * In case of a crash kernel over PDA - this should be set to false.
+	 */
+	bool is_crash_kernel;
+
+	/* The timeout value that the MFW should use when locking the engine for
+	 * the driver load process.
+	 * A value of '0' means the default value, and '255' means no timeout.
+	 */
+	u8 mfw_timeout_val;
+#define ECORE_LOAD_REQ_LOCK_TO_DEFAULT	0
+#define ECORE_LOAD_REQ_LOCK_TO_NONE	255
+
+	/* Avoid engine reset when first PF loads on it */
+	bool avoid_eng_reset;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 30cb76e..6c5b5db 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -518,51 +518,368 @@ static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static bool ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role)
+{
+	return (drv_role == DRV_ROLE_OS &&
+		exist_drv_role == DRV_ROLE_PREBOOT) ||
+	       (drv_role == DRV_ROLE_KDUMP && exist_drv_role == DRV_ROLE_OS);
+}
+
+static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0,
+			   &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send cancel load request, rc = %d\n", rc);
+
+	return rc;
+}
+
+#define CONFIG_ECORE_L2_BITMAP_IDX	(0x1 << 0)
+#define CONFIG_ECORE_SRIOV_BITMAP_IDX	(0x1 << 1)
+#define CONFIG_ECORE_ROCE_BITMAP_IDX	(0x1 << 2)
+#define CONFIG_ECORE_IWARP_BITMAP_IDX	(0x1 << 3)
+#define CONFIG_ECORE_FCOE_BITMAP_IDX	(0x1 << 4)
+#define CONFIG_ECORE_ISCSI_BITMAP_IDX	(0x1 << 5)
+#define CONFIG_ECORE_LL2_BITMAP_IDX	(0x1 << 6)
+
+static u32 ecore_get_config_bitmap(void)
+{
+	u32 config_bitmap = 0x0;
+
+#ifdef CONFIG_ECORE_L2
+	config_bitmap |= CONFIG_ECORE_L2_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_SRIOV
+	config_bitmap |= CONFIG_ECORE_SRIOV_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ROCE
+	config_bitmap |= CONFIG_ECORE_ROCE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_IWARP
+	config_bitmap |= CONFIG_ECORE_IWARP_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_FCOE
+	config_bitmap |= CONFIG_ECORE_FCOE_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_ISCSI
+	config_bitmap |= CONFIG_ECORE_ISCSI_BITMAP_IDX;
+#endif
+#ifdef CONFIG_ECORE_LL2
+	config_bitmap |= CONFIG_ECORE_LL2_BITMAP_IDX;
+#endif
+
+	return config_bitmap;
+}
+
+struct ecore_load_req_in_params {
+	u8 hsi_ver;
+#define ECORE_LOAD_REQ_HSI_VER_DEFAULT	0
+#define ECORE_LOAD_REQ_HSI_VER_1	1
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u8 drv_role;
+	u8 timeout_val;
+	u8 force_cmd;
+	bool avoid_eng_reset;
+};
+
+struct ecore_load_req_out_params {
+	u32 load_code;
+	u32 exist_drv_ver_0;
+	u32 exist_drv_ver_1;
+	u32 exist_fw_ver;
+	u8 exist_drv_role;
+	u8 mfw_hsi_ver;
+	bool drv_exists;
+};
+
+static enum _ecore_status_t
+__ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_load_req_in_params *p_in_params,
+		     struct ecore_load_req_out_params *p_out_params)
+{
+	union drv_union_data union_data_src, union_data_dst;
+	struct ecore_mcp_mb_params mb_params;
+	struct load_req_stc *p_load_req;
+	struct load_rsp_stc *p_load_rsp;
+	u32 hsi_ver;
+	enum _ecore_status_t rc;
+
+	p_load_req = &union_data_src.load_req;
+	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
+	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
+	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
+	p_load_req->fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+			    p_in_params->drv_role);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+			    p_in_params->timeout_val);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+			    p_in_params->force_cmd);
+	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+			    p_in_params->avoid_eng_reset);
+
+	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
+		  DRV_ID_MCP_HSI_VER_CURRENT :
+		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_SHIFT);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
+	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
+	mb_params.p_data_src = &union_data_src;
+	mb_params.p_data_dst = &union_data_dst;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+		   mb_params.param,
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
+		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
+			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
+			   p_load_req->fw_ver, p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_LOCK_TO),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+					       LOAD_REQ_FLAGS0));
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send load request, rc = %d\n", rc);
+		return rc;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load Response: resp 0x%08x\n", mb_params.mcp_resp);
+	p_out_params->load_code = mb_params.mcp_resp;
+
+	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		p_load_rsp = &union_data_dst.load_rsp;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
+			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
+			   p_load_rsp->fw_ver, p_load_rsp->misc0,
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					       LOAD_RSP_FLAGS0));
+
+		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
+		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
+		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_role =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+		p_out_params->mfw_hsi_ver =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+		p_out_params->drv_exists =
+			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+					    LOAD_RSP_FLAGS0) &
+			LOAD_RSP_FLAGS0_DRV_EXISTS;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
+						   enum ecore_drv_role drv_role,
+						   u8 *p_mfw_drv_role)
+{
+	switch (drv_role) {
+	case ECORE_DRV_ROLE_OS:
+		*p_mfw_drv_role = DRV_ROLE_OS;
+		break;
+	case ECORE_DRV_ROLE_KDUMP:
+		*p_mfw_drv_role = DRV_ROLE_KDUMP;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected driver role %d\n", drv_role);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+enum ecore_load_req_force {
+	ECORE_LOAD_REQ_FORCE_NONE,
+	ECORE_LOAD_REQ_FORCE_PF,
+	ECORE_LOAD_REQ_FORCE_ALL,
+};
+
+static enum _ecore_status_t
+ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
+			enum ecore_load_req_force force_cmd,
+			u8 *p_mfw_force_cmd)
+{
+	switch (force_cmd) {
+	case ECORE_LOAD_REQ_FORCE_NONE:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_NONE;
+		break;
+	case ECORE_LOAD_REQ_FORCE_PF:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_PF;
+		break;
+	case ECORE_LOAD_REQ_FORCE_ALL:
+		*p_mfw_force_cmd = LOAD_REQ_FORCE_ALL;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected force value %d\n", force_cmd);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code)
+					struct ecore_load_req_params *p_params)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	struct ecore_mcp_mb_params mb_params;
+	struct ecore_load_req_out_params out_params;
+	struct ecore_load_req_in_params in_params;
+	u8 mfw_drv_role, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, p_load_code);
+		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
 		return ECORE_SUCCESS;
 	}
 #endif
 
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
-	mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
-			  p_dev->drv_type;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
+	in_params.drv_ver_0 = ECORE_VERSION;
+	in_params.drv_ver_1 = ecore_get_config_bitmap();
+	in_params.fw_ver = STORM_FW_VERSION;
+	rc = ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	/* if mcp fails to respond we must abort */
-	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+	in_params.drv_role = mfw_drv_role;
+	in_params.timeout_val = p_params->timeout_val;
+	rc = ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
+				     &mfw_force_cmd);
+	if (rc != ECORE_SUCCESS)
 		return rc;
-	}
 
-	*p_load_code = mb_params.mcp_resp;
+	in_params.force_cmd = mfw_force_cmd;
+	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
-	/* If MFW refused (e.g. other port is in diagnostic mode) we
-	 * must abort. This can happen in the following cases:
-	 * - Other port is in diagnostic mode
-	 * - Previously loaded function on the engine is not compliant with
-	 *   the requester.
-	 * - MFW cannot cope with the requester's DRV_MFW_HSI_VERSION.
-	 *      -
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params, &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* First handle cases where another load request should/might be sent:
+	 * - MFW expects the old interface [HSI version = 1]
+	 * - MFW responds that a force load request is required
 	 */
-	if (!(*p_load_code) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_PDA) ||
-	    ((*p_load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG)) {
-		DP_ERR(p_hwfn, "MCP refused load request, aborting\n");
+	if (out_params.load_code == FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
+		DP_INFO(p_hwfn,
+			"MFW refused a load request due to HSI > 1. Resending with HSI = 1.\n");
+
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_1;
+		OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+		rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+					  &out_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	} else if (out_params.load_code ==
+		   FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE) {
+		/* The previous load request set the mailbox blocking */
+		p_hwfn->mcp_info->block_mb_sending = false;
+
+		if (ecore_mcp_can_force_load(in_params.drv_role,
+					     out_params.exist_drv_role)) {
+			DP_INFO(p_hwfn,
+				"A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Sending a force load request.\n",
+				out_params.exist_drv_role,
+				out_params.exist_fw_ver,
+				out_params.exist_drv_ver_0,
+				out_params.exist_drv_ver_1);
+
+			rc = ecore_get_mfw_force_cmd(p_hwfn,
+						     ECORE_LOAD_REQ_FORCE_ALL,
+						     &mfw_force_cmd);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+
+			in_params.force_cmd = mfw_force_cmd;
+			OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+			rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
+						  &out_params);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		} else {
+			DP_NOTICE(p_hwfn, false,
+				  "A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Aborting to avoid disrupting active PFs.\n",
+				  out_params.exist_drv_role,
+				  out_params.exist_fw_ver,
+				  out_params.exist_drv_ver_0,
+				  out_params.exist_drv_ver_1);
+
+			ecore_mcp_cancel_load_req(p_hwfn, p_ptt);
+			return ECORE_BUSY;
+		}
+	}
+
+	/* Now handle the other types of responses.
+	 * The "REFUSED_HSI_1" and "REFUSED_REQUIRES_FORCE" responses are not
+	 * expected here after the additional revised load requests were sent.
+	 */
+	switch (out_params.load_code) {
+	case FW_MSG_CODE_DRV_LOAD_ENGINE:
+	case FW_MSG_CODE_DRV_LOAD_PORT:
+	case FW_MSG_CODE_DRV_LOAD_FUNCTION:
+		if (out_params.mfw_hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
+		    out_params.drv_exists) {
+			/* The role and fw/driver version match, but the PF is
+			 * already loaded and has not been unloaded gracefully.
+			 * This is unexpected since a quasi-FLR request was
+			 * previously sent as part of ecore_hw_prepare().
+			 */
+			DP_NOTICE(p_hwfn, false,
+				  "PF is already loaded - shouldn't have got here since a quasi-FLR request was previously sent!\n");
+			return ECORE_INVAL;
+		}
+		break;
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_PDA:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_HSI:
+	case FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT:
+		DP_NOTICE(p_hwfn, false,
+			  "MFW refused a load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
 		return ECORE_BUSY;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "Unexpected response to load request [resp 0x%08x]. Aborting.\n",
+			  out_params.load_code);
+		break;
 	}
 
+	p_params->load_code = out_params.load_code;
+
 	return ECORE_SUCCESS;
 }
 
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 7a81516..4138a12 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -136,32 +136,36 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn - hw function
  * @param p_ptt - PTT required for register access
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation
- * was successul.
+ * was successful.
  */
 enum _ecore_status_t ecore_issue_pulse(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt);
 
+enum ecore_drv_role {
+	ECORE_DRV_ROLE_OS,
+	ECORE_DRV_ROLE_KDUMP,
+};
+
+struct ecore_load_req_params {
+	enum ecore_drv_role drv_role;
+	u8 timeout_val; /* 1..254, '0' - default value, '255' - no timeout */
+	bool avoid_eng_reset;
+	u32 load_code;
+};
+
 /**
- * @brief Sends a LOAD_REQ to the MFW, and in case operation
- *        succeed, returns whether this PF is the first on the
- *        chip/engine/port or function. This function should be
- *        called when driver is ready to accept MFW events after
- *        Storms initializations are done.
- *
- * @param p_hwfn       - hw function
- * @param p_ptt        - PTT required for register access
- * @param p_load_code  - The MCP response param containing one
- *      of the following:
- *      FW_MSG_CODE_DRV_LOAD_ENGINE
- *      FW_MSG_CODE_DRV_LOAD_PORT
- *      FW_MSG_CODE_DRV_LOAD_FUNCTION
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successul.
- *      ECORE_BUSY - Operation failed
+ * @brief Sends a LOAD_REQ to the MFW, and in case the operation succeeds,
+ *        returns whether this PF is the first on the engine/port or function.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_params
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
  */
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
-					u32 *p_load_code);
+					struct ecore_load_req_params *p_params);
 
 /**
  * @brief Read the MFW mailbox into Current buffer.
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index d3cbc96..145f5ca 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -878,9 +878,11 @@ struct public_func {
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
 #define DRV_ID_PDA_COMP_VER_SHIFT	0
 
+#define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
 #define DRV_ID_MCP_HSI_VER_SHIFT	16
-#define DRV_ID_MCP_HSI_VER_CURRENT	(1 << DRV_ID_MCP_HSI_VER_SHIFT)
+#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
+					 DRV_ID_MCP_HSI_VER_SHIFT)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
 #define DRV_ID_DRV_TYPE_SHIFT		24
@@ -1040,8 +1042,47 @@ struct resource_info {
 #define RESOURCE_ELEMENT_STRICT (1 << 0)
 };
 
+#define DRV_ROLE_NONE		0
+#define DRV_ROLE_PREBOOT	1
+#define DRV_ROLE_OS		2
+#define DRV_ROLE_KDUMP		3
+
+struct load_req_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_REQ_ROLE_MASK		0x000000FF
+#define LOAD_REQ_ROLE_SHIFT		0
+#define LOAD_REQ_LOCK_TO_MASK		0x0000FF00
+#define LOAD_REQ_LOCK_TO_SHIFT		0 /* @DPDK */
+#define LOAD_REQ_LOCK_TO_DEFAULT	0
+#define LOAD_REQ_LOCK_TO_NONE		255
+#define LOAD_REQ_FORCE_MASK		0x000F0000
+#define LOAD_REQ_FORCE_SHIFT		0 /* @DPDK */
+#define LOAD_REQ_FORCE_NONE		0
+#define LOAD_REQ_FORCE_PF		1
+#define LOAD_REQ_FORCE_ALL		2
+#define LOAD_REQ_FLAGS0_MASK		0x00F00000
+#define LOAD_REQ_FLAGS0_SHIFT		0 /* @DPDK */
+#define LOAD_REQ_FLAGS0_AVOID_RESET	(0x1 << 0)
+};
+
+struct load_rsp_stc {
+	u32 drv_ver_0;
+	u32 drv_ver_1;
+	u32 fw_ver;
+	u32 misc0;
+#define LOAD_RSP_ROLE_MASK		0x000000FF
+#define LOAD_RSP_ROLE_SHIFT		0
+#define LOAD_RSP_HSI_MASK		0x0000FF00
+#define LOAD_RSP_HSI_SHIFT		8
+#define LOAD_RSP_FLAGS0_MASK		0x000F0000
+#define LOAD_RSP_FLAGS0_SHIFT		16
+#define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
+};
+
 union drv_union_data {
-	u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD];    /* LOAD_REQ */
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
 /* This configuration should be set by the driver for the LINK_SET command. */
@@ -1068,6 +1109,9 @@ union drv_union_data {
 	struct bist_nvm_image_att nvm_image_att;
 	struct mdump_config_stc mdump_config;
 	u32 dword;
+
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	/* ... */
 };
 
@@ -1077,6 +1121,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_LOAD_REQ                   0x10000000
 #define DRV_MSG_CODE_LOAD_DONE                  0x11000000
 #define DRV_MSG_CODE_INIT_HW                    0x12000000
+#define DRV_MSG_CODE_CANCEL_LOAD_REQ            0x13000000
 #define DRV_MSG_CODE_UNLOAD_REQ		        0x20000000
 #define DRV_MSG_CODE_UNLOAD_DONE                0x21000000
 #define DRV_MSG_CODE_INIT_PHY			0x22000000
@@ -1448,8 +1493,11 @@ struct public_drv_mb {
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA        0x10200000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10210000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1      0x10210000
 #define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG       0x10220000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10230000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000
+#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT     0x10310000
 #define FW_MSG_CODE_DRV_LOAD_DONE               0x11100000
 #define FW_MSG_CODE_DRV_UNLOAD_ENGINE           0x20110000
 #define FW_MSG_CODE_DRV_UNLOAD_PORT             0x20120000
@@ -1547,7 +1595,7 @@ struct public_drv_mb {
 
 
 	u32 fw_mb_param;
-	/* Resource Allocation params - MFW  version support*/
+/* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 5c79055..326e56f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -276,6 +276,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
 	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
 	hw_init_params.bin_fw_data = data;
+	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
+	hw_init_params.avoid_eng_reset = false;
 	rc = ecore_hw_init(edev, &hw_init_params);
 	if (rc) {
 		DP_ERR(edev, "ecore_hw_init failed\n");
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
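
A note on the new HSI-2 mailbox layout above: LOAD_REQ and LOAD_RSP pack
several fields into a single misc0 word, and ECORE_MFW_GET_FIELD recovers
them by mask and shift. A minimal sketch of the round trip, using the
LOAD_RSP_* definitions from mcp_public.h above; MFW_GET_FIELD and
MFW_SET_FIELD are hypothetical stand-ins for the driver's macros:

	/* Hypothetical mask-and-shift helpers mirroring ECORE_MFW_GET_FIELD */
	#define MFW_GET_FIELD(val, field) \
		(((val) & field##_MASK) >> field##_SHIFT)
	#define MFW_SET_FIELD(val, field, v) \
		((val) = ((val) & ~field##_MASK) | \
			 (((u32)(v) << field##_SHIFT) & field##_MASK))

	u32 misc0 = 0;

	MFW_SET_FIELD(misc0, LOAD_RSP_ROLE, DRV_ROLE_OS); /* bits 7:0, = 2 */
	MFW_SET_FIELD(misc0, LOAD_RSP_HSI, 2);            /* bits 15:8     */

	/* Extraction recovers the packed values: role == 2, hsi == 2 */
	u8 role = MFW_GET_FIELD(misc0, LOAD_RSP_ROLE);
	u8 hsi = MFW_GET_FIELD(misc0, LOAD_RSP_HSI);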

* [PATCH v5 42/62] net/qede/base: add non-L2 dcbx tlv application support
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (41 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 41/62] net/qede/base: add support for previous driver unload Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 43/62] net/qede/base: update bulletin board during VF init Rasesh Mody
                               ` (19 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add non-L2 dcbx tlv application support.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dcbx.c     |   30 ++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_dcbx.h     |    1 +
 drivers/net/qede/base/ecore_dcbx_api.h |    4 +++-
 drivers/net/qede/base/ecore_proto_if.h |    3 +++
 4 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 0e11927..5ecc6b0 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -72,6 +72,23 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
+static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
+				 u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (!p_hwfn->p_dcbx_info->iwarp_port)
+		return false;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_TCP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
+}
+
 static void
 ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
@@ -896,17 +913,18 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
 	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					  sizeof(*p_hwfn->p_dcbx_info));
 	if (!p_hwfn->p_dcbx_info) {
 		DP_NOTICE(p_hwfn, true,
 			  "Failed to allocate `struct ecore_dcbx_info'");
-		rc = ECORE_NOMEM;
+		return ECORE_NOMEM;
 	}
 
-	return rc;
+	p_hwfn->p_dcbx_info->iwarp_port =
+		p_hwfn->pf_params.rdma_pf_params.iwarp_port;
+
+	return ECORE_SUCCESS;
 }
 
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn,
@@ -937,9 +955,13 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
 	p_dest->update_eth_dcb_data_mode = update_flag;
+	update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update;
+	p_dest->update_iwarp_dcb_data_mode = update_flag;
 
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+	p_dcb_data = &p_dest->iwarp_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP);
 }
 
 enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 0830014..eba2d91 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -29,6 +29,7 @@ struct ecore_dcbx_info {
 	struct ecore_dcbx_set set;
 	struct ecore_dcbx_get get;
 	u8 dcbx_cap;
+	u16 iwarp_port;
 };
 
 struct ecore_dcbx_mib_meta_data {
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 3a1712f..2dc7679 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -37,6 +37,7 @@ enum dcbx_protocol_type {
 	DCBX_PROTOCOL_ROCE,
 	DCBX_PROTOCOL_ROCE_V2,
 	DCBX_PROTOCOL_ETH,
+	DCBX_PROTOCOL_IWARP,
 	DCBX_MAX_PROTOCOL_TYPE
 };
 
@@ -191,7 +192,8 @@ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
 	{DCBX_PROTOCOL_ROCE, "ROCE", ECORE_PCI_ETH_ROCE},
 	{DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", ECORE_PCI_ETH_ROCE},
-	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
+	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH},
+	{DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP}
 };
 
 #endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index e252d52..ed24019 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -76,6 +76,9 @@ struct ecore_rdma_pf_params {
 
 	/* Will allocate rate limiters to be used with QPs */
 	u8		enable_dcqcn;
+
+	/* TCP port number used for the iwarp traffic */
+	u16		iwarp_port;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
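
The iwarp TLV matcher added above compares each DCBX application entry
against the TCP port that was stored in the dcbx info at allocation time.
A minimal sketch of the PF-side setup, assuming the usual flow in which
ecore_pf_params is filled in before ecore_dcbx_info_alloc() copies
rdma_pf_params.iwarp_port out of p_hwfn->pf_params, as the hunk above
shows (the port number here is an arbitrary example):

	struct ecore_pf_params pf_params;

	OSAL_MEM_ZERO(&pf_params, sizeof(pf_params));
	/* DCBX app TLVs whose selector is a TCP port
	 * (DCBX_APP_SF_IEEE_TCP_PORT in IEEE mode) and whose value equals
	 * this port are then classified as DCBX_PROTOCOL_IWARP.
	 */
	pf_params.rdma_pf_params.iwarp_port = 4117;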

* [PATCH v5 43/62] net/qede/base: update bulletin board during VF init
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (42 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 42/62] net/qede/base: add non-L2 dcbx tlv application support Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 44/62] net/qede/base: add coalescing support for VFs Rasesh Mody
                               ` (18 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Update the bulletin board with the link state during VF initialization.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |   88 ++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index aab9925..703c1e8 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -954,11 +954,51 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+			u16 vfid,
+			struct ecore_mcp_link_params *params,
+			struct ecore_mcp_link_state *link,
+			struct ecore_mcp_link_capabilities *p_caps)
+{
+	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+	struct ecore_bulletin_content *p_bulletin;
+
+	if (!p_vf)
+		return;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->req_autoneg = params->speed.autoneg;
+	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+	p_bulletin->req_forced_speed = params->speed.forced_speed;
+	p_bulletin->req_autoneg_pause = params->pause.autoneg;
+	p_bulletin->req_forced_rx = params->pause.forced_rx;
+	p_bulletin->req_forced_tx = params->pause.forced_tx;
+	p_bulletin->req_loopback = params->loopback_mode;
+
+	p_bulletin->link_up = link->link_up;
+	p_bulletin->speed = link->speed;
+	p_bulletin->full_duplex = link->full_duplex;
+	p_bulletin->autoneg = link->an;
+	p_bulletin->autoneg_complete = link->an_complete;
+	p_bulletin->parallel_detection = link->parallel_detection;
+	p_bulletin->pfc_enabled = link->pfc_enabled;
+	p_bulletin->partner_adv_speed = link->partner_adv_speed;
+	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+	p_bulletin->partner_adv_pause = link->partner_adv_pause;
+	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+	p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 struct ecore_iov_vf_init_params *p_params)
 {
+	struct ecore_mcp_link_capabilities link_caps;
+	struct ecore_mcp_link_params link_params;
+	struct ecore_mcp_link_state link_state;
 	u8 num_of_vf_available_chains  = 0;
 	struct ecore_vf_info *vf = OSAL_NULL;
 	u16 qid, num_irqs;
@@ -1045,6 +1085,17 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			   p_queue->fw_cid);
 	}
 
+	/* Update the link configuration in the bulletin board.
+	 */
+	OSAL_MEMCPY(&link_params, ecore_mcp_get_link_params(p_hwfn),
+		    sizeof(link_params));
+	OSAL_MEMCPY(&link_state, ecore_mcp_get_link_state(p_hwfn),
+		    sizeof(link_state));
+	OSAL_MEMCPY(&link_caps, ecore_mcp_get_link_capabilities(p_hwfn),
+		    sizeof(link_caps));
+	ecore_iov_set_link(p_hwfn, p_params->rel_vf_id,
+			   &link_params, &link_state, &link_caps);
+
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
 
 	if (rc == ECORE_SUCCESS) {
@@ -1059,43 +1110,6 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps)
-{
-	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
-	struct ecore_bulletin_content *p_bulletin;
-
-	if (!p_vf)
-		return;
-
-	p_bulletin = p_vf->bulletin.p_virt;
-	p_bulletin->req_autoneg = params->speed.autoneg;
-	p_bulletin->req_adv_speed = params->speed.advertised_speeds;
-	p_bulletin->req_forced_speed = params->speed.forced_speed;
-	p_bulletin->req_autoneg_pause = params->pause.autoneg;
-	p_bulletin->req_forced_rx = params->pause.forced_rx;
-	p_bulletin->req_forced_tx = params->pause.forced_tx;
-	p_bulletin->req_loopback = params->loopback_mode;
-
-	p_bulletin->link_up = link->link_up;
-	p_bulletin->speed = link->speed;
-	p_bulletin->full_duplex = link->full_duplex;
-	p_bulletin->autoneg = link->an;
-	p_bulletin->autoneg_complete = link->an_complete;
-	p_bulletin->parallel_detection = link->parallel_detection;
-	p_bulletin->pfc_enabled = link->pfc_enabled;
-	p_bulletin->partner_adv_speed = link->partner_adv_speed;
-	p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
-	p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
-	p_bulletin->partner_adv_pause = link->partner_adv_pause;
-	p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
-
-	p_bulletin->capability_speed = p_caps->speed_capabilities;
-}
-
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 rel_vf_id)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
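
With this reordering the PF seeds each VF's bulletin with the current link
parameters while the VF is being initialized, instead of leaving the
bulletin empty until a later explicit ecore_iov_set_link() call. A minimal
sketch of a VF-side consumer of those fields; the read path is not part of
this patch, and both vf_bulletin_copy and configure_vf_datapath() are
hypothetical placeholders:

	/* vf_bulletin_copy (assumed) points at the VF's copy of its
	 * bulletin board, which is now valid immediately after
	 * ecore_iov_init_hw_for_vf().
	 */
	struct ecore_bulletin_content *p_bulletin = vf_bulletin_copy;

	if (p_bulletin->link_up)
		configure_vf_datapath(p_bulletin->speed,
				      p_bulletin->full_duplex);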

* [PATCH v5 44/62] net/qede/base: add coalescing support for VFs
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (43 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 43/62] net/qede/base: update bulletin board during VF init Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 45/62] net/qede/base: add macro for resource value message Rasesh Mody
                               ` (17 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add coalescing support for VFs.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   83 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_dev_api.h |   43 ++++++-----------
 drivers/net/qede/base/ecore_sriov.c   |   66 +++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_vf.c      |   42 +++++++++++++++++
 drivers/net/qede/base/ecore_vf.h      |   24 ++++++++++
 drivers/net/qede/base/ecore_vfpf_if.h |   10 ++++
 6 files changed, 209 insertions(+), 59 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 29dd292..7a876bc 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -30,6 +30,7 @@
 #include "nvm_cfg.h"
 #include "ecore_dev_api.h"
 #include "ecore_dcbx.h"
+#include "ecore_l2.h"
 
 /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
  * registers involved are not split and thus configuration is a race where
@@ -4198,11 +4199,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 {
 	struct coalescing_timeset *p_coal_timeset;
 
-	if (IS_VF(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
-		return ECORE_INVAL;
-	}
-
 	if (p_hwfn->p_dev->int_coalescing_mode != ECORE_COAL_MODE_ENABLE) {
 		DP_NOTICE(p_hwfn, true,
 			  "Coalescing configuration not enabled\n");
@@ -4218,13 +4214,53 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      void *p_handle)
+{
+	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_ptt *p_ptt;
+
+	/* TODO - Configuring a single queue's coalescing but
+	 * claiming all queues are abiding same configuration
+	 * for PF and VF both.
+	 */
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
+						tx_coal, p_cid);
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	}
+
+	if (tx_coal) {
+		rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc)
+			goto out;
+		p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+	}
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct ustorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4241,33 +4277,30 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	}
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, false);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, false);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_USDM_RAM + USTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_USDM_RAM +
+		  USTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct ustorm_eth_queue_zone), timeset);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
 enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id)
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid)
 {
 	struct xstorm_eth_queue_zone eth_qzone;
 	u8 timeset, timer_res;
-	u16 fw_qid = 0;
 	u32 address;
 	enum _ecore_status_t rc;
 
@@ -4285,23 +4318,17 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 	timeset = (u8)(coalesce >> timer_res);
 
-	rc = ecore_fw_l2_queue(p_hwfn, qid, &fw_qid);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, true);
+	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
+				     p_cid->abs.sb_idx, true);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	address = BAR0_MAP_REG_XSDM_RAM + XSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+	address = BAR0_MAP_REG_XSDM_RAM +
+		  XSTORM_ETH_QUEUE_ZONE_OFFSET(p_cid->abs.queue_id);
 
 	rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
 				sizeof(struct xstorm_eth_queue_zone), timeset);
-	if (rc != ECORE_SUCCESS)
-		goto out;
-
-	p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
-out:
+ out:
 	return rc;
 }
 
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 7e90778..ce764d2 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -570,41 +570,24 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
 					 struct ecore_ptt	*p_ptt,
 					 u16			id,
 					 bool			is_vf);
-
-/**
- * @brief ecore_set_rxq_coalesce - Configure coalesce parameters for an Rx queue
- *    The fact that we can configure coalescing to up to 511, but on varying
- *    accuracy [the bigger the value the less accurate] up to a mistake of 3usec
- *    for the highest values.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
-
 /**
- * @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
- *    While the API allows setting coalescing per-qid, all tx queues sharing a
- *    SB should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
+ * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
+ *    Tx queue. Coalescing can be configured up to 511 usec, but with
+ *    decreasing accuracy [the bigger the value the less accurate], up to
+ *    an error of 3 usec for the highest values.
+ *    While the API allows setting coalescing per-qid, all queues sharing a SB
+ *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
  *    otherwise configuration would break.
  *
  * @param p_hwfn
- * @param p_ptt
- * @param coalesce - Coalesce value in micro seconds.
- * @param qid - Queue index.
- * @param qid - SB Id
+ * @param rx_coal - Rx Coalesce value in micro seconds.
+ * @param tx_coal - TX Coalesce value in micro seconds.
+ * @param p_handle
  *
  * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce, u16 qid, u16 sb_id);
+ **/
+enum _ecore_status_t
+ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
+			 u16 tx_coal, void *p_handle);
 
 #endif
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 703c1e8..4ffa8d0 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -52,6 +52,7 @@ const char *ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN",
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
 	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
+	"CHANNEL_TLV_COALESCE_UPDATE",
 	"CHANNEL_TLV_MAX"
 };
 
@@ -1939,6 +1940,8 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	vf->state = VF_ENABLED;
 	start = &mbx->req_virt->start_vport;
 
+	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
 	/* Initialize Status block in CAU */
 	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
 		if (!start->sb_addr[sb_id]) {
@@ -1953,7 +1956,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 				      vf->igu_sbs[sb_id],
 				      vf->abs_vf_id, 1);
 	}
-	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
 
 	vf->mtu = start->mtu;
 	vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
@@ -3226,6 +3228,65 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
 			       length, status);
 }
 
+static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_vf_info *vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct vfpf_update_coalesce *req;
+	u8 status = PFVF_STATUS_FAILURE;
+	struct ecore_queue_cid *p_cid;
+	u16 rx_coal, tx_coal;
+	u16  qid;
+
+	req = &mbx->req_virt->update_coalesce;
+
+	rx_coal = req->rx_coal;
+	tx_coal = req->tx_coal;
+	qid = req->qid;
+	p_cid = vf->vf_queues[qid].p_rx_cid;
+
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+	if (rx_coal) {
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
+			goto out;
+		}
+	}
+	if (tx_coal) {
+		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
+			goto out;
+		}
+	}
+
+	status = PFVF_STATUS_SUCCESS;
+out:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(struct pfvf_def_resp_tlv), status);
+}
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -3579,6 +3640,9 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_UPDATE_TUNN_PARAM:
 			ecore_iov_vf_mbx_update_tunn_param(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_COALESCE_UPDATE:
+			ecore_iov_vf_pf_set_coalesce(p_hwfn, p_ptt, p_vf);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index a072a81..bf516cc 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1424,6 +1424,48 @@ exit:
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal,
+			 struct ecore_queue_cid     *p_cid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_update_coalesce *req;
+	struct pfvf_def_resp_tlv *resp;
+	enum _ecore_status_t rc;
+
+	/* clear mailbox and prep header tlv */
+	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_COALESCE_UPDATE,
+			       sizeof(*req));
+
+	req->rx_coal = rx_coal;
+	req->tx_coal = tx_coal;
+	req->qid = p_cid->rel.queue_id;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Setting coalesce rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   rx_coal, tx_coal, req->qid);
+
+	/* add list termination tlv */
+	ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	resp = &p_iov->pf2vf_reply->default_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+	if (rc != ECORE_SUCCESS)
+		goto exit;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		goto exit;
+
+	p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
+	p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
+
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+	return rc;
+}
+
 u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id)
 {
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 0d67054..228bbf0 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -50,6 +50,20 @@ struct ecore_vf_iov {
 enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
 /**
+ * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
+ *	Coalesce value '0' will omit the configuration.
+ *
+ *	@param p_hwfn
+ *	@param rx_coal - coalesce value in micro second for rx queue
+ *	@param tx_coal - coalesce value in micro second for tx queue
+ *	@param qid
+ *
+ **/
+enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
+					      u16 rx_coal, u16 tx_coal,
+					      struct ecore_queue_cid *p_cid);
+
+/**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
  * @param p_hwfn
@@ -263,5 +277,15 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
+
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 82ed4f5..e0b63bf 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -457,6 +457,14 @@ struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
 
+struct vfpf_update_coalesce {
+	struct vfpf_first_tlv first_tlv;
+	u16 rx_coal;
+	u16 tx_coal;
+	u16 qid;
+	u8 padding[2];
+};
+
 union vfpf_tlvs {
 	struct vfpf_first_tlv			first_tlv;
 	struct vfpf_acquire_tlv			acquire;
@@ -469,6 +477,7 @@ union vfpf_tlvs {
 	struct vfpf_vport_update_tlv		vport_update;
 	struct vfpf_ucast_filter_tlv		ucast_filter;
 	struct vfpf_update_tunn_param_tlv	tunn_param_update;
+	struct vfpf_update_coalesce		update_coalesce;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
@@ -592,6 +601,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 	CHANNEL_TLV_UPDATE_TUNN_PARAM,
+	CHANNEL_TLV_COALESCE_UPDATE,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
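
On the coalescing math shared by ecore_set_rxq_coalesce() and
ecore_set_txq_coalesce(): the requested microsecond value is shifted right
by a timer resolution and the truncated result is written as the u8
timeset. A sketch of the coalesce -> (timer_res, timeset) mapping implied
by the 0-0x7f / 0x80-0xff / 0x100-0x1ff ranges in the API comment (the
selection code itself sits outside the hunks above):

	u8 timer_res, timeset;

	if (coalesce <= 0x7F)
		timer_res = 0;
	else if (coalesce <= 0xFF)
		timer_res = 1;
	else if (coalesce <= 0x1FF)
		timer_res = 2;
	else
		return ECORE_INVAL;	/* 511 usec is the maximum */

	/* Dropping timer_res low bits rounds down; with timer_res == 2 up
	 * to 3 usec are lost, which matches the worst-case error noted in
	 * the ecore_set_queue_coalesce() comment.
	 */
	timeset = (u8)(coalesce >> timer_res);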

* [PATCH v5 45/62] net/qede/base: add macro for resource value message
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (44 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 44/62] net/qede/base: add coalescing support for VFs Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 46/62] net/qede/base: add mailbox for resource allocation Rasesh Mody
                               ` (16 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for the resource value message.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 145f5ca..24acfcb 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1137,16 +1137,15 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
 #define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	0x28000000
 #define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	0x29000000
+#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	0x31000000
 #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
 #define DRV_MSG_CODE_OV_UPDATE_MTU		0x33000000
-
-#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
-
 /* DRV_MB Param: driver version supp, FW_MB param: MFW version supp,
  * data: struct resource_info
  */
 #define DRV_MSG_GET_RESOURCE_ALLOC_MSG		0x34000000
+#define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
 
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 46/62] net/qede/base: add mailbox for resource allocation
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (45 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 45/62] net/qede/base: add macro for resource value message Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 47/62] net/qede/base: add macro for unsupported command Rasesh Mody
                               ` (15 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the Management FW mailbox for getting non-L2 resource allocation
information.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |    1 +
 drivers/net/qede/base/ecore_dev.c  |   60 ++++++++++++++++++++++++------------
 drivers/net/qede/base/mcp_public.h |    1 +
 3 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 60a8a6b..25b6c4e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -291,6 +291,7 @@ enum ecore_resources {
 	ECORE_LL2_QUEUE,
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
+	ECORE_BDQ,
 	ECORE_MAX_RESC,			/* must be last */
 };
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 7a876bc..d5a8a90 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2463,6 +2463,9 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_RDMA_STATS_QUEUE:
 		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
 		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
 	default:
 		break;
 	}
@@ -2470,67 +2473,84 @@ ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
 	return mfw_res_id;
 }
 
-static u32 ecore_hw_get_dflt_resc_num(struct ecore_hwfn *p_hwfn,
-				      enum ecore_resources res_id)
+static
+enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
+					    enum ecore_resources res_id,
+					    u32 *p_resc_num,
+					    u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	struct ecore_sb_cnt_info sb_cnt_info;
-	u32 dflt_resc_num = 0;
 
 	switch (res_id) {
 	case ECORE_SB:
 		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
 		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-		dflt_resc_num = sb_cnt_info.sb_cnt;
+		*p_resc_num = sb_cnt_info.sb_cnt;
 		break;
 	case ECORE_L2_QUEUE:
-		dflt_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
 				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		dflt_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
+		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
 				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
 		break;
 	case ECORE_PQ:
-		dflt_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
+		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
 				 MAX_QM_TX_QUEUES_BB) / num_funcs;
 		break;
 	case ECORE_RL:
-		dflt_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
 		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		dflt_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
+		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		dflt_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
+		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
 				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		dflt_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
 		/* @DPDK */
-		dflt_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
 		/* @DPDK */
-		dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
 				 MAX_NUM_VPORTS_BB) / num_funcs;
 		break;
+	case ECORE_BDQ:
+		/* @DPDK */
+		*p_resc_num = 0;
+		break;
+	default:
+		break;
+	}
+
+
+	switch (res_id) {
+	case ECORE_BDQ:
+		if (!*p_resc_num)
+			*p_resc_start = 0;
+		break;
 	default:
+		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
 		break;
 	}
 
-	return dflt_resc_num;
+	return ECORE_SUCCESS;
 }
 
 static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
@@ -2562,6 +2582,8 @@ static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
 		return "RDMA_STATS_QUEUE";
+	case ECORE_BDQ:
+		return "BDQ";
 	default:
 		return "UNKNOWN_RESOURCE";
 	}
@@ -2579,14 +2601,14 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	dflt_resc_num = ecore_hw_get_dflt_resc_num(p_hwfn, res_id);
-	if (!dflt_resc_num) {
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
+				    &dflt_resc_num, &dflt_resc_start);
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
 			res_id, ecore_hw_get_resc_name(res_id));
-		return ECORE_INVAL;
+		return rc;
 	}
-	dflt_resc_start = dflt_resc_num * p_hwfn->enabled_func_idx;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 24acfcb..17971a4 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1025,6 +1025,7 @@ enum resource_id_enum {
 	RESOURCE_NUM_RSS_ENGINES_E	=	14,
 	RESOURCE_LL2_QUEUE_E		=	15,
 	RESOURCE_RDMA_STATS_QUEUE_E	=	16,
+	RESOURCE_BDQ_E			=	17,
 	RESOURCE_MAX_NUM,
 	RESOURCE_NUM_INVALID		=	0xFFFFFFFF
 };
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
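
The defaults computed by ecore_hw_get_dflt_resc() split most resources
evenly across the functions on the engine, with each PF's slice starting
at resc_num * enabled_func_idx. A worked example with illustrative numbers
(not the real chip totals):

	/* Illustrative only: a chip total of 256 L2 queues shared by
	 * num_funcs == 4, computed for the PF with enabled_func_idx == 2.
	 */
	u32 resc_num = 256 / 4;			/* 64 queues per PF      */
	u32 resc_start = resc_num * 2;		/* this PF owns 128..191 */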

* [PATCH v5 47/62] net/qede/base: add macro for unsupported command
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (46 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 46/62] net/qede/base: add mailbox for resource allocation Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 48/62] net/qede/base: set max values for soft resources Rasesh Mody
                               ` (14 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a macro for the Management FW's unsupported-command response.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c  |    6 ++----
 drivers/net/qede/base/mcp_public.h |    1 +
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6c5b5db..15f3ea0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1424,8 +1424,7 @@ ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the mdump command is not supported */
-	if (!mcp_resp)
+	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (mcp_resp != FW_MSG_CODE_OK) {
@@ -2832,8 +2831,7 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* A zero response implies that the resource command is not supported */
-	if (!*p_mcp_resp)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
 		return ECORE_NOTIMPL;
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 17971a4..8d65390 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1489,6 +1489,7 @@ struct public_drv_mb {
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
+#define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
 #define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 48/62] net/qede/base: set max values for soft resources
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (47 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 47/62] net/qede/base: add macro for unsupported command Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 49/62] net/qede/base: add return code check Rasesh Mody
                               ` (13 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for the new interface with the Management FW for setting
max values of "soft" resources.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    2 +
 drivers/net/qede/base/ecore_dev.c |  282 ++++++++++++++++++++++--------------
 drivers/net/qede/base/ecore_mcp.c |  287 +++++++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_mcp.h |  104 ++++++++++----
 4 files changed, 498 insertions(+), 177 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 25b6c4e..7379b3f 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -856,4 +856,6 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
 
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d5a8a90..3191ee4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2420,64 +2420,109 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 		   RESC_NUM(p_hwfn, ECORE_SB));
 }
 
-static enum resource_id_enum
-ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 {
-	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
-
 	switch (res_id) {
 	case ECORE_SB:
-		mfw_res_id = RESOURCE_NUM_SB_E;
-		break;
+		return "SB";
 	case ECORE_L2_QUEUE:
-		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
-		break;
+		return "L2_QUEUE";
 	case ECORE_VPORT:
-		mfw_res_id = RESOURCE_NUM_VPORT_E;
-		break;
+		return "VPORT";
 	case ECORE_RSS_ENG:
-		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
-		break;
+		return "RSS_ENG";
 	case ECORE_PQ:
-		mfw_res_id = RESOURCE_NUM_PQ_E;
-		break;
+		return "PQ";
 	case ECORE_RL:
-		mfw_res_id = RESOURCE_NUM_RL_E;
-		break;
+		return "RL";
 	case ECORE_MAC:
+		return "MAC";
 	case ECORE_VLAN:
-		/* Each VFC resource can accommodate both a MAC and a VLAN */
-		mfw_res_id = RESOURCE_VFC_FILTER_E;
-		break;
+		return "VLAN";
+	case ECORE_RDMA_CNQ_RAM:
+		return "RDMA_CNQ_RAM";
 	case ECORE_ILT:
-		mfw_res_id = RESOURCE_ILT_E;
-		break;
+		return "ILT";
 	case ECORE_LL2_QUEUE:
-		mfw_res_id = RESOURCE_LL2_QUEUE_E;
-		break;
-	case ECORE_RDMA_CNQ_RAM:
+		return "LL2_QUEUE";
 	case ECORE_CMDQS_CQS:
-		/* CNQ/CMDQS are the same resource */
-		mfw_res_id = RESOURCE_CQS_E;
-		break;
+		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
-		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
-		break;
+		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
-		mfw_res_id = RESOURCE_BDQ_E;
-		break;
+		return "BDQ";
 	default:
-		break;
+		return "UNKNOWN_RESOURCE";
 	}
+}
 
-	return mfw_res_id;
+static enum _ecore_status_t
+__ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
+			      enum ecore_resources res_id, u32 resc_max_val,
+			      u32 *p_mcp_resp)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_hwfn->p_main_ptt, res_id,
+					resc_max_val, p_mcp_resp);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, true,
+			  "MFW response failure for a max value setting of resource %d [%s]\n",
+			  res_id, ecore_hw_get_resc_name(res_id));
+		return rc;
+	}
+
+	if (*p_mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK)
+		DP_INFO(p_hwfn,
+			"Failed to set the max value of resource %d [%s]. mcp_resp = 0x%08x.\n",
+			res_id, ecore_hw_get_resc_name(res_id), *p_mcp_resp);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn)
+{
+	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	u32 resc_max_val, mcp_resp;
+	u8 res_id;
+	enum _ecore_status_t rc;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		/* @DPDK */
+		switch (res_id) {
+		case ECORE_LL2_QUEUE:
+		case ECORE_RDMA_CNQ_RAM:
+		case ECORE_RDMA_STATS_QUEUE:
+		case ECORE_BDQ:
+			resc_max_val = 0;
+			break;
+		default:
+			continue;
+		}
+
+		rc = __ecore_hw_set_soft_resc_size(p_hwfn, res_id,
+						   resc_max_val, &mcp_resp);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* There's no point to continue to the next resource if the
+		 * command is not supported by the MFW.
+		 * We do continue if the command is supported but the resource
+		 * is unknown to the MFW. Such a resource will be later
+		 * configured with the default allocation values.
+		 */
+		if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+			return ECORE_NOTIMPL;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static
 enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    enum ecore_resources res_id,
-					    u32 *p_resc_num,
-					    u32 *p_resc_start)
+					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
@@ -2553,56 +2598,19 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
-{
-	switch (res_id) {
-	case ECORE_SB:
-		return "SB";
-	case ECORE_L2_QUEUE:
-		return "L2_QUEUE";
-	case ECORE_VPORT:
-		return "VPORT";
-	case ECORE_RSS_ENG:
-		return "RSS_ENG";
-	case ECORE_PQ:
-		return "PQ";
-	case ECORE_RL:
-		return "RL";
-	case ECORE_MAC:
-		return "MAC";
-	case ECORE_VLAN:
-		return "VLAN";
-	case ECORE_RDMA_CNQ_RAM:
-		return "RDMA_CNQ_RAM";
-	case ECORE_ILT:
-		return "ILT";
-	case ECORE_LL2_QUEUE:
-		return "LL2_QUEUE";
-	case ECORE_CMDQS_CQS:
-		return "CMDQS_CQS";
-	case ECORE_RDMA_STATS_QUEUE:
-		return "RDMA_STATS_QUEUE";
-	case ECORE_BDQ:
-		return "BDQ";
-	default:
-		return "UNKNOWN_RESOURCE";
-	}
-}
-
-static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
-						   enum ecore_resources res_id,
-						   bool drv_resc_alloc)
+static enum _ecore_status_t
+__ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
+			 bool drv_resc_alloc)
 {
-	u32 dflt_resc_num = 0, dflt_resc_start = 0, mcp_resp, mcp_param;
-	u32 *p_resc_num, *p_resc_start;
-	struct resource_info resc_info;
+	u32 dflt_resc_num = 0, dflt_resc_start = 0;
+	u32 mcp_resp, *p_resc_num, *p_resc_start;
 	enum _ecore_status_t rc;
 
 	p_resc_num = &RESC_NUM(p_hwfn, res_id);
 	p_resc_start = &RESC_START(p_hwfn, res_id);
 
-	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id,
-				    &dflt_resc_num, &dflt_resc_start);
+	rc = ecore_hw_get_dflt_resc(p_hwfn, res_id, &dflt_resc_num,
+				    &dflt_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to get default amount for resource %d [%s]\n",
@@ -2618,17 +2626,8 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-	OSAL_MEM_ZERO(&resc_info, sizeof(resc_info));
-	resc_info.res_id = ecore_hw_get_mfw_res_id(res_id);
-	if (resc_info.res_id == RESOURCE_NUM_INVALID) {
-		DP_ERR(p_hwfn,
-		       "Failed to match resource %d with MFW resources\n",
-		       res_id);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, &resc_info,
-				     &mcp_resp, &mcp_param);
+	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id,
+				     &mcp_resp, p_resc_num, p_resc_start);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true,
 			  "MFW response failure for an allocation request for"
@@ -2642,13 +2641,11 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	 * - There is an internal error in the MFW while processing the request
 	 * - The resource ID is unknown to the MFW
 	 */
-	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK &&
-	    mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED) {
-		/* @DPDK */
+	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: No allocation info was received"
-			" [mcp_resp 0x%x]. Applying default values"
-			" [num %d, start %d].\n",
+			"Failed to receive allocation info for resource %d [%s]."
+			" mcp_resp = 0x%x. Applying default values"
+			" [%d,%d].\n",
 			res_id, ecore_hw_get_resc_name(res_id), mcp_resp,
 			dflt_resc_num, dflt_resc_start);
 
@@ -2660,16 +2657,13 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	/* TBD - remove this when revising the handling of the SB resource */
 	if (res_id == ECORE_SB) {
 		/* Excluding the slowpath SB */
-		resc_info.size -= 1;
-		resc_info.offset -= p_hwfn->enabled_func_idx;
+		*p_resc_num -= 1;
+		*p_resc_start -= p_hwfn->enabled_func_idx;
 	}
 
-	*p_resc_num = resc_info.size;
-	*p_resc_start = resc_info.offset;
-
 	if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
 		DP_INFO(p_hwfn,
-			"Resource %d [%s]: MFW allocation [num %d, start %d] differs from default values [num %d, start %d]%s\n",
+			"MFW allocation for resource %d [%s] differs from default values [%d,%d vs. %d,%d]%s\n",
 			res_id, ecore_hw_get_resc_name(res_id), *p_resc_num,
 			*p_resc_start, dflt_resc_num, dflt_resc_start,
 			drv_resc_alloc ? " - Applying default values" : "");
@@ -2682,12 +2676,32 @@ out:
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
+						   bool drv_resc_alloc)
+{
+	enum _ecore_status_t rc;
+	u8 res_id;
+
+	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+		rc = __ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+#define ECORE_RESC_ALLOC_LOCK_RETRY_CNT		10
+#define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
+
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 					      bool drv_resc_alloc)
 {
+	struct ecore_resc_unlock_params resc_unlock_params;
+	struct ecore_resc_lock_params resc_lock_params;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
-	enum _ecore_status_t rc;
 	u8 res_id;
+	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
 	u32 *resc_start = p_hwfn->hw_info.resc_start;
 	u32 *resc_num = p_hwfn->hw_info.resc_num;
@@ -2700,10 +2714,62 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	u32 roce_min_ilt_lines = PXP_NUM_ILT_RECORDS_BB / MAX_NUM_PFS_BB;
 #endif
 
-	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
-		rc = ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+	/* Setting the max values of the soft resources and the following
+	 * resources allocation queries should be atomic. Since several PFs can
+	 * run in parallel - a resource lock is needed.
+	 * If either the resource lock or resource set value commands are not
+	 * supported - skip the max values setting, release the lock if
+	 * needed, and proceed to the queries. Other failures, including a
+	 * failure to acquire the lock, will cause this function to fail.
+	 * Old drivers that don't acquire the lock can run in parallel, and
+	 * their allocation values won't be affected by the updated max values.
+	 */
+	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
+	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
+	resc_lock_params.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
+	resc_lock_params.sleep_b4_retry = true;
+	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
+	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+
+	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt, &resc_lock_params);
+	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+		return rc;
+	} else if (rc == ECORE_NOTIMPL) {
+		DP_INFO(p_hwfn,
+			"Skip the max values setting of the soft resources since the resource lock is not supported by the MFW\n");
+	} else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to acquire the resource lock for the resource allocation commands\n");
+		rc = ECORE_BUSY;
+		goto unlock_and_exit;
+	} else {
+		rc = ecore_hw_set_soft_resc_size(p_hwfn);
+		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to set the max values of the soft resources\n");
+			goto unlock_and_exit;
+		} else if (rc == ECORE_NOTIMPL) {
+			DP_INFO(p_hwfn,
+				"Skip the max values setting of the soft resources since it is not supported by the MFW\n");
+			rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+						   &resc_unlock_params);
+			if (rc != ECORE_SUCCESS)
+				DP_INFO(p_hwfn,
+					"Failed to release the resource lock for the resource allocation commands\n");
+		}
+	}
+
+	rc = ecore_hw_set_resc_info(p_hwfn, drv_resc_alloc);
+	if (rc != ECORE_SUCCESS)
+		goto unlock_and_exit;
+
+	if (resc_lock_params.b_granted && !resc_unlock_params.b_released) {
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+					   &resc_unlock_params);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			DP_INFO(p_hwfn,
+				"Failed to release the resource lock for the resource allocation commands\n");
 	}
 
 #ifndef ASIC_ONLY
@@ -2756,6 +2822,10 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			   RESC_START(p_hwfn, res_id));
 
 	return ECORE_SUCCESS;
+
+unlock_and_exit:
+	ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt, &resc_unlock_params);
+	return rc;
 }
 
 static enum _ecore_status_t
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 15f3ea0..3efe0a0 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2768,7 +2768,60 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 			     0, &rsp, (u32 *)num_events);
 }
 
-#define ECORE_RESC_ALLOC_VERSION_MAJOR	1
+static enum resource_id_enum
+ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
+{
+	enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
+
+	switch (res_id) {
+	case ECORE_SB:
+		mfw_res_id = RESOURCE_NUM_SB_E;
+		break;
+	case ECORE_L2_QUEUE:
+		mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
+		break;
+	case ECORE_VPORT:
+		mfw_res_id = RESOURCE_NUM_VPORT_E;
+		break;
+	case ECORE_RSS_ENG:
+		mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
+		break;
+	case ECORE_PQ:
+		mfw_res_id = RESOURCE_NUM_PQ_E;
+		break;
+	case ECORE_RL:
+		mfw_res_id = RESOURCE_NUM_RL_E;
+		break;
+	case ECORE_MAC:
+	case ECORE_VLAN:
+		/* Each VFC resource can accommodate both a MAC and a VLAN */
+		mfw_res_id = RESOURCE_VFC_FILTER_E;
+		break;
+	case ECORE_ILT:
+		mfw_res_id = RESOURCE_ILT_E;
+		break;
+	case ECORE_LL2_QUEUE:
+		mfw_res_id = RESOURCE_LL2_QUEUE_E;
+		break;
+	case ECORE_RDMA_CNQ_RAM:
+	case ECORE_CMDQS_CQS:
+		/* CNQ/CMDQS are the same resource */
+		mfw_res_id = RESOURCE_CQS_E;
+		break;
+	case ECORE_RDMA_STATS_QUEUE:
+		mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
+		break;
+	case ECORE_BDQ:
+		mfw_res_id = RESOURCE_BDQ_E;
+		break;
+	default:
+		break;
+	}
+
+	return mfw_res_id;
+}
+
+#define ECORE_RESC_ALLOC_VERSION_MAJOR	2
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
 #define ECORE_RESC_ALLOC_VERSION				\
 	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
@@ -2776,36 +2829,146 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
 	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
 
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param)
+struct ecore_resc_alloc_in_params {
+	u32 cmd;
+	enum ecore_resources res_id;
+	u32 resc_max_val;
+};
+
+struct ecore_resc_alloc_out_params {
+	u32 mcp_resp;
+	u32 mcp_param;
+	u32 resc_num;
+	u32 resc_start;
+	u32 vf_resc_num;
+	u32 vf_resc_start;
+	u32 flags;
+};
+
+static enum _ecore_status_t
+ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      struct ecore_resc_alloc_in_params *p_in_params,
+			      struct ecore_resc_alloc_out_params *p_out_params)
 {
+	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
+	p_mfw_resc_info = &union_data.resource;
+	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+
+	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+		DP_ERR(p_hwfn,
+		       "Failed to match resource %d [%s] with the MFW resources\n",
+		       p_in_params->res_id,
+		       ecore_hw_get_resc_name(p_in_params->res_id));
+		return ECORE_INVAL;
+	}
+
+	switch (p_in_params->cmd) {
+	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
+		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		/* Fallthrough */
+	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected resource alloc command [0x%08x]\n",
+		       p_in_params->cmd);
+		return ECORE_INVAL;
+	}
+
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	OSAL_MEMCPY(&union_data.resource, p_resc_info, sizeof(*p_resc_info));
 	mb_params.p_data_src = &union_data;
 	mb_params.p_data_dst = &union_data;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
+		   p_in_params->cmd, p_in_params->res_id,
+		   ecore_hw_get_resc_name(p_in_params->res_id),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(mb_params.param,
+			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_in_params->resc_max_val);
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	*p_mcp_param = mb_params.mcp_param;
-
-	OSAL_MEMCPY(p_resc_info, &union_data.resource, sizeof(*p_resc_info));
+	p_out_params->mcp_resp = mb_params.mcp_resp;
+	p_out_params->mcp_param = mb_params.mcp_param;
+	p_out_params->resc_num = p_mfw_resc_info->size;
+	p_out_params->resc_start = p_mfw_resc_info->offset;
+	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
+	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
+	p_out_params->flags = p_mfw_resc_info->flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x,"
-		   " offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
-		   *p_mcp_param, p_resc_info->res_id, p_resc_info->size,
-		   p_resc_info->offset, p_resc_info->vf_size,
-		   p_resc_info->vf_offset, p_resc_info->flags);
+		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
+			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   p_out_params->resc_num, p_out_params->resc_start,
+		   p_out_params->vf_resc_num, p_out_params->vf_resc_start,
+		   p_out_params->flags);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_SET_RESOURCE_VALUE_MSG;
+	in_params.res_id = res_id;
+	in_params.resc_max_val = resc_max_val;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start)
+{
+	struct ecore_resc_alloc_out_params out_params;
+	struct ecore_resc_alloc_in_params in_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
+	in_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+	in_params.res_id = res_id;
+	OSAL_MEM_ZERO(&out_params, sizeof(out_params));
+	rc = ecore_mcp_resc_allocation_msg(p_hwfn, p_ptt, &in_params,
+					   &out_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*p_mcp_resp = out_params.mcp_resp;
+
+	if (*p_mcp_resp == FW_MSG_CODE_RESOURCE_ALLOC_OK) {
+		*p_resc_num = out_params.resc_num;
+		*p_resc_start = out_params.resc_start;
+	}
 
 	return ECORE_SUCCESS;
 }
@@ -2831,8 +2994,11 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (*p_mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The resource command is unsupported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
 		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
@@ -2846,36 +3012,35 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner)
+enum _ecore_status_t
+__ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_lock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	switch (timeout) {
+	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		timeout = 0;
+		p_params->timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
 		break;
 	}
 
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource lock request: param 0x%08x [age %d, opcode %d, resc_num %d]\n",
-		   param, timeout, opcode, resource_num);
+		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
+		   param, p_params->timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2884,19 +3049,20 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Analyze the response */
-	*p_owner = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	p_params->owner = ECORE_MFW_GET_FIELD(mcp_param,
+					     RESOURCE_CMD_RSP_OWNER);
 	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
-		   mcp_param, opcode, *p_owner);
+		   mcp_param, opcode, p_params->owner);
 
 	switch (opcode) {
 	case RESOURCE_OPCODE_GNT:
-		*p_granted = true;
+		p_params->b_granted = true;
 		break;
 	case RESOURCE_OPCODE_BUSY:
-		*p_granted = false;
+		p_params->b_granted = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
@@ -2908,23 +3074,54 @@ enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released)
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params)
+{
+	u32 retry_cnt = 0;
+	enum _ecore_status_t rc;
+
+	do {
+		/* No need for an interval before the first iteration */
+		if (retry_cnt) {
+			if (p_params->sleep_b4_retry) {
+				u16 retry_interval_in_ms =
+					DIV_ROUND_UP(p_params->retry_interval,
+						     1000);
+
+				OSAL_MSLEEP(retry_interval_in_ms);
+			} else {
+				OSAL_UDELAY(p_params->retry_interval);
+			}
+		}
+
+		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		if (p_params->b_granted)
+			break;
+	} while (retry_cnt++ < p_params->retry_num);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params)
 {
 	u32 param = 0, mcp_resp, mcp_param;
 	u8 opcode;
 	enum _ecore_status_t rc;
 
-	opcode = force ? RESOURCE_OPCODE_FORCE_RELEASE
-		       : RESOURCE_OPCODE_RELEASE;
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, resource_num);
+	opcode = p_params->b_force ? RESOURCE_OPCODE_FORCE_RELEASE
+				   : RESOURCE_OPCODE_RELEASE;
+	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Resource unlock request: param 0x%08x [opcode %d, resc_num %d]\n",
-		   param, opcode, resource_num);
+		   "Resource unlock request: param 0x%08x [opcode %d, resource %d]\n",
+		   param, opcode, p_params->resource);
 
 	/* Attempt to release the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -2942,14 +3139,14 @@ enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
 	switch (opcode) {
 	case RESOURCE_OPCODE_RELEASED_PREVIOUS:
 		DP_INFO(p_hwfn,
-			"Resource unlock request for an already released resource [resc_num %d]\n",
-			resource_num);
+			"Resource unlock request for an already released resource [%d]\n",
+			p_params->resource);
 		/* Fallthrough */
 	case RESOURCE_OPCODE_RELEASED:
-		*p_released = true;
+		p_params->b_released = true;
 		break;
 	case RESOURCE_OPCODE_WRONG_OWNER:
-		*p_released = false;
+		p_params->b_released = false;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false,
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 4138a12..f5dac9d 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -11,6 +11,7 @@
 
 #include "bcm_osal.h"
 #include "mcp_public.h"
+#include "ecore.h"
 #include "ecore_mcp_api.h"
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
@@ -339,20 +340,37 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt);
 
 /**
+ * @brief - Sets the MFW's max value for the given resource
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param res_id
+ *  @param resc_max_val
+ *  @param p_mcp_resp
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_set_resc_max_val(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   enum ecore_resources res_id, u32 resc_max_val,
+			   u32 *p_mcp_resp);
+
+/**
  * @brief - Gets the MFW allocation info for the given resource
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param p_resc_info
+ *  @param res_id
  *  @param p_mcp_resp
- *  @param p_mcp_param
+ *  @param p_resc_num
+ *  @param p_resc_start
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     struct resource_info *p_resc_info,
-					     u32 *p_mcp_resp, u32 *p_mcp_param);
+enum _ecore_status_t
+ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_resources res_id, u32 *p_mcp_resp,
+			u32 *p_resc_num, u32 *p_resc_start);
 
 /**
  * @brief - Initiates PF FLR
@@ -365,45 +383,79 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+#define ECORE_MCP_RESC_LOCK_MIN_VAL	RESOURCE_DUMP /* 0 */
+#define ECORE_MCP_RESC_LOCK_MAX_VAL	31
+
+enum ecore_resc_lock {
+	ECORE_RESC_LOCK_DBG_DUMP = ECORE_MCP_RESC_LOCK_MIN_VAL,
+	/* Locks that the MFW is aware of should be added here downwards */
+
+	/* Ecore only locks should be added here upwards */
+	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL
+};
+
+struct ecore_resc_lock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Lock timeout value in seconds [default, none or 1..254] */
+	u8 timeout;
 #define ECORE_MCP_RESC_LOCK_TO_DEFAULT	0
 #define ECORE_MCP_RESC_LOCK_TO_NONE	255
 
+	/* Number of times to retry locking */
+	u8 retry_num;
+
+	/* The interval in usec between retries */
+	u16 retry_interval;
+
+	/* Use sleep or delay between retries */
+	bool sleep_b4_retry;
+
+	/* Will be set as true if the resource is free and granted */
+	bool b_granted;
+
+	/* Will be filled with the resource owner.
+	 * [0..15 = PF0-15, 16 = MFW, 17 = diag over serial]
+	 */
+	u8 owner;
+};
+
 /**
  * @brief Acquires MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num - valid values are 0..31
- *  @param timeout - lock timeout value in seconds
- *                   (1..254, '0' - default value, '255' - no timeout).
- *  @param p_granted - will be filled as true if the resource is free and
- *                     granted, or false if it is busy.
- *  @param p_owner - A pointer to a variable to be filled with the resource
- *                   owner (0..15 = PF0-15, 16 = MFW, 17 = diag over serial).
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 resource_num, u8 timeout,
-					 bool *p_granted, u8 *p_owner);
+enum _ecore_status_t
+ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_resc_lock_params *p_params);
+
+struct ecore_resc_unlock_params {
+	/* Resource number [valid values are 0..31] */
+	u8 resource;
+
+	/* Allow releasing a resource even if it belongs to another PF */
+	bool b_force;
+
+	/* Will be set as true if the resource is released */
+	bool b_released;
+};
 
 /**
  * @brief Releases MFW generic resource lock
  *
  *  @param p_hwfn
  *  @param p_ptt
- *  @param resource_num
- *  @param force -  allows to release a reeource even if belongs to another PF
- *  @param p_released - will be filled as true if the resource is released (or
- *			has been already released), and false if the resource is
- *			acquired by another PF and the `force' flag was not set.
+ *  @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   u8 resource_num, bool force,
-					   bool *p_released);
+enum _ecore_status_t
+ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      struct ecore_resc_unlock_params *p_params);
 
 #endif /* __ECORE_MCP_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
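
For readers tracing the new flow in ecore_hw_get_resc() above, the
lock-protected negotiation reduces to the standalone C sketch below:
acquire the MFW resource lock with retries, set the soft-resource max
values, query the allocation, and release the lock (with the
unlock-and-exit path on failure). All types and helpers here
(try_lock(), set_soft_max(), query_alloc()) are simplified stand-ins
for illustration, not the real ecore/MFW API:

/* Simplified sketch of the lock-protected resource negotiation in
 * ecore_hw_get_resc(). Helpers are hypothetical stand-ins; the real
 * driver talks to the MFW through the mailbox primitives above.
 */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

enum status { OK, BUSY, NOTIMPL, FAIL };

static enum status try_lock(bool *granted) { *granted = true; return OK; }
static enum status unlock(void)            { return OK; }
static enum status set_soft_max(void)      { return OK; }
static enum status query_alloc(void)       { return OK; }

static enum status negotiate_resources(void)
{
	bool granted = false;
	enum status rc;
	int retry;

	/* Retry the lock with a sleep between attempts, as in
	 * ecore_mcp_resc_lock() with sleep_b4_retry set.
	 */
	for (retry = 0; retry < 10; retry++) {
		rc = try_lock(&granted);
		if (rc != OK && rc != NOTIMPL)
			return rc;
		if (rc == NOTIMPL || granted)
			break;
		usleep(10000); /* 10 msec, like the retry interval */
	}

	if (rc == NOTIMPL) {
		/* Old MFW: skip setting the max values, just query. */
	} else if (!granted) {
		return BUSY;
	} else if (set_soft_max() == FAIL) {
		unlock(); /* the unlock_and_exit path */
		return FAIL;
	}

	rc = query_alloc();
	if (granted)
		unlock();
	return rc;
}

int main(void)
{
	printf("negotiation %s\n",
	       negotiate_resources() == OK ? "succeeded" : "failed");
	return 0;
}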

* [PATCH v5 49/62] net/qede/base: add return code check
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (48 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 48/62] net/qede/base: set max values for soft resources Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 50/62] net/qede/base: zero out MFW mailbox data Rasesh Mody
                               ` (12 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a check of the return code of ecore_mcp_cmd_and_union() in
ecore_mcp_send_protocol_stats().

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 3efe0a0..0ebb5cd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1237,6 +1237,7 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	union drv_union_data union_data;
 	u32 hsi_param;
+	enum _ecore_status_t rc;
 
 	switch (type) {
 	case MFW_DRV_MSG_GET_LAN_STATS:
@@ -1255,7 +1256,9 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	mb_params.param = hsi_param;
 	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
 	mb_params.p_data_src = &union_data;
-	ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
 }
 
 static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
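
The change is small but the pattern is worth noting: a mailbox send
that used to be fire-and-forget now at least logs its failure. A
generic, self-contained sketch of the idiom (names hypothetical, not
the ecore API):

/* Hypothetical sketch: never silently drop a command's return code.
 * Even on a void path where the caller cannot recover, log the
 * failure so it is visible, as the patch does with DP_ERR().
 */
#include <stdio.h>

static int send_cmd(void) { return -1; /* pretend the mailbox failed */ }

static void send_stats(void)
{
	int rc = send_cmd();

	if (rc != 0)
		fprintf(stderr, "Failed to send protocol stats, rc = %d\n", rc);
}

int main(void) { send_stats(); return 0; }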

* [PATCH v5 50/62] net/qede/base: zero out MFW mailbox data
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (49 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 49/62] net/qede/base: add return code check Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 51/62] net/qede/base: move code bits Rasesh Mody
                               ` (11 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Zero the whole union data of the Management FW mailbox before copying
the actual union member.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |    4 +-
 drivers/net/qede/base/ecore_mcp.c |  294 +++++++++++++++++++++----------------
 drivers/net/qede/base/ecore_mcp.h |   19 ++-
 3 files changed, 181 insertions(+), 136 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 3191ee4..e584058 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2311,9 +2311,7 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
 			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
 		}
 
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_DONE,
-				   0, &unload_resp, &unload_param);
+		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn,
 				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0ebb5cd..a3a6ca1 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -364,6 +364,7 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct ecore_mcp_mb_params *p_mb_params)
 {
+	union drv_union_data union_data;
 	u32 union_data_addr;
 	enum _ecore_status_t rc;
 
@@ -373,6 +374,15 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
+	if (p_mb_params->data_src_size > sizeof(union_data) ||
+	    p_mb_params->data_dst_size > sizeof(union_data)) {
+		DP_ERR(p_hwfn,
+		       "The provided size is larger than the union data size [src_size %u, dst_size %u, union_data_size %zu]\n",
+		       p_mb_params->data_src_size, p_mb_params->data_dst_size,
+		       sizeof(union_data));
+		return ECORE_INVAL;
+	}
+
 	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
 			  OFFSETOF(struct public_drv_mb, union_data);
 
@@ -383,19 +393,21 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (p_mb_params->p_data_src != OSAL_NULL)
-		ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
-				p_mb_params->p_data_src,
-				sizeof(*p_mb_params->p_data_src));
+	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
+	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
+		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
+			    p_mb_params->data_src_size);
+	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
+			sizeof(union_data));
 
 	rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
 			      p_mb_params->param, &p_mb_params->mcp_resp,
 			      &p_mb_params->mcp_param);
 
-	if (p_mb_params->p_data_dst != OSAL_NULL)
+	if (p_mb_params->p_data_dst != OSAL_NULL &&
+	    p_mb_params->data_dst_size)
 		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
-				  union_data_addr,
-				  sizeof(*p_mb_params->p_data_dst));
+				  union_data_addr, p_mb_params->data_dst_size);
 
 	ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
 
@@ -443,14 +455,13 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 i_txn_size, u32 *i_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = i_buf;
+	mb_params.data_src_size = (u8)i_txn_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -470,13 +481,17 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_txn_size, u32 *o_buf)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = raw_data;
+
+	/* Use the maximal value since the actual one is part of the response */
+	mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN;
+
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -485,7 +500,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_param = mb_params.mcp_param;
 
 	*o_txn_size = *o_mcp_param;
-	OSAL_MEMCPY(o_buf, (u32 *)&union_data.raw_data, *o_txn_size);
+	/* @DPDK */
+	OSAL_MEMCPY(o_buf, raw_data, RTE_MIN(*o_txn_size, MCP_DRV_NVM_BUF_LEN));
 
 	return ECORE_SUCCESS;
 }
@@ -605,25 +621,23 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		     struct ecore_load_req_in_params *p_in_params,
 		     struct ecore_load_req_out_params *p_out_params)
 {
-	union drv_union_data union_data_src, union_data_dst;
 	struct ecore_mcp_mb_params mb_params;
-	struct load_req_stc *p_load_req;
-	struct load_rsp_stc *p_load_rsp;
+	struct load_req_stc load_req;
+	struct load_rsp_stc load_rsp;
 	u32 hsi_ver;
 	enum _ecore_status_t rc;
 
-	p_load_req = &union_data_src.load_req;
-	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
-	p_load_req->drv_ver_0 = p_in_params->drv_ver_0;
-	p_load_req->drv_ver_1 = p_in_params->drv_ver_1;
-	p_load_req->fw_ver = p_in_params->fw_ver;
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_ROLE,
+	OSAL_MEM_ZERO(&load_req, sizeof(load_req));
+	load_req.drv_ver_0 = p_in_params->drv_ver_0;
+	load_req.drv_ver_1 = p_in_params->drv_ver_1;
+	load_req.fw_ver = p_in_params->fw_ver;
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_ROLE,
 			    p_in_params->drv_role);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_LOCK_TO,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
 			    p_in_params->timeout_val);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FORCE,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_FORCE,
 			    p_in_params->force_cmd);
-	ECORE_MFW_SET_FIELD(p_load_req->misc0, LOAD_REQ_FLAGS0,
+	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_FLAGS0,
 			    p_in_params->avoid_eng_reset);
 
 	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
@@ -633,8 +647,10 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
 	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
-	mb_params.p_data_src = &union_data_src;
-	mb_params.p_data_dst = &union_data_dst;
+	mb_params.p_data_src = &load_req;
+	mb_params.data_src_size = sizeof(load_req);
+	mb_params.p_data_dst = &load_rsp;
+	mb_params.data_dst_size = sizeof(load_rsp);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
@@ -647,15 +663,13 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
-			   p_load_req->drv_ver_0, p_load_req->drv_ver_1,
-			   p_load_req->fw_ver, p_load_req->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   load_req.drv_ver_0, load_req.drv_ver_1,
+			   load_req.fw_ver, load_req.misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_ROLE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_LOCK_TO),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
-					       LOAD_REQ_FORCE),
-			   ECORE_MFW_GET_FIELD(p_load_req->misc0,
+			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_FORCE),
+			   ECORE_MFW_GET_FIELD(load_req.misc0,
 					       LOAD_REQ_FLAGS0));
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
@@ -671,28 +685,24 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1 &&
 	    p_out_params->load_code != FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1) {
-		p_load_rsp = &union_data_dst.load_rsp;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
-			   p_load_rsp->drv_ver_0, p_load_rsp->drv_ver_1,
-			   p_load_rsp->fw_ver, p_load_rsp->misc0,
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_ROLE),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					       LOAD_RSP_HSI),
-			   ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
+			   load_rsp.drv_ver_0, load_rsp.drv_ver_1,
+			   load_rsp.fw_ver, load_rsp.misc0,
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
+			   ECORE_MFW_GET_FIELD(load_rsp.misc0,
 					       LOAD_RSP_FLAGS0));
 
-		p_out_params->exist_drv_ver_0 = p_load_rsp->drv_ver_0;
-		p_out_params->exist_drv_ver_1 = p_load_rsp->drv_ver_1;
-		p_out_params->exist_fw_ver = p_load_rsp->fw_ver;
+		p_out_params->exist_drv_ver_0 = load_rsp.drv_ver_0;
+		p_out_params->exist_drv_ver_1 = load_rsp.drv_ver_1;
+		p_out_params->exist_fw_ver = load_rsp.fw_ver;
 		p_out_params->exist_drv_role =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_ROLE);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
 		p_out_params->mfw_hsi_ver =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0, LOAD_RSP_HSI);
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
 		p_out_params->drv_exists =
-			ECORE_MFW_GET_FIELD(p_load_rsp->misc0,
-					    LOAD_RSP_FLAGS0) &
+			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
 			LOAD_RSP_FLAGS0_DRV_EXISTS;
 	}
 
@@ -883,6 +893,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac wol_mac;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE;
+
+	return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+}
+
 static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
@@ -924,7 +946,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
 				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	enum _ecore_status_t rc;
 	int i;
 
@@ -935,8 +956,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
-	OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = vfs_to_ack;
+	mb_params.data_src_size = VF_MAX_STATIC / 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1122,8 +1143,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
-	struct eth_phy_cfg *p_phy_cfg;
+	struct eth_phy_cfg phy_cfg;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 cmd;
 
@@ -1133,30 +1153,30 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Set the shmem configuration according to params */
-	p_phy_cfg = &union_data.drv_phy_cfg;
-	OSAL_MEMSET(p_phy_cfg, 0, sizeof(*p_phy_cfg));
+	OSAL_MEM_ZERO(&phy_cfg, sizeof(phy_cfg));
 	cmd = b_up ? DRV_MSG_CODE_INIT_PHY : DRV_MSG_CODE_LINK_RESET;
 	if (!params->speed.autoneg)
-		p_phy_cfg->speed = params->speed.forced_speed;
-	p_phy_cfg->pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
-	p_phy_cfg->pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
-	p_phy_cfg->pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
-	p_phy_cfg->adv_speed = params->speed.advertised_speeds;
-	p_phy_cfg->loopback_mode = params->loopback_mode;
+		phy_cfg.speed = params->speed.forced_speed;
+	phy_cfg.pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
+	phy_cfg.pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
+	phy_cfg.pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
+	phy_cfg.adv_speed = params->speed.advertised_speeds;
+	phy_cfg.loopback_mode = params->loopback_mode;
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Configuring Link: Speed 0x%08x, Pause 0x%08x,"
 			   " adv_speed 0x%08x, loopback 0x%08x\n",
-			   p_phy_cfg->speed, p_phy_cfg->pause,
-			   p_phy_cfg->adv_speed, p_phy_cfg->loopback_mode);
+			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
+			   phy_cfg.loopback_mode);
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &phy_cfg;
+	mb_params.data_src_size = sizeof(phy_cfg);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
@@ -1235,7 +1255,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	enum ecore_mcp_protocol_type stats_type;
 	union ecore_mcp_protocol_stats stats;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 hsi_param;
 	enum _ecore_status_t rc;
 
@@ -1254,8 +1273,8 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_STATS;
 	mb_params.param = hsi_param;
-	OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &stats;
+	mb_params.data_src_size = sizeof(stats);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
@@ -1353,28 +1372,38 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
 	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
 }
 
+struct ecore_mdump_cmd_params {
+	u32 cmd;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
+	u32 mcp_resp;
+};
+
 static enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		    u32 mdump_cmd, union drv_union_data *p_data_src,
-		    union drv_union_data *p_data_dst, u32 *p_mcp_resp)
+		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_MDUMP_CMD;
-	mb_params.param = mdump_cmd;
-	mb_params.p_data_src = p_data_src;
-	mb_params.p_data_dst = p_data_dst;
+	mb_params.param = p_mdump_cmd_params->cmd;
+	mb_params.p_data_src = p_mdump_cmd_params->p_data_src;
+	mb_params.data_src_size = p_mdump_cmd_params->data_src_size;
+	mb_params.p_data_dst = p_mdump_cmd_params->p_data_dst;
+	mb_params.data_dst_size = p_mdump_cmd_params->data_dst_size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*p_mcp_resp = mb_params.mcp_resp;
-	if (*p_mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
+	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_NOTICE(p_hwfn, false,
 			  "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
-			  mdump_cmd);
+			  p_mdump_cmd_params->cmd);
 		rc = ECORE_INVAL;
 	}
 
@@ -1384,62 +1413,68 @@ ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 static enum _ecore_status_t ecore_mcp_mdump_ack(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_ACK;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_ACK,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 epoch)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	OSAL_MEMCPY(&union_data.raw_data, &epoch, sizeof(epoch));
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_SET_VALUES;
+	mdump_cmd_params.p_data_src = &epoch;
+	mdump_cmd_params.data_src_size = sizeof(epoch);
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_SET_VALUES,
-				   &union_data, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	p_hwfn->p_dev->mdump_en = true;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_TRIGGER,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static enum _ecore_status_t
 ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct mdump_config_stc *p_mdump_config)
 {
-	union drv_union_data union_data;
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_GET_CONFIG,
-				 OSAL_NULL, &union_data, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GET_CONFIG;
+	mdump_cmd_params.p_data_dst = p_mdump_config;
+	mdump_cmd_params.data_dst_size = sizeof(*p_mdump_config);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (mcp_resp == FW_MSG_CODE_UNSUPPORTED)
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The mdump command is not supported by the MFW\n");
 		return ECORE_NOTIMPL;
+	}
 
-	if (mcp_resp != FW_MSG_CODE_OK) {
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
-			  mcp_resp);
+			  mdump_cmd_params.mcp_resp);
 		rc = ECORE_UNKNOWN_ERROR;
 	}
 
-	OSAL_MEMCPY(p_mdump_config, &union_data.mdump_config,
-		    sizeof(*p_mdump_config));
-
 	return rc;
 }
 
@@ -1489,10 +1524,12 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
-	u32 mcp_resp;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_CLEAR_LOGS,
-				   OSAL_NULL, OSAL_NULL, &mcp_resp);
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
@@ -2001,9 +2038,8 @@ enum _ecore_status_t
 ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mcp_drv_version *p_ver)
 {
-	struct drv_version_stc *p_drv_version;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct drv_version_stc drv_version;
 	u32 num_words, i;
 	void *p_name;
 	OSAL_BE32 val;
@@ -2014,19 +2050,20 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return ECORE_SUCCESS;
 #endif
 
-	p_drv_version = &union_data.drv_version;
-	p_drv_version->version = p_ver->version;
+	OSAL_MEM_ZERO(&drv_version, sizeof(drv_version));
+	drv_version.version = p_ver->version;
 	num_words = (MCP_DRV_VER_STR_SIZE - 4) / 4;
 	for (i = 0; i < num_words; i++) {
 		/* The driver name is expected to be in a big-endian format */
 		p_name = &p_ver->name[i * sizeof(u32)];
 		val = OSAL_CPU_TO_BE32(*(u32 *)p_name);
-		*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+		*(u32 *)&drv_version.name[i * sizeof(u32)] = val;
 	}
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_SET_VERSION;
-	mb_params.p_data_src = &union_data;
+	mb_params.p_data_src = &drv_version;
+	mb_params.data_src_size = sizeof(drv_version);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
@@ -2695,28 +2732,25 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
 			       struct ecore_temperature_info *p_temp_info)
 {
 	struct ecore_temperature_sensor *p_temp_sensor;
-	struct temperature_status_stc *p_mfw_temp_info;
+	struct temperature_status_stc mfw_temp_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
 	u32 val;
 	enum _ecore_status_t rc;
 	u8 i;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_dst = &mfw_temp_info;
+	mb_params.data_dst_size = sizeof(mfw_temp_info);
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_mfw_temp_info = &union_data.temp_info;
-
 	OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
-	p_temp_info->num_sensors = OSAL_MIN_T(u32,
-					      p_mfw_temp_info->num_of_sensors,
+	p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
 					      ECORE_MAX_NUM_OF_SENSORS);
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
-		val = p_mfw_temp_info->sensor[i];
+		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
 		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
 						 SENSOR_LOCATION_SHIFT;
@@ -2854,16 +2888,14 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_resc_alloc_in_params *p_in_params,
 			      struct ecore_resc_alloc_out_params *p_out_params)
 {
-	struct resource_info *p_mfw_resc_info;
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	struct resource_info mfw_resc_info;
 	enum _ecore_status_t rc;
 
-	p_mfw_resc_info = &union_data.resource;
-	OSAL_MEM_ZERO(p_mfw_resc_info, sizeof(*p_mfw_resc_info));
+	OSAL_MEM_ZERO(&mfw_resc_info, sizeof(mfw_resc_info));
 
-	p_mfw_resc_info->res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
-	if (p_mfw_resc_info->res_id == RESOURCE_NUM_INVALID) {
+	mfw_resc_info.res_id = ecore_mcp_get_mfw_res_id(p_in_params->res_id);
+	if (mfw_resc_info.res_id == RESOURCE_NUM_INVALID) {
 		DP_ERR(p_hwfn,
 		       "Failed to match resource %d [%s] with the MFW resources\n",
 		       p_in_params->res_id,
@@ -2873,7 +2905,7 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	switch (p_in_params->cmd) {
 	case DRV_MSG_SET_RESOURCE_VALUE_MSG:
-		p_mfw_resc_info->size = p_in_params->resc_max_val;
+		mfw_resc_info.size = p_in_params->resc_max_val;
 		/* Fallthrough */
 	case DRV_MSG_GET_RESOURCE_ALLOC_MSG:
 		break;
@@ -2886,8 +2918,10 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
 	mb_params.param = ECORE_RESC_ALLOC_VERSION;
-	mb_params.p_data_src = &union_data;
-	mb_params.p_data_dst = &union_data;
+	mb_params.p_data_src = &mfw_resc_info;
+	mb_params.data_src_size = sizeof(mfw_resc_info);
+	mb_params.p_data_dst = mb_params.p_data_src;
+	mb_params.data_dst_size = mb_params.data_src_size;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
@@ -2905,11 +2939,11 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	p_out_params->mcp_resp = mb_params.mcp_resp;
 	p_out_params->mcp_param = mb_params.mcp_param;
-	p_out_params->resc_num = p_mfw_resc_info->size;
-	p_out_params->resc_start = p_mfw_resc_info->offset;
-	p_out_params->vf_resc_num = p_mfw_resc_info->vf_size;
-	p_out_params->vf_resc_start = p_mfw_resc_info->vf_offset;
-	p_out_params->flags = p_mfw_resc_info->flags;
+	p_out_params->resc_num = mfw_resc_info.size;
+	p_out_params->resc_start = mfw_resc_info.offset;
+	p_out_params->vf_resc_num = mfw_resc_info.vf_size;
+	p_out_params->vf_resc_start = mfw_resc_info.vf_offset;
+	p_out_params->flags = mfw_resc_info.flags;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index f5dac9d..350d8a2 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -65,8 +65,10 @@ struct ecore_mcp_info {
 struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
-	union drv_union_data *p_data_src;
-	union drv_union_data *p_data_dst;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
 };
@@ -159,7 +161,7 @@ struct ecore_load_req_params {
  *        returns whether this PF is the first on the engine/port or function.
  *
  * @param p_hwfn
- * @param p_pt
+ * @param p_ptt
  * @param p_params
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
@@ -169,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_DONE message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt);
+
+/**
  * @brief Read the MFW mailbox into Current buffer.
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread
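
The core of this patch is that callers now pass an arbitrary buffer
plus its size, and the mailbox layer validates the sizes, zero-fills
the full union, and copies only the member in, so stale bytes from a
previous command never reach the shared memory. A standalone sketch of
that idiom follows; the types are illustrative stand-ins, not the real
HSI structures:

/* Simplified model of ecore_mcp_cmd_and_union() after the patch:
 * validate the sizes, zero the whole union, copy only what the
 * caller provided.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

union mbox_data {           /* stands in for drv_union_data */
	uint8_t  raw[64];
	uint32_t epoch;
};

static int mbox_send(const void *src, size_t src_size,
		     void *dst, size_t dst_size)
{
	union mbox_data u;

	if (src_size > sizeof(u) || dst_size > sizeof(u)) {
		fprintf(stderr,
			"size larger than union data [src %zu, dst %zu]\n",
			src_size, dst_size);
		return -1;
	}

	/* Zero first so unused bytes never carry stale data. */
	memset(&u, 0, sizeof(u));
	if (src && src_size)
		memcpy(&u, src, src_size);

	/* ...write u to shared memory, ring the doorbell, read reply... */

	if (dst && dst_size)
		memcpy(dst, &u, dst_size);
	return 0;
}

int main(void)
{
	uint32_t epoch = 0x1234;

	return mbox_send(&epoch, sizeof(epoch), NULL, 0);
}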

* [PATCH v5 51/62] net/qede/base: move code bits
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (50 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 50/62] net/qede/base: zero out MFW mailbox data Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 52/62] net/qede/base: add PF parameter Rasesh Mody
                               ` (10 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_vf.h |   41 +++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 228bbf0..f471388 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -38,17 +38,15 @@ struct ecore_vf_iov {
 	bool b_pre_fp_hsi;
 };
 
-#ifdef CONFIG_ECORE_SRIOV
-/**
- * @brief hw preparation for VF
- * sends ACQUIRE message
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
 
+enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
+enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u16 coalesce,
+					    struct ecore_queue_cid *p_cid);
 /**
  * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
  *	Coalesce value '0' will omit the configuration.
@@ -56,13 +54,24 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
  *	@param p_hwfn
  *	@param rx_coal - coalesce value in micro second for rx queue
  *	@param tx_coal - coalesce value in micro second for tx queue
- *	@param qid
+ *	@param queue_cid
  *
  **/
 enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 rx_coal, u16 tx_coal,
 					      struct ecore_queue_cid *p_cid);
 
+#ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief hw preparation for VF
+ *	sends ACQUIRE message
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief VF - start the RX Queue by sending a message to the PF
  *
@@ -277,15 +286,5 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
 
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
-
-enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
-
-enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u16 coalesce,
-					    struct ecore_queue_cid *p_cid);
 #endif
 #endif /* __ECORE_VF_H__ */
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 52/62] net/qede/base: add PF parameter
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (51 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 51/62] net/qede/base: move code bits Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 53/62] net/qede/base: allow PMD to control vport and RSS engine ids Rasesh Mody
                               ` (9 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a common RDMA protocol enum (default/RoCE/iWARP) to the RDMA PF
parameters.
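
A minimal usage sketch (hypothetical consumer code, not part of this
patch; it assumes struct ecore_pf_params embeds the RDMA parameters as
'rdma_pf_params'):

	struct ecore_pf_params pf_params;

	OSAL_MEMSET(&pf_params, 0, sizeof(pf_params));
	/* Explicitly request iWARP; ECORE_RDMA_PROTOCOL_DEFAULT presumably
	 * leaves the choice to the base driver.
	 */
	pf_params.rdma_pf_params.rdma_protocol = ECORE_RDMA_PROTOCOL_IWARP;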

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c      |    1 +
 drivers/net/qede/base/ecore_proto_if.h |    7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index aeeabf1..691d638 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -19,6 +19,7 @@
 #include "ecore_hw.h"
 #include "ecore_dev_api.h"
 #include "ecore_sriov.h"
+#include "ecore_mcp.h"
 
 /* Max number of connection types in HW (DQ/CDU etc.) */
 #define MAX_CONN_TYPES		PROTOCOLID_COMMON
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index ed24019..0ac153f 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -63,6 +63,12 @@ struct ecore_iscsi_pf_params {
 	u8		bdq_pbl_num_entries[2];
 };
 
+enum ecore_rdma_protocol {
+	ECORE_RDMA_PROTOCOL_DEFAULT,
+	ECORE_RDMA_PROTOCOL_ROCE,
+	ECORE_RDMA_PROTOCOL_IWARP,
+};
+
 struct ecore_rdma_pf_params {
 	/* Supplied to ECORE during resource allocation (may affect the ILT and
 	 * the doorbell BAR).
@@ -79,6 +85,7 @@ struct ecore_rdma_pf_params {
 
 	/* TCP port number used for the iwarp traffic */
 	u16		iwarp_port;
+	enum ecore_rdma_protocol rdma_protocol;
 };
 
 struct ecore_pf_params {
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 53/62] net/qede/base: allow PMD to control vport and RSS engine ids
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (52 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 52/62] net/qede/base: add PF parameter Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 54/62] net/qede/base: add udp ports in bulletin board message Rasesh Mody
                               ` (8 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Let the PMD control the vport-id and rss-eng-id of a given VF during
its initialization.
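
A minimal sketch of the new fields (hypothetical PF-side caller; 'vf_id'
stands for the VF's relative index):

	struct ecore_iov_vf_init_params params;

	OSAL_MEM_ZERO(&params, sizeof(params));
	params.rel_vf_id = vf_id;
	/* Non-zero ids avoid the vport0/RSS_eng0 "Forgotten?" warnings */
	params.vport_id = vf_id + 1;
	params.rss_eng_id = vf_id + 1;	/* checked only if num_queues > 1 */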

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |   15 ++++-------
 drivers/net/qede/base/ecore_sriov.c   |   46 +++++++++++++++++++++------------
 drivers/net/qede/base/ecore_sriov.h   |    2 +-
 3 files changed, 35 insertions(+), 28 deletions(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index b8dc47b..6a0fc5a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -103,6 +103,11 @@ struct ecore_iov_vf_init_params {
 	 */
 	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+
+	u8 vport_id;
+
+	/* Should be set in case RSS is going to be used for VF */
+	u8 rss_eng_id;
 };
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
@@ -417,16 +422,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 				  u16 *opaque_fid);
 
 /**
- * @brief Get VFs VPORT id.
- *
- * @param p_hwfn
- * @param vfid
- * @param vport id
- */
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vport_id);
-
-/**
  * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
  *        and configures FW/HW to support the configuration.
  *        Setting of pvid 0 would clear the feature.
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 4ffa8d0..20b51c4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -426,8 +426,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		return;
 	}
 
-	p_iov_info->base_vport_id = 1;	/* @@@TBD resource allocation */
-
 	for (idx = 0; idx < p_iov->total_vfs; idx++) {
 		struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
 		u32 concrete;
@@ -456,8 +454,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
 		    (vf->abs_vf_id << 8);
-		/* @@TBD MichalK - add base vport_id of VFs to equation */
-		vf->vport_id = p_iov_info->base_vport_id + idx;
 
 		vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 		vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -1019,6 +1015,34 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	/* Perform sanity checking on the requested vport/rss */
+	if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n",
+			  p_params->rel_vf_id, p_params->vport_id);
+		return ECORE_INVAL;
+	}
+
+	if ((p_params->num_queues > 1) &&
+	    (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) {
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n",
+			  p_params->rel_vf_id, p_params->rss_eng_id);
+		return ECORE_INVAL;
+	}
+
+	/* TODO - remove this once we gain confidence in the change */
+	if (!p_params->vport_id) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses vport0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	if ((!p_params->rss_eng_id) && (p_params->num_queues > 1)) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF[%d] - Unlikely that VF uses RSS_eng0. Forgotten?\n",
+			  p_params->rel_vf_id);
+	}
+	vf->vport_id = p_params->vport_id;
+	vf->rss_eng_id = p_params->rss_eng_id;
+
 	/* Perform sanity checking on the requested queue_id */
 	for (i = 0; i < p_params->num_queues; i++) {
 		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
@@ -2752,7 +2776,7 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 		VFPF_UPDATE_RSS_KEY_FLAG);
 
 	p_rss->rss_enable = p_rss_tlv->rss_enable;
-	p_rss->rss_eng_id = vf->relative_vf_id + 1;
+	p_rss->rss_eng_id = vf->rss_eng_id;
 	p_rss->rss_caps = p_rss_tlv->rss_caps;
 	p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
 	OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
@@ -3974,18 +3998,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 	*opaque_fid = vf_info->opaque_fid;
 }
 
-void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
-				u8 *p_vort_id)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	*p_vort_id = vf_info->vport_id;
-}
-
 void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 					u16 pvid, int vfid)
 {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index d32f931..66e9271 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -111,6 +111,7 @@ struct ecore_vf_info {
 	u16			mtu;
 
 	u8			vport_id;
+	u8			rss_eng_id;
 	u8			relative_vf_id;
 	u8			abs_vf_id;
 #define ECORE_VF_ABS_ID(p_hwfn, p_vf)	(ECORE_PATH_ID(p_hwfn) ? \
@@ -155,7 +156,6 @@ struct ecore_pf_iov {
 	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
 	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
-	u16			base_vport_id;
 
 #ifndef REMOVE_DBG
 	/* This doesn't serve anything functionally, but it makes windows
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 54/62] net/qede/base: add udp ports in bulletin board message
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (53 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 53/62] net/qede/base: allow PMD to control vport and RSS engine ids Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 55/62] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
                               ` (7 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the VXLAN and GENEVE UDP ports to the bulletin board message, so
that the PF can publish the configured tunnel ports to its VFs.
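
On the VF side, the published ports can then be read back from the
bulletin shadow, e.g. (sketch):

	u16 vxlan_port = 0, geneve_port = 0;

	ecore_vf_bulletin_get_udp_ports(p_hwfn, &vxlan_port, &geneve_port);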

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |    2 ++
 drivers/net/qede/base/ecore_sriov.c   |   33 +++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_vf.c      |   12 ++++++++++++
 drivers/net/qede/base/ecore_vf_api.h  |    2 ++
 drivers/net/qede/base/ecore_vfpf_if.h |    5 ++++-
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 6a0fc5a..870c57e 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -716,6 +716,8 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
+				      u16 vxlan_port, u16 geneve_port);
 #endif /* CONFIG_ECORE_SRIOV */
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 20b51c4..532c492 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2253,6 +2253,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	bool b_update_required = false;
 	struct ecore_tunnel_info tunn;
 	u16 tunn_feature_mask = 0;
+	int i;
 
 	mbx->offset = (u8 *)mbx->reply_virt;
 
@@ -2300,11 +2301,20 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 
 	/* If ECORE client is willing to update anything ? */
 	if (b_update_required) {
+		u16 geneve_port;
+
 		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
 						 ECORE_SPQ_MODE_EBLOCK,
 						 OSAL_NULL);
 		if (rc != ECORE_SUCCESS)
 			status = PFVF_STATUS_FAILURE;
+
+		geneve_port = p_tun->geneve_port.port;
+		ecore_for_each_vf(p_hwfn, i) {
+			ecore_iov_bulletin_set_udp_ports(p_hwfn, i,
+							 p_tun->vxlan_port.port,
+							 geneve_port);
+		}
 	}
 
 send_resp:
@@ -4028,6 +4038,29 @@ void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
 }
 
+void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
+				      int vfid, u16 vxlan_port, u16 geneve_port)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info) {
+		DP_NOTICE(p_hwfn->p_dev, true,
+			  "Can not set udp ports, invalid vfid [%d]\n", vfid);
+		return;
+	}
+
+	if (vf_info->b_malicious) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Can not set udp ports to malicious VF [%d]\n",
+			   vfid);
+		return;
+	}
+
+	vf_info->bulletin.p_virt->vxlan_udp_port = vxlan_port;
+	vf_info->bulletin.p_virt->geneve_udp_port = geneve_port;
+}
+
 bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index bf516cc..8ce9340 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1652,6 +1652,18 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 	return true;
 }
 
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port,
+				     u16 *p_geneve_port)
+{
+	struct ecore_bulletin_content *p_bulletin;
+
+	p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+
+	*p_vxlan_port = p_bulletin->vxlan_udp_port;
+	*p_geneve_port = p_bulletin->geneve_udp_port;
+}
+
 bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
 {
 	struct ecore_bulletin_content *bulletin;
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 77b93ff..a6e5f32 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -152,5 +152,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 			     u16 *fw_minor,
 			     u16 *fw_rev,
 			     u16 *fw_eng);
+void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
+				     u16 *p_vxlan_port, u16 *p_geneve_port);
 #endif
 #endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e0b63bf..6618442 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -554,9 +554,12 @@ struct ecore_bulletin_content {
 	u8 pfc_enabled;
 	u8 partner_tx_flow_ctrl_en;
 	u8 partner_rx_flow_ctrl_en;
+
 	u8 partner_adv_pause;
 	u8 sfp_tx_fault;
-	u8 padding4[6];
+	u16 vxlan_udp_port;
+	u16 geneve_udp_port;
+	u8 padding4[2];
 
 	u32 speed;
 	u32 partner_adv_speed;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 55/62] net/qede/base: prevent DMAE transactions during recovery
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (54 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 54/62] net/qede/base: add udp ports in bulletin board message Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 56/62] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
                               ` (6 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent DMA engine (DMAE) transactions during the recovery phase. While
recovery is in progress, such transactions are skipped and reported as
successful so that callers can complete their flows.
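
A minimal sketch of how the guard is armed (assuming, as the new check
suggests, that the recovery flow sets the existing recov_in_prog device
flag before tearing down):

	/* From this point on, ecore_dmae_execute_command() becomes a
	 * no-op that reports success.
	 */
	p_hwfn->p_dev->recov_in_prog = true;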

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_hw.c |   12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 396edc2..2bcc32d 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -773,6 +773,18 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t ecore_status = ECORE_SUCCESS;
 	u32 offset = 0;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
+			   (unsigned long)src_addr, src_type,
+			   (unsigned long)dst_addr, dst_type,
+			   size_in_dwords);
+		/* Return success to let the flow complete successfully
+		 * w/o any error handling.
+		 */
+		return ECORE_SUCCESS;
+	}
+
 	ecore_dmae_opcode(p_hwfn,
 			  (src_type == ECORE_DMAE_ADDRESS_GRC),
 			  (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 56/62] net/qede/base: multi-Txq support on same queue-zone for VFs
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (55 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 55/62] net/qede/base: prevent DMAE transactions during recovery Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 57/62] net/qede/base: prevent race condition during unload Rasesh Mody
                               ` (5 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

A step toward supporting multiple Tx queues on the same queue-zone for
VFs.

This change takes care of:

 - VFs assume a single CID per-queue, where queue X receives CID X.
   Switch to a model similar to that of the PF, i.e., use different
   CIDs for Rx/Tx, and use a mapping to acquire/release those (a usage
   sketch follows this list). Each VF currently has 32 CIDs available
   [for its possible 16 Rx & 16 Tx queues].

 - To retain the same interface for PFs/VFs when initializing queues,
   the base driver has to retain a unique number for each queue that
   would be communicated in some extended TLV [the current TLV
   interface allows the PF to send only the queue-id]. The new TLV
   isn't part of the current change, but the base driver now starts
   adding such unique keys internally to queue_cids. This also forces
   us to start having alloc/setup/free for L2 [we've refrained from
   doing so until now]. The limit is no more than 64 queues per qzone
   [this could be changed if needed, but hopefully no one needs so
   many queues].

 - In IOV, add infrastructure for up to 64 qids per qzone, although
   for the moment '0' is hard-coded for Rx and '1' for Tx [since the
   VF still doesn't communicate via the new TLV which index to
   associate with a given queue in its queue-zone].
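
A minimal sketch of the per-VF CID flow mentioned above (PF-side;
'vfid' is the engine-relative VF index, while ECORE_CXT_PF_CID selects
the PF's own map):

	u32 cid;

	if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
				   &cid, vfid) != ECORE_SUCCESS)
		return ECORE_NORESOURCES;

	/* ... use the CID for one of the VF's queues ... */

	_ecore_cxt_release_cid(p_hwfn, cid, vfid);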

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    4 +
 drivers/net/qede/base/ecore_cxt.c     |  230 +++++++++++++++-----
 drivers/net/qede/base/ecore_cxt.h     |   53 ++++-
 drivers/net/qede/base/ecore_cxt_api.h |   13 --
 drivers/net/qede/base/ecore_dev.c     |   24 +-
 drivers/net/qede/base/ecore_l2.c      |  248 ++++++++++++++++++---
 drivers/net/qede/base/ecore_l2.h      |   46 +++-
 drivers/net/qede/base/ecore_sriov.c   |  387 ++++++++++++++++++++++-----------
 drivers/net/qede/base/ecore_sriov.h   |   17 +-
 drivers/net/qede/base/ecore_vf.c      |    6 +
 drivers/net/qede/base/ecore_vf_api.h  |    9 +
 11 files changed, 794 insertions(+), 243 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 7379b3f..fab8193 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -200,6 +200,7 @@ struct ecore_cxt_mngr;
 struct ecore_dma_mem;
 struct ecore_sb_sp_info;
 struct ecore_ll2_info;
+struct ecore_l2_info;
 struct ecore_igu_info;
 struct ecore_mcp_info;
 struct ecore_dcbx_info;
@@ -598,6 +599,9 @@ struct ecore_hwfn {
 	/* If one of the following is set then EDPM shouldn't be used */
 	u8				dcbx_no_edpm;
 	u8				db_bar_no_edpm;
+
+	/* L2-related */
+	struct ecore_l2_info		*p_l2_info;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 691d638..f7b5672 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -8,6 +8,7 @@
 
 #include "bcm_osal.h"
 #include "reg_addr.h"
+#include "common_hsi.h"
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
 #include "ecore_rt_defs.h"
@@ -101,7 +102,6 @@ struct ecore_tid_seg {
 
 struct ecore_conn_type_cfg {
 	u32 cid_count;
-	u32 cid_start;
 	u32 cids_per_vf;
 	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
 };
@@ -197,6 +197,9 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
+	/* TBD - do we want this allocated to reserve space? */
+	struct ecore_cid_acquired_map
+		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1015,44 +1018,75 @@ ilt_shadow_fail:
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
+
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			OSAL_FREE(p_hwfn->p_dev,
+				  p_mngr->acquired_vf[type][vf].cid_map);
+			p_mngr->acquired_vf[type][vf].max_count = 0;
+			p_mngr->acquired_vf[type][vf].start_cid = 0;
+		}
 	}
 }
 
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+			   u32 cid_start, u32 cid_count,
+			   struct ecore_cid_acquired_map *p_map)
+{
+	u32 size;
+
+	if (!cid_count)
+		return ECORE_SUCCESS;
+
+	size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_count, BITS_PER_MAP_WORD);
+	p_map->cid_map = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
+	if (p_map->cid_map == OSAL_NULL)
+		return ECORE_NOMEM;
+
+	p_map->max_count = cid_count;
+	p_map->start_cid = cid_start;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Type %08x start: %08x count %08x\n",
+		   type, p_map->start_cid, p_map->max_count);
+
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 start_cid = 0;
-	u32 type;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 size;
-
-		if (cid_cnt == 0)
-			continue;
+		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
+		struct ecore_cid_acquired_map *p_map;
 
-		size = MAP_WORD_SIZE * DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD);
-		p_mngr->acquired[type].cid_map = OSAL_ZALLOC(p_hwfn->p_dev,
-							     GFP_KERNEL, size);
-		if (!p_mngr->acquired[type].cid_map)
+		/* Handle PF maps */
+		p_map = &p_mngr->acquired[type];
+		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					       p_cfg->cid_count, p_map))
 			goto cid_map_fail;
 
-		p_mngr->acquired[type].max_count = cid_cnt;
-		p_mngr->acquired[type].start_cid = start_cid;
-
-		p_hwfn->p_cxt_mngr->conn_cfg[type].cid_start = start_cid;
+		/* Handle VF maps */
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			if (ecore_cid_map_alloc_single(p_hwfn, type,
+						       vf_start_cid,
+						       p_cfg->cids_per_vf,
+						       p_map))
+				goto cid_map_fail;
+		}
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
-			   "Type %08x start: %08x count %08x\n",
-			   type, p_mngr->acquired[type].start_cid,
-			   p_mngr->acquired[type].max_count);
-		start_cid += cid_cnt;
+		start_cid += p_cfg->cid_count;
+		vf_start_cid += p_cfg->cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
@@ -1171,18 +1205,34 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
 	int type;
+	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		u32 cid_cnt = p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
-		u32 i;
+		u32 vf;
+
+		p_cfg = &p_mngr->conn_cfg[type];
+		if (p_cfg->cid_count) {
+			p_map = &p_mngr->acquired[type];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 
-		if (cid_cnt == 0)
+		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (i = 0; i < DIV_ROUND_UP(cid_cnt, BITS_PER_MAP_WORD); i++)
-			p_mngr->acquired[type].cid_map[i] = 0;
+		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+			p_map = &p_mngr->acquired_vf[type][vf];
+			len = DIV_ROUND_UP(p_map->max_count,
+					   BITS_PER_MAP_WORD) *
+			      MAP_WORD_SIZE;
+			OSAL_MEM_ZERO(p_map->cid_map, len);
+		}
 	}
 }
 
@@ -1723,93 +1773,150 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
 	ecore_prs_init_pf(p_hwfn);
 }
 
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
-					   enum protocol_type type, u32 *p_cid)
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map;
 	u32 rel_cid;
 
-	if (type >= MAX_CONN_TYPES || !p_mngr->acquired[type].cid_map) {
+	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_mngr->acquired[type].cid_map,
-					   p_mngr->acquired[type].max_count);
+	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
+		return ECORE_INVAL;
+	}
+
+	/* Determine the right map to take this CID from */
+	if (vfid == ECORE_CXT_PF_CID)
+		p_map = &p_mngr->acquired[type];
+	else
+		p_map = &p_mngr->acquired_vf[type][vfid];
 
-	if (rel_cid >= p_mngr->acquired[type].max_count) {
+	if (p_map->cid_map == OSAL_NULL) {
+		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
+		return ECORE_INVAL;
+	}
+
+	rel_cid = OSAL_FIND_FIRST_ZERO_BIT(p_map->cid_map,
+					   p_map->max_count);
+
+	if (rel_cid >= p_map->max_count) {
 		DP_NOTICE(p_hwfn, false, "no CID available for protocol %d\n",
 			  type);
 		return ECORE_NORESOURCES;
 	}
 
-	OSAL_SET_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	OSAL_SET_BIT(rel_cid, p_map->cid_map);
 
-	*p_cid = rel_cid + p_mngr->acquired[type].start_cid;
+	*p_cid = rel_cid + p_map->start_cid;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Acquired cid 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   *p_cid, rel_cid, vfid, type);
 
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid)
+{
+	return _ecore_cxt_acquire_cid(p_hwfn, type, p_cid, ECORE_CXT_PF_CID);
+}
+
 static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
-					u32 cid, enum protocol_type *p_type)
+					u32 cid, u8 vfid,
+					enum protocol_type *p_type,
+					struct ecore_cid_acquired_map **pp_map)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	struct ecore_cid_acquired_map *p_map;
-	enum protocol_type p;
 	u32 rel_cid;
 
 	/* Iterate over protocols and find matching cid range */
-	for (p = 0; p < MAX_CONN_TYPES; p++) {
-		p_map = &p_mngr->acquired[p];
+	for (*p_type = 0; *p_type < MAX_CONN_TYPES; (*p_type)++) {
+		if (vfid == ECORE_CXT_PF_CID)
+			*pp_map = &p_mngr->acquired[*p_type];
+		else
+			*pp_map = &p_mngr->acquired_vf[*p_type][vfid];
 
-		if (!p_map->cid_map)
+		if (!((*pp_map)->cid_map))
 			continue;
-		if (cid >= p_map->start_cid &&
-		    cid < p_map->start_cid + p_map->max_count) {
+		if (cid >= (*pp_map)->start_cid &&
+		    cid < (*pp_map)->start_cid + (*pp_map)->max_count) {
 			break;
 		}
 	}
-	*p_type = p;
-
-	if (p == MAX_CONN_TYPES) {
-		DP_NOTICE(p_hwfn, true, "Invalid CID %d", cid);
-		return false;
+	if (*p_type == MAX_CONN_TYPES) {
+		DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid);
+		goto fail;
 	}
-	rel_cid = cid - p_map->start_cid;
-	if (!OSAL_TEST_BIT(rel_cid, p_map->cid_map)) {
-		DP_NOTICE(p_hwfn, true, "CID %d not acquired", cid);
-		return false;
+
+	rel_cid = cid - (*pp_map)->start_cid;
+	if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) {
+		DP_NOTICE(p_hwfn, true,
+			  "CID %d [vifd %02x] not acquired", cid, vfid);
+		goto fail;
 	}
+
 	return true;
+fail:
+	*p_type = MAX_CONN_TYPES;
+	*pp_map = OSAL_NULL;
+	return false;
 }
 
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
 	u32 rel_cid;
 
+	if (vfid != ECORE_CXT_PF_CID && vfid > COMMON_MAX_NUM_VFS) {
+		DP_NOTICE(p_hwfn, true,
+			  "Trying to return incorrect CID belonging to VF %02x\n",
+			  vfid);
+		return;
+	}
+
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, cid, vfid,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return;
 
-	rel_cid = cid - p_mngr->acquired[type].start_cid;
-	OSAL_CLEAR_BIT(rel_cid, p_mngr->acquired[type].cid_map);
+	rel_cid = cid - p_map->start_cid;
+	OSAL_CLEAR_BIT(rel_cid, p_map->cid_map);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+		   "Released CID 0x%08x [rel. %08x] vfid %02x type %d\n",
+		   cid, rel_cid, vfid, type);
+}
+
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
+{
+	_ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
 }
 
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	u32 conn_cxt_size, hw_p_size, cxts_per_p, line;
 	enum protocol_type type;
 	bool b_acquired;
 
 	/* Test acquired and find matching per-protocol map */
-	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid, &type);
+	b_acquired = ecore_cxt_test_cid_acquired(p_hwfn, p_info->iid,
+						 ECORE_CXT_PF_CID,
+						 &type, &p_map);
 
 	if (!b_acquired)
 		return ECORE_INVAL;
@@ -1865,9 +1972,14 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			struct ecore_eth_pf_params *p_params =
 			    &p_hwfn->pf_params.eth_pf_params;
 
+			/* TODO - we probably want to add VF number to the PF
+			 * params;
+			 * As of now, allocates 16 * 2 per-VF [to retain regular
+			 * functionality].
+			 */
 			ecore_cxt_set_proto_cid_count(p_hwfn,
 				PROTOCOLID_ETH,
-				p_params->num_cons, 1);	/* FIXME VF count... */
+				p_params->num_cons, 32);
 
 			break;
 		}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 5379d7b..1128051 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -130,14 +130,53 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn);
 enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
+#define ECORE_CXT_PF_CID (0xff)
+
+/**
+ * @brief ecore_cxt_release - Release a cid
+ *
+ * @param p_hwfn
+ * @param cid
+ */
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
+
 /**
-* @brief ecore_cxt_release - Release a cid
-*
-* @param p_hwfn
-* @param cid
-*/
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
-			   u32 cid);
+ * @brief ecore_cxt_release - Release a cid belonging to a vf-queue
+ *
+ * @param p_hwfn
+ * @param cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ */
+void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
+			    u32 cid, u8 vfid);
+
+/**
+ * @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					   enum protocol_type type,
+					   u32 *p_cid);
+
+/**
+ * @brief _ecore_cxt_acquire - Acquire a new cid of a specific protocol type
+ *                             for a vf-queue
+ *
+ * @param p_hwfn
+ * @param type
+ * @param p_cid
+ * @param vfid - engine relative index. ECORE_CXT_PF_CID if belongs to PF
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+					    enum protocol_type type,
+					    u32 *p_cid, u8 vfid);
 
 /**
  * @brief ecore_cxt_get_tid_mem_info - function checks if the
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 6a50412..f154e0d 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -26,19 +26,6 @@ struct ecore_tid_mem {
 };
 
 /**
-* @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
-*
-* @param p_hwfn
-* @param type
-* @param p_cid
-*
-* @return enum _ecore_status_t
-*/
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn  *p_hwfn,
-					   enum protocol_type type,
-					   u32 *p_cid);
-
-/**
 * @brief ecoreo_cid_get_cxt_info - Returns the context info for a specific cid
 *
 *
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e584058..2a621f7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -146,8 +146,11 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_free(&p_dev->hwfns[i]);
 		return;
+	}
 
 	OSAL_FREE(p_dev, p_dev->fw_data);
 
@@ -163,6 +166,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
 		ecore_iov_free(p_hwfn);
+		ecore_l2_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
@@ -839,8 +843,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i) {
+			rc = ecore_l2_alloc(&p_dev->hwfns[i]);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		}
 		return rc;
+	}
 
 	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 				     sizeof(*p_dev->fw_data));
@@ -961,6 +971,10 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_l2_alloc(p_hwfn);
+		if (rc != ECORE_SUCCESS)
+			goto alloc_err;
+
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
@@ -999,8 +1013,11 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev))
+	if (IS_VF(p_dev)) {
+		for_each_hwfn(p_dev, i)
+			ecore_l2_setup(&p_dev->hwfns[i]);
 		return;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1018,6 +1035,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
+		ecore_l2_setup(p_hwfn);
 		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
 	}
 }
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 4d26e19..adb5e47 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -29,24 +29,172 @@
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+struct ecore_l2_info {
+	u32 queues;
+	unsigned long **pp_qid_usage;
+
+	/* The lock is meant to synchronize access to the qid usage */
+	osal_mutex_t lock;
+};
+
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_l2_info *p_l2_info;
+	unsigned long **pp_qids;
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return ECORE_SUCCESS;
+
+	p_l2_info = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_l2_info));
+	if (!p_l2_info)
+		return ECORE_NOMEM;
+	p_hwfn->p_l2_info = p_l2_info;
+
+	if (IS_PF(p_hwfn->p_dev)) {
+		p_l2_info->queues = RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
+	} else {
+		u8 rx = 0, tx = 0;
+
+		ecore_vf_get_num_rxqs(p_hwfn, &rx);
+		ecore_vf_get_num_txqs(p_hwfn, &tx);
+
+		p_l2_info->queues = (u32)OSAL_MAX_T(u8, rx, tx);
+	}
+
+	pp_qids = OSAL_VZALLOC(p_hwfn->p_dev,
+			       sizeof(unsigned long *) *
+			       p_l2_info->queues);
+	if (pp_qids == OSAL_NULL)
+		return ECORE_NOMEM;
+	p_l2_info->pp_qid_usage = pp_qids;
+
+	for (i = 0; i < p_l2_info->queues; i++) {
+		pp_qids[i] = OSAL_VZALLOC(p_hwfn->p_dev,
+					  MAX_QUEUES_PER_QZONE / 8);
+		if (pp_qids[i] == OSAL_NULL)
+			return ECORE_NOMEM;
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_MUTEX_ALLOC(p_hwfn, &p_l2_info->lock);
+#endif
+
+	return ECORE_SUCCESS;
+}
+
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn)
+{
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	OSAL_MUTEX_INIT(&p_hwfn->p_l2_info->lock);
+}
+
+void ecore_l2_free(struct ecore_hwfn *p_hwfn)
+{
+	u32 i;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return;
+
+	if (p_hwfn->p_l2_info == OSAL_NULL)
+		return;
+
+	if (p_hwfn->p_l2_info->pp_qid_usage == OSAL_NULL)
+		goto out_l2_info;
+
+	/* Free until hit first uninitialized entry */
+	for (i = 0; i < p_hwfn->p_l2_info->queues; i++) {
+		if (p_hwfn->p_l2_info->pp_qid_usage[i] == OSAL_NULL)
+			break;
+		OSAL_VFREE(p_hwfn->p_dev,
+			   p_hwfn->p_l2_info->pp_qid_usage[i]);
+	}
+
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	/* Lock is last to initialize, if everything else was */
+	if (i == p_hwfn->p_l2_info->queues)
+		OSAL_MUTEX_DEALLOC(&p_hwfn->p_l2_info->lock);
+#endif
+
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info->pp_qid_usage);
+
+out_l2_info:
+	OSAL_VFREE(p_hwfn->p_dev, p_hwfn->p_l2_info);
+	p_hwfn->p_l2_info = OSAL_NULL;
+}
+
+/* TODO - we'll need locking around these... */
+static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	struct ecore_l2_info *p_l2_info = p_hwfn->p_l2_info;
+	u16 queue_id = p_cid->rel.queue_id;
+	bool b_rc = true;
+	u8 first;
+
+	OSAL_MUTEX_ACQUIRE(&p_l2_info->lock);
+
+	if (queue_id > p_l2_info->queues) {
+		DP_NOTICE(p_hwfn, true,
+			  "Requested to increase usage for qzone %04x out of %08x\n",
+			  queue_id, p_l2_info->queues);
+		b_rc = false;
+		goto out;
+	}
+
+	first = (u8)OSAL_FIND_FIRST_ZERO_BIT(p_l2_info->pp_qid_usage[queue_id],
+					     MAX_QUEUES_PER_QZONE);
+	if (first >= MAX_QUEUES_PER_QZONE) {
+		b_rc = false;
+		goto out;
+	}
+
+	OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
+	p_cid->qid_usage_idx = first;
+
+out:
+	OSAL_MUTEX_RELEASE(&p_l2_info->lock);
+	return b_rc;
+}
+
+static void ecore_eth_queue_qid_usage_del(struct ecore_hwfn *p_hwfn,
+					  struct ecore_queue_cid *p_cid)
+{
+	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_l2_info->lock);
+
+	OSAL_CLEAR_BIT(p_cid->qid_usage_idx,
+		       p_hwfn->p_l2_info->pp_qid_usage[p_cid->rel.queue_id]);
+
+	OSAL_MUTEX_RELEASE(&p_hwfn->p_l2_info->lock);
+}
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid)
 {
+	/* For VF-queues, stuff is a bit complicated as:
+	 *  - They always maintain the qid_usage on their own.
+	 *  - In legacy mode, they also maintain their CIDs.
+	 */
+
 	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
-	if (!p_cid->is_vf && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, p_cid->cid);
+	if (IS_PF(p_hwfn->p_dev) && !p_cid->b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, p_cid->cid, p_cid->vfid);
+	if (!p_cid->b_legacy_vf)
+		ecore_eth_queue_qid_usage_del(p_hwfn, p_cid);
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
 /* The internal is only meant to be directly called by PFs initializeing CIDs
  * for their VFs.
  */
-struct ecore_queue_cid *
+static struct ecore_queue_cid *
 _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params)
+			u16 opaque_fid, u32 cid,
+			struct ecore_queue_start_common_params *p_params,
+			struct ecore_queue_cid_vf_params *p_vf_params)
 {
-	bool b_is_same = (p_hwfn->hw_info.opaque_fid == opaque_fid);
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
@@ -56,13 +204,22 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
-	p_cid->vf_qid = vf_qid;
 	p_cid->rel = *p_params;
 	p_cid->p_owner = p_hwfn;
 
+	/* Fill-in bits related to VFs' queues if information was provided */
+	if (p_vf_params != OSAL_NULL) {
+		p_cid->vfid = p_vf_params->vfid;
+		p_cid->vf_qid = p_vf_params->vf_qid;
+		p_cid->b_legacy_vf = p_vf_params->b_legacy;
+	} else {
+		p_cid->vfid = ECORE_QUEUE_CID_PF;
+	}
+
 	/* Don't try calculating the absolute indices for VFs */
 	if (IS_VF(p_hwfn->p_dev)) {
 		p_cid->abs = p_cid->rel;
+
 		goto out;
 	}
 
@@ -82,7 +239,7 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	/* In case of a PF configuring its VF's queues, the stats-id is already
 	 * absolute [since there's a single index that's suitable per-VF].
 	 */
-	if (b_is_same) {
+	if (p_cid->vfid == ECORE_QUEUE_CID_PF) {
 		rc = ecore_fw_vport(p_hwfn, p_cid->rel.stats_id,
 				    &p_cid->abs.stats_id);
 		if (rc != ECORE_SUCCESS)
@@ -95,17 +252,23 @@ _ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
 	p_cid->abs.sb = p_cid->rel.sb;
 	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
 
-	/* This is tricky - we're actually interested in whehter this is a PF
-	 * entry meant for the VF.
-	 */
-	if (!b_is_same)
-		p_cid->is_vf = true;
 out:
+	/* VF-images have provided the qid_usage_idx on their own.
+	 * Otherwise, we need to allocate a unique one.
+	 */
+	if (!p_vf_params) {
+		if (!ecore_eth_queue_qid_usage_add(p_hwfn, p_cid))
+			goto fail;
+	} else {
+		p_cid->qid_usage_idx = p_vf_params->qid_usage_idx;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
+		   "opaque_fid: %04x CID %08x vport %02x [%02x] qzone %04x.%02x [%04x] stats %02x [%02x] SB %04x PI %02x\n",
 		   p_cid->opaque_fid, p_cid->cid,
 		   p_cid->rel.vport_id, p_cid->abs.vport_id,
-		   p_cid->rel.queue_id, p_cid->abs.queue_id,
+		   p_cid->rel.queue_id,	p_cid->qid_usage_idx,
+		   p_cid->abs.queue_id,
 		   p_cid->rel.stats_id, p_cid->abs.stats_id,
 		   p_cid->abs.sb, p_cid->abs.sb_idx);
 
@@ -116,33 +279,56 @@ fail:
 	return OSAL_NULL;
 }
 
-static struct ecore_queue_cid *
-ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-		       u16 opaque_fid,
-		       struct ecore_queue_start_common_params *p_params)
+struct ecore_queue_cid *
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params)
 {
 	struct ecore_queue_cid *p_cid;
+	u8 vfid = ECORE_CXT_PF_CID;
+	bool b_legacy_vf = false;
 	u32 cid = 0;
 
+	/* In case of legacy VFs, The CID can be derived from the additional
+	 * VF parameters - the VF assumes queue X uses CID X, so we can simply
+	 * use the vf_qid for this purpose as well.
+	 */
+	if (p_vf_params) {
+		vfid = p_vf_params->vfid;
+
+		if (p_vf_params->b_legacy) {
+			b_legacy_vf = true;
+			cid = p_vf_params->vf_qid;
+		}
+	}
+
 	/* Get a unique firmware CID for this queue, in case it's a PF.
 	 * VF's don't need a CID as the queue configuration will be done
 	 * by PF.
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
-		if (ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
-					  &cid) != ECORE_SUCCESS) {
+	if (IS_PF(p_hwfn->p_dev) && !b_legacy_vf) {
+		if (_ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+					   &cid, vfid) != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
 			return OSAL_NULL;
 		}
 	}
 
-	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid, 0, p_params);
-	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev))
-		ecore_cxt_release_cid(p_hwfn, cid);
+	p_cid = _ecore_eth_queue_to_cid(p_hwfn, opaque_fid, cid,
+					p_params, p_vf_params);
+	if ((p_cid == OSAL_NULL) && IS_PF(p_hwfn->p_dev) && !b_legacy_vf)
+		_ecore_cxt_release_cid(p_hwfn, cid, vfid);
 
 	return p_cid;
 }
 
+static struct ecore_queue_cid *
+ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+			  struct ecore_queue_start_common_params *p_params)
+{
+	return ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params, OSAL_NULL);
+}
+
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 			 struct ecore_sp_vport_start_params *p_params)
@@ -741,7 +927,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
-	if (p_cid->is_vf) {
+	if (p_cid->vfid != ECORE_QUEUE_CID_PF) {
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
@@ -793,7 +979,7 @@ ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 
 	/* Allocate a CID for the queue */
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_NOMEM;
 
@@ -905,9 +1091,11 @@ ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 	/* Cleaning the queue requires the completion to arrive there.
 	 * In addition, VFs require the answer to come as eqe to PF.
 	 */
-	p_ramrod->complete_cqe_flg = (!p_cid->is_vf && !b_eq_completion_only) ||
+	p_ramrod->complete_cqe_flg = ((p_cid->vfid == ECORE_QUEUE_CID_PF) &&
+				      !b_eq_completion_only) ||
 				     b_cqe_completion;
-	p_ramrod->complete_event_flg = p_cid->is_vf || b_eq_completion_only;
+	p_ramrod->complete_event_flg = (p_cid->vfid != ECORE_QUEUE_CID_PF) ||
+				       b_eq_completion_only;
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
@@ -1007,7 +1195,7 @@ ecore_eth_tx_queue_start(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
 	struct ecore_queue_cid *p_cid;
 	enum _ecore_status_t rc;
 
-	p_cid = ecore_eth_queue_to_cid(p_hwfn, opaque_fid, p_params);
+	p_cid = ecore_eth_queue_to_cid_pf(p_hwfn, opaque_fid, p_params);
 	if (p_cid == OSAL_NULL)
 		return ECORE_INVAL;
 
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 4b0ccb4..3f86eac 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -15,6 +15,34 @@
 #include "ecore_spq.h"
 #include "ecore_l2_api.h"
 
+#define MAX_QUEUES_PER_QZONE	(sizeof(unsigned long) * 8)
+#define ECORE_QUEUE_CID_PF	(0xff)
+
+/* Additional parameters required for initialization of the queue_cid
+ * and are relevant only for a PF initializing one for its VFs.
+ */
+struct ecore_queue_cid_vf_params {
+	/* Should match the VF's relative index */
+	u8 vfid;
+
+	/* 0-based queue index. Should reflect the relative qzone the
+	 * VF thinks is associated with it [in its range].
+	 */
+	u8 vf_qid;
+
+	/* Indicates a VF is legacy, making it differ in several things:
+	 *  - Producers would be placed in a different place.
+	 *  - Makes assumptions regarding the CIDs.
+	 */
+	bool b_legacy;
+
+	/* For VFs, this index arrives via TLV to differentiate between
+	 * different queues opened on the same qzone, and is passed
+	 * [where the PF would have allocated it internally for its own].
+	 */
+	u8 qid_usage_idx;
+};
+
 struct ecore_queue_cid {
 	/* 'Relative' is a relative term ;-). Usually the indices [not counting
 	 * SBs] would be PF-relative, but there are some cases where that isn't
@@ -31,22 +59,32 @@ struct ecore_queue_cid {
 	 * Notice this is relevant on the *PF* queue-cid of its VF's queues,
 	 * and not on the VF itself.
 	 */
-	bool is_vf;
+	u8 vfid;
 	u8 vf_qid;
 
+	/* We need an additional index to differentiate between queues opened
+	 * for the same queue-zone, as VFs would have to communicate the info
+	 * to the PF [otherwise the PF has no way to differentiate].
+	 */
+	u8 qid_usage_idx;
+
 	/* Legacy VFs might have Rx producer located elsewhere */
 	bool b_legacy_vf;
 
 	struct ecore_hwfn *p_owner;
 };
 
+enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn);
+void ecore_l2_setup(struct ecore_hwfn *p_hwfn);
+void ecore_l2_free(struct ecore_hwfn *p_hwfn);
+
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid);
 
 struct ecore_queue_cid *
-_ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn,
-			u16 opaque_fid, u32 cid, u8 vf_qid,
-			struct ecore_queue_start_common_params *p_params);
+ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
+		       struct ecore_queue_start_common_params *p_params,
+		       struct ecore_queue_cid_vf_params *p_vf_params);
 
 enum _ecore_status_t
 ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 532c492..39d3e88 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -192,28 +192,90 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 	return vf;
 }
 
+static struct ecore_queue_cid *
+ecore_iov_get_vf_rx_queue_cid(struct ecore_hwfn *p_hwfn,
+			      struct ecore_vf_info *p_vf,
+			      struct ecore_vf_queue *p_queue)
+{
+	int i;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		if (p_queue->cids[i].p_cid &&
+		    !p_queue->cids[i].b_is_tx)
+			return p_queue->cids[i].p_cid;
+	}
+
+	return OSAL_NULL;
+}
+
+enum ecore_iov_validate_q_mode {
+	ECORE_IOV_VALIDATE_Q_NA,
+	ECORE_IOV_VALIDATE_Q_ENABLE,
+	ECORE_IOV_VALIDATE_Q_DISABLE,
+};
+
+static bool ecore_iov_validate_queue_mode(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf,
+					  u16 qid,
+					  enum ecore_iov_validate_q_mode mode,
+					  bool b_is_tx)
+{
+	int i;
+
+	if (mode == ECORE_IOV_VALIDATE_Q_NA)
+		return true;
+
+	for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+		struct ecore_vf_queue_cid *p_qcid;
+
+		p_qcid = &p_vf->vf_queues[qid].cids[i];
+
+		if (p_qcid->p_cid == OSAL_NULL)
+			continue;
+
+		if (p_qcid->b_is_tx != b_is_tx)
+			continue;
+
+		/* Found. It's enabled. */
+		return (mode == ECORE_IOV_VALIDATE_Q_ENABLE);
+	}
+
+	/* In case we haven't found any valid cid, then it's disabled */
+	return (mode == ECORE_IOV_VALIDATE_Q_DISABLE);
+}
+
 static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 rx_qid)
+				   u16 rx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (rx_qid >= p_vf->num_rxqs)
+	if (rx_qid >= p_vf->num_rxqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Rx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
-	return rx_qid < p_vf->num_rxqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, rx_qid,
+					     mode, false);
 }
 
 static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 				   struct ecore_vf_info *p_vf,
-				   u16 tx_qid)
+				   u16 tx_qid,
+				   enum ecore_iov_validate_q_mode mode)
 {
-	if (tx_qid >= p_vf->num_txqs)
+	if (tx_qid >= p_vf->num_txqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[0x%02x] - can't touch Tx queue[%04x];"
 			   " Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
-	return tx_qid < p_vf->num_txqs;
+		return false;
+	}
+
+	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, tx_qid,
+					     mode, true);
 }
 
 static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
@@ -234,13 +296,16 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
+/* Is there at least 1 queue open? */
 static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 					  struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_rx_cid)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  false))
 			return true;
 
 	return false;
@@ -251,8 +316,10 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 {
 	u8 i;
 
-	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (p_vf->vf_queues[i].p_tx_cid)
+	for (i = 0; i < p_vf->num_txqs; i++)
+		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+						  ECORE_IOV_VALIDATE_Q_ENABLE,
+						  true))
 			return true;
 
 	return false;
@@ -1095,19 +1162,15 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	vf->num_txqs = num_of_vf_available_chains;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
-		struct ecore_vf_q_info *p_queue = &vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[i];
 
 		p_queue->fw_rx_qid = p_params->req_rx_queue[i];
 		p_queue->fw_tx_qid = p_params->req_tx_queue[i];
 
-		/* CIDs are per-VF, so no problem having them 0-based. */
-		p_queue->fw_cid = i;
-
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]  CID %04x\n",
+			   "VF[%d] - Q[%d] SB %04x, qid [Rx %04x Tx %04x]\n",
 			   vf->relative_vf_id, i, vf->igu_sbs[i],
-			   p_queue->fw_rx_qid, p_queue->fw_tx_qid,
-			   p_queue->fw_cid);
+			   p_queue->fw_rx_qid, p_queue->fw_tx_qid);
 	}
 
 	/* Update the link configuration in bulletin.
@@ -1443,7 +1506,7 @@ struct ecore_public_vf_info
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
-	u32 i;
+	u32 i, j;
 	p_vf->vf_bulletin = 0;
 	p_vf->vport_instance = 0;
 	p_vf->configured_features = 0;
@@ -1455,18 +1518,15 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 	p_vf->num_active_rxqs = 0;
 
 	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-		struct ecore_vf_q_info *p_queue = &p_vf->vf_queues[i];
+		struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 
-		if (p_queue->p_rx_cid) {
-			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_rx_cid);
-			p_queue->p_rx_cid = OSAL_NULL;
-		}
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (!p_queue->cids[j].p_cid)
+				continue;
 
-		if (p_queue->p_tx_cid) {
 			ecore_eth_queue_cid_release(p_hwfn,
-						    p_queue->p_tx_cid);
-			p_queue->p_tx_cid = OSAL_NULL;
+						    p_queue->cids[j].p_cid);
+			p_queue->cids[j].p_cid = OSAL_NULL;
 		}
 	}
 
@@ -1481,7 +1541,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 					struct vf_pf_resc_request *p_req,
 					struct pf_vf_resc *p_resp)
 {
-	int i;
+	u8 i;
 
 	/* Queue related information */
 	p_resp->num_rxqs = p_vf->num_rxqs;
@@ -1502,7 +1562,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_resp->num_rxqs; i++) {
 		ecore_fw_l2_queue(p_hwfn, p_vf->vf_queues[i].fw_rx_qid,
 				  (u16 *)&p_resp->hw_qid[i]);
-		p_resp->cid[i] = p_vf->vf_queues[i].fw_cid;
+		p_resp->cid[i] = i;
 	}
 
 	/* Filter related information */
@@ -1905,9 +1965,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		/* Update all the Rx queues */
 		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
-			struct ecore_queue_cid *p_cid;
+			struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
+			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
-			p_cid = p_vf->vf_queues[i].p_rx_cid;
+			/* There can be at most 1 Rx queue on qzone. Find it */
+			p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, p_vf,
+							      p_queue);
 			if (p_cid == OSAL_NULL)
 				continue;
 
@@ -2113,19 +2176,32 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
+	struct ecore_queue_cid *p_cid;
 	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid,
+				    ECORE_IOV_VALIDATE_Q_DISABLE) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* Legacy VFs made assumptions on the CID their queues connected to,
+	 * assuming queue X used CID X.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->rx_qid];
 
@@ -2136,39 +2212,42 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_rx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->rx_qid,
-						    &params);
-	if (p_queue->p_rx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '0' for Rx.
+	 */
+	qid_usage_idx = 0;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->rx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
-	else
+	if (!b_legacy_vf)
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
 		       0);
-	p_queue->p_rx_cid->b_legacy_vf = b_legacy_vf;
 
-
-	rc = ecore_eth_rxq_start_ramrod(p_hwfn,
-					p_queue->p_rx_cid,
+	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
 					req->bd_max_bytes,
 					req->rxq_addr,
 					req->cqe_pbl_addr,
 					req->cqe_pbl_size);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn, p_queue->p_rx_cid);
-		p_queue->p_rx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = false;
 		status = PFVF_STATUS_SUCCESS;
 		vf->num_active_rxqs++;
 	}
@@ -2331,6 +2410,7 @@ send_resp:
 static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    struct ecore_vf_info *p_vf,
+					    u32 cid,
 					    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
@@ -2359,12 +2439,8 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
-	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
-		u16 qid = mbx->req_virt->start_txq.tx_qid;
-
-		p_tlv->offset = DB_ADDR_VF(p_vf->vf_queues[qid].fw_cid,
-					   DQ_DEMS_LEGACY);
-	}
+	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy)
+		p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
 }
@@ -2374,20 +2450,34 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_vf_info *vf)
 {
 	struct ecore_queue_start_common_params params;
+	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
-	struct ecore_vf_q_info *p_queue;
+	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
+	struct ecore_queue_cid *p_cid;
+	bool b_legacy_vf = false;
+	u8 qid_usage_idx;
+	u32 cid = 0;
 	enum _ecore_status_t rc;
 	u16 pq;
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	req = &mbx->req_virt->start_txq;
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
+	if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
+	/* In case this is a legacy VF - need to know to use the right cids.
+	 * TODO - need to validate that there was no official release post
+	 * the current legacy scheme that still made that assumption.
+	 */
+	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		b_legacy_vf = true;
+
 	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->tx_qid];
 
@@ -2397,29 +2487,42 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	p_queue->p_tx_cid = _ecore_eth_queue_to_cid(p_hwfn,
-						    vf->opaque_fid,
-						    p_queue->fw_cid,
-						    (u8)req->tx_qid,
-						    &params);
-	if (p_queue->p_tx_cid == OSAL_NULL)
+	/* TODO - set qid_usage_idx according to extended TLV. For now, use
+	 * '1' for Tx.
+	 */
+	qid_usage_idx = 1;
+
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
+	vf_params.vfid = vf->relative_vf_id;
+	vf_params.vf_qid = (u8)req->tx_qid;
+	vf_params.b_legacy = b_legacy_vf;
+	vf_params.qid_usage_idx = qid_usage_idx;
+
+	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
+				       &params, &vf_params);
+	if (p_cid == OSAL_NULL)
 		goto out;
 
 	pq = ecore_get_cm_pq_idx_vf(p_hwfn,
 				    vf->relative_vf_id);
-	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_queue->p_tx_cid,
+	rc = ecore_eth_txq_start_ramrod(p_hwfn, p_cid,
 					req->pbl_addr, req->pbl_size, pq);
 	if (rc != ECORE_SUCCESS) {
 		status = PFVF_STATUS_FAILURE;
-		ecore_eth_queue_cid_release(p_hwfn,
-					    p_queue->p_tx_cid);
-		p_queue->p_tx_cid = OSAL_NULL;
+		ecore_eth_queue_cid_release(p_hwfn, p_cid);
 	} else {
 		status = PFVF_STATUS_SUCCESS;
+		p_queue->cids[qid_usage_idx].p_cid = p_cid;
+		p_queue->cids[qid_usage_idx].b_is_tx = true;
+		cid = p_cid->cid;
 	}
 
 out:
-	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf, status);
+	ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf,
+					cid, status);
 }
 
 static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -2428,26 +2531,38 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   u8 num_rxqs,
 						   bool cqe_completion)
 {
-	struct ecore_vf_q_info *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int qid;
+	int qid, i;
 
+	/* TODO - improve validation [wrap around] */
 	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
 		return ECORE_INVAL;
 
 	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		p_queue = &vf->vf_queues[qid];
-
-		if (!p_queue->p_rx_cid)
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+		struct ecore_queue_cid **pp_cid = OSAL_NULL;
+
+		/* There can be at most a single Rx per qzone. Find it */
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid &&
+			    !p_queue->cids[i].b_is_tx) {
+				pp_cid = &p_queue->cids[i].p_cid;
+				break;
+			}
+		}
+		if (pp_cid == OSAL_NULL) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "Ignoring VF[%02x] request of closing Rx queue %04x - closed\n",
+				   vf->relative_vf_id, qid);
 			continue;
+		}
 
-		rc = ecore_eth_rx_queue_stop(p_hwfn,
-					     p_queue->p_rx_cid,
+		rc = ecore_eth_rx_queue_stop(p_hwfn, *pp_cid,
 					     false, cqe_completion);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		vf->vf_queues[qid].p_rx_cid = OSAL_NULL;
+		*pp_cid = OSAL_NULL;
 		vf->num_active_rxqs--;
 	}
 
@@ -2459,24 +2574,33 @@ static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   u16 txq_id, u8 num_txqs)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_vf_q_info *p_queue;
-	int qid;
+	struct ecore_vf_queue *p_queue;
+	int qid, j;
 
-	if (txq_id + num_txqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+	if (!ecore_iov_validate_txq(p_hwfn, vf, txq_id,
+				    ECORE_IOV_VALIDATE_Q_NA) ||
+	    !ecore_iov_validate_txq(p_hwfn, vf, txq_id + num_txqs,
+				    ECORE_IOV_VALIDATE_Q_NA))
 		return ECORE_INVAL;
 
 	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
 		p_queue = &vf->vf_queues[qid];
-		if (!p_queue->p_tx_cid)
-			continue;
+		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
+			if (p_queue->cids[j].p_cid == OSAL_NULL)
+				continue;
 
-		rc = ecore_eth_tx_queue_stop(p_hwfn,
-					     p_queue->p_tx_cid);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+			if (!p_queue->cids[j].b_is_tx)
+				continue;
+
+			rc = ecore_eth_tx_queue_stop(p_hwfn,
+						     p_queue->cids[j].p_cid);
+			if (rc != ECORE_SUCCESS)
+				return rc;
 
-		p_queue->p_tx_cid = OSAL_NULL;
+			p_queue->cids[j].p_cid = OSAL_NULL;
+		}
 	}
+
 	return rc;
 }
 
@@ -2538,33 +2662,32 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
-	u16 qid;
 	enum _ecore_status_t rc;
-	u8 i;
+	u16 i;
 
 	req = &mbx->req_virt->update_rxq;
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
-	/* Validaute inputs */
-	if (req->num_rxqs + req->rx_qid > ECORE_MAX_VF_CHAINS_PER_PF ||
-	    !ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid)) {
-		DP_INFO(p_hwfn, "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
-			vf->relative_vf_id, req->rx_qid, req->num_rxqs);
-		goto out;
+	/* Validate inputs */
+	for (i = req->rx_qid; i < req->rx_qid + req->num_rxqs; i++) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, i,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
+				   vf->relative_vf_id, req->rx_qid,
+				   req->num_rxqs);
+			goto out;
+		}
 	}
 
 	for (i = 0; i < req->num_rxqs; i++) {
-		qid = req->rx_qid + i;
-
-		if (!vf->vf_queues[qid].p_rx_cid) {
-			DP_INFO(p_hwfn,
-				"VF[%d] rx_qid = %d isn`t active!\n",
-				vf->relative_vf_id, qid);
-			goto out;
-		}
+		struct ecore_vf_queue *p_queue;
+		u16 qid = req->rx_qid + i;
 
-		handlers[i] = vf->vf_queues[qid].p_rx_cid;
+		p_queue = &vf->vf_queues[qid];
+		handlers[i] = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+							    p_queue);
 	}
 
 	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
@@ -2796,8 +2919,11 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 				(1 << p_rss_tlv->rss_table_size_log));
 
 	for (i = 0; i < table_size; i++) {
+		struct ecore_queue_cid *p_cid;
+
 		q_idx = p_rss_tlv->rss_ind_table[i];
-		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx)) {
+		if (!ecore_iov_validate_rxq(p_hwfn, vf, q_idx,
+					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d]: Omitting RSS due to wrong queue %04x\n",
 				   vf->relative_vf_id, q_idx);
@@ -2805,15 +2931,9 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 
-		if (!vf->vf_queues[q_idx].p_rx_cid) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Omitting RSS due to inactive queue %08x\n",
-				   vf->relative_vf_id, q_idx);
-			b_reject = true;
-			goto out;
-		}
-
-		p_rss->rss_ind_table[i] = vf->vf_queues[q_idx].p_rx_cid;
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[q_idx]);
+		p_rss->rss_ind_table[i] = p_cid;
 	}
 
 	p_data->rss_params = p_rss;
@@ -3272,22 +3392,26 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	struct ecore_queue_cid *p_cid;
 	u16 rx_coal, tx_coal;
-	u16  qid;
+	u16 qid;
+	int i;
 
 	req = &mbx->req_virt->update_coalesce;
 
 	rx_coal = req->rx_coal;
 	tx_coal = req->tx_coal;
 	qid = req->qid;
-	p_cid = vf->vf_queues[qid].p_rx_cid;
 
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    rx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
 	}
 
-	if (!ecore_iov_validate_txq(p_hwfn, vf, qid)) {
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    tx_coal) {
 		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
 		       vf->abs_vf_id, qid);
 		goto out;
@@ -3296,7 +3420,11 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+
 	if (rx_coal) {
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[qid]);
+
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -3305,13 +3433,28 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 	}
+
+	/* TODO - in future, it might be possible to pass this in a per-cid
+	 * granularity. For now, do this for all Tx queues.
+	 */
 	if (tx_coal) {
-		rc =  ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
-		if (rc != ECORE_SUCCESS) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Unable to set tx queue = %d coalesce\n",
-				   vf->abs_vf_id, vf->vf_queues[qid].fw_tx_qid);
-			goto out;
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid == OSAL_NULL)
+				continue;
+
+			if (!p_queue->cids[i].b_is_tx)
+				continue;
+
+			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
+						    p_queue->cids[i].p_cid);
+			if (rc != ECORE_SUCCESS) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+					   "VF[%d]: Unable to set tx queue coalesce\n",
+					   vf->abs_vf_id);
+				goto out;
+			}
 		}
 	}
 
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 66e9271..3c2f58b 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -13,6 +13,7 @@
 #include "ecore_vfpf_if.h"
 #include "ecore_iov_api.h"
 #include "ecore_hsi_common.h"
+#include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
 	(E4_MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
@@ -62,12 +63,18 @@ struct ecore_iov_vf_mbx {
 					 */
 };
 
-struct ecore_vf_q_info {
+struct ecore_vf_queue_cid {
+	bool b_is_tx;
+	struct ecore_queue_cid *p_cid;
+};
+
+/* Describes a qzone associated with the VF */
+struct ecore_vf_queue {
+	/* Input from upper-layer, mapping relative queue to queue-zone */
 	u16 fw_rx_qid;
-	struct ecore_queue_cid *p_rx_cid;
 	u16 fw_tx_qid;
-	struct ecore_queue_cid *p_tx_cid;
-	u8 fw_cid;
+
+	struct ecore_vf_queue_cid cids[MAX_QUEUES_PER_QZONE];
 };
 
 enum vf_state {
@@ -127,7 +134,7 @@ struct ecore_vf_info {
 	u8			num_mac_filters;
 	u8			num_vlan_filters;
 
-	struct ecore_vf_q_info	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_vf_queue	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
 	/* TODO - Only windows is using it - should be removed */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 8ce9340..ac72681 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1582,6 +1582,12 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
 	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
 }
 
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs)
+{
+	*num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs;
+}
+
 void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
 {
 	OSAL_MEMCPY(port_mac,
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index a6e5f32..be3a326 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -61,6 +61,15 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
 			   u8 *num_rxqs);
 
 /**
+ * @brief Get number of Tx queues allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_txqs - allocated TX queues
+ */
+void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_txqs);
+
+/**
  * @brief Get port mac address for VF
  *
  * @param p_hwfn
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 57/62] net/qede/base: prevent race condition during unload
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (56 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 56/62] net/qede/base: multi-Txq support on same queue-zone for VFs Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 58/62] net/qede/base: semantic changes Rasesh Mody
                               ` (4 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Merge hw_stop and hw_reset into one function.
Prevent a race condition between MFW attentions and the pf stop command
during the unload flow, which causes an ASSERT.
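
In condensed form, the per-hwfn stop sequence after this change is
(a sketch using the names from the diff below; error handling and the
VF path are elided, so this is not the verbatim driver code):

	if (!p_dev->recov_in_prog)
		rc = ecore_mcp_unload_req(p_hwfn, p_ptt); /* UNLOAD_REQ */

	OSAL_DPC_SYNC(p_hwfn);	/* no MFW attentions past this point */

	rc = ecore_sp_pf_stop(p_hwfn);	/* cannot race a dcbx pf update */

	/* ... close NIG/BRB gates, clean IGU, verify QM usage counters ... */

	if (!p_dev->recov_in_prog)
		rc = ecore_mcp_unload_done(p_hwfn, p_ptt); /* UNLOAD_DONE */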

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    1 +
 drivers/net/qede/base/ecore_dev.c     |  175 ++++++++++++++++-----------------
 drivers/net/qede/base/ecore_dev_api.h |    9 --
 drivers/net/qede/base/ecore_mcp.c     |   12 +++
 drivers/net/qede/base/ecore_mcp.h     |   11 +++
 drivers/net/qede/base/ecore_spq.c     |    3 +
 drivers/net/qede/qede_main.c          |   18 +---
 7 files changed, 116 insertions(+), 113 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 052a0cf..32c9b25 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -168,6 +168,7 @@ typedef pthread_mutex_t osal_mutex_t;
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
 #define OSAL_DPC_INIT(dpc, hwfn) nothing
 #define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2a621f7..d8e4ca2 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2050,7 +2050,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		if (mfw_rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed sending LOAD_DONE command\n");
+				  "Failed sending a LOAD_DONE command\n");
 			return mfw_rc;
 		}
 
@@ -2139,32 +2139,77 @@ void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
 	}
 }
 
+static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u32 addr, u32 expected_val)
+{
+	u32 val = ecore_rd(p_hwfn, p_ptt, addr);
+
+	if (val != expected_val) {
+		DP_NOTICE(p_hwfn, true,
+			  "Value at address 0x%08x is 0x%08x while the expected value is 0x%08x\n",
+			  addr, val, expected_val);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS, t_rc;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	enum _ecore_status_t rc, rc2 = ECORE_SUCCESS;
 	int j;
 
 	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		p_hwfn = &p_dev->hwfns[j];
+		p_ptt = p_hwfn->p_main_ptt;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Stopping hw/fw\n");
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_int_cleanup(p_hwfn);
+			rc = ecore_vf_pf_reset(p_hwfn);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "ecore_vf_pf_reset failed. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
 			continue;
 		}
 
 		/* mark the hw as uninitialized... */
 		p_hwfn->hw_init_done = false;
 
+		/* Send unload command to MCP */
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_req(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_REQ command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+
+		OSAL_DPC_SYNC(p_hwfn);
+
+		/* After this point no MFW attentions are expected, e.g. prevent
+		 * race between pf stop and dcbx pf update.
+		 */
+
 		rc = ecore_sp_pf_stop(p_hwfn);
-		if (rc)
+		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
-				  "Failed to close PF against FW. Continue to stop HW to prevent illegal host access by the device\n");
+				  "Failed to close PF against FW [rc = %d]. Continue to stop HW to prevent illegal host access by the device.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 
 		/* perform debug action after PF stop was sent */
-		OSAL_AFTER_PF_STOP((void *)p_hwfn->p_dev, p_hwfn->my_id);
+		OSAL_AFTER_PF_STOP((void *)p_dev, p_hwfn->my_id);
 
 		/* close NIG to BRB gate */
 		ecore_wr(p_hwfn, p_ptt,
@@ -2191,20 +2236,48 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, true);
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
-	}
+
+		if (!p_dev->recov_in_prog) {
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_TX, 0);
+			ecore_verify_reg_val(p_hwfn, p_ptt,
+					     QM_REG_USG_CNT_PF_OTHER, 0);
+			/* @@@TBD - assert on incorrect xCFC values (10.b) */
+		}
+
+		/* Disable PF in HW blocks */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
+		ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
+
+		if (!p_dev->recov_in_prog) {
+			rc = ecore_mcp_unload_done(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, true,
+					  "Failed sending a UNLOAD_DONE command. rc = %d.\n",
+					  rc);
+				rc2 = ECORE_UNKNOWN_ERROR;
+			}
+		}
+	} /* hwfn loop */
 
 	if (IS_PF(p_dev)) {
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+		p_ptt = ECORE_LEADING_HWFN(p_dev)->p_main_ptt;
+
 		/* Disable DMAE in PXP - in CMT, this should only be done for
 		 * first hw-function, and only after all transactions have
 		 * stopped for all active hw-functions.
 		 */
-		t_rc = ecore_change_pci_hwfn(&p_dev->hwfns[0],
-					     p_dev->hwfns[0].p_main_ptt, false);
-		if (t_rc != ECORE_SUCCESS)
-			rc = t_rc;
+		rc = ecore_change_pci_hwfn(p_hwfn, p_ptt, false);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "ecore_change_pci_hwfn failed. rc = %d.\n",
+				  rc);
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
 	}
 
-	return rc;
+	return rc2;
 }
 
 void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
@@ -2265,82 +2338,6 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
 }
 
-static enum _ecore_status_t ecore_reg_assert(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt, u32 reg,
-					     bool expected)
-{
-	u32 assert_val = ecore_rd(p_hwfn, p_ptt, reg);
-
-	if (assert_val != expected) {
-		DP_NOTICE(p_hwfn, true, "Value at address 0x%08x != 0x%08x\n",
-			  reg, expected);
-		return ECORE_UNKNOWN_ERROR;
-	}
-
-	return 0;
-}
-
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 unload_resp, unload_param;
-	int i;
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
-		if (IS_VF(p_dev)) {
-			rc = ecore_vf_pf_reset(p_hwfn);
-			if (rc)
-				return rc;
-			continue;
-		}
-
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Resetting hw/fw\n");
-
-		/* Check for incorrect states */
-		if (!p_dev->recov_in_prog) {
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_TX, 0);
-			ecore_reg_assert(p_hwfn, p_hwfn->p_main_ptt,
-					 QM_REG_USG_CNT_PF_OTHER, 0);
-			/* @@@TBD - assert on incorrect xCFC values (10.b) */
-		}
-
-		/* Disable PF in HW blocks */
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, DORQ_REG_PF_DB_ENABLE, 0);
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, QM_REG_PF_EN, 0);
-
-		if (p_dev->recov_in_prog) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-				   "Recovery is in progress -> skip sending unload_req/done\n");
-			break;
-		}
-
-		/* Send unload command to MCP */
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_UNLOAD_REQ,
-				   DRV_MB_PARAM_UNLOAD_WOL_MCP,
-				   &unload_resp, &unload_param);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, true,
-				  "ecore_hw_reset: UNLOAD_REQ failed\n");
-			/* @@TBD - what to do? for now, assume ENG. */
-			unload_resp = FW_MSG_CODE_DRV_UNLOAD_ENGINE;
-		}
-
-		rc = ecore_mcp_unload_done(p_hwfn, p_hwfn->p_main_ptt);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn,
-				  true, "ecore_hw_reset: UNLOAD_DONE failed\n");
-			/* @@@TBD - Should it really ASSERT here ? */
-			return rc;
-		}
-	}
-
-	return rc;
-}
-
 /* Free hwfn memory and resources acquired in hw_hwfn_prepare */
 static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index ce764d2..e64a768 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -151,15 +151,6 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev);
  */
 void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
 
-/**
- * @brief ecore_hw_reset -
- *
- * @param p_dev
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
-
 enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_SUCCESS,
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index a3a6ca1..a834ac7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -893,6 +893,18 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	u32 wol_param, mcp_resp, mcp_param;
+
+	/* @DPDK */
+	wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+
+	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+			     &mcp_resp, &mcp_param);
+}
+
 enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *p_ptt)
 {
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 350d8a2..37d1835 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -171,6 +171,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 					struct ecore_load_req_params *p_params);
 
 /**
+ * @brief Sends a UNLOAD_REQ message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt);
+
+/**
  * @brief Sends a UNLOAD_DONE message to the MFW
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 016de74..3c1d05b 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -190,6 +190,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	p_cxt = cxt_info.p_cxt;
 
+	/* @@@TBD we zero the context until we have ilt_reset implemented. */
+	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
+
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 326e56f..74856c5 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -636,19 +636,6 @@ static int qed_nic_stop(struct ecore_dev *edev)
 	return rc;
 }
 
-static int qed_nic_reset(struct ecore_dev *edev)
-{
-	int rc;
-
-	rc = ecore_hw_reset(edev);
-	if (rc)
-		return rc;
-
-	ecore_resc_free(edev);
-
-	return 0;
-}
-
 static int qed_slowpath_stop(struct ecore_dev *edev)
 {
 #ifdef CONFIG_QED_SRIOV
@@ -667,10 +654,11 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
 		if (IS_QED_ETH_IF(edev))
 			qed_sriov_disable(edev, true);
 #endif
-		qed_nic_stop(edev);
 	}
 
-	qed_nic_reset(edev);
+	qed_nic_stop(edev);
+
+	ecore_resc_free(edev);
 	qed_stop_iov_task(edev);
 
 	return 0;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 58/62] net/qede/base: semantic changes
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (57 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 57/62] net/qede/base: prevent race condition during unload Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:36             ` [PATCH v5 59/62] net/qede/base: add support for arfs mode Rasesh Mody
                               ` (3 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Make APIs static and apply other semantic changes.
This is a step toward a clean 'make C=1' run with GCC 4.8.3.
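
The static-conversion pattern is uniform: a helper referenced only from
its own file loses its header prototype and gains internal linkage, e.g.
(condensed from the ecore_cxt hunks below):

	/* ecore_cxt.h: the extern prototype is dropped ... */
	-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
	-		       struct ecore_qm_iids *iids);

	/* ecore_cxt.c: ... and the definition becomes file-local */
	+static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
	+			      struct ecore_qm_iids *iids)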

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c  |    5 +-
 drivers/net/qede/base/ecore_cxt.h  |   11 ----
 drivers/net/qede/base/ecore_dcbx.c |    2 +-
 drivers/net/qede/base/ecore_dev.c  |  109 ++++++++++++++++++------------------
 drivers/net/qede/base/ecore_l2.c   |   12 ++--
 drivers/net/qede/base/ecore_vf.c   |    2 +-
 6 files changed, 66 insertions(+), 75 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index f7b5672..1a2a701 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -327,7 +327,8 @@ static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
 	}
 }
 
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids)
+static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_qm_iids *iids)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_tid_seg *segs;
@@ -1945,7 +1946,7 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
+static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
 
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1128051..e678118 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -35,17 +35,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type);
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
 
-#ifndef LINUX_REMOVE
-/**
- * @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
- *
- * @param p_hwfn
- * @param iids [out], a structure holding all the counters
- */
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
-		       struct ecore_qm_iids *iids);
-#endif
-
 /**
  * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
  *
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 5ecc6b0..4f1b069 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -114,7 +114,7 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-void
+static void
 ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		      struct ecore_hwfn *p_hwfn,
 		      bool enable, u8 prio, u8 tc,
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d8e4ca2..865103c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -759,8 +759,8 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	enum _ecore_status_t rc;
 	bool b_rc;
+	enum _ecore_status_t rc;
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
@@ -1507,54 +1507,6 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
-					       struct ecore_ptt *p_ptt,
-					       int hw_mode)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
-			    hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
-
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (p_hwfn->p_dev->num_hwfns > 1) {
-			/* Activate OPTE in CMT */
-			u32 val;
-
-			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
-			val |= 0x10;
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
-			ecore_wr(p_hwfn, p_ptt,
-				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
-				 0x55555555);
-		}
-
-		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
-	}
-#endif
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -1623,7 +1575,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
-	int rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
 	db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
@@ -1678,8 +1630,9 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
 	}
 
-	cond = ((rc) && (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
-	    (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
+	cond = ((rc != ECORE_SUCCESS) &&
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
+		(roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
 	if (cond || p_hwfn->dcbx_no_edpm) {
 		/* Either EDPM is disabled from user configuration, or it is
 		 * disabled via DCBx, or it is not mandatory and we failed to
@@ -1703,7 +1656,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		"disabled" : "enabled");
 
 	/* Check return codes from above calls */
-	if (rc) {
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
 		       "Failed to allocate enough DPIs\n");
 		return ECORE_NORESOURCES;
@@ -1721,6 +1674,54 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       int hw_mode)
+{
+	enum _ecore_status_t rc	= ECORE_SUCCESS;
+
+	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
+			    hw_mode);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
+		return ECORE_SUCCESS;
+
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+		if (ECORE_IS_AH(p_hwfn->p_dev))
+			return ECORE_SUCCESS;
+		else if (ECORE_IS_BB(p_hwfn->p_dev))
+			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
+	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (p_hwfn->p_dev->num_hwfns > 1) {
+			/* Activate OPTE in CMT */
+			u32 val;
+
+			val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV);
+			val |= 0x10;
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV, val);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISCS_REG_CLK_100G_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt, MISC_REG_OPTE_MODE, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH, 1);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL, 0x55555555);
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_ENG_CLS_ENG_ID_TBL + 0x4,
+				 0x55555555);
+		}
+
+		ecore_emul_link_init(p_hwfn, p_ptt);
+	} else {
+		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
+	}
+#endif
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
 		 struct ecore_ptt *p_ptt,
@@ -1922,8 +1923,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 {
 	struct ecore_load_req_params load_req_params;
 	u32 load_code, param, drv_mb_param;
-	struct ecore_hwfn *p_hwfn;
 	bool b_default_mtu = true;
+	struct ecore_hwfn *p_hwfn;
 	enum _ecore_status_t rc = ECORE_SUCCESS, mfw_rc;
 	int i;
 
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index adb5e47..c4af895 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -946,17 +946,17 @@ ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_producer)
+			    void OSAL_IOMEM * *pp_prod)
 {
 	u32 init_prod_val = 0;
 
-	*pp_producer = (u8 OSAL_IOMEM *)
-		       p_hwfn->regview +
-		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
+	*pp_prod = (u8 OSAL_IOMEM *)
+		    p_hwfn->regview +
+		    GTT_BAR0_MAP_REG_MSDM_RAM +
+		    MSTORM_ETH_PF_PRODS_OFFSET(p_cid->abs.queue_id);
 
 	/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
-	__internal_ram_wr(p_hwfn, *pp_producer, sizeof(u32),
+	__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
 			  (u32 *)(&init_prod_val));
 
 	return ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index ac72681..f4d331c 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1285,8 +1285,8 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
 	struct vfpf_first_tlv *req;
-	enum _ecore_status_t rc;
 	u32 size;
+	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 59/62] net/qede/base: add support for arfs mode
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (58 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 58/62] net/qede/base: semantic changes Rasesh Mody
@ 2017-03-29 20:36             ` Rasesh Mody
  2017-03-29 20:37             ` [PATCH v5 60/62] net/qede: add ntuple and flow director filter support Rasesh Mody
                               ` (2 subsequent siblings)
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:36 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add base driver APIs to enable accelerated RFS [aRFS] mode and a ramrod
to configure RFS and ntuple filters.
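
Taken together, the new APIs would be driven roughly as follows. This is
a hypothetical caller sketch; pkt_hdr_phys, pkt_hdr_len, qid and vport_id
are placeholders assumed to be supplied by the caller:

	struct ecore_arfs_config_params params;

	OSAL_MEMSET(&params, 0, sizeof(params));
	params.tcp = true;	/* at least one of tcp/udp ... */
	params.ipv4 = true;	/* ... and one of ipv4/ipv6 must be set */
	params.arfs_enable = true;
	ecore_arfs_mode_configure(p_hwfn, p_ptt, &params);

	/* Steer flows matching the DMA-mapped 4-tuple header to 'qid' */
	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_ptt,
					       OSAL_NULL /* EBLOCK mode */,
					       pkt_hdr_phys, pkt_hdr_len,
					       qid, vport_id, true);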

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 drivers/net/qede/base/ecore_cxt.c           |   49 +++++++++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.c |   31 ++++++++++
 drivers/net/qede/base/ecore_init_fw_funcs.h |   11 ++++
 drivers/net/qede/base/ecore_l2.c            |   84 +++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_l2.h            |   27 +++++++++
 drivers/net/qede/base/ecore_l2_api.h        |   22 +++++++
 drivers/net/qede/base/ecore_proto_if.h      |    6 ++
 drivers/net/qede/base/ecore_spq.h           |    1 +
 8 files changed, 218 insertions(+), 13 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 1a2a701..80ad102 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -192,9 +192,6 @@ struct ecore_cxt_mngr {
 	 */
 	u32 vf_count;
 
-	/* total number of SRQ's for this hwfn */
-	u32				srq_count;
-
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
 	/* TBD - do we want this allocated to reserve space? */
@@ -213,10 +210,29 @@ struct ecore_cxt_mngr {
 	u32 t2_num_pages;
 	u64 first_free;
 	u64 last_free;
+
+	/* The infrastructure originally was very generic and context/task
+	 * oriented - per connection-type we would set how many of those
+	 * are needed, and later, when determining how much memory a given
+	 * block needs, we'd iterate over all the relevant connection-types.
+	 * But since then we've had some additional resources, some of which
+	 * require memory which is independent of the general context/task
+	 * scheme. We add those here explicitly per-feature.
+	 */
+
+	/* total number of SRQ's for this hwfn */
+	u32				srq_count;
+
+	/* Maximal number of L2 steering filters */
+	u32				arfs_count;
+
+	/* TODO - VF arfs filters ? */
 };
 
 /* check if resources/configuration is required according to protocol type */
-static OSAL_INLINE bool src_proto(enum protocol_type type)
+static OSAL_INLINE bool src_proto(struct ecore_hwfn *p_hwfn,
+				  enum protocol_type type)
 {
 	return type == PROTOCOLID_TOE;
 }
@@ -254,18 +270,22 @@ struct ecore_src_iids {
 	u32 per_vf_cids;
 };
 
-static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
+static OSAL_INLINE void ecore_cxt_src_iids(struct ecore_hwfn *p_hwfn,
+					   struct ecore_cxt_mngr *p_mngr,
 					   struct ecore_src_iids *iids)
 {
 	u32 i;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		if (!src_proto(i))
+		if (!src_proto(p_hwfn, i))
 			continue;
 
 		iids->pf_cids += p_mngr->conn_cfg[i].cid_count;
 		iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf;
 	}
+
+	/* Add L2 filtering filters in addition */
+	iids->pf_cids += p_mngr->arfs_count;
 }
 
 /* counts the iids for the Timers block configuration */
@@ -686,7 +706,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	/* SRC */
 	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 
 	/* Both the PF and VFs searcher connections are stored in the per PF
 	 * database. Thus sum the PF searcher cids and all the VFs searcher
@@ -800,7 +820,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	if (!p_src->active)
 		return ECORE_SUCCESS;
 
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	total_size = conn_num * sizeof(struct src_ent);
 
@@ -1619,7 +1639,7 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 	struct ecore_src_iids src_iids;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
-	ecore_cxt_src_iids(p_mngr, &src_iids);
+	ecore_cxt_src_iids(p_hwfn, p_mngr, &src_iids);
 	conn_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
 	if (!conn_num)
 		return;
@@ -1635,6 +1655,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 			 p_hwfn->p_cxt_mngr->first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
 			 p_hwfn->p_cxt_mngr->last_free);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+		   "Configured SEARCHER for 0x%08x connections\n",
+		   conn_num);
 }
 
 /* Timers PF */
@@ -1978,10 +2001,10 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 			 * As of now, allocates 16 * 2 per-VF [to retain regular
 			 * functionality].
 			 */
-			ecore_cxt_set_proto_cid_count(p_hwfn,
-				PROTOCOLID_ETH,
-				p_params->num_cons, 32);
-
+			ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+						      p_params->num_cons, 32);
+			p_hwfn->p_cxt_mngr->arfs_count =
+						p_params->num_arfs_filters;
 			break;
 		}
 	default:
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index af0deaa..004ab35 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1497,6 +1497,37 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+	struct ecore_ptt *p_ptt,
+	u16 pf_id)
+{
+	union gft_cam_line_union cam_line;
+	struct gft_ram_line ram_line;
+	u32 i, *ram_line_ptr;
+
+	ram_line_ptr = (u32 *)&ram_line;
+
+	/* Stop using gft logic, disable gft search */
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, 0x0);
+
+	/* Clean ram & cam for next rfs/gft session*/
+
+	/* Zero camline */
+	OSAL_MEMSET(&cam_line, 0, sizeof(cam_line));
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
+					cam_line.cam_line_mapped.camline);
+
+	/* Zero ramline */
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
+	/* Each iteration write to reg */
+	for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+			 RAM_LINE_SIZE * pf_id +
+			 i * REG_SIZE, *(ram_line_ptr + i));
+}
+
 
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 2d1ab7c..4da3fc2 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -351,6 +351,17 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
+ * @brief ecore_set_rfs_mode_disable - Disable and configure HW for RFS
+ *
+ * @param p_hwfn -   HW device data
+ * @param p_ptt -   ptt window used for writing the registers.
+ * @param pf_id - pf on which to disable RFS.
+ */
+void ecore_set_rfs_mode_disable(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u16 pf_id);
+
+/**
 * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
 *
 * @param p_ptt	- ptt window used for writing the registers.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index c4af895..4ab8fd5 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2018,3 +2018,87 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 	else
 		_ecore_get_vport_stats(p_dev, p_dev->reset_stats);
 }
+
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params)
+{
+	if (p_cfg_params->arfs_enable) {
+		ecore_set_rfs_mode_enable(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+					  p_cfg_params->tcp,
+					  p_cfg_params->udp,
+					  p_cfg_params->ipv4,
+					  p_cfg_params->ipv6);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "tcp = %s, udp = %s, ipv4 = %s, ipv6 =%s\n",
+			   p_cfg_params->tcp ? "Enable" : "Disable",
+			   p_cfg_params->udp ? "Enable" : "Disable",
+			   p_cfg_params->ipv4 ? "Enable" : "Disable",
+			   p_cfg_params->ipv6 ? "Enable" : "Disable");
+	} else {
+		ecore_set_rfs_mode_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode : %s\n",
+		   p_cfg_params->arfs_enable ? "Enable" : "Disable");
+}
+
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add)
+{
+	struct rx_update_gft_filter_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	u16 abs_rx_q_id = 0;
+	u8 abs_vport_id = 0;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	rc = ecore_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+	if (p_cb) {
+		init_data.comp_mode = ECORE_SPQ_MODE_CB;
+		init_data.p_comp_data = p_cb;
+	} else {
+		init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+	}
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_GFT_UPDATE_FILTER,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_update_gft;
+
+	DMA_REGPAIR_LE(p_ramrod->pkt_hdr_addr, p_addr);
+	p_ramrod->pkt_hdr_length = OSAL_CPU_TO_LE16(length);
+	p_ramrod->rx_qid_or_action_icid = OSAL_CPU_TO_LE16(abs_rx_q_id);
+	p_ramrod->vport_id = abs_vport_id;
+	p_ramrod->filter_type = RFS_FILTER_TYPE;
+	p_ramrod->filter_action = b_is_add ? GFT_ADD_FILTER
+					   : GFT_DELETE_FILTER;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "V[%0x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n",
+		   abs_vport_id, abs_rx_q_id,
+		   b_is_add ? "Adding" : "Removing",
+		   (unsigned long)p_addr, length);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 3f86eac..7fe4cbc 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -129,4 +129,31 @@ ecore_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
 
+/**
+ * @brief - ecore_configure_rfs_ntuple_filter
+ *
+ * This ramrod should be used to add or remove an arfs hw filter.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_cb		Used for ECORE_SPQ_MODE_CB, where the client would
+ *			initialize it with a cookie and callback function
+ *			address; when not using this mode the client must
+ *			pass NULL.
+ * @param p_addr	p_addr is the actual packet header that needs to be
+ *			filtered. It has to be mapped for IO reads prior to
+ *			calling this [contains the 4-tuple: src ip, dest ip,
+ *			src port, dest port].
+ * @param length	length of the p_addr header, up to and including the
+ *			transport header.
+ * @param qid		receive queue to which matching packets will be
+ *			directed.
+ * @param vport_id
+ * @param b_is_add	flag to add or remove the filter.
+ *
+ */
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 5a7db76..d09f3c4 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -141,6 +141,14 @@ struct ecore_filter_accept_flags {
 #define ECORE_ACCEPT_BCAST		0x20
 };
 
+struct ecore_arfs_config_params {
+	bool tcp;
+	bool udp;
+	bool ipv4;
+	bool ipv6;
+	bool arfs_enable;	/* Enable or disable arfs mode */
+};
+
 /* Add / remove / move / remove-all unicast MAC-VLAN filters.
  * FW will assert in the following cases, so driver should take care...:
  * 1. Adding a filter to a full table.
@@ -414,4 +422,18 @@ void ecore_get_vport_stats(struct ecore_dev *p_dev,
 
 void ecore_reset_vport_stats(struct ecore_dev *p_dev);
 
+/**
+ *@brief ecore_arfs_mode_configure -
+ *
+ *Enable or disable RFS mode. To enable it, at least one of tcp or udp must
+ *be true, and at least one of ipv4 or ipv6 must be true.
+ *
+ *@param p_hwfn
+ *@param p_ptt
+ *@param p_cfg_params		arfs mode configuration parameters.
+ *
+ */
+void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt,
+			       struct ecore_arfs_config_params *p_cfg_params);
 #endif
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 0ac153f..226e3d2 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -21,6 +21,12 @@ struct ecore_eth_pf_params {
 	 * to update_pf_params routine invoked before slowpath start
 	 */
 	u16	num_cons;
+
+	/* To enable arfs, a positive number needs to be set prior to HW-init
+	 * [as filters require allocated searcher ILT memory].
+	 * This will set the maximal number of configured steering-filters.
+	 */
+	u32	num_arfs_filters;
 };
 
 /* Most of the the parameters below are described in the FW iSCSI / TCP HSI */
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index e2468b7..e530f83 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -26,6 +26,7 @@ union ramrod_data {
 	struct tx_queue_stop_ramrod_data		tx_queue_stop;
 	struct vport_start_ramrod_data			vport_start;
 	struct vport_stop_ramrod_data			vport_stop;
+	struct rx_update_gft_filter_data		rx_update_gft;
 	struct vport_update_ramrod_data			vport_update;
 	struct core_rx_start_ramrod_data		core_rx_queue_start;
 	struct core_rx_stop_ramrod_data			core_rx_queue_stop;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 60/62] net/qede: add ntuple and flow director filter support
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (59 preceding siblings ...)
  2017-03-29 20:36             ` [PATCH v5 59/62] net/qede/base: add support for arfs mode Rasesh Mody
@ 2017-03-29 20:37             ` Rasesh Mody
  2017-03-29 20:37             ` [PATCH v5 61/62] net/qede: add LRO/TSO offloads support Rasesh Mody
  2017-03-29 20:37             ` [PATCH v5 62/62] net/qede: update PMD version to 2.4.0.1 Rasesh Mody
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:37 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

Add limited support for ntuple filter and flow director configuration.
The filtering is based on the 4-tuple: src-ip, dst-ip, src-port and
dst-port. The mask fields, tcp_flags, flex masks, priority fields,
Rx queue drop, etc. are not supported.
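
From an application, this surfaces through the generic filter-ctrl path.
A minimal sketch of adding a 4-tuple rule with the ethdev flow director
structures of this release (addresses, ports and the queue number are
illustrative only):

	struct rte_eth_fdir_filter f;
	int ret;

	memset(&f, 0, sizeof(f));
	f.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
	f.input.flow.udp4_flow.ip.src_ip = rte_cpu_to_be_32(IPv4(192, 168, 1, 1));
	f.input.flow.udp4_flow.ip.dst_ip = rte_cpu_to_be_32(IPv4(192, 168, 1, 2));
	f.input.flow.udp4_flow.src_port = rte_cpu_to_be_16(1024);
	f.input.flow.udp4_flow.dst_port = rte_cpu_to_be_16(2048);
	f.action.rx_queue = 2;	/* steer matching packets to Rx queue 2 */

	ret = rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
				      RTE_ETH_FILTER_ADD, &f);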

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini |    2 +
 doc/guides/nics/qede.rst          |    1 +
 drivers/net/qede/Makefile         |    1 +
 drivers/net/qede/base/ecore.h     |    3 +
 drivers/net/qede/qede_ethdev.c    |   16 +-
 drivers/net/qede/qede_ethdev.h    |   39 +++
 drivers/net/qede/qede_fdir.c      |  487 +++++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_main.c      |   23 +-
 8 files changed, 563 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/qede/qede_fdir.c

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 8858e5d..b688914 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -34,3 +34,5 @@ Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
 Usage doc            = Y
+N-tuple filter       = Y
+Flow director        = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 36b26b3..df0aaec 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -60,6 +60,7 @@ Supported Features
 - Multiprocess aware
 - Scatter-Gather
 - VXLAN tunneling offload
+- N-tuple filter and flow director (limited support)
 
 Non-supported Features
 ----------------------
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index d989536..da7968f 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -99,5 +99,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_eth_if.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_fdir.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index fab8193..31470b6 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -602,6 +602,9 @@ struct ecore_hwfn {
 
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
+
+	/* @DPDK */
+	struct ecore_ptt		*p_arfs_ptt;
 };
 
 #ifndef __EXTRACT__LINUX__
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index bd190d0..22b528d 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -924,6 +924,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		return -EINVAL;
 	}
 
+	/* Flow director mode check */
+	rc = qede_check_fdir_support(eth_dev);
+	if (rc) {
+		qdev->ops->vport_stop(edev, 0);
+		qede_dealloc_fp_resc(eth_dev);
+		return -EINVAL;
+	}
+	SLIST_INIT(&qdev->fdir_info.fdir_list_head);
+
 	SLIST_INIT(&qdev->vlan_list_head);
 
 	/* Add primary mac for PF */
@@ -1124,6 +1133,8 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
+	qede_fdir_dealloc_resc(eth_dev);
+
 	/* dev_stop() shall cleanup fp resources in hw but without releasing
 	 * dma memories and sw structures so that dev_start() can be called
 	 * by the app without reconfiguration. However, in dev_close() we
@@ -1962,11 +1973,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
 		}
 		break;
 	case RTE_ETH_FILTER_FDIR:
+		return qede_fdir_filter_conf(eth_dev, filter_op, arg);
+	case RTE_ETH_FILTER_NTUPLE:
+		return qede_ntuple_filter_conf(eth_dev, filter_op, arg);
 	case RTE_ETH_FILTER_MACVLAN:
 	case RTE_ETH_FILTER_ETHERTYPE:
 	case RTE_ETH_FILTER_FLEXIBLE:
 	case RTE_ETH_FILTER_SYN:
-	case RTE_ETH_FILTER_NTUPLE:
 	case RTE_ETH_FILTER_HASH:
 	case RTE_ETH_FILTER_L2_TUNNEL:
 	case RTE_ETH_FILTER_MAX:
@@ -2057,6 +2070,7 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 
 	memset(&pf_params, 0, sizeof(struct ecore_pf_params));
 	pf_params.eth_pf_params.num_cons = QEDE_PF_NUM_CONNS;
+	pf_params.eth_pf_params.num_arfs_filters = QEDE_RFS_MAX_FLTR;
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index be54f31..8342b99 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -34,6 +34,8 @@
 #include "base/nvm_cfg.h"
 #include "base/ecore_iov_api.h"
 #include "base/ecore_sp_commands.h"
+#include "base/ecore_l2.h"
+#include "base/ecore_dev_api.h"
 
 #include "qede_logs.h"
 #include "qede_if.h"
@@ -131,6 +133,9 @@ extern char fw_file[];
 /* Number of PF connections - 32 RX + 32 TX */
 #define QEDE_PF_NUM_CONNS		(64)
 
+/* Maximum number of flowdir filters */
+#define QEDE_RFS_MAX_FLTR		(256)
+
 /* Port/function states */
 enum qede_dev_state {
 	QEDE_DEV_INIT, /* Init the chip and Slowpath */
@@ -156,6 +161,21 @@ struct qede_ucast_entry {
 	SLIST_ENTRY(qede_ucast_entry) list;
 };
 
+struct qede_fdir_entry {
+	uint32_t soft_id; /* unused for now */
+	uint16_t pkt_len; /* actual packet length to match */
+	uint16_t rx_queue; /* queue to be steered to */
+	const struct rte_memzone *mz; /* mz used to hold L2 frame */
+	SLIST_ENTRY(qede_fdir_entry) list;
+};
+
+struct qede_fdir_info {
+	struct ecore_arfs_config_params arfs;
+	uint16_t filter_count;
+	SLIST_HEAD(fdir_list_head, qede_fdir_entry) fdir_list_head;
+};
+
+
 /*
  *  Structure to store private data for each port.
  */
@@ -190,6 +210,7 @@ struct qede_dev {
 	bool handle_hw_err;
 	uint16_t num_tunn_filters;
 	uint16_t vxlan_filter_type;
+	struct qede_fdir_info fdir_info;
 	char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 };
 
@@ -208,6 +229,11 @@ static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
 static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags);
 
+static uint16_t qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+					struct rte_eth_fdir_filter *fdir,
+					void *buff,
+					struct ecore_arfs_config_params *param);
+
 /* Non-static functions */
 void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
 
@@ -215,4 +241,17 @@ int qed_fill_eth_dev_info(struct ecore_dev *edev,
 				 struct qed_dev_eth_info *info);
 int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
 
+int qede_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type type,
+			 enum rte_filter_op op, void *arg);
+
+int qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+			  enum rte_filter_op filter_op, void *arg);
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op, void *arg);
+
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev);
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev);
+
 #endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_fdir.c b/drivers/net/qede/qede_fdir.c
new file mode 100644
index 0000000..f0dc73a
--- /dev/null
+++ b/drivers/net/qede/qede_fdir.c
@@ -0,0 +1,487 @@
+/*
+ * Copyright (c) 2017 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#include <rte_udp.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_errno.h>
+
+#include "qede_ethdev.h"
+
+#define IP_VERSION				(0x40)
+#define IP_HDRLEN				(0x5)
+#define QEDE_FDIR_IP_DEFAULT_VERSION_IHL	(IP_VERSION | IP_HDRLEN)
+#define QEDE_FDIR_TCP_DEFAULT_DATAOFF		(0x50)
+#define QEDE_FDIR_IPV4_DEF_TTL			(64)
+
+/* Sum of length of header types of L2, L3, L4.
+ * L2 : ether_hdr + vlan_hdr + vxlan_hdr
+ * L3 : ipv6_hdr
+ * L4 : tcp_hdr
+ */
+#define QEDE_MAX_FDIR_PKT_LEN			(86)
+
+#ifndef IPV6_ADDR_LEN
+#define IPV6_ADDR_LEN				(16)
+#endif
+
+#define QEDE_VALID_FLOW(flow_type) \
+	((flow_type) == RTE_ETH_FLOW_FRAG_IPV4		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP	|| \
+	(flow_type) == RTE_ETH_FLOW_FRAG_IPV6		|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP	|| \
+	(flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP)
+
+/* Note: Flowdir support is only partial.
+ * For example, drop_queue, FDIR masks and flex_conf are not supported.
+ * Parameters like pballoc/status fields are irrelevant here.
+ */
+int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+
+	/* check FDIR modes */
+	switch (fdir->mode) {
+	case RTE_FDIR_MODE_NONE:
+		qdev->fdir_info.arfs.arfs_enable = false;
+		DP_INFO(edev, "flowdir is disabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT:
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			qdev->fdir_info.arfs.arfs_enable = false;
+			return -ENOTSUP;
+		}
+		qdev->fdir_info.arfs.arfs_enable = true;
+		DP_INFO(edev, "flowdir is enabled\n");
+	break;
+	case RTE_FDIR_MODE_PERFECT_TUNNEL:
+	case RTE_FDIR_MODE_SIGNATURE:
+	case RTE_FDIR_MODE_PERFECT_MAC_VLAN:
+		DP_ERR(edev, "Unsupported flowdir mode %d\n", fdir->mode);
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct qede_fdir_entry *tmp = NULL;
+
+	while (!SLIST_EMPTY(&qdev->fdir_info.fdir_list_head)) {
+		tmp = SLIST_FIRST(&qdev->fdir_info.fdir_list_head);
+		if (tmp->mz)
+			rte_memzone_free(tmp->mz);
+		SLIST_REMOVE_HEAD(&qdev->fdir_info.fdir_list_head, list);
+		rte_free(tmp);
+	}
+}
+
+static int
+qede_config_cmn_fdir_filter(struct rte_eth_dev *eth_dev,
+			    struct rte_eth_fdir_filter *fdir_filter,
+			    bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	char mz_name[RTE_MEMZONE_NAMESIZE] = {0};
+	struct qede_fdir_entry *tmp = NULL;
+	struct qede_fdir_entry *fdir;
+	const struct rte_memzone *mz;
+	struct ecore_hwfn *p_hwfn;
+	enum _ecore_status_t rc;
+	uint16_t pkt_len;
+	uint16_t len;
+	void *pkt;
+
+	if (add) {
+		if (qdev->fdir_info.filter_count == QEDE_RFS_MAX_FLTR - 1) {
+			DP_ERR(edev, "Reached max flowdir filter limit\n");
+			return -EINVAL;
+		}
+		fdir = rte_malloc(NULL, sizeof(struct qede_fdir_entry),
+				  RTE_CACHE_LINE_SIZE);
+		if (!fdir) {
+			DP_ERR(edev, "Did not allocate memory for fdir\n");
+			return -ENOMEM;
+		}
+	}
+	/* soft_id could have been used as the memzone name, but soft_id is
+	 * not currently used, so the timer cycle count is used to build a
+	 * unique name instead.
+	 */
+	snprintf(mz_name, sizeof(mz_name) - 1, "%lx",
+		 (unsigned long)rte_get_timer_cycles());
+	mz = rte_memzone_reserve_aligned(mz_name, QEDE_MAX_FDIR_PKT_LEN,
+					 SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+	if (!mz) {
+		DP_ERR(edev, "Failed to allocate memzone for fdir, err = %s\n",
+		       rte_strerror(rte_errno));
+		rc = -rte_errno;
+		goto err1;
+	}
+
+	pkt = mz->addr;
+	memset(pkt, 0, QEDE_MAX_FDIR_PKT_LEN);
+	pkt_len = qede_fdir_construct_pkt(eth_dev, fdir_filter, pkt,
+					  &qdev->fdir_info.arfs);
+	if (pkt_len == 0) {
+		rc = -EINVAL;
+		goto err2;
+	}
+	DP_INFO(edev, "pkt_len = %u memzone = %s\n", pkt_len, mz_name);
+	if (add) {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0) {
+				DP_ERR(edev, "flowdir filter exist\n");
+				rc = -EEXIST;
+				goto err2;
+			}
+		}
+	} else {
+		SLIST_FOREACH(tmp, &qdev->fdir_info.fdir_list_head, list) {
+			if (memcmp(tmp->mz->addr, pkt, pkt_len) == 0)
+				break;
+		}
+		if (!tmp) {
+			DP_ERR(edev, "flowdir filter does not exist\n");
+			rc = -ENOENT;
+			goto err2;
+		}
+	}
+	p_hwfn = ECORE_LEADING_HWFN(edev);
+	if (add) {
+		if (!qdev->fdir_info.arfs.arfs_enable) {
+			/* Force update */
+			eth_dev->data->dev_conf.fdir_conf.mode =
+						RTE_FDIR_MODE_PERFECT;
+			qdev->fdir_info.arfs.arfs_enable = true;
+			DP_INFO(edev, "Force enable flowdir in perfect mode\n");
+		}
+		/* Enable ARFS searcher with updated flow_types */
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	/* configure filter with ECORE_SPQ_MODE_EBLOCK */
+	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_hwfn->p_arfs_ptt, NULL,
+					       (dma_addr_t)mz->phys_addr,
+					       pkt_len,
+					       fdir_filter->action.rx_queue,
+					       0, add);
+	if (rc == ECORE_SUCCESS) {
+		if (add) {
+			fdir->rx_queue = fdir_filter->action.rx_queue;
+			fdir->pkt_len = pkt_len;
+			fdir->mz = mz;
+			SLIST_INSERT_HEAD(&qdev->fdir_info.fdir_list_head,
+					  fdir, list);
+			qdev->fdir_info.filter_count++;
+			DP_INFO(edev, "flowdir filter added, count = %d\n",
+				qdev->fdir_info.filter_count);
+		} else {
+			rte_memzone_free(tmp->mz);
+			SLIST_REMOVE(&qdev->fdir_info.fdir_list_head, tmp,
+				     qede_fdir_entry, list);
+			rte_free(tmp); /* free the deleted node */
+			rte_memzone_free(mz); /* free the temporary memzone */
+			qdev->fdir_info.filter_count--;
+			DP_INFO(edev, "Fdir filter deleted, count = %d\n",
+				qdev->fdir_info.filter_count);
+		}
+	} else {
+		DP_ERR(edev, "flowdir filter failed, rc=%d filter_count=%d\n",
+		       rc, qdev->fdir_info.filter_count);
+	}
+
+	/* Disable ARFS searcher if there are no more filters */
+	if (qdev->fdir_info.filter_count == 0) {
+		memset(&qdev->fdir_info.arfs, 0,
+		       sizeof(struct ecore_arfs_config_params));
+		DP_INFO(edev, "Disabling flowdir\n");
+		qdev->fdir_info.arfs.arfs_enable = false;
+		ecore_arfs_mode_configure(p_hwfn, p_hwfn->p_arfs_ptt,
+					  &qdev->fdir_info.arfs);
+	}
+	return 0;
+
+err2:
+	rte_memzone_free(mz);
+err1:
+	if (add)
+		rte_free(fdir);
+	return rc;
+}
+
+static int
+qede_fdir_filter_add(struct rte_eth_dev *eth_dev,
+		     struct rte_eth_fdir_filter *fdir,
+		     bool add)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+
+	if (!QEDE_VALID_FLOW(fdir->input.flow_type)) {
+		DP_ERR(edev, "invalid flow_type input\n");
+		return -EINVAL;
+	}
+
+	if (fdir->action.rx_queue >= QEDE_RSS_COUNT(qdev)) {
+		DP_ERR(edev, "invalid queue number %u\n",
+		       fdir->action.rx_queue);
+		return -EINVAL;
+	}
+
+	if (fdir->input.flow_ext.is_vf) {
+		DP_ERR(edev, "flowdir is not supported over VF\n");
+		return -EINVAL;
+	}
+
+	return qede_config_cmn_fdir_filter(eth_dev, fdir, add);
+}
+
+/* Fills the L3/L4 headers and returns the actual length of the flowdir packet */
+static uint16_t
+qede_fdir_construct_pkt(struct rte_eth_dev *eth_dev,
+			struct rte_eth_fdir_filter *fdir,
+			void *buff,
+			struct ecore_arfs_config_params *params)
+
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	uint16_t *ether_type;
+	uint8_t *raw_pkt;
+	struct rte_eth_fdir_input *input;
+	static uint8_t vlan_frame[] = {0x81, 0, 0, 0};
+	struct ipv4_hdr *ip;
+	struct ipv6_hdr *ip6;
+	struct udp_hdr *udp;
+	struct tcp_hdr *tcp;
+	struct sctp_hdr *sctp;
+	uint8_t size, dst = 0;
+	uint16_t len;
+	static const uint8_t next_proto[] = {
+		[RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP,
+		[RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP,
+		[RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP,
+	};
+	raw_pkt = (uint8_t *)buff;
+	input = &fdir->input;
+	DP_INFO(edev, "flow_type %d\n", input->flow_type);
+
+	len = 2 * sizeof(struct ether_addr);
+	raw_pkt += 2 * sizeof(struct ether_addr);
+	if (input->flow_ext.vlan_tci) {
+		DP_INFO(edev, "adding VLAN header\n");
+		rte_memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
+		rte_memcpy(raw_pkt + sizeof(uint16_t),
+			   &input->flow_ext.vlan_tci,
+			   sizeof(uint16_t));
+		raw_pkt += sizeof(vlan_frame);
+		len += sizeof(vlan_frame);
+	}
+	ether_type = (uint16_t *)raw_pkt;
+	raw_pkt += sizeof(uint16_t);
+	len += sizeof(uint16_t);
+
+	/* fill the common ip header */
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV4:
+		ip = (struct ipv4_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		ip->version_ihl = QEDE_FDIR_IP_DEFAULT_VERSION_IHL;
+		ip->total_length = sizeof(struct ipv4_hdr);
+		ip->next_proto_id = input->flow.ip4_flow.proto ?
+				    input->flow.ip4_flow.proto :
+				    next_proto[input->flow_type];
+		ip->time_to_live = input->flow.ip4_flow.ttl ?
+				   input->flow.ip4_flow.ttl :
+				   QEDE_FDIR_IPV4_DEF_TTL;
+		ip->type_of_service = input->flow.ip4_flow.tos;
+		ip->dst_addr = input->flow.ip4_flow.dst_ip;
+		ip->src_addr = input->flow.ip4_flow.src_ip;
+		len += sizeof(struct ipv4_hdr);
+		params->ipv4 = true;
+		break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+	case RTE_ETH_FLOW_FRAG_IPV6:
+		ip6 = (struct ipv6_hdr *)raw_pkt;
+		*ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		ip6->proto = input->flow.ipv6_flow.proto ?
+					input->flow.ipv6_flow.proto :
+					next_proto[input->flow_type];
+		rte_memcpy(&ip6->src_addr, &input->flow.ipv6_flow.dst_ip,
+			   IPV6_ADDR_LEN);
+		rte_memcpy(&ip6->dst_addr, &input->flow.ipv6_flow.src_ip,
+			   IPV6_ADDR_LEN);
+		len += sizeof(struct ipv6_hdr);
+		break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %u\n",
+		       input->flow_type);
+		return 0;
+	}
+
+	/* fill the L4 header */
+	raw_pkt = (uint8_t *)buff;
+	switch (input->flow_type) {
+	case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->dst_port = input->flow.udp4_flow.dst_port;
+		udp->src_port = input->flow.udp4_flow.src_port;
+		udp->dgram_len = sizeof(struct udp_hdr);
+		len += sizeof(struct udp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->src_port = input->flow.tcp4_flow.src_port;
+		tcp->dst_port = input->flow.tcp4_flow.dst_port;
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		len += sizeof(struct tcp_hdr);
+		/* adjust ip total_length */
+		ip->total_length += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+		tcp = (struct tcp_hdr *)(raw_pkt + len);
+		tcp->data_off = QEDE_FDIR_TCP_DEFAULT_DATAOFF;
+		tcp->src_port = input->flow.tcp6_flow.src_port;
+		tcp->dst_port = input->flow.tcp6_flow.dst_port;
+		/* adjust ip total_length */
+		len += sizeof(struct tcp_hdr);
+		params->tcp = true;
+	break;
+	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+		udp = (struct udp_hdr *)(raw_pkt + len);
+		udp->src_port = input->flow.udp6_flow.src_port;
+		udp->dst_port = input->flow.udp6_flow.dst_port;
+		/* adjust ip total_length */
+		len += sizeof(struct udp_hdr);
+		params->udp = true;
+	break;
+	default:
+		DP_ERR(edev, "Unsupported flow_type %d\n", input->flow_type);
+		return 0;
+	}
+	return len;
+}
+
+int
+qede_fdir_filter_conf(struct rte_eth_dev *eth_dev,
+		      enum rte_filter_op filter_op,
+		      void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_fdir_filter *fdir;
+	int ret;
+
+	fdir = (struct rte_eth_fdir_filter *)arg;
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query flowdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 1);
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = qede_fdir_filter_add(eth_dev, fdir, 0);
+	break;
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_INFO:
+		return -ENOTSUP;
+	break;
+	default:
+		DP_ERR(edev, "unknown operation %u", filter_op);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int qede_ntuple_filter_conf(struct rte_eth_dev *eth_dev,
+			    enum rte_filter_op filter_op,
+			    void *arg)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_ntuple_filter *ntuple;
+	struct rte_eth_fdir_filter fdir_entry;
+	struct rte_eth_tcpv4_flow *tcpv4_flow;
+	struct rte_eth_udpv4_flow *udpv4_flow;
+	struct ecore_hwfn *p_hwfn;
+	bool add;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		/* Typically used to query fdir support */
+		if (edev->num_hwfns > 1) {
+			DP_ERR(edev, "flowdir is not supported in 100G mode\n");
+			return -ENOTSUP;
+		}
+		return 0; /* means supported */
+	case RTE_ETH_FILTER_ADD:
+		add = true;
+	break;
+	case RTE_ETH_FILTER_DELETE:
+		add = false;
+	break;
+	case RTE_ETH_FILTER_INFO:
+	case RTE_ETH_FILTER_GET:
+	case RTE_ETH_FILTER_UPDATE:
+	case RTE_ETH_FILTER_FLUSH:
+	case RTE_ETH_FILTER_SET:
+	case RTE_ETH_FILTER_STATS:
+	case RTE_ETH_FILTER_OP_MAX:
+		DP_ERR(edev, "Unsupported filter_op %d\n", filter_op);
+		return -ENOTSUP;
+	}
+	ntuple = (struct rte_eth_ntuple_filter *)arg;
+	/* Internally convert ntuple to fdir entry */
+	memset(&fdir_entry, 0, sizeof(fdir_entry));
+	if (ntuple->proto == IPPROTO_TCP) {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_TCP;
+		tcpv4_flow = &fdir_entry.input.flow.tcp4_flow;
+		tcpv4_flow->ip.src_ip = ntuple->src_ip;
+		tcpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		tcpv4_flow->ip.proto = IPPROTO_TCP;
+		tcpv4_flow->src_port = ntuple->src_port;
+		tcpv4_flow->dst_port = ntuple->dst_port;
+	} else {
+		fdir_entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
+		udpv4_flow = &fdir_entry.input.flow.udp4_flow;
+		udpv4_flow->ip.src_ip = ntuple->src_ip;
+		udpv4_flow->ip.dst_ip = ntuple->dst_ip;
+		udpv4_flow->ip.proto = IPPROTO_UDP;
+		udpv4_flow->src_port = ntuple->src_port;
+		udpv4_flow->dst_port = ntuple->dst_port;
+	}
+	fdir_entry.action.rx_queue = ntuple->queue;
+	return qede_config_cmn_fdir_filter(eth_dev, &fdir_entry, add);
+}
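The ntuple path above is a thin shim: a 5-tuple filter is converted into
an equivalent fdir entry and programmed through the same common routine.
A hypothetical application-side sketch, with made-up addresses, ports and
queue (note the PMD ignores the mask fields of the ntuple filter):

    #include <string.h>
    #include <netinet/in.h>
    #include <rte_ethdev.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    /* Steer a TCP 5-tuple to Rx queue 1; the PMD converts this into an
     * fdir entry internally (see qede_ntuple_filter_conf() above).
     */
    static int add_tcp4_ntuple_filter(uint8_t port_id)
    {
            struct rte_eth_ntuple_filter nf;

            memset(&nf, 0, sizeof(nf));
            nf.flags = RTE_5TUPLE_FLAGS;
            nf.proto = IPPROTO_TCP;
            nf.src_ip = rte_cpu_to_be_32(IPv4(10, 0, 0, 1));
            nf.dst_ip = rte_cpu_to_be_32(IPv4(10, 0, 0, 2));
            nf.src_port = rte_cpu_to_be_16(320);
            nf.dst_port = rte_cpu_to_be_16(640);
            nf.queue = 1;

            return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
                                           RTE_ETH_FILTER_ADD, &nf);
    }

Note that RTE_ETH_FILTER_NOP with a NULL argument can serve as a cheap
support probe: per the handlers above it returns 0 on single-engine
devices and -ENOTSUP on 100G (dual-hwfn) devices, where aRFS is
unavailable.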
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 74856c5..307b33a 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -12,8 +12,6 @@
 
 #include "qede_ethdev.h"
 
-static uint8_t npar_tx_switching = 1;
-
 /* Alarm timeout. */
 #define QEDE_ALARM_TIMEOUT_US 100000
 
@@ -224,23 +222,34 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 static int qed_slowpath_start(struct ecore_dev *edev,
 			      struct qed_slowpath_params *params)
 {
-	bool allow_npar_tx_switching;
 	const uint8_t *data = NULL;
 	struct ecore_hwfn *hwfn;
 	struct ecore_mcp_drv_version drv_version;
 	struct ecore_hw_init_params hw_init_params;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
+	struct ecore_ptt *p_ptt;
 	int rc;
 
-#ifdef CONFIG_ECORE_BINARY_FW
 	if (IS_PF(edev)) {
+#ifdef CONFIG_ECORE_BINARY_FW
 		rc = qed_load_firmware_data(edev);
 		if (rc) {
 			DP_ERR(edev, "Failed to find fw file %s\n", fw_file);
 			goto err;
 		}
-	}
 #endif
+		hwfn = ECORE_LEADING_HWFN(edev);
+		if (edev->num_hwfns == 1) { /* skip aRFS for 100G device */
+			p_ptt = ecore_ptt_acquire(hwfn);
+			if (p_ptt) {
+				ECORE_LEADING_HWFN(edev)->p_arfs_ptt = p_ptt;
+			} else {
+				DP_ERR(edev, "Failed to acquire PTT for flowdir\n");
+				rc = -ENOMEM;
+				goto err;
+			}
+		}
+	}
 
 	rc = qed_nic_setup(edev);
 	if (rc)
@@ -268,13 +277,11 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		data = (const uint8_t *)edev->firmware + sizeof(u32);
 #endif
 
-	allow_npar_tx_switching = npar_tx_switching ? true : false;
-
 	/* Start the slowpath */
 	memset(&hw_init_params, 0, sizeof(hw_init_params));
 	hw_init_params.b_hw_start = true;
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
-	hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
+	hw_init_params.allow_npar_tx_switch = true;
 	hw_init_params.bin_fw_data = data;
 	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
 	hw_init_params.avoid_eng_reset = false;
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 329+ messages in thread

* [PATCH v5 61/62] net/qede: add LRO/TSO offloads support
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (60 preceding siblings ...)
  2017-03-29 20:37             ` [PATCH v5 60/62] net/qede: add ntuple and flow director filter support Rasesh Mody
@ 2017-03-29 20:37             ` Rasesh Mody
  2017-03-29 20:37             ` [PATCH v5 62/62] net/qede: update PMD version to 2.4.0.1 Rasesh Mody
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:37 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Harish Patil, Dept-EngDPDKDev

From: Harish Patil <harish.patil@qlogic.com>

This patch includes the slowpath configuration and fastpath changes
needed to support LRO and TSO. Some revamping of the Rx fastpath is
required to reuse the existing packet classification scheme, and of
the Tx path to handle SG element processing.

Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
 doc/guides/nics/features/qede.ini    |    2 +
 doc/guides/nics/features/qede_vf.ini |    2 +
 doc/guides/nics/qede.rst             |    2 +-
 drivers/net/qede/qede_eth_if.c       |    6 +-
 drivers/net/qede/qede_eth_if.h       |    3 +-
 drivers/net/qede/qede_ethdev.c       |   29 +-
 drivers/net/qede/qede_ethdev.h       |    3 +-
 drivers/net/qede/qede_rxtx.c         |  739 +++++++++++++++++++++++++---------
 drivers/net/qede/qede_rxtx.h         |   30 ++
 9 files changed, 605 insertions(+), 211 deletions(-)

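As context for the changes below, a minimal sketch of how an application
exercises these offloads with the 17.05-era API; structure names follow
the then-current DPDK headers, and the header sizes and MSS are
illustrative assumptions:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    /* Configure time: request LRO aggregation on Rx */
    static const struct rte_eth_conf port_conf = {
            .rxmode = {
                    .enable_lro     = 1, /* mapped to TPA/RSC by the PMD */
                    .enable_scatter = 1, /* LRO implies scattered Rx */
            },
    };

    /* Transmit time: mark an mbuf chain for TSO before the Tx burst */
    static void request_tso(struct rte_mbuf *m, uint16_t mss)
    {
            m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
            m->l2_len = sizeof(struct ether_hdr);
            m->l3_len = sizeof(struct ipv4_hdr);
            m->l4_len = sizeof(struct tcp_hdr); /* assumes no TCP options */
            m->tso_segsz = mss;
    }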
diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index b688914..fba5dc3 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -36,3 +36,5 @@ x86-64               = Y
 Usage doc            = Y
 N-tuple filter       = Y
 Flow director        = Y
+LRO                  = Y
+TSO                  = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index acb1b99..21ec40f 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -31,4 +31,6 @@ Stats per queue      = Y
 Multiprocess aware   = Y
 Linux UIO            = Y
 x86-64               = Y
+LRO                  = Y
+TSO                  = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index df0aaec..eacb3da 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -61,13 +61,13 @@ Supported Features
 - Scatter-Gather
 - VXLAN tunneling offload
 - N-tuple filter and flow director (limited support)
+- LRO/TSO
 
 Non-supported Features
 ----------------------
 
 - SR-IOV PF
 - GENEVE and NVGRE Tunneling offloads
-- LRO/TSO
 - NPAR
 
 Supported QLogic Adapters
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index 8e4290c..86bb129 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -18,8 +18,8 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		u8 tx_switching = 0;
 		struct ecore_sp_vport_start_params start = { 0 };
 
-		start.tpa_mode = p_params->gro_enable ? ECORE_TPA_MODE_GRO :
-		    ECORE_TPA_MODE_NONE;
+		start.tpa_mode = p_params->enable_lro ? ECORE_TPA_MODE_RSC :
+				ECORE_TPA_MODE_NONE;
 		start.remove_inner_vlan = p_params->remove_inner_vlan;
 		start.tx_switching = tx_switching;
 		start.only_untagged = false;	/* untagged only */
@@ -29,7 +29,6 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
 		start.concrete_fid = p_hwfn->hw_info.concrete_fid;
 		start.handle_ptp_pkts = p_params->handle_ptp_pkts;
 		start.vport_id = p_params->vport_id;
-		start.max_buffers_per_cqe = 16;	/* TODO-is this right */
 		start.mtu = p_params->mtu;
 		/* @DPDK - Disable FW placement */
 		start.zero_placement_offset = 1;
@@ -120,6 +119,7 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
 	sp_params.update_accept_any_vlan_flg =
 	    params->update_accept_any_vlan_flg;
 	sp_params.mtu = params->mtu;
+	sp_params.sge_tpa_params = params->sge_tpa_params;
 
 	for_each_hwfn(edev, i) {
 		struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 12dd828..d845bac 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -59,12 +59,13 @@ struct qed_update_vport_params {
 	uint8_t accept_any_vlan;
 	uint8_t update_rss_flg;
 	uint16_t mtu;
+	struct ecore_sge_tpa_params *sge_tpa_params;
 };
 
 struct qed_start_vport_params {
 	bool remove_inner_vlan;
 	bool handle_ptp_pkts;
-	bool gro_enable;
+	bool enable_lro;
 	bool drop_ttl0;
 	uint8_t vport_id;
 	uint16_t mtu;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 22b528d..0762111 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -769,7 +769,7 @@ static int qede_init_vport(struct qede_dev *qdev)
 	int rc;
 
 	start.remove_inner_vlan = 1;
-	start.gro_enable = 0;
+	start.enable_lro = qdev->enable_lro;
 	start.mtu = ETHER_MTU + QEDE_ETH_OVERHEAD;
 	start.vport_id = 0;
 	start.drop_ttl0 = false;
@@ -866,11 +866,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	if (rxmode->enable_scatter == 1)
 		eth_dev->data->scattered_rx = 1;
 
-	if (rxmode->enable_lro == 1) {
-		DP_ERR(edev, "LRO is not supported\n");
-		return -EINVAL;
-	}
-
 	if (!rxmode->hw_strip_crc)
 		DP_INFO(edev, "L2 CRC stripping is always enabled in hw\n");
 
@@ -878,6 +873,13 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
 			      "in hw\n");
 
+	if (rxmode->enable_lro) {
+		qdev->enable_lro = true;
+		/* Enable scatter mode for LRO */
+		if (!rxmode->enable_scatter)
+			eth_dev->data->scattered_rx = 1;
+	}
+
 	/* Check for the port restart case */
 	if (qdev->state != QEDE_DEV_INIT) {
 		rc = qdev->ops->vport_stop(edev, 0);
@@ -957,13 +959,15 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 static const struct rte_eth_desc_lim qede_rx_desc_lim = {
 	.nb_max = NUM_RX_BDS_MAX,
 	.nb_min = 128,
-	.nb_align = 128	/* lowest common multiple */
+	.nb_align = 128 /* lowest common multiple */
 };
 
 static const struct rte_eth_desc_lim qede_tx_desc_lim = {
 	.nb_max = NUM_TX_BDS_MAX,
 	.nb_min = 256,
-	.nb_align = 256
+	.nb_align = 256,
+	.nb_seg_max = ETH_TX_MAX_BDS_PER_LSO_PACKET,
+	.nb_mtu_seg_max = ETH_TX_MAX_BDS_PER_NON_LSO_PACKET
 };
 
 static void
@@ -1005,12 +1009,16 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 				     DEV_RX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_RX_OFFLOAD_UDP_CKSUM	|
 				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_RX_OFFLOAD_TCP_LRO);
+
 	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
 				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
 				     DEV_TX_OFFLOAD_UDP_CKSUM	|
 				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     DEV_TX_OFFLOAD_TCP_TSO |
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
 
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
@@ -2107,6 +2115,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->rx_pkt_burst = qede_recv_pkts;
 	eth_dev->tx_pkt_burst = qede_xmit_pkts;
+	eth_dev->tx_pkt_prepare = qede_xmit_prep_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		DP_NOTICE(edev, false,
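The widened offload capabilities advertised above are discoverable
through the standard device info query; a small sketch (port id is
illustrative):

    #include <rte_ethdev.h>

    /* Check whether the port advertises TSO and LRO before enabling them */
    static int port_has_tso_lro(uint8_t port_id)
    {
            struct rte_eth_dev_info di;

            rte_eth_dev_info_get(port_id, &di);
            return (di.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) &&
                   (di.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO);
    }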
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 8342b99..799a3ba 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -193,8 +193,7 @@ struct qede_dev {
 	uint16_t rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
 	uint64_t rss_hf;
 	uint8_t rss_key_len;
-	uint32_t flags;
-	bool gro_disable;
+	bool enable_lro;
 	uint16_t num_queues;
 	uint8_t fp_num_tx;
 	uint8_t fp_num_rx;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 85134fb..e72a693 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -6,10 +6,9 @@
  * See LICENSE.qede_pmd for copyright and licensing details.
  */
 
+#include <rte_net.h>
 #include "qede_rxtx.h"
 
-static bool gro_disable = 1;	/* mod_param */
-
 static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 {
 	struct rte_mbuf *new_mb = NULL;
@@ -352,7 +351,6 @@ static void qede_init_fp(struct qede_dev *qdev)
 		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
 	}
 
-	qdev->gro_disable = gro_disable;
 }
 
 void qede_free_fp_arrays(struct qede_dev *qdev)
@@ -509,6 +507,30 @@ qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
 	PMD_RX_LOG(DEBUG, rxq, "bd_prod %u  cqe_prod %u", bd_prod, cqe_prod);
 }
 
+static void
+qede_update_sge_tpa_params(struct ecore_sge_tpa_params *sge_tpa_params,
+			   uint16_t mtu, bool enable)
+{
+	/* Enable LRO in split mode */
+	sge_tpa_params->tpa_ipv4_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_en_flg = enable;
+	sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+	sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
+	/* set if tpa enable changes */
+	sge_tpa_params->update_tpa_en_flg = 1;
+	/* set if tpa parameters should be handled */
+	sge_tpa_params->update_tpa_param_flg = enable;
+
+	sge_tpa_params->max_buffers_per_cqe = 20;
+	sge_tpa_params->tpa_pkt_split_flg = 1;
+	sge_tpa_params->tpa_hdr_data_split_flg = 0;
+	sge_tpa_params->tpa_gro_consistent_flg = 0;
+	sge_tpa_params->tpa_max_aggs_num = ETH_TPA_MAX_AGGS_NUM;
+	sge_tpa_params->tpa_max_size = 0x7FFF;
+	sge_tpa_params->tpa_min_size_to_start = mtu / 2;
+	sge_tpa_params->tpa_min_size_to_cont = mtu / 2;
+}
+
 static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 {
 	struct qede_dev *qdev = eth_dev->data->dev_private;
@@ -516,6 +538,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 	struct ecore_queue_start_common_params q_params;
 	struct qed_dev_info *qed_info = &qdev->dev_info.common;
 	struct qed_update_vport_params vport_update_params;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_tx_queue *txq;
 	struct qede_fastpath *fp;
 	dma_addr_t p_phys_table;
@@ -529,10 +552,10 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		if (fp->type & QEDE_FASTPATH_RX) {
 			struct ecore_rxq_start_ret_params ret_params;
 
-			p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
-								rx_comp_ring);
-			page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
-								rx_comp_ring);
+			p_phys_table =
+			    ecore_chain_get_pbl_phys(&fp->rxq->rx_comp_ring);
+			page_cnt =
+			    ecore_chain_get_page_cnt(&fp->rxq->rx_comp_ring);
 
 			memset(&ret_params, 0, sizeof(ret_params));
 			memset(&q_params, 0, sizeof(q_params));
@@ -625,6 +648,14 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
 		vport_update_params.tx_switching_flg = 1;
 	}
 
+	/* TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Enabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, true);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
+
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Update V-PORT failed %d\n", rc);
@@ -761,6 +792,94 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
 		return RTE_PTYPE_UNKNOWN;
 }
 
+static inline void
+qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
+			     struct qede_rx_queue *rxq,
+			     struct eth_fast_path_rx_tpa_cont_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA cont[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Replenish RX mbufs on the RX BD ring, one per consumed BD */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for LRO cont\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+}
+
+static inline void
+qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
+			    struct qede_rx_queue *rxq,
+			    struct eth_fast_path_rx_tpa_end_cqe *cqe)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct qede_agg_info *tpa_info;
+	struct rte_mbuf *temp_frag; /* Pointer to mbuf chain head */
+	struct rte_mbuf *curr_frag;
+	struct rte_mbuf *rx_mb;
+	uint8_t list_count = 0;
+	uint16_t cons_idx;
+	uint8_t i;
+
+	PMD_RX_LOG(INFO, rxq, "TPA End[%02x] - len_list [%04x %04x]\n",
+		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]),
+		   rte_le_to_cpu_16(cqe->len_list[1]));
+
+	tpa_info = &rxq->tpa_info[cqe->tpa_agg_index];
+	temp_frag = tpa_info->mbuf;
+	assert(temp_frag);
+
+	for (i = 0; cqe->len_list[i]; i++) {
+		cons_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+		curr_frag = rxq->sw_rx_ring[cons_idx].mbuf;
+		qede_rx_bd_ring_consume(rxq);
+		curr_frag->data_len = rte_le_to_cpu_16(cqe->len_list[i]);
+		temp_frag->next = curr_frag;
+		temp_frag = curr_frag;
+		list_count++;
+	}
+
+	/* Replenish RX mbufs on the RX BD ring, one per consumed BD */
+	for (i = 0 ; i < list_count ; i++) {
+		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
+			DP_ERR(edev, "Failed to allocate mbuf for lro end\n");
+			tpa_info->state = QEDE_AGG_STATE_ERROR;
+		}
+	}
+
+	/* Update total length and frags based on end TPA */
+	rx_mb = rxq->tpa_info[cqe->tpa_agg_index].mbuf;
+	/* TBD: Add sanity checks here */
+	rx_mb->nb_segs = cqe->num_of_bds;
+	rx_mb->pkt_len = cqe->total_packet_len;
+	tpa_info->state = QEDE_AGG_STATE_NONE;
+}
+
 static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 {
 	uint32_t val;
@@ -875,13 +994,20 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t pkt_len; /* Sum of all BD segments */
 	uint16_t len; /* Length of first BD */
 	uint8_t num_segs = 1;
-	uint16_t pad;
 	uint16_t preload_idx;
 	uint8_t csum_flag;
 	uint16_t parse_flag;
 	enum rss_hash_type htype;
 	uint8_t tunn_parse_flag;
 	uint8_t j;
+	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
+	uint64_t ol_flags;
+	uint32_t packet_type;
+	uint16_t vlan_tci;
+	bool tpa_start_flg;
+	uint8_t bitfield_val;
+	uint8_t offset, tpa_agg_idx, flags;
+	struct qede_agg_info *tpa_info;
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
 	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -892,16 +1018,53 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return 0;
 
 	while (sw_comp_cons != hw_comp_cons) {
+		ol_flags = 0;
+		packet_type = RTE_PTYPE_UNKNOWN;
+		vlan_tci = 0;
+		tpa_start_flg = false;
+
 		/* Get the CQE from the completion ring */
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-
-		if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
-			PMD_RX_LOG(DEBUG, rxq, "Got a slowath CQE");
-
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+
+		switch (cqe_type) {
+		case ETH_RX_CQE_TYPE_REGULAR:
+			fp_cqe = &cqe->fast_path_regular;
+		break;
+		case ETH_RX_CQE_TYPE_TPA_START:
+			cqe_start_tpa = &cqe->fast_path_tpa_start;
+			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
+			tpa_start_flg = true;
+			PMD_RX_LOG(INFO, rxq,
+			    "TPA start[%u] - len %04x [header %02x]"
+			    " [bd_list[0] %04x], [seg_len %04x]\n",
+			    cqe_start_tpa->tpa_agg_index,
+			    rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
+			    cqe_start_tpa->header_len,
+			    rte_le_to_cpu_16(cqe_start_tpa->ext_bd_len_list[0]),
+			    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+
+		break;
+		case ETH_RX_CQE_TYPE_TPA_CONT:
+			qede_rx_process_tpa_cont_cqe(qdev, rxq,
+						     &cqe->fast_path_tpa_cont);
+			continue;
+		case ETH_RX_CQE_TYPE_TPA_END:
+			qede_rx_process_tpa_end_cqe(qdev, rxq,
+						    &cqe->fast_path_tpa_end);
+			tpa_agg_idx = cqe->fast_path_tpa_end.tpa_agg_index;
+			rx_mb = rxq->tpa_info[tpa_agg_idx].mbuf;
+			PMD_RX_LOG(INFO, rxq, "TPA end reason %d\n",
+				   cqe->fast_path_tpa_end.end_reason);
+			goto tpa_end;
+		case ETH_RX_CQE_TYPE_SLOW_PATH:
+			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
 			qdev->ops->eth_cqe_completion(edev, fp->id,
 				(struct eth_slow_path_rx_cqe *)cqe);
+			/* fall-thru */
+		default:
 			goto next_cqe;
 		}
 
@@ -910,69 +1073,93 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rx_mb = rxq->sw_rx_ring[sw_rx_index].mbuf;
 		assert(rx_mb != NULL);
 
-		/* non GRO */
-		fp_cqe = &cqe->fast_path_regular;
-
-		len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
-		pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
-		pad = fp_cqe->placement_offset;
-		assert((len + pad) <= rx_mb->buf_len);
-
-		PMD_RX_LOG(DEBUG, rxq,
-			   "CQE type = 0x%x, flags = 0x%x, vlan = 0x%x"
-			   " len = %u, parsing_flags = %d",
-			   cqe_type, fp_cqe->bitfields,
-			   rte_le_to_cpu_16(fp_cqe->vlan_tag),
-			   len, rte_le_to_cpu_16(fp_cqe->pars_flags.flags));
-
-		/* If this is an error packet then drop it */
-		parse_flag =
-		    rte_le_to_cpu_16(cqe->fast_path_regular.pars_flags.flags);
-
-		rx_mb->ol_flags = 0;
-
+		/* Handle regular CQE or TPA start CQE */
+		if (!tpa_start_flg) {
+			parse_flag = rte_le_to_cpu_16(fp_cqe->pars_flags.flags);
+			bitfield_val = fp_cqe->bitfields;
+			offset = fp_cqe->placement_offset;
+			len = rte_le_to_cpu_16(fp_cqe->len_on_first_bd);
+			pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
+		} else {
+			parse_flag =
+			    rte_le_to_cpu_16(cqe_start_tpa->pars_flags.flags);
+			bitfield_val = cqe_start_tpa->bitfields;
+			offset = cqe_start_tpa->placement_offset;
+			/* seg_len = len_on_first_bd */
+			len = rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd);
+			tpa_info->start_cqe_bd_len = len +
+						cqe_start_tpa->header_len;
+			tpa_info->mbuf = rx_mb;
+		}
 		if (qede_tunn_exist(parse_flag)) {
-			PMD_RX_LOG(DEBUG, rxq, "Rx tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			} else {
-				tunn_parse_flag =
-						fp_cqe->tunnel_pars_flags.flags;
-				rx_mb->packet_type =
-					qede_rx_cqe_to_tunn_pkt_type(
-							tunn_parse_flag);
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				if (tpa_start_flg)
+					flags =
+					 cqe_start_tpa->tunnel_pars_flags.flags;
+				else
+					flags = fp_cqe->tunnel_pars_flags.flags;
+				tunn_parse_flag = flags;
+				packet_type =
+				qede_rx_cqe_to_tunn_pkt_type(tunn_parse_flag);
 			}
 		} else {
-			PMD_RX_LOG(DEBUG, rxq, "Rx non-tunneled packet");
+			PMD_RX_LOG(INFO, rxq, "Rx non-tunneled packet\n");
 			if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x",
+					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-			} else if (unlikely(qede_check_notunn_csum_l3(rx_mb,
+				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			} else {
+				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			}
+			if (unlikely(qede_check_notunn_csum_l3(rx_mb,
 							parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					   "IP csum failed, flags = 0x%x",
+					   "IP csum failed, flags = 0x%x\n",
 					   parse_flag);
 				rxq->rx_hw_errors++;
-				rx_mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				ol_flags |= PKT_RX_IP_CKSUM_BAD;
 			} else {
-				rx_mb->packet_type =
+				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				packet_type =
 					qede_rx_cqe_to_pkt_type(parse_flag);
 			}
 		}
 
-		PMD_RX_LOG(INFO, rxq, "packet_type 0x%x", rx_mb->packet_type);
+		if (CQE_HAS_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_VLAN_PKT;
+		}
+
+		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
+			vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
+			ol_flags |= PKT_RX_QINQ_PKT;
+			rx_mb->vlan_tci_outer = 0;
+		}
+
+		/* RSS Hash */
+		htype = (uint8_t)GET_FIELD(bitfield_val,
+					ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
+		if (qdev->rss_enable && htype) {
+			ol_flags |= PKT_RX_RSS_HASH;
+			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
+			PMD_RX_LOG(INFO, rxq, "Hash result 0x%x\n",
+				   rx_mb->hash.rss);
+		}
 
 		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
 			PMD_RX_LOG(ERR, rxq,
 				   "New buffer allocation failed,"
-				   "dropping incoming packet");
+				   "dropping incoming packet\n");
 			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
 			rte_eth_devices[rxq->port_id].
 			    data->rx_mbuf_alloc_failed++;
@@ -980,7 +1167,8 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 		}
 		qede_rx_bd_ring_consume(rxq);
-		if (fp_cqe->bd_num > 1) {
+
+		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
 			PMD_RX_LOG(DEBUG, rxq, "Jumbo-over-BD packet: %02x BDs"
 				   " len on first: %04x Total Len: %04x",
 				   fp_cqe->bd_num, len, pkt_len);
@@ -1008,40 +1196,24 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rte_prefetch0(rxq->sw_rx_ring[preload_idx].mbuf);
 
 		/* Update rest of the MBUF fields */
-		rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
-		rx_mb->nb_segs = fp_cqe->bd_num;
-		rx_mb->data_len = len;
-		rx_mb->pkt_len = pkt_len;
+		rx_mb->data_off = offset + RTE_PKTMBUF_HEADROOM;
 		rx_mb->port = rxq->port_id;
-
-		htype = (uint8_t)GET_FIELD(fp_cqe->bitfields,
-				ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE);
-		if (qdev->rss_enable && htype) {
-			rx_mb->ol_flags |= PKT_RX_RSS_HASH;
-			rx_mb->hash.rss = rte_le_to_cpu_32(fp_cqe->rss_hash);
-			PMD_RX_LOG(DEBUG, rxq, "Hash result 0x%x",
-				   rx_mb->hash.rss);
+		rx_mb->ol_flags = ol_flags;
+		rx_mb->data_len = len;
+		rx_mb->vlan_tci = vlan_tci;
+		rx_mb->packet_type = packet_type;
+		PMD_RX_LOG(INFO, rxq, "pkt_type %04x len %04x flags %04lx\n",
+			   packet_type, len, (unsigned long)ol_flags);
+		if (!tpa_start_flg) {
+			rx_mb->nb_segs = fp_cqe->bd_num;
+			rx_mb->pkt_len = pkt_len;
 		}
-
 		rte_prefetch1(rte_pktmbuf_mtod(rx_mb, void *));
-
-		if (CQE_HAS_VLAN(parse_flag)) {
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_VLAN_PKT;
-		}
-
-		if (CQE_HAS_OUTER_VLAN(parse_flag)) {
-			/* FW does not provide indication of Outer VLAN tag,
-			 * which is always stripped, so vlan_tci_outer is set
-			 * to 0. Here vlan_tag represents inner VLAN tag.
-			 */
-			rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
-			rx_mb->ol_flags |= PKT_RX_QINQ_PKT;
-			rx_mb->vlan_tci_outer = 0;
+tpa_end:
+		if (!tpa_start_flg) {
+			rx_pkts[rx_pkt] = rx_mb;
+			rx_pkt++;
 		}
-
-		rx_pkts[rx_pkt] = rx_mb;
-		rx_pkt++;
 next_cqe:
 		ecore_chain_recycle_consumed(&rxq->rx_comp_ring);
 		sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -1062,101 +1234,91 @@ next_cqe:
 	return rx_pkt;
 }
 
-static inline int
-qede_free_tx_pkt(struct ecore_dev *edev, struct qede_tx_queue *txq)
+static inline void
+qede_free_tx_pkt(struct qede_tx_queue *txq)
 {
-	uint16_t nb_segs, idx = TX_CONS(txq);
-	struct eth_tx_bd *tx_data_bd;
-	struct rte_mbuf *mbuf = txq->sw_tx_ring[idx].mbuf;
-
-	if (unlikely(!mbuf)) {
-		PMD_TX_LOG(ERR, txq, "null mbuf");
-		PMD_TX_LOG(ERR, txq,
-			   "tx_desc %u tx_avail %u tx_cons %u tx_prod %u",
-			   txq->nb_tx_desc, txq->nb_tx_avail, idx,
-			   TX_PROD(txq));
-		return -1;
-	}
-
-	nb_segs = mbuf->nb_segs;
-	while (nb_segs) {
-		/* It's like consuming rxbuf in recv() */
+	struct rte_mbuf *mbuf;
+	uint16_t nb_segs;
+	uint16_t idx;
+	uint8_t nbds;
+
+	idx = TX_CONS(txq);
+	mbuf = txq->sw_tx_ring[idx].mbuf;
+	if (mbuf) {
+		nb_segs = mbuf->nb_segs;
+		PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
+		while (nb_segs) {
+			/* It's like consuming rxbuf in recv() */
+			ecore_chain_consume(&txq->tx_pbl);
+			txq->nb_tx_avail++;
+			nb_segs--;
+		}
+		rte_pktmbuf_free(mbuf);
+		txq->sw_tx_ring[idx].mbuf = NULL;
+		txq->sw_tx_cons++;
+		PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
+	} else {
 		ecore_chain_consume(&txq->tx_pbl);
 		txq->nb_tx_avail++;
-		nb_segs--;
 	}
-	rte_pktmbuf_free(mbuf);
-	txq->sw_tx_ring[idx].mbuf = NULL;
-
-	return 0;
 }
 
-static inline uint16_t
+static inline void
 qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
 {
-	uint16_t tx_compl = 0;
 	uint16_t hw_bd_cons;
+	uint16_t sw_tx_cons;
 
-	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
 	rte_compiler_barrier();
-
-	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl)) {
-		if (qede_free_tx_pkt(edev, txq)) {
-			PMD_TX_LOG(ERR, txq,
-				   "hw_bd_cons = %u, chain_cons = %u",
-				   hw_bd_cons,
-				   ecore_chain_get_cons_idx(&txq->tx_pbl));
-			break;
-		}
-		txq->sw_tx_cons++;	/* Making TXD available */
-		tx_compl++;
-	}
-
-	PMD_TX_LOG(DEBUG, txq, "Tx compl %u sw_tx_cons %u avail %u",
-		   tx_compl, txq->sw_tx_cons, txq->nb_tx_avail);
-	return tx_compl;
+	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
+	sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
+	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+		   abs(hw_bd_cons - sw_tx_cons));
+	while (hw_bd_cons !=  ecore_chain_get_cons_idx(&txq->tx_pbl))
+		qede_free_tx_pkt(txq);
 }
 
 /* Populate scatter gather buffer descriptor fields */
 static inline uint8_t
 qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
-		  struct eth_tx_1st_bd *bd1)
+		  struct eth_tx_2nd_bd **bd2, struct eth_tx_3rd_bd **bd3)
 {
 	struct qede_tx_queue *txq = p_txq;
-	struct eth_tx_2nd_bd *bd2 = NULL;
-	struct eth_tx_3rd_bd *bd3 = NULL;
 	struct eth_tx_bd *tx_bd = NULL;
 	dma_addr_t mapping;
-	uint8_t nb_segs = 1; /* min one segment per packet */
+	uint8_t nb_segs = 0;
 
 	/* Check for scattered buffers */
 	while (m_seg) {
-		if (nb_segs == 1) {
-			bd2 = (struct eth_tx_2nd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd2, 0, sizeof(*bd2));
+		if (nb_segs == 0) {
+			if (!*bd2) {
+				*bd2 = (struct eth_tx_2nd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd2, 0, sizeof(struct eth_tx_2nd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd2, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x",
-				   m_seg->data_len);
-		} else if (nb_segs == 2) {
-			bd3 = (struct eth_tx_3rd_bd *)
-				ecore_chain_produce(&txq->tx_pbl);
-			memset(bd3, 0, sizeof(*bd3));
+			QEDE_BD_SET_ADDR_LEN(*bd2, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD2 len %04x", m_seg->data_len);
+		} else if (nb_segs == 1) {
+			if (!*bd3) {
+				*bd3 = (struct eth_tx_3rd_bd *)
+					ecore_chain_produce(&txq->tx_pbl);
+				memset(*bd3, 0, sizeof(struct eth_tx_3rd_bd));
+				nb_segs++;
+			}
 			mapping = rte_mbuf_data_dma_addr(m_seg);
-			QEDE_BD_SET_ADDR_LEN(bd3, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x",
-				   m_seg->data_len);
+			QEDE_BD_SET_ADDR_LEN(*bd3, mapping, m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD3 len %04x", m_seg->data_len);
 		} else {
 			tx_bd = (struct eth_tx_bd *)
 				ecore_chain_produce(&txq->tx_pbl);
 			memset(tx_bd, 0, sizeof(*tx_bd));
+			nb_segs++;
 			mapping = rte_mbuf_data_dma_addr(m_seg);
 			QEDE_BD_SET_ADDR_LEN(tx_bd, mapping, m_seg->data_len);
-			PMD_TX_LOG(DEBUG, txq, "BD len %04x",
-				   m_seg->data_len);
+			PMD_TX_LOG(DEBUG, txq, "BD len %04x", m_seg->data_len);
 		}
-		nb_segs++;
 		m_seg = m_seg->next;
 	}
 
@@ -1164,59 +1326,209 @@ qede_encode_sg_bd(struct qede_tx_queue *p_txq, struct rte_mbuf *m_seg,
 	return nb_segs;
 }
 
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+static inline void
+print_tx_bd_info(struct qede_tx_queue *txq,
+		 struct eth_tx_1st_bd *bd1,
+		 struct eth_tx_2nd_bd *bd2,
+		 struct eth_tx_3rd_bd *bd3,
+		 uint64_t tx_ol_flags)
+{
+	char ol_buf[256] = { 0 }; /* for verbose prints */
+
+	if (bd1)
+		PMD_TX_LOG(INFO, txq,
+			   "BD1: nbytes=%u nbds=%u bd_flags=04%x bf=%04x",
+			   rte_cpu_to_le_16(bd1->nbytes), bd1->data.nbds,
+			   bd1->data.bd_flags.bitfields,
+			   rte_cpu_to_le_16(bd1->data.bitfields));
+	if (bd2)
+		PMD_TX_LOG(INFO, txq,
+			   "BD2: nbytes=%u bf=%04x\n",
+			   rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1);
+	if (bd3)
+		PMD_TX_LOG(INFO, txq,
+			   "BD3: nbytes=%u bf=%04x mss=%u\n",
+			   rte_cpu_to_le_16(bd3->nbytes),
+			   rte_cpu_to_le_16(bd3->data.bitfields),
+			   rte_cpu_to_le_16(bd3->data.lso_mss));
+
+	rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
+	PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+}
+#endif
+
+/* TX prepare to check packets meets TX conditions */
+uint16_t
+qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	struct qede_tx_queue *txq = p_txq;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+	uint16_t i;
+	int ret;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+		if (ol_flags & PKT_TX_TCP_SEG) {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+			/* TBD: confirm its ~9700B for both ? */
+			if (m->tso_segsz > ETH_TX_MAX_NON_LSO_PKT_LEN) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		} else {
+			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_NON_LSO_PACKET) {
+				rte_errno = -EINVAL;
+				break;
+			}
+		}
+		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			break;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+#endif
+		/* TBD: pseudo csum calcuation required iff
+		 * ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE not set?
+		 */
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			break;
+		}
+	}
+
+	if (unlikely(i != nb_pkts))
+		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+			   nb_pkts - i);
+	return i;
+}
+
 uint16_t
 qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct qede_tx_queue *txq = p_txq;
 	struct qede_dev *qdev = txq->qdev;
 	struct ecore_dev *edev = &qdev->edev;
-	struct qede_fastpath *fp;
-	struct eth_tx_1st_bd *bd1;
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *m_seg = NULL;
 	uint16_t nb_tx_pkts;
 	uint16_t bd_prod;
 	uint16_t idx;
-	uint16_t tx_count;
 	uint16_t nb_frags;
 	uint16_t nb_pkt_sent = 0;
-
-	fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
+	uint8_t nbds;
+	bool ipv6_ext_flg;
+	bool lso_flg;
+	bool tunn_flg;
+	struct eth_tx_1st_bd *bd1;
+	struct eth_tx_2nd_bd *bd2;
+	struct eth_tx_3rd_bd *bd3;
+	uint64_t tx_ol_flags;
+	uint16_t hdr_size;
 
 	if (unlikely(txq->nb_tx_avail < txq->tx_free_thresh)) {
 		PMD_TX_LOG(DEBUG, txq, "send=%u avail=%u free_thresh=%u",
 			   nb_pkts, txq->nb_tx_avail, txq->tx_free_thresh);
-		(void)qede_process_tx_compl(edev, txq);
-	}
-
-	nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail /
-			ETH_TX_MAX_BDS_PER_NON_LSO_PACKET));
-	if (unlikely(nb_tx_pkts == 0)) {
-		PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u",
-			   nb_pkts, txq->nb_tx_avail);
-		return 0;
+		qede_process_tx_compl(edev, txq);
 	}
 
-	tx_count = nb_tx_pkts;
+	nb_tx_pkts = nb_pkts;
+	bd_prod = rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
 	while (nb_tx_pkts--) {
+		/* Init flags/values */
+		ipv6_ext_flg = false;
+		tunn_flg = false;
+		lso_flg = false;
+		nbds = 0;
+		bd1 = NULL;
+		bd2 = NULL;
+		bd3 = NULL;
+		hdr_size = 0;
+
+		mbuf = *tx_pkts;
+		assert(mbuf);
+
+		/* Check minimum TX BDS availability against available BDs */
+		if (unlikely(txq->nb_tx_avail < mbuf->nb_segs))
+			break;
+
+		tx_ol_flags = mbuf->ol_flags;
+
+#define RTE_ETH_IS_IPV6_HDR_EXT(ptype) ((ptype) & RTE_PTYPE_L3_IPV6_EXT)
+		if (RTE_ETH_IS_IPV6_HDR_EXT(mbuf->packet_type))
+			ipv6_ext_flg = true;
+
+		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type))
+			tunn_flg = true;
+
+		if (tx_ol_flags & PKT_TX_TCP_SEG)
+			lso_flg = true;
+
+		if (lso_flg) {
+			if (unlikely(txq->nb_tx_avail <
+						ETH_TX_MIN_BDS_PER_LSO_PKT))
+				break;
+		} else {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_NON_LSO_PKT))
+				break;
+		}
+
+		if (tunn_flg && ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+				ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
+				break;
+		}
+		if (ipv6_ext_flg) {
+			if (unlikely(txq->nb_tx_avail <
+					ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT))
+				break;
+		}
+
 		/* Fill the entry in the SW ring and the BDs in the FW ring */
 		idx = TX_PROD(txq);
-		mbuf = *tx_pkts++;
+		tx_pkts++;
 		txq->sw_tx_ring[idx].mbuf = mbuf;
+
+		/* BD1 */
 		bd1 = (struct eth_tx_1st_bd *)ecore_chain_produce(&txq->tx_pbl);
-		bd1->data.bd_flags.bitfields =
+		memset(bd1, 0, sizeof(struct eth_tx_1st_bd));
+		nbds++;
+
+		bd1->data.bd_flags.bitfields |=
 			1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
 		/* FW 8.10.x specific change */
-		bd1->data.bitfields =
+		if (!lso_flg) {
+			bd1->data.bitfields |=
 			(mbuf->pkt_len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK)
 				<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
-		/* Map MBUF linear data for DMA and set in the first BD */
-		QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
-				     mbuf->data_len);
-		PMD_TX_LOG(INFO, txq, "BD1 len %04x", mbuf->data_len);
+			/* Map MBUF linear data for DMA and set in the BD1 */
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     mbuf->data_len);
+		} else {
+			/* For LSO, packet header and payload must reside on
+			 * buffers pointed by different BDs. Using BD1 for HDR
+			 * and BD2 onwards for data.
+			 */
+			hdr_size = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+			QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+					     hdr_size);
+		}
 
-		if (RTE_ETH_IS_TUNNEL_PKT(mbuf->packet_type)) {
-			PMD_TX_LOG(INFO, txq, "Tx tunnel packet");
+		if (tunn_flg) {
 			/* First indicate its a tunnel pkt */
 			bd1->data.bd_flags.bitfields |=
 				ETH_TX_DATA_1ST_BD_TUNN_FLAG_MASK <<
@@ -1231,8 +1543,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 					1 << ETH_TX_DATA_1ST_BD_TUNN_FLAG_SHIFT;
 
 			/* Outer IP checksum offload */
-			if (mbuf->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
-				PMD_TX_LOG(INFO, txq, "OuterIP csum offload");
+			if (tx_ol_flags & PKT_TX_OUTER_IP_CKSUM) {
 				bd1->data.bd_flags.bitfields |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -1245,43 +1556,79 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
-			PMD_TX_LOG(INFO, txq, "Insert VLAN 0x%x",
-				   mbuf->vlan_tci);
+		if (tx_ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
 			bd1->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
+		if (lso_flg)
+			bd1->data.bd_flags.bitfields |=
+				1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
+
 		/* Offload the IP checksum in the hardware */
-		if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
-			PMD_TX_LOG(INFO, txq, "IP csum offload");
+		if ((lso_flg) || (tx_ol_flags & PKT_TX_IP_CKSUM))
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			PMD_TX_LOG(INFO, txq, "L4 csum offload");
+		if ((lso_flg) || (tx_ol_flags & (PKT_TX_TCP_CKSUM |
+						PKT_TX_UDP_CKSUM)))
+			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
 			bd1->data.bd_flags.bitfields |=
 			    1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
-			/* IPv6 + extn. -> later */
+
+		/* BD2 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd2 = (struct eth_tx_2nd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd2, 0, sizeof(struct eth_tx_2nd_bd));
+			nbds++;
+			QEDE_BD_SET_ADDR_LEN(bd2,
+					    (hdr_size +
+					    rte_mbuf_data_dma_addr(mbuf)),
+					    mbuf->data_len - hdr_size);
+			/* TBD: check pseudo csum iff tx_prepare not called? */
+			if (ipv6_ext_flg) {
+				bd2->data.bitfields1 |=
+				ETH_L4_PSEUDO_CSUM_ZERO_LENGTH <<
+				ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT;
+			}
+		}
+
+		/* BD3 */
+		if (lso_flg || ipv6_ext_flg) {
+			bd3 = (struct eth_tx_3rd_bd *)ecore_chain_produce
+							(&txq->tx_pbl);
+			memset(bd3, 0, sizeof(struct eth_tx_3rd_bd));
+			nbds++;
+			if (lso_flg) {
+				bd3->data.lso_mss =
+					rte_cpu_to_le_16(mbuf->tso_segsz);
+				/* Using one header BD */
+				bd3->data.bitfields |=
+					rte_cpu_to_le_16(1 <<
+					ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT);
+			}
 		}
 
 		/* Handle fragmented MBUF */
 		m_seg = mbuf->next;
 		/* Encode scatter gather buffer descriptors if required */
-		nb_frags = qede_encode_sg_bd(txq, m_seg, bd1);
-		bd1->data.nbds = nb_frags;
-		txq->nb_tx_avail -= nb_frags;
+		nb_frags = qede_encode_sg_bd(txq, m_seg, &bd2, &bd3);
+		bd1->data.nbds = nbds + nb_frags;
+		txq->nb_tx_avail -= bd1->data.nbds;
 		txq->sw_tx_prod++;
 		rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
 		bd_prod =
 		    rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+		print_tx_bd_info(txq, bd1, bd2, bd3, tx_ol_flags);
+		PMD_TX_LOG(INFO, txq, "lso=%d tunn=%d ipv6_ext=%d\n",
+			   lso_flg, tunn_flg, ipv6_ext_flg);
+#endif
 		nb_pkt_sent++;
 		txq->xmit_pkts++;
-		PMD_TX_LOG(INFO, txq, "nbds = %d pkt_len = %04x",
-			   bd1->data.nbds, mbuf->pkt_len);
 	}
 
 	/* Write value of prod idx into bd_prod */
@@ -1292,10 +1639,10 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	rte_wmb();
 
 	/* Check again for Tx completions */
-	(void)qede_process_tx_compl(edev, txq);
+	qede_process_tx_compl(edev, txq);
 
-	PMD_TX_LOG(DEBUG, txq, "to_send=%u can_send=%u sent=%u core=%d",
-		   nb_pkts, tx_count, nb_pkt_sent, rte_lcore_id());
+	PMD_TX_LOG(DEBUG, txq, "to_send=%u sent=%u bd_prod=%u core=%d",
+		   nb_pkts, nb_pkt_sent, TX_PROD(txq), rte_lcore_id());
 
 	return nb_pkt_sent;
 }
@@ -1380,8 +1727,7 @@ static int qede_drain_txq(struct qede_dev *qdev,
 		qede_process_tx_compl(edev, txq);
 		if (!cnt) {
 			if (allow_drain) {
-				DP_NOTICE(edev, false,
-					  "Tx queue[%u] is stuck,"
+				DP_ERR(edev, "Tx queue[%u] is stuck,"
 					  "requesting MCP to drain\n",
 					  txq->queue_id);
 				rc = qdev->ops->common->drain(edev);
@@ -1389,13 +1735,11 @@ static int qede_drain_txq(struct qede_dev *qdev,
 					return rc;
 				return qede_drain_txq(qdev, txq, false);
 			}
-
-			DP_NOTICE(edev, false,
-				  "Timeout waiting for tx queue[%d]:"
+			DP_ERR(edev, "Timeout waiting for tx queue[%d]:"
 				  "PROD=%d, CONS=%d\n",
 				  txq->queue_id, txq->sw_tx_prod,
 				  txq->sw_tx_cons);
-			return -ENODEV;
+			return -1;
 		}
 		cnt--;
 		DELAY(1000);
@@ -1412,6 +1756,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
 {
 	struct qed_update_vport_params vport_update_params;
 	struct ecore_dev *edev = &qdev->edev;
+	struct ecore_sge_tpa_params tpa_params;
 	struct qede_fastpath *fp;
 	int rc, tc, i;
 
@@ -1421,9 +1766,15 @@ static int qede_stop_queues(struct qede_dev *qdev)
 	vport_update_params.update_vport_active_flg = 1;
 	vport_update_params.vport_active_flg = 0;
 	vport_update_params.update_rss_flg = 0;
+	/* Disable TPA */
+	if (qdev->enable_lro) {
+		DP_INFO(edev, "Disabling LRO\n");
+		memset(&tpa_params, 0, sizeof(struct ecore_sge_tpa_params));
+		qede_update_sge_tpa_params(&tpa_params, qdev->mtu, false);
+		vport_update_params.sge_tpa_params = &tpa_params;
+	}
 
 	DP_INFO(edev, "Deactivate vport\n");
-
 	rc = qdev->ops->vport_update(edev, &vport_update_params);
 	if (rc) {
 		DP_ERR(edev, "Failed to update vport\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 17a2f0c..c27632e 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -126,6 +126,19 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
+#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
+				   PKT_TX_TCP_CKSUM             | \
+				   PKT_TX_UDP_CKSUM             | \
+				   PKT_TX_OUTER_IP_CKSUM        | \
+				   PKT_TX_TCP_SEG)
+
+#define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
+			      PKT_TX_QINQ_PKT           | \
+			      PKT_TX_VLAN_PKT)
+
+#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
+	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+
 /*
  * RX BD descriptor ring
  */
@@ -135,6 +148,19 @@ struct qede_rx_entry {
 	/* allows expansion .. */
 };
 
+/* TPA related structures */
+enum qede_agg_state {
+	QEDE_AGG_STATE_NONE  = 0,
+	QEDE_AGG_STATE_START = 1,
+	QEDE_AGG_STATE_ERROR = 2
+};
+
+struct qede_agg_info {
+	struct rte_mbuf *mbuf;
+	uint16_t start_cqe_bd_len;
+	uint8_t state; /* for sanity check */
+};
+
 /*
  * Structure associated with each RX queue.
  */
@@ -155,6 +181,7 @@ struct qede_rx_queue {
 	uint64_t rx_segs;
 	uint64_t rx_hw_errors;
 	uint64_t rx_alloc_errors;
+	struct qede_agg_info tpa_info[ETH_TPA_MAX_AGGS_NUM];
 	struct qede_dev *qdev;
 	void *handle;
 };
@@ -232,6 +259,9 @@ void qede_free_mem_load(struct rte_eth_dev *eth_dev);
 uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 
+uint16_t qede_xmit_prep_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts);
 
-- 
1.7.10.3
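
For context, the masks and the qede_xmit_prep_pkts() prototype added to
qede_rxtx.h above follow the usual DPDK tx_prepare pattern. Below is a
minimal sketch assuming only what the header declares; the real function
body lands with this series' LRO/TSO work and may additionally validate
TSO fields and fix up pseudo-header checksums:

	#include <errno.h>
	#include <rte_errno.h>
	#include <rte_mbuf.h>

	uint16_t
	qede_xmit_prep_pkts(void *p_txq __rte_unused,
			    struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
	{
		uint16_t i;

		for (i = 0; i < nb_pkts; i++) {
			/* Reject any offload request the PMD cannot honour */
			if (tx_pkts[i]->ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
				rte_errno = ENOTSUP;
				break;
			}
		}
		/* Packets [0, i) are safe to hand to qede_xmit_pkts() */
		return i;
	}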

* [PATCH v5 62/62] net/qede: update PMD version to 2.4.0.1
  2017-03-28  6:51           ` [PATCH v4 00/62] net/qede/base: update PMD to 2.4.0.1 Rasesh Mody
                               ` (61 preceding siblings ...)
  2017-03-29 20:37             ` [PATCH v5 61/62] net/qede: add LRO/TSO offloads support Rasesh Mody
@ 2017-03-29 20:37             ` Rasesh Mody
  62 siblings, 0 replies; 329+ messages in thread
From: Rasesh Mody @ 2017-03-29 20:37 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rasesh Mody, Dept-EngDPDKDev

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/qede_ethdev.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 799a3ba..3c8ead8 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -49,7 +49,7 @@
 /* Driver versions */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
 #define QEDE_PMD_VERSION_MAJOR		2
-#define QEDE_PMD_VERSION_MINOR	        0
+#define QEDE_PMD_VERSION_MINOR	        4
 #define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
-- 
1.7.10.3
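
The bump itself is mechanical: together, the four macros above describe
version "2.4.0.1". A hypothetical sketch of how they might be composed into
the human-readable string (the driver's actual formatting code is not shown
in this patch, and the helper name is made up for illustration):

	#include <stdio.h>

	static void
	qede_format_pmd_version(char *buf, size_t len)
	{
		/* Yields "QEDE PMD 2.4.0.1" after this patch */
		snprintf(buf, len, "%s %d.%d.%d.%d", QEDE_PMD_VER_PREFIX,
			 QEDE_PMD_VERSION_MAJOR, QEDE_PMD_VERSION_MINOR,
			 QEDE_PMD_VERSION_REVISION, QEDE_PMD_VERSION_PATCH);
	}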

* Re: [PATCH v4 31/62] net/qede/base: revise tunnel APIs/structs
  2017-03-29  9:23                 ` Ferruh Yigit
@ 2017-03-29 20:48                   ` Mody, Rasesh
  0 siblings, 0 replies; 329+ messages in thread
From: Mody, Rasesh @ 2017-03-29 20:48 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Thomas Monjalon

> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Wednesday, March 29, 2017 2:23 AM
> 
> On 3/28/2017 10:18 PM, Mody, Rasesh wrote:
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
> >> Sent: Tuesday, March 28, 2017 4:23 AM
> >>
> >> On 3/28/2017 7:52 AM, Rasesh Mody wrote:
> >>> Revise tunnel APIs/structs.
> >>>  - Unite tunnel start and update params in single struct
> >>>    "ecore_tunnel_info"
> >>>  - Remove A0 chip tunnelling support.
> >>>  - Added per tunnel info - removed bitmasks.
> >>>
> >>> Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
> >>
> >> I hate to say this, but this patch gives build error with clang [1],
> >> it seems it is fixed in next patch.
> >
> > We also observed this error with clang; however, the fix was wrongly
> > applied to the next patch, sorry about that.
> >
> >>
> >> This patchset is big, it takes time to review/validate, and a
> >> small error requires the whole patchset to be redone. I am not suggesting
> >> updating this one, but for future patchsets, what do you think about
> >> making multiple smaller patchsets?
> >
> > Please let us know if we need to refresh the current v4 patchset to
> > address the clang issue.
> 
> Yes, can you please send a new version of the patchset.

We have addressed the clang build error in the appropriate patch and resent the patchset. We have tested the patchset on the latest dpdk-next-net, d1f78e9696cd ("doc: detail new tap features in release note").

Thanks!
-Rasesh
 
> > It's a good suggestion; for future patchsets, we can do multiple smaller
> > patchsets.
> >
> > Thanks!
> > -Rasesh
> >
> >>
> >> Thanks,
> >> ferruh
> >>
> >>
> >> [1]
> >> Building x86_64-native-linuxapp-clang ...
> >> .../drivers/net/qede/base/ecore_sp_commands.c:141:25: error: implicit
> >> conversion from enumeration type 'enum tunnel_clss' to different
> >> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
> >>         p_tun->vxlan.tun_cls = type;
> >>                              ~ ^~~~
> >> .../drivers/net/qede/base/ecore_sp_commands.c:143:26: error: implicit
> >> conversion from enumeration type 'enum tunnel_clss' to different
> >> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
> >>         p_tun->l2_gre.tun_cls = type;
> >>                               ~ ^~~~
> >> .../drivers/net/qede/base/ecore_sp_commands.c:145:26: error: implicit
> >> conversion from enumeration type 'enum tunnel_clss' to different
> >> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
> >>         p_tun->ip_gre.tun_cls = type;
> >>                               ~ ^~~~
> >> .../drivers/net/qede/base/ecore_sp_commands.c:147:29: error: implicit
> >> conversion from enumeration type 'enum tunnel_clss' to different
> >> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
> >>         p_tun->l2_geneve.tun_cls = type;
> >>                                  ~ ^~~~
> >> .../drivers/net/qede/base/ecore_sp_commands.c:149:29: error: implicit
> >> conversion from enumeration type 'enum tunnel_clss' to different
> >> enumeration type 'enum ecore_tunn_clss' [-Werror,-Wenum-conversion]
> >>         p_tun->ip_geneve.tun_cls = type;
> >>                                  ~ ^~~~
> >> 5 errors generated.
> >> make[10]: *** [base/ecore_sp_commands.o] Error 1
> >> make[10]: *** Waiting for unfinished jobs....
> >> .../drivers/net/qede/qede_ethdev.c:1724:45: error: variable 'p_tunn'
> >> is uninitialized when used here [-Werror,-Wuninitialized]
> >>                         rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_tunn,
> >>                                                                  ^~~~~~
> >> .../drivers/net/qede/qede_ethdev.c:1711:34: note: initialize the
> >> variable 'p_tunn' to silence this warning
> >>         struct ecore_tunnel_info *p_tunn;
> >>                                         ^
> >>                                          = NULL
> >> .../drivers/net/qede/qede_ethdev.c:1877:5: error: variable 'p_tunn'
> >> is uninitialized when used here [-Werror,-Wuninitialized]
> >>                                 p_tunn, ECORE_SPQ_MODE_CB, NULL);
> >>                                 ^~~~~~
> >> .../drivers/net/qede/qede_ethdev.c:1822:34: note: initialize the
> >> variable 'p_tunn' to silence this warning
> >>         struct ecore_tunnel_info *p_tunn;
> >>                                         ^
> >>                                          = NULL
> >> 2 errors generated.
> >


* Re: [PATCH v5 00/62] net/qede/base: update PMD to 2.4.0.1
  2017-03-29 20:36             ` [PATCH v5 " Rasesh Mody
@ 2017-03-30 12:23               ` Ferruh Yigit
  0 siblings, 0 replies; 329+ messages in thread
From: Ferruh Yigit @ 2017-03-30 12:23 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 3/29/2017 9:36 PM, Rasesh Mody wrote:
> Hi Ferruh,
> 
> This patch set adds support for the new firmware 8.18.9.0, adds new
> features and includes bug fixes. It updates the PMD version to 2.4.0.1.
> 
> Please apply to dpdk-net-next for 17.05 release.
> 
> v4..v5
>  - properly fix clang compilation
> v1..v4
>  - address all the review comments received
> 
> Thanks!
> Rasesh
> 
> Harish Patil (3):
>   net/qede/base: add support for arfs mode
>   net/qede: add ntuple and flow director filter support
>   net/qede: add LRO/TSO offloads support
> 
> Rasesh Mody (59):
>   net/qede/base: return an initialized return value
>   net/qede/base: send FW version driver state to MFW
>   net/qede/base: mask Rx buffer attention bits
>   net/qede/base: print various indication on Tx-timeouts
>   net/qede/base: utilize FW 8.18.9.0
>   net/qede: upgrade the FW to 8.18.9.0
>   net/qede/base: decrease maximum HW func per device
>   net/qede/base: move mask constants defining NIC type
>   net/qede/base: remove attribute from update current config
>   net/qede/base: add nvram options
>   net/qede/base: add comment
>   net/qede/base: use default MTU from shared memory
>   net/qede/base: change queue/sb-id from 8 bit to 16 bit
>   net/qede/base: update MFW when default MTU is changed
>   net/qede/base: prevent device init failure
>   net/qede/base: read card personality via MFW commands
>   net/qede/base: allow probe to succeed with minor HW-issues
>   net/qede/base: remove unneeded step in HW init
>   net/qede/base: allow only trusted VFs to be promisc
>   net/qede/base: qm initialization revamp
>   net/qede/base: print firmware MFW and MBI versions
>   net/qede/base: check active VF queues before stopping
>   net/qede/base: set driver type before sending load request
>   net/qede/base: prevent driver load with invalid resources
>   net/qede/base: add interfaces for MFW TLV request processing
>   net/qede/base: code refactoring of SP queues
>   net/qede/base: make L2 queues handle based
>   net/qede/base: add support for handling TLV request from MFW
>   net/qede/base: optimize cache-line access
>   net/qede/base: infrastructure changes for VF tunnelling
>   net/qede/base: revise tunnel APIs/structs
>   net/qede/base: add tunnelling support for VFs
>   net/qede/base: formatting changes
>   net/qede/base: prevent transmitter stuck condition
>   net/qede/base: add mask/shift defines for resource command
>   net/qede/base: add API for using MFW resource lock
>   net/qede/base: remove clock slowdown option
>   net/qede/base: add new image types
>   net/qede/base: use L2-handles for RSS configuration
>   net/qede/base: change valloc to vzalloc
>   net/qede/base: add support for previous driver unload
>   net/qede/base: add non-L2 dcbx tlv application support
>   net/qede/base: update bulletin board during VF init
>   net/qede/base: add coalescing support for VFs
>   net/qede/base: add macro for resource value message
>   net/qede/base: add mailbox for resource allocation
>   net/qede/base: add macro for unsupported command
>   net/qede/base: set max values for soft resources
>   net/qede/base: add return code check
>   net/qede/base: zero out MFW mailbox data
>   net/qede/base: move code bits
>   net/qede/base: add PF parameter
>   net/qede/base: allow PMD to control vport and RSS engine ids
>   net/qede/base: add udp ports in bulletin board message
>   net/qede/base: prevent DMAE transactions during recovery
>   net/qede/base: multi-Txq support on same queue-zone for VFs
>   net/qede/base: prevent race condition during unload
>   net/qede/base: semantic changes
>   net/qede: update PMD version to 2.4.0.1

Series applied to dpdk-next-net/master, thanks.
