* [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1
@ 2017-09-19  1:29 Rasesh Mody
  2017-09-19  1:29 ` [PATCH 01/53] net/qede/base: add NVM config options Rasesh Mody
                   ` (29 more replies)
  0 siblings, 30 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Hi,

This patch set adds support for the new firmware 8.30.12.0 and includes
enhancements, code cleanup and bug fixes. It also updates the PMD
version to 2.6.0.1.

Thanks!
Rasesh

Rasesh Mody (53):
  net/qede/base: add NVM config options
  net/qede/base: update management FW supported features
  net/qede/base: use crc32 OSAL macro
  net/qede/base: allocate VF queues before PF
  net/qede/base: convert device type to enum
  net/qede/base: changes for VF queue zone
  net/qede/base: interchangeably use SB between PF and VF
  net/qede/base: add API to configure coalescing for VF queues
  net/qede/base: restrict cache line size register padding
  net/qede/base: fix to use a passed ptt handle
  net/qede/base: add a sanity check
  net/qede/base: add SmartAN support
  net/qede/base: alter driver's force load behavior
  net/qede/base: add mdump sub-commands
  net/qede/base: add EEE support
  net/qede/base: use passed ptt handler
  net/qede/base: prevent re-assertions of parity errors
  net/qede/base: avoid possible race condition
  net/qede/base: revise management FW mbox access scheme
  net/qede/base: remove helper functions/structures
  net/qede/base: initialize resc lock/unlock params
  net/qede/base: rename MFW get/set field defines
  net/qede/base: allow clients to override VF MSI-X table size
  net/qede/base: add API to send STAG config update to FW
  net/qede/base: add support for doorbell overflow recovery
  net/qede/base: block mbox command to unresponsive MFW
  net/qede/base: prevent stop vport assert by malicious VF
  net/qede/base: remove unused parameters
  net/qede/base: fix macros to check chip revision/metal
  net/qede/base: read per queue coalescing from HW
  net/qede/base: refactor device's number of ports logic
  net/qede/base: use proper units for rate limiting
  net/qede/base: use available macro
  net/qede/base: use function pointers for spq async callback
  net/qede/base: fix API return types
  net/qede/base: semantic changes
  net/qede/base: handle the error condition properly
  net/qede/base: add new macro for CMT mode
  net/qede/base: change verbosity
  net/qede/base: fix number of app table entries
  net/qede/base: update firmware to 8.30.12.0
  net/qede/base: add UFP support
  net/qede/base: add support for mapped doorbell Bars for VFs
  net/qede/base: add support for driver attribute repository
  net/qede/base: move define to header file
  net/qede/base: dcbx dscp related extensions
  net/qede/base: add feature support for per-PF virtual link
  net/qede/base: catch an init command write failure
  net/qede/base: retain dcbx config till actually applied
  net/qede/base: disable aRFS for NPAR and 100G
  net/qede/base: add support for WoL writes
  net/qede/base: remove unused input parameter
  net/qede/base: update PMD version to 2.6.0.1

 drivers/net/qede/base/bcm_osal.c              |   12 +
 drivers/net/qede/base/bcm_osal.h              |   20 +-
 drivers/net/qede/base/common_hsi.h            |  760 ++++++------
 drivers/net/qede/base/ecore.h                 |  210 +++-
 drivers/net/qede/base/ecore_cxt.c             |  111 +-
 drivers/net/qede/base/ecore_cxt.h             |    6 +-
 drivers/net/qede/base/ecore_dcbx.c            |  328 +++--
 drivers/net/qede/base/ecore_dcbx.h            |    9 +-
 drivers/net/qede/base/ecore_dev.c             | 1066 ++++++++++++----
 drivers/net/qede/base/ecore_dev_api.h         |  113 +-
 drivers/net/qede/base/ecore_hsi_common.h      |  245 ++--
 drivers/net/qede/base/ecore_hsi_debug_tools.h |    6 +-
 drivers/net/qede/base/ecore_hsi_eth.h         |   65 +-
 drivers/net/qede/base/ecore_hw.c              |   10 +-
 drivers/net/qede/base/ecore_hw.h              |   15 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |  511 ++++----
 drivers/net/qede/base/ecore_init_fw_funcs.h   |   98 +-
 drivers/net/qede/base/ecore_init_ops.c        |   73 +-
 drivers/net/qede/base/ecore_init_ops.h        |    3 +-
 drivers/net/qede/base/ecore_int.c             | 1001 ++++++++++-----
 drivers/net/qede/base/ecore_int.h             |   73 +-
 drivers/net/qede/base/ecore_int_api.h         |   47 +-
 drivers/net/qede/base/ecore_iov_api.h         |   41 +-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   44 +-
 drivers/net/qede/base/ecore_l2.c              |  293 +++--
 drivers/net/qede/base/ecore_l2.h              |   82 +-
 drivers/net/qede/base/ecore_l2_api.h          |   30 +-
 drivers/net/qede/base/ecore_mcp.c             | 1612 +++++++++++++++++--------
 drivers/net/qede/base/ecore_mcp.h             |  195 ++-
 drivers/net/qede/base/ecore_mcp_api.h         |  190 +--
 drivers/net/qede/base/ecore_mng_tlv.c         |    9 +-
 drivers/net/qede/base/ecore_proto_if.h        |    5 +
 drivers/net/qede/base/ecore_rt_defs.h         |  858 +++++++------
 drivers/net/qede/base/ecore_sp_api.h          |    2 +
 drivers/net/qede/base/ecore_sp_commands.c     |  152 ++-
 drivers/net/qede/base/ecore_sp_commands.h     |   33 +-
 drivers/net/qede/base/ecore_spq.c             |  109 +-
 drivers/net/qede/base/ecore_spq.h             |   20 +
 drivers/net/qede/base/ecore_sriov.c           |  945 ++++++++++-----
 drivers/net/qede/base/ecore_sriov.h           |   53 +-
 drivers/net/qede/base/ecore_vf.c              |  414 +++++--
 drivers/net/qede/base/ecore_vf.h              |   72 +-
 drivers/net/qede/base/ecore_vfpf_if.h         |   80 +-
 drivers/net/qede/base/mcp_public.h            |  465 ++++---
 drivers/net/qede/base/nvm_cfg.h               |   90 +-
 drivers/net/qede/base/reg_addr.h              |   17 +
 drivers/net/qede/qede_ethdev.c                |   29 +-
 drivers/net/qede/qede_ethdev.h                |    4 +-
 drivers/net/qede/qede_fdir.c                  |    8 +-
 drivers/net/qede/qede_if.h                    |   15 +-
 drivers/net/qede/qede_main.c                  |   76 +-
 drivers/net/qede/qede_rxtx.c                  |   12 +-
 53 files changed, 7021 insertions(+), 3724 deletions(-)

-- 
1.7.10.3

* [PATCH 01/53] net/qede/base: add NVM config options
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 02/53] net/qede/base: update management FW supported features Rasesh Mody
                   ` (28 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add new NVM configuration options:
 - Preboot Debug Mode enable,
 - 20G Ethernet backplane support,
 - RDMA in the NPAR protocol list,
 - PHY module temperature thresholds, etc.
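
For reference only (not part of the patch), the new options follow the
header's existing MASK/OFFSET convention. A minimal sketch of decoding
the new PHY module temperature thresholds (the helper is hypothetical;
the defines are taken from the patch below):

    #include <stdint.h>

    #define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK          0x000000FF
    #define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET        0
    #define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK   0x0000FF00
    #define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8

    /* Extract a field from an NVM config word via its mask/offset pair */
    static uint32_t nvm_cfg_field(uint32_t val, uint32_t mask, uint32_t off)
    {
            return (val & mask) >> off;
    }

    /* e.g., given the port's new 'temperature' word (offset 0x88):
     * dead_th = nvm_cfg_field(temperature,
     *                 NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK,
     *                 NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET);
     */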

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/nvm_cfg.h |   63 ++++++++++++++++++++++++++++++++-------
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 4e58835..ccd9286 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,20 +13,20 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     12/15/2016
+ * Created:     4/10/2017
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
-#define NVM_CFG_version 0x81805
+#define NVM_CFG_version 0x83000
 
-#define NVM_CFG_new_option_seq 15
+#define NVM_CFG_new_option_seq 22
 
-#define NVM_CFG_removed_option_seq 0
+#define NVM_CFG_removed_option_seq 1
 
-#define NVM_CFG_updated_value_seq 1
+#define NVM_CFG_updated_value_seq 4
 
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
@@ -509,6 +509,10 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
 		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
 		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_MASK 0x30000000
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET 28
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED 0x0
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -1036,7 +1040,9 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
-	u32 reserved[58]; /* 0x140 */
+	u32 preboot_debug_mode_std; /* 0x140 */
+	u32 preboot_debug_mode_ext; /* 0x144 */
+	u32 reserved[56]; /* 0x148 */
 };
 
 struct nvm_cfg1_path {
@@ -1134,6 +1140,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
 		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1142,6 +1149,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16
 		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1152,6 +1160,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
@@ -1167,6 +1176,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
@@ -1276,6 +1286,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6
@@ -1289,6 +1300,8 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_RESERVED65_OFFSET 0
 		#define NVM_CFG1_PORT_RESERVED66_MASK 0x00010000
 		#define NVM_CFG1_PORT_RESERVED66_OFFSET 16
+		#define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_MASK 0x01FE0000
+		#define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_OFFSET 17
 	u32 vf_cfg; /* 0x30 */
 		#define NVM_CFG1_PORT_RESERVED8_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_RESERVED8_OFFSET 0
@@ -1304,9 +1317,12 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET 16
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G 0x1
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G 0x2
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_25G 0x8
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_40G 0x10
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_50G 0x20
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_25G 0x4
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_25G 0x8
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_40G 0x8
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_40G 0x10
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_50G 0x10
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G 0x20
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40
 	u32 transceiver_00; /* 0x40 */
 	/*  Define for mapping of transceiver signal module absent */
@@ -1412,6 +1428,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
 		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1423,6 +1440,7 @@ struct nvm_cfg1_port {
 			16
 		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1434,6 +1452,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G 0x6
@@ -1444,6 +1463,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G 0x6
@@ -1490,6 +1510,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
 		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1501,6 +1522,7 @@ struct nvm_cfg1_port {
 			16
 		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1512,6 +1534,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G 0x6
@@ -1522,6 +1545,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G 0x6
@@ -1568,6 +1592,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
 		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1579,6 +1604,7 @@ struct nvm_cfg1_port {
 			16
 		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1590,6 +1616,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G 0x6
@@ -1600,6 +1627,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G 0x6
@@ -1646,6 +1674,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
 		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1658,6 +1687,7 @@ struct nvm_cfg1_port {
 			16
 		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
@@ -1670,6 +1700,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G 0x6
@@ -1680,6 +1711,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G 0x6
@@ -1726,6 +1758,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_OFFSET 0
 		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_50G 0x20
@@ -1735,6 +1768,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_OFFSET 16
 		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_1G 0x1
 		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_10G 0x2
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_20G 0x4
 		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_25G 0x8
 		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_40G 0x10
 		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_50G 0x20
@@ -1745,6 +1779,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G 0x6
@@ -1755,6 +1790,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG 0x0
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_1G 0x1
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_10G 0x2
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_20G 0x3
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_25G 0x4
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G 0x6
@@ -1795,7 +1831,13 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_FIRECODE 0x1
 		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_RS 0x2
 		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_AUTO 0x7
-	u32 reserved[116]; /* 0x88 */
+	u32 temperature; /* 0x88 */
+		#define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK 0x000000FF
+		#define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET 0
+		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK \
+			0x0000FF00
+		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8
+	u32 reserved[115]; /* 0x8C */
 };
 
 struct nvm_cfg1_func {
@@ -1910,6 +1952,7 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
 		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
 		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_RDMA 0x8
 	u32 reserved[8]; /* 0x30 */
 };
 
-- 
1.7.10.3

* [PATCH 02/53] net/qede/base: update management FW supported features
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
  2017-09-19  1:29 ` [PATCH 01/53] net/qede/base: add NVM config options Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 03/53] net/qede/base: use crc32 OSAL macro Rasesh Mody
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

 - Add transceiver temperature monitoring/reporting feature
 - Add new mbox command DRV_MSG_CODE_FEATURE_SUPPORT to exchange info
   between drivers and management FW regarding supported features
 - Add EEE to the Link Flap Avoidance check, etc.
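
As an illustration only (the defines exist in this patch, but the
composition below is a sketch, not driver code), a driver could enable
EEE with a balanced Tx LPI timer and advertise its feature support as:

    u32 eee_cfg, drv_mb_param;

    /* Enable EEE and Tx LPI; advertise 1G and 10G for EEE */
    eee_cfg = EEE_CFG_EEE_ENABLED | EEE_CFG_TX_LPI |
              EEE_CFG_ADV_SPEED_1G | EEE_CFG_ADV_SPEED_10G;

    /* The Tx LPI timer now lives in bits 31:4 of eee_cfg */
    eee_cfg |= (EEE_TX_TIMER_USEC_BALANCED_TIME << EEE_TX_TIMER_USEC_SHIFT) &
               EEE_TX_TIMER_USEC_MASK;

    /* Tell the MFW which of the new features the driver supports */
    drv_mb_param = DRV_MB_PARAM_FEATURE_SUPPORT_EEE |
                   DRV_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ;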

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/mcp_public.h |  104 ++++++++++++++++++------------------
 1 file changed, 53 insertions(+), 51 deletions(-)

diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 1ad8a96..f41d4e6 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -59,7 +59,7 @@ struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
 #define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8
+#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
@@ -84,32 +84,22 @@ struct eth_phy_cfg {
 /* Remote Serdes Loopback (RX to TX) */
 #define ETH_LOOPBACK_INT_PHY_FEA_AH_ONLY (9)
 
-	/* Used to configure the EEE Tx LPI timer, has several modes of
-	 * operation, according to bits 29:28
-	 * 2'b00: Timer will be configured by nvram, output will be the value
-	 *        from nvram.
-	 * 2'b01: Timer will be configured by nvram, output will be in
-	 *        16xmicroseconds.
-	 * 2'b10: bits 1:0 contain an nvram value which will be used instead
-	 *        of the one located in the nvram. Output will be that value.
-	 * 2'b11: bits 19:0 contain the idle timer in microseconds; output
-	 *        will be in 16xmicroseconds.
-	 * Bits 31:30 should be 2'b11 in order for EEE to be enabled.
-	 */
-	u32 eee_mode;
-#define EEE_MODE_TIMER_USEC_MASK	(0x000fffff)
-#define EEE_MODE_TIMER_USEC_OFFSET	(0)
-#define EEE_MODE_TIMER_USEC_BALANCED_TIME	(0xa00)
-#define EEE_MODE_TIMER_USEC_AGGRESSIVE_TIME	(0x100)
-#define EEE_MODE_TIMER_USEC_LATENCY_TIME	(0x6000)
-/* Set by the driver to request status timer will be in microseconds and and not
- * in EEE policy definition
+	u32 eee_cfg;
+/* EEE is enabled (configuration). Refer to eee_status->active for negotiated
+ * status
  */
-#define EEE_MODE_OUTPUT_TIME		(1 << 28)
-/* Set by the driver to override default nvm timer */
-#define EEE_MODE_OVERRIDE_NVRAM		(1 << 29)
-#define EEE_MODE_ENABLE_LPI		(1 << 30) /* Set when */
-#define EEE_MODE_ADV_LPI		(1 << 31) /* Set when EEE is enabled */
+#define EEE_CFG_EEE_ENABLED	(1 << 0)
+#define EEE_CFG_TX_LPI		(1 << 1)
+#define EEE_CFG_ADV_SPEED_1G	(1 << 2)
+#define EEE_CFG_ADV_SPEED_10G	(1 << 3)
+#define EEE_TX_TIMER_USEC_MASK	(0xfffffff0)
+#define EEE_TX_TIMER_USEC_SHIFT	4
+#define EEE_TX_TIMER_USEC_BALANCED_TIME		(0xa00)
+#define EEE_TX_TIMER_USEC_AGGRESSIVE_TIME	(0x100)
+#define EEE_TX_TIMER_USEC_LATENCY_TIME		(0x6000)
+
+	u32 link_modes; /* Additional link modes */
+#define LINK_MODE_SMARTLINQ_ENABLE		0x1
 };
 
 struct port_mf_cfg {
@@ -697,6 +687,7 @@ struct public_port {
 #define LFA_SPEED_MISMATCH				(1 << 3)
 #define LFA_FLOW_CTRL_MISMATCH				(1 << 4)
 #define LFA_ADV_SPEED_MISMATCH				(1 << 5)
+#define LFA_EEE_MISMATCH				(1 << 6)
 #define LINK_FLAP_AVOIDANCE_COUNT_OFFSET	8
 #define LINK_FLAP_AVOIDANCE_COUNT_MASK		0x0000ff00
 #define LINK_FLAP_COUNT_OFFSET			16
@@ -787,38 +778,35 @@ struct public_port {
 	u32 wol_pkt_details;
 	struct dcb_dscp_map dcb_dscp_map;
 
-	/* the status of EEE auto-negotiation
-	 * bits 19:0 the configured tx-lpi entry timer value. Depends on bit 31.
-	 * bits 23:20 the speeds advertised for EEE.
-	 * bits 27:24 the speeds the Link partner advertised for EEE.
-	 * The supported/adv. modes in bits 27:19 originate from the
-	 * SHMEM_EEE_XXX_ADV definitions (where XXX is replaced by speed).
-	 * bit 28 when 1'b1 EEE was requested.
-	 * bit 29 when 1'b1 tx lpi was requested.
-	 * bit 30 when 1'b1 EEE was negotiated. Tx lpi will be asserted if 30:29
-	 *        are 2'b11.
-	 * bit 31 - When 1'b0 bits 15:0 contain
-	 *          NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_XXX define as value.
-	 *          When 1'b1 those bits contains a value times 16 microseconds.
-	 */
 	u32 eee_status;
-#define EEE_TIMER_MASK		0x000fffff
-#define EEE_ADV_STATUS_MASK	0x00f00000
-#define EEE_1G_ADV	(1 << 1)
-#define EEE_10G_ADV	(1 << 2)
-#define EEE_ADV_STATUS_SHIFT	20
-#define	EEE_LP_ADV_STATUS_MASK	0x0f000000
-#define EEE_LP_ADV_STATUS_SHIFT	24
-#define EEE_REQUESTED_BIT	0x10000000
-#define EEE_LPI_REQUESTED_BIT	0x20000000
-#define EEE_ACTIVE_BIT		0x40000000
-#define EEE_TIME_OUTPUT_BIT	0x80000000
+/* Set when EEE negotiation is complete. */
+#define EEE_ACTIVE_BIT		(1 << 0)
+
+/* Shows the Local Device EEE capabilities */
+#define EEE_LD_ADV_STATUS_MASK	0x000000f0
+#define EEE_LD_ADV_STATUS_SHIFT	4
+	#define EEE_1G_ADV	(1 << 1)
+	#define EEE_10G_ADV	(1 << 2)
+/* Same values as in EEE_LD_ADV, but for Link Parter */
+#define	EEE_LP_ADV_STATUS_MASK	0x00000f00
+#define EEE_LP_ADV_STATUS_SHIFT	8
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
 #define EEE_REMOTE_TW_TX_MASK	0x0000ffff
 #define EEE_REMOTE_TW_TX_SHIFT	0
 #define EEE_REMOTE_TW_RX_MASK	0xffff0000
 #define EEE_REMOTE_TW_RX_SHIFT	16
+
+	u32 module_info;
+#define ETH_TRANSCEIVER_MONITORING_TYPE_MASK		0x000000FF
+#define ETH_TRANSCEIVER_MONITORING_TYPE_OFFSET		0
+#define ETH_TRANSCEIVER_ADDR_CHNG_REQUIRED		(1 << 2)
+#define ETH_TRANSCEIVER_RCV_PWR_MEASURE_TYPE		(1 << 3)
+#define ETH_TRANSCEIVER_EXTERNALLY_CALIBRATED		(1 << 4)
+#define ETH_TRANSCEIVER_INTERNALLY_CALIBRATED		(1 << 5)
+#define ETH_TRANSCEIVER_HAS_DIAGNOSTIC			(1 << 6)
+#define ETH_TRANSCEIVER_IDENT_MASK			0x0000ff00
+#define ETH_TRANSCEIVER_IDENT_OFFSET			8
 };
 
 /**************************************/
@@ -1376,6 +1364,11 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
+/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_*,
+ * return FW_MB_PARAM_FEATURE_SUPPORT_*
+ */
+#define DRV_MSG_CODE_FEATURE_SUPPORT            0x00300000
+
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
 
 	u32 drv_mb_param;
@@ -1523,6 +1516,11 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT      8
 #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK       0x0000FF00
 
+/* driver supports SmartLinQ */
+#define DRV_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ  0x00000001
+/* driver support EEE */
+#define DRV_MB_PARAM_FEATURE_SUPPORT_EEE        0x00000002
+
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
 #define FW_MSG_CODE_UNSUPPORTED			0x00000000
@@ -1638,6 +1636,10 @@ struct public_drv_mb {
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT		0
 
+/* MFW supports SmartLinQ */
+#define FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ   0x00000001
+/* MFW supports EEE */
+#define FW_MB_PARAM_FEATURE_SUPPORT_EEE         0x00000002
 
 	u32 drv_pulse_mb;
 #define DRV_PULSE_SEQ_MASK                      0x00007fff
-- 
1.7.10.3

* [PATCH 03/53] net/qede/base: use crc32 OSAL macro
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
  2017-09-19  1:29 ` [PATCH 01/53] net/qede/base: add NVM config options Rasesh Mody
  2017-09-19  1:29 ` [PATCH 02/53] net/qede/base: update management FW supported features Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 04/53] net/qede/base: allocate VF queues before PF Rasesh Mody
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Move the ecore_crc32() implementation out of the base driver, renaming
it qede_crc32(), and use the OSAL_CRC32() macro where required.
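
For reference (not part of the patch), the routine is the standard
reflected CRC-32 with polynomial 0xedb88320; a self-contained sanity
check of the moved implementation:

    #include <assert.h>
    #include <stdint.h>

    typedef uint8_t u8;
    typedef uint32_t u32;

    /* Same algorithm the patch moves into bcm_osal.c */
    static u32 qede_crc32(u32 crc, u8 *ptr, u32 length)
    {
            int i;

            while (length--) {
                    crc ^= *ptr++;
                    for (i = 0; i < 8; i++)
                            crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
            }
            return crc;
    }

    int main(void)
    {
            /* Well-known CRC-32 check value for "123456789" */
            u32 crc = qede_crc32(0xffffffff, (u8 *)"123456789", 9);

            assert((crc ^ 0xffffffff) == 0xcbf43926);
            return 0;
    }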

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.c    |   12 ++++++++++++
 drivers/net/qede/base/bcm_osal.h    |    4 +++-
 drivers/net/qede/base/ecore_sriov.c |   17 ++---------------
 drivers/net/qede/base/ecore_sriov.h |   13 -------------
 drivers/net/qede/base/ecore_vf.c    |    4 ++--
 5 files changed, 19 insertions(+), 31 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 2603a8b..e3a2cb4 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -292,3 +292,15 @@ u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
 	DP_ERR(p_hwfn, "HW error occurred [%s]\n", err_str);
 	ecore_int_attn_clr_enable(p_hwfn->p_dev, true);
 }
+
+u32 qede_crc32(u32 crc, u8 *ptr, u32 length)
+{
+	int i;
+
+	while (length--) {
+		crc ^= *ptr++;
+		for (i = 0; i < 8; i++)
+			crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
+	}
+	return crc;
+}
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 3acf8f7..6148982 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -427,7 +427,9 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 	qede_get_mcp_proto_stats(dev, type, stats)
 
 #define	OSAL_SLOWPATH_IRQ_REQ(p_hwfn) (0)
-#define OSAL_CRC32(crc, buf, length) 0
+
+u32 qede_crc32(u32 crc, u8 *ptr, u32 length);
+#define OSAL_CRC32(crc, buf, length) qede_crc32(crc, buf, length)
 #define OSAL_CRC8_POPULATE(table, polynomial) nothing
 #define OSAL_CRC8(table, pdata, nbytes, crc) 0
 #define OSAL_MFW_TLV_REQ(p_hwfn) (0)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index db2873e..cb3f4c3 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -325,19 +325,6 @@ static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
-/* TODO - this is linux crc32; Need a way to ifdef it out for linux */
-u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
-{
-	int i;
-
-	while (length--) {
-		crc ^= *ptr++;
-		for (i = 0; i < 8; i++)
-			crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
-	}
-	return crc;
-}
-
 enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 						int vfid,
 						struct ecore_ptt *p_ptt)
@@ -359,8 +346,8 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 
 	/* Increment bulletin board version and compute crc */
 	p_bulletin->version++;
-	p_bulletin->crc = ecore_crc32(0, (u8 *)p_bulletin + crc_size,
-				      p_vf->bulletin.size - crc_size);
+	p_bulletin->crc = OSAL_CRC32(0, (u8 *)p_bulletin + crc_size,
+				     p_vf->bulletin.size - crc_size);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "Posting Bulletin 0x%08x to VF[%d] (CRC 0x%08x)\n",
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 3c2f58b..5eb3484 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -264,19 +264,6 @@ enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn	 *p_hwfn,
 					   union event_ring_data *data);
 
 /**
- * @brief calculate CRC for bulletin board validation
- *
- * @param basic crc seed
- * @param ptr to beginning of buffer
- * @length in bytes of buffer
- *
- * @return calculated crc over buffer [with respect to seed].
- */
-u32 ecore_crc32(u32 crc,
-		u8  *ptr,
-		u32 length);
-
-/**
  * @brief Mark structs of vfs that have been FLR-ed.
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index f4d331c..7a52621 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1497,8 +1497,8 @@ enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
 		return ECORE_SUCCESS;
 
 	/* Verify the bulletin we see is valid */
-	crc = ecore_crc32(0, (u8 *)&shadow + crc_size,
-			  p_iov->bulletin.size - crc_size);
+	crc = OSAL_CRC32(0, (u8 *)&shadow + crc_size,
+			 p_iov->bulletin.size - crc_size);
 	if (crc != shadow.crc)
 		return ECORE_AGAIN;
 
-- 
1.7.10.3

* [PATCH 04/53] net/qede/base: allocate VF queues before PF
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (2 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 03/53] net/qede/base: use crc32 OSAL macro Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 05/53] net/qede/base: convert device type to enum Rasesh Mody
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Change the order in which we allocate resources to align with the
management FW: first allocate the VF L2 queues, and only afterwards use
what is left for the PF.
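
A small worked sketch of the new carve-out order (the resource counts
are made up for illustration; this is not driver code):

    #include <stdio.h>

    static unsigned min_u32(unsigned a, unsigned b) { return a < b ? a : b; }

    int main(void)
    {
            unsigned sbs = 64, l2_queues = 128, sb_iov_cnt = 32;

            /* New order: carve the VF L2 queues out first... */
            unsigned vf_l2 = min_u32(l2_queues, sb_iov_cnt);
            /* ...and give the PF whatever SBs/queues remain */
            unsigned pf_l2 = min_u32(sbs, l2_queues - vf_l2);

            printf("VF L2: %u, PF L2: %u\n", vf_l2, pf_l2); /* 32, 64 */
            return 0;
    }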

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 65b89b8..6e40088 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2423,22 +2423,26 @@ static void get_function_id(struct ecore_hwfn *p_hwfn)
 static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 {
 	u32 *feat_num = p_hwfn->hw_info.feat_num;
-	struct ecore_sb_cnt_info sb_cnt_info;
-	int num_features = 1;
+	u32 non_l2_sbs = 0;
 
 	/* L2 Queues require each: 1 status block. 1 L2 queue */
-	feat_num[ECORE_PF_L2_QUE] =
-	    OSAL_MIN_T(u32,
-		       RESC_NUM(p_hwfn, ECORE_SB) / num_features,
-		       RESC_NUM(p_hwfn, ECORE_L2_QUEUE));
-
-	OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
-	ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-	feat_num[ECORE_VF_L2_QUE] =
-		OSAL_MIN_T(u32,
-			   RESC_NUM(p_hwfn, ECORE_L2_QUEUE) -
-			   FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
-			   sb_cnt_info.sb_iov_cnt);
+	if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
+		struct ecore_sb_cnt_info sb_cnt_info;
+
+		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
+		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
+
+		/* Start by allocating VF queues, then PF's */
+		feat_num[ECORE_VF_L2_QUE] =
+			OSAL_MIN_T(u32,
+				   RESC_NUM(p_hwfn, ECORE_L2_QUEUE),
+				   sb_cnt_info.sb_iov_cnt);
+		feat_num[ECORE_PF_L2_QUE] =
+			OSAL_MIN_T(u32,
+				   RESC_NUM(p_hwfn, ECORE_SB) - non_l2_sbs,
+				   RESC_NUM(p_hwfn, ECORE_L2_QUEUE) -
+				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE));
+	}
 
 	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
 					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
-- 
1.7.10.3

* [PATCH 05/53] net/qede/base: convert device type to enum
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (3 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 04/53] net/qede/base: allocate VF queues before PF Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 06/53] net/qede/base: changes for VF queue zone Rasesh Mody
                   ` (24 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a new enum for the device type, and expose the device type via the
device info.
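
As a usage sketch (the consumer below is hypothetical), upper layers
that receive the qed_dev_info filled in by qed_slowpath_start() can now
branch on the chip family directly:

    /* Map the new dev_type field to a printable chip family name */
    static const char *qede_chip_name(const struct qed_dev_info *info)
    {
            return info->dev_type == ECORE_DEV_TYPE_AH ? "AH" : "BB";
    }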

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |   14 ++++++++------
 drivers/net/qede/base/ecore_dev.c |   16 +++++++++++++---
 drivers/net/qede/qede_if.h        |    2 ++
 drivers/net/qede/qede_main.c      |    1 +
 4 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 0d68a9b..10fb16a 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -632,15 +632,18 @@ enum qed_dbg_features {
 	DBG_FEATURE_NUM
 };
 
+enum ecore_dev_type {
+	ECORE_DEV_TYPE_BB,
+	ECORE_DEV_TYPE_AH,
+};
+
 struct ecore_dev {
 	u32				dp_module;
 	u8				dp_level;
 	char				name[NAME_SIZE];
 	void				*dp_ctx;
 
-	u8				type;
-#define ECORE_DEV_TYPE_BB	(0 << 0)
-#define ECORE_DEV_TYPE_AH	(1 << 0)
+	enum ecore_dev_type		type;
 /* Translate type/revision combo into the proper conditions */
 #define ECORE_IS_BB(dev)	((dev)->type == ECORE_DEV_TYPE_BB)
 #define ECORE_IS_BB_A0(dev)	(ECORE_IS_BB(dev) && CHIP_REV_IS_A0(dev))
@@ -653,13 +656,12 @@ struct ecore_dev {
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
 
+	u16 vendor_id;
+	u16 device_id;
 #define ECORE_DEV_ID_MASK	0xff00
 #define ECORE_DEV_ID_MASK_BB	0x1600
 #define ECORE_DEV_ID_MASK_AH	0x8000
 
-	u16 vendor_id;
-	u16 device_id;
-
 	u16				chip_num;
 	#define CHIP_NUM_MASK			0xffff
 	#define CHIP_NUM_SHIFT			16
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 6e40088..4a31d67 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3352,6 +3352,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	u16 device_id_mask;
 	u32 tmp;
 
 	/* Read Vendor Id / Device Id */
@@ -3361,10 +3362,19 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 				  &p_dev->device_id);
 
 	/* Determine type */
-	if ((p_dev->device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH)
-		p_dev->type = ECORE_DEV_TYPE_AH;
-	else
+	device_id_mask = p_dev->device_id & ECORE_DEV_ID_MASK;
+	switch (device_id_mask) {
+	case ECORE_DEV_ID_MASK_BB:
 		p_dev->type = ECORE_DEV_TYPE_BB;
+		break;
+	case ECORE_DEV_ID_MASK_AH:
+		p_dev->type = ECORE_DEV_TYPE_AH;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, true, "Unknown device id 0x%x\n",
+			  p_dev->device_id);
+		return ECORE_ABORTED;
+	}
 
 	p_dev->chip_num = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
 					 MISCS_REG_CHIP_NUM);
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 9864bb4..8e0c999 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -48,6 +48,8 @@ struct qed_dev_info {
 	bool vxlan_enable;
 	bool gre_enable;
 	bool geneve_enable;
+
+	enum ecore_dev_type dev_type;
 };
 
 struct qed_dev_eth_info {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a6ff7af..2447ffb 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -357,6 +357,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	dev_info->num_hwfns = edev->num_hwfns;
 	dev_info->is_mf_default = IS_MF_DEFAULT(&edev->hwfns[0]);
 	dev_info->mtu = ECORE_LEADING_HWFN(edev)->hw_info.mtu;
+	dev_info->dev_type = edev->type;
 
 	rte_memcpy(&dev_info->hw_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
 	       ETHER_ADDR_LEN);
-- 
1.7.10.3

* [PATCH 06/53] net/qede/base: changes for VF queue zone
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (4 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 05/53] net/qede/base: convert device type to enum Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 07/53] net/qede/base: interchangeably use SB between PF and VF Rasesh Mody
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Allow opening multiple Tx queues on a single qzone for VFs. This is
supported by the Rx/Tx TLVs now having an additional extended TLV that
passes the `qid_usage_idx', a unique number for each queue-cid opened
on a given queue-zone.

Fix a Tx timeout issue caused by using more than 16 CIDs by adding an
additional VF legacy mode, detaching the CID scheme from the original,
previously only-existing legacy mode suited for older releases.
Following this change, only VFs that publish VFPF_ACQUIRE_CAP_QUEUE_QIDS
have the new CID scheme applied; i.e., the new 'legacy' mode is simply
whether this capability is published or not.

Change the doorbell-clearing logic so that the PF cleans the doorbells
for both legacy and non-legacy VFs.
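
A minimal model of the per-request qid resolution this adds (it mirrors
ecore_iov_vf_mbx_qid() in the patch; the constant values below are
assumed for the sketch, as only their names appear in the diff):

    #include <stdint.h>

    #define MAX_QUEUES_PER_QZONE    4       /* real value comes from the HSI */
    #define ECORE_IOV_LEGACY_QID_RX 0
    #define ECORE_IOV_LEGACY_QID_TX 1
    #define ECORE_IOV_QID_INVALID   0xFE

    static uint8_t resolve_qid(int vf_has_qids_cap, int is_tx,
                               int tlv_present, uint8_t tlv_qid)
    {
            /* Legacy VF: one fixed index per direction */
            if (!vf_has_qids_cap)
                    return is_tx ? ECORE_IOV_LEGACY_QID_TX
                                 : ECORE_IOV_LEGACY_QID_RX;

            /* Capable VF must supply a valid CHANNEL_TLV_QID value */
            if (!tlv_present || tlv_qid >= MAX_QUEUES_PER_QZONE)
                    return ECORE_IOV_QID_INVALID;

            return tlv_qid;
    }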

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_cxt.c      |   19 +-
 drivers/net/qede/base/ecore_l2.c       |   29 +--
 drivers/net/qede/base/ecore_l2.h       |    6 +-
 drivers/net/qede/base/ecore_proto_if.h |    4 +
 drivers/net/qede/base/ecore_sriov.c    |  322 ++++++++++++++++++++------------
 drivers/net/qede/base/ecore_sriov.h    |    4 +
 drivers/net/qede/base/ecore_vf.c       |   76 +++++---
 drivers/net/qede/base/ecore_vf.h       |    5 +
 drivers/net/qede/base/ecore_vfpf_if.h  |   55 +++++-
 9 files changed, 345 insertions(+), 175 deletions(-)

diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 688118b..8c45315 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1993,19 +1993,16 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 	switch (p_hwfn->hw_info.personality) {
 	case ECORE_PCI_ETH:
 		{
-			struct ecore_eth_pf_params *p_params =
+		struct ecore_eth_pf_params *p_params =
 			    &p_hwfn->pf_params.eth_pf_params;
 
-			/* TODO - we probably want to add VF number to the PF
-			 * params;
-			 * As of now, allocates 16 * 2 per-VF [to retain regular
-			 * functionality].
-			 */
-			ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
-						      p_params->num_cons, 32);
-			p_hwfn->p_cxt_mngr->arfs_count =
-						p_params->num_arfs_filters;
-			break;
+		if (!p_params->num_vf_cons)
+			p_params->num_vf_cons = ETH_PF_PARAMS_VF_CONS_DEFAULT;
+		ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+					      p_params->num_cons,
+					      p_params->num_vf_cons);
+		p_hwfn->p_cxt_mngr->arfs_count = p_params->num_arfs_filters;
+		break;
 		}
 	default:
 		return ECORE_INVAL;
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index e58b8fa..839bd46 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -173,16 +173,19 @@ static void ecore_eth_queue_qid_usage_del(struct ecore_hwfn *p_hwfn,
 void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 				 struct ecore_queue_cid *p_cid)
 {
-	/* For VF-queues, stuff is a bit complicated as:
-	 *  - They always maintain the qid_usage on their own.
-	 *  - In legacy mode, they also maintain their CIDs.
-	 */
+	bool b_legacy_vf = !!(p_cid->vf_legacy &
+			      ECORE_QCID_LEGACY_VF_CID);
 
-	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF */
-	if (IS_PF(p_hwfn->p_dev) && !p_cid->b_legacy_vf)
+	/* VFs' CIDs are 0-based in PF-view, and uninitialized on VF.
+	 * For legacy vf-queues, the CID doesn't go through here.
+	 */
+	if (IS_PF(p_hwfn->p_dev) && !b_legacy_vf)
 		_ecore_cxt_release_cid(p_hwfn, p_cid->cid, p_cid->vfid);
-	if (!p_cid->b_legacy_vf)
+
+	/* VFs maintain the index inside queue-zone on their own */
+	if (p_cid->vfid == ECORE_QUEUE_CID_PF)
 		ecore_eth_queue_qid_usage_del(p_hwfn, p_cid);
+
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
@@ -211,7 +214,7 @@ void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 	if (p_vf_params != OSAL_NULL) {
 		p_cid->vfid = p_vf_params->vfid;
 		p_cid->vf_qid = p_vf_params->vf_qid;
-		p_cid->b_legacy_vf = p_vf_params->b_legacy;
+		p_cid->vf_legacy = p_vf_params->vf_legacy;
 	} else {
 		p_cid->vfid = ECORE_QUEUE_CID_PF;
 	}
@@ -296,7 +299,8 @@ struct ecore_queue_cid *
 	if (p_vf_params) {
 		vfid = p_vf_params->vfid;
 
-		if (p_vf_params->b_legacy) {
+		if (p_vf_params->vf_legacy &
+		    ECORE_QCID_LEGACY_VF_CID) {
 			b_legacy_vf = true;
 			cid = p_vf_params->vf_qid;
 		}
@@ -928,12 +932,15 @@ enum _ecore_status_t
 	DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
 
 	if (p_cid->vfid != ECORE_QUEUE_CID_PF) {
+		bool b_legacy_vf = !!(p_cid->vf_legacy &
+				      ECORE_QCID_LEGACY_VF_RX_PROD);
+
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Queue%s is meant for VF rxq[%02x]\n",
-			   !!p_cid->b_legacy_vf ? " [legacy]" : "",
+			   b_legacy_vf ? " [legacy]" : "",
 			   p_cid->vf_qid);
-		p_ramrod->vf_rx_prod_use_zone_a = !!p_cid->b_legacy_vf;
+		p_ramrod->vf_rx_prod_use_zone_a = b_legacy_vf;
 	}
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 7fe4cbc..33f1fad 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -34,7 +34,7 @@ struct ecore_queue_cid_vf_params {
 	 *  - Producers would be placed in a different place.
 	 *  - Makes assumptions regarding the CIDs.
 	 */
-	bool b_legacy;
+	u8 vf_legacy;
 
 	/* For VFs, this index arrives via TLV to diffrentiate between
 	 * different queues opened on the same qzone, and is passed
@@ -69,7 +69,9 @@ struct ecore_queue_cid {
 	u8 qid_usage_idx;
 
 	/* Legacy VFs might have Rx producer located elsewhere */
-	bool b_legacy_vf;
+	u8 vf_legacy;
+#define ECORE_QCID_LEGACY_VF_RX_PROD	(1 << 0)
+#define ECORE_QCID_LEGACY_VF_CID	(1 << 1)
 
 	struct ecore_hwfn *p_owner;
 };
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 226e3d2..5d4b2b3 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -22,6 +22,10 @@ struct ecore_eth_pf_params {
 	 */
 	u16	num_cons;
 
+	/* per-VF number of CIDs */
+	u8	num_vf_cons;
+#define ETH_PF_PARAMS_VF_CONS_DEFAULT	(32)
+
 	/* To enable arfs, previous to HW-init a positive number needs to be
 	 * set [as filters require allocated searcher ILT memory].
 	 * This will set the maximal number of configured steering-filters.
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index cb3f4c3..0886560 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -53,9 +53,26 @@
 	"CHANNEL_TLV_VPORT_UPDATE_SGE_TPA",
 	"CHANNEL_TLV_UPDATE_TUNN_PARAM",
 	"CHANNEL_TLV_COALESCE_UPDATE",
+	"CHANNEL_TLV_QID",
 	"CHANNEL_TLV_MAX"
 };
 
+static u8 ecore_vf_calculate_legacy(struct ecore_hwfn *p_hwfn,
+				    struct ecore_vf_info *p_vf)
+{
+	u8 legacy = 0;
+
+	if (p_vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
+		legacy |= ECORE_QCID_LEGACY_VF_RX_PROD;
+
+	if (!(p_vf->acquire.vfdev_info.capabilities &
+	     VFPF_ACQUIRE_CAP_QUEUE_QIDS))
+		legacy |= ECORE_QCID_LEGACY_VF_CID;
+
+	return legacy;
+}
+
 /* IOV ramrods */
 static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 					      struct ecore_vf_info *p_vf)
@@ -1558,6 +1575,10 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	p_resp->num_vlan_filters = OSAL_MIN_T(u8, p_vf->num_vlan_filters,
 					      p_req->num_vlan_filters);
 
+	p_resp->num_cids =
+		OSAL_MIN_T(u8, p_req->num_cids,
+			   p_hwfn->pf_params.eth_pf_params.num_vf_cons);
+
 	/* This isn't really needed/enforced, but some legacy VFs might depend
 	 * on the correct filling of this field.
 	 */
@@ -1569,18 +1590,18 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	    p_resp->num_sbs < p_req->num_sbs ||
 	    p_resp->num_mac_filters < p_req->num_mac_filters ||
 	    p_resp->num_vlan_filters < p_req->num_vlan_filters ||
-	    p_resp->num_mc_filters < p_req->num_mc_filters) {
+	    p_resp->num_mc_filters < p_req->num_mc_filters ||
+	    p_resp->num_cids < p_req->num_cids) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Insufficient resources: rxq [%02x/%02x]"
-			   " txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x]"
-			   " vlan [%02x/%02x] mc [%02x/%02x]\n",
+			   "VF[%d] - Insufficient resources: rxq [%02x/%02x] txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x] vlan [%02x/%02x] mc [%02x/%02x] cids [%02x/%02x]\n",
 			   p_vf->abs_vf_id,
 			   p_req->num_rxqs, p_resp->num_rxqs,
 			   p_req->num_rxqs, p_resp->num_txqs,
 			   p_req->num_sbs, p_resp->num_sbs,
 			   p_req->num_mac_filters, p_resp->num_mac_filters,
 			   p_req->num_vlan_filters, p_resp->num_vlan_filters,
-			   p_req->num_mc_filters, p_resp->num_mc_filters);
+			   p_req->num_mc_filters, p_resp->num_mc_filters,
+			   p_req->num_cids, p_resp->num_cids);
 
 		/* Some legacy OSes are incapable of correctly handling this
 		 * failure.
@@ -1715,6 +1736,12 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	if (p_hwfn->p_dev->num_hwfns > 1)
 		pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_100G;
 
+	/* Share our ability to use multiple queue-ids only with VFs
+	 * that request it.
+	 */
+	if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_QUEUE_QIDS)
+		pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_QUEUE_QIDS;
+
 	ecore_iov_vf_mbx_acquire_stats(p_hwfn, &pfdev_info->stats_info);
 
 	OSAL_MEMCPY(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr,
@@ -2158,6 +2185,42 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
 }
 
+static u8 ecore_iov_vf_mbx_qid(struct ecore_hwfn *p_hwfn,
+			       struct ecore_vf_info *p_vf, bool b_is_tx)
+{
+	struct ecore_iov_vf_mbx *p_mbx = &p_vf->vf_mbx;
+	struct vfpf_qid_tlv *p_qid_tlv;
+
+	/* Search for the qid if the VF published if its going to provide it */
+	if (!(p_vf->acquire.vfdev_info.capabilities &
+	      VFPF_ACQUIRE_CAP_QUEUE_QIDS)) {
+		if (b_is_tx)
+			return ECORE_IOV_LEGACY_QID_TX;
+		else
+			return ECORE_IOV_LEGACY_QID_RX;
+	}
+
+	p_qid_tlv = (struct vfpf_qid_tlv *)
+		    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+					       CHANNEL_TLV_QID);
+	if (p_qid_tlv == OSAL_NULL) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%2x]: Failed to provide qid\n",
+			   p_vf->relative_vf_id);
+
+		return ECORE_IOV_QID_INVALID;
+	}
+
+	if (p_qid_tlv->qid >= MAX_QUEUES_PER_QZONE) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%02x]: Provided qid out-of-bounds %02x\n",
+			   p_vf->relative_vf_id, p_qid_tlv->qid);
+		return ECORE_IOV_QID_INVALID;
+	}
+
+	return p_qid_tlv->qid;
+}
+
 static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
 				       struct ecore_vf_info *vf)
@@ -2166,11 +2229,10 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_queue_cid_vf_params vf_params;
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	u8 status = PFVF_STATUS_NO_RESOURCE;
+	u8 qid_usage_idx, vf_legacy = 0;
 	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	struct ecore_queue_cid *p_cid;
-	bool b_legacy_vf = false;
-	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
@@ -2180,18 +2242,17 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	/* Legacy VFs made assumptions on the CID their queues connected to,
-	 * assuming queue X used CID X.
-	 * TODO - need to validate that there was no official release post
-	 * the current legacy scheme that still made that assumption.
-	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
+	qid_usage_idx = ecore_iov_vf_mbx_qid(p_hwfn, vf, false);
+	if (qid_usage_idx == ECORE_IOV_QID_INVALID)
+		goto out;
 
-	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->rx_qid];
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	vf_legacy = ecore_vf_calculate_legacy(p_hwfn, vf);
 
+	/* Acquire a new queue-cid */
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
@@ -2199,15 +2260,10 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	/* TODO - set qid_usage_idx according to extended TLV. For now, use
-	 * '0' for Rx.
-	 */
-	qid_usage_idx = 0;
-
 	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
 	vf_params.vfid = vf->relative_vf_id;
 	vf_params.vf_qid = (u8)req->rx_qid;
-	vf_params.b_legacy = b_legacy_vf;
+	vf_params.vf_legacy = vf_legacy;
 	vf_params.qid_usage_idx = qid_usage_idx;
 
 	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
@@ -2218,7 +2274,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	/* Legacy VFs have their Producers in a different location, which they
 	 * calculate on their own and clean the producer prior to this.
 	 */
-	if (!b_legacy_vf)
+	if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD))
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
 		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
@@ -2241,7 +2297,8 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 
 out:
 	ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status,
-					b_legacy_vf);
+					!!(vf_legacy &
+					   ECORE_QCID_LEGACY_VF_RX_PROD));
 }
 
 static void
@@ -2443,8 +2500,7 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	struct ecore_queue_cid *p_cid;
-	bool b_legacy_vf = false;
-	u8 qid_usage_idx;
+	u8 qid_usage_idx, vf_legacy;
 	u32 cid = 0;
 	enum _ecore_status_t rc;
 	u16 pq;
@@ -2457,35 +2513,27 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	    !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
 		goto out;
 
-	/* In case this is a legacy VF - need to know to use the right cids.
-	 * TODO - need to validate that there was no official release post
-	 * the current legacy scheme that still made that assumption.
-	 */
-	if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
-	    ETH_HSI_VER_NO_PKT_LEN_TUNN)
-		b_legacy_vf = true;
+	qid_usage_idx = ecore_iov_vf_mbx_qid(p_hwfn, vf, true);
+	if (qid_usage_idx == ECORE_IOV_QID_INVALID)
+		goto out;
 
-	/* Acquire a new queue-cid */
 	p_queue = &vf->vf_queues[req->tx_qid];
+	if (p_queue->cids[qid_usage_idx].p_cid)
+		goto out;
+
+	vf_legacy = ecore_vf_calculate_legacy(p_hwfn, vf);
 
+	/* Acquire a new queue-cid */
 	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
 	params.sb = req->hw_sb;
 	params.sb_idx = req->sb_index;
 
-	/* TODO - set qid_usage_idx according to extended TLV. For now, use
-	 * '1' for Tx.
-	 */
-	qid_usage_idx = 1;
-
-	if (p_queue->cids[qid_usage_idx].p_cid)
-		goto out;
-
 	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
 	vf_params.vfid = vf->relative_vf_id;
 	vf_params.vf_qid = (u8)req->tx_qid;
-	vf_params.b_legacy = b_legacy_vf;
+	vf_params.vf_legacy = vf_legacy;
 	vf_params.qid_usage_idx = qid_usage_idx;
 
 	p_cid = ecore_eth_queue_to_cid(p_hwfn, vf->opaque_fid,
@@ -2515,80 +2563,74 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 						   struct ecore_vf_info *vf,
 						   u16 rxq_id,
-						   u8 num_rxqs,
+						   u8 qid_usage_idx,
 						   bool cqe_completion)
 {
+	struct ecore_vf_queue *p_queue;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int qid, i;
 
-	/* TODO - improve validation [wrap around] */
-	if (rxq_id + num_rxqs > OSAL_ARRAY_SIZE(vf->vf_queues))
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, rxq_id,
+				    ECORE_IOV_VALIDATE_Q_NA)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] Tried Closing Rx 0x%04x.%02x which is inactive\n",
+			   vf->relative_vf_id, rxq_id, qid_usage_idx);
 		return ECORE_INVAL;
+	}
 
-	for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
-		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
-		struct ecore_queue_cid **pp_cid = OSAL_NULL;
-
-		/* There can be at most a single Rx per qzone. Find it */
-		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
-			if (p_queue->cids[i].p_cid &&
-			    !p_queue->cids[i].b_is_tx) {
-				pp_cid = &p_queue->cids[i].p_cid;
-				break;
-			}
-		}
-		if (pp_cid == OSAL_NULL) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "Ignoring VF[%02x] request of closing Rx queue %04x - closed\n",
-				   vf->relative_vf_id, qid);
-			continue;
-		}
+	p_queue = &vf->vf_queues[rxq_id];
 
-		rc = ecore_eth_rx_queue_stop(p_hwfn, *pp_cid,
-					     false, cqe_completion);
-		if (rc != ECORE_SUCCESS)
-			return rc;
+	/* We've validated the index and the existence of the active RXQ -
+	 * now we need to make sure that it's using the correct qid.
+	 */
+	if (!p_queue->cids[qid_usage_idx].p_cid ||
+	    p_queue->cids[qid_usage_idx].b_is_tx) {
+		struct ecore_queue_cid *p_cid;
 
-		*pp_cid = OSAL_NULL;
-		vf->num_active_rxqs--;
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf, p_queue);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] - Tried Closing Rx 0x%04x.%02x, but Rx is at %04x.%02x\n",
+			    vf->relative_vf_id, rxq_id, qid_usage_idx,
+			    rxq_id, p_cid->qid_usage_idx);
+		return ECORE_INVAL;
 	}
 
-	return rc;
+	/* Now that we know we have a valid Rx-queue - close it */
+	rc = ecore_eth_rx_queue_stop(p_hwfn,
+				     p_queue->cids[qid_usage_idx].p_cid,
+				     false, cqe_completion);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_queue->cids[qid_usage_idx].p_cid = OSAL_NULL;
+	vf->num_active_rxqs--;
+
+	return ECORE_SUCCESS;
 }
 
 static enum _ecore_status_t ecore_iov_vf_stop_txqs(struct ecore_hwfn *p_hwfn,
 						   struct ecore_vf_info *vf,
-						   u16 txq_id, u8 num_txqs)
+						   u16 txq_id,
+						   u8 qid_usage_idx)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct ecore_vf_queue *p_queue;
-	int qid, j;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (!ecore_iov_validate_txq(p_hwfn, vf, txq_id,
-				    ECORE_IOV_VALIDATE_Q_NA) ||
-	    !ecore_iov_validate_txq(p_hwfn, vf, txq_id + num_txqs,
 				    ECORE_IOV_VALIDATE_Q_NA))
 		return ECORE_INVAL;
 
-	for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
-		p_queue = &vf->vf_queues[qid];
-		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
-			if (p_queue->cids[j].p_cid == OSAL_NULL)
-				continue;
-
-			if (!p_queue->cids[j].b_is_tx)
-				continue;
-
-			rc = ecore_eth_tx_queue_stop(p_hwfn,
-						     p_queue->cids[j].p_cid);
-			if (rc != ECORE_SUCCESS)
-				return rc;
+	p_queue = &vf->vf_queues[txq_id];
+	if (!p_queue->cids[qid_usage_idx].p_cid ||
+	    !p_queue->cids[qid_usage_idx].b_is_tx)
+		return ECORE_INVAL;
 
-			p_queue->cids[j].p_cid = OSAL_NULL;
-		}
-	}
+	rc = ecore_eth_tx_queue_stop(p_hwfn,
+				     p_queue->cids[qid_usage_idx].p_cid);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	return rc;
+	p_queue->cids[qid_usage_idx].p_cid = OSAL_NULL;
+	return ECORE_SUCCESS;
 }
 
 static void ecore_iov_vf_mbx_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -2597,20 +2639,34 @@ static void ecore_iov_vf_mbx_stop_rxqs(struct ecore_hwfn *p_hwfn,
 {
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	struct vfpf_stop_rxqs_tlv *req;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
-	/* We give the option of starting from qid != 0, in this case we
-	 * need to make sure that qid + num_qs doesn't exceed the actual
-	 * amount of queues that exist.
+	/* Starting with CHANNEL_TLV_QID, it's assumed that 'num_rxqs'
+	 * would be one. Since no older ecore passed multiple queues
+	 * using this API, sanitize the value.
 	 */
 	req = &mbx->req_virt->stop_rxqs;
-	rc = ecore_iov_vf_stop_rxqs(p_hwfn, vf, req->rx_qid,
-				    req->num_rxqs, req->cqe_completion);
-	if (rc)
-		status = PFVF_STATUS_FAILURE;
+	if (req->num_rxqs != 1) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Odd; VF[%d] tried stopping multiple Rx queues\n",
+			   vf->relative_vf_id);
+		status = PFVF_STATUS_NOT_SUPPORTED;
+		goto out;
+	}
 
+	/* Find which qid-index is associated with the queue */
+	qid_usage_idx = ecore_iov_vf_mbx_qid(p_hwfn, vf, false);
+	if (qid_usage_idx == ECORE_IOV_QID_INVALID)
+		goto out;
+
+	rc = ecore_iov_vf_stop_rxqs(p_hwfn, vf, req->rx_qid,
+				    qid_usage_idx, req->cqe_completion);
+	if (rc == ECORE_SUCCESS)
+		status = PFVF_STATUS_SUCCESS;
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_STOP_RXQS,
 			       length, status);
 }
@@ -2621,19 +2677,35 @@ static void ecore_iov_vf_mbx_stop_txqs(struct ecore_hwfn *p_hwfn,
 {
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
-	u8 status = PFVF_STATUS_SUCCESS;
+	u8 status = PFVF_STATUS_FAILURE;
 	struct vfpf_stop_txqs_tlv *req;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 
-	/* We give the option of starting from qid != 0, in this case we
-	 * need to make sure that qid + num_qs doesn't exceed the actual
-	 * amount of queues that exist.
+	/* Starting with CHANNEL_TLV_QID, it's assumed that 'num_txqs'
+	 * would be one. Since no older ecore passed multiple queues
+	 * using this API, sanitize the value.
 	 */
 	req = &mbx->req_virt->stop_txqs;
-	rc = ecore_iov_vf_stop_txqs(p_hwfn, vf, req->tx_qid, req->num_txqs);
-	if (rc)
-		status = PFVF_STATUS_FAILURE;
+	if (req->num_txqs != 1) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Odd; VF[%d] tried stopping multiple Tx queues\n",
+			   vf->relative_vf_id);
+		status = PFVF_STATUS_NOT_SUPPORTED;
+		goto out;
+	}
 
+	/* Find which qid-index is associated with the queue */
+	qid_usage_idx = ecore_iov_vf_mbx_qid(p_hwfn, vf, true);
+	if (qid_usage_idx == ECORE_IOV_QID_INVALID)
+		goto out;
+
+	rc = ecore_iov_vf_stop_txqs(p_hwfn, vf, req->tx_qid,
+				    qid_usage_idx);
+	if (rc == ECORE_SUCCESS)
+		status = PFVF_STATUS_SUCCESS;
+
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_STOP_TXQS,
 			       length, status);
 }
@@ -2649,6 +2721,7 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_FAILURE;
 	u8 complete_event_flg;
 	u8 complete_cqe_flg;
+	u8 qid_usage_idx;
 	enum _ecore_status_t rc;
 	u16 i;
 
@@ -2656,10 +2729,30 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
 	complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
 
-	/* Validate inputs */
+	qid_usage_idx = ecore_iov_vf_mbx_qid(p_hwfn, vf, false);
+	if (qid_usage_idx == ECORE_IOV_QID_INVALID)
+		goto out;
+
+	/* Starting with the addition of CHANNEL_TLV_QID, this API started
+	 * expecting a single queue at a time. Validate this.
+	 */
+	if ((vf->acquire.vfdev_info.capabilities &
+	     VFPF_ACQUIRE_CAP_QUEUE_QIDS) &&
+	     req->num_rxqs != 1) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "VF[%d] supports QIDs but sends multiple queues\n",
+			   vf->relative_vf_id);
+		goto out;
+	}
+
+	/* Validate inputs - for the legacy case this is still true since
+	 * qid_usage_idx for each Rx queue would be LEGACY_QID_RX.
+	 */
 	for (i = req->rx_qid; i < req->rx_qid + req->num_rxqs; i++) {
 		if (!ecore_iov_validate_rxq(p_hwfn, vf, i,
-					    ECORE_IOV_VALIDATE_Q_ENABLE)) {
+					    ECORE_IOV_VALIDATE_Q_NA) ||
+		    !vf->vf_queues[i].cids[qid_usage_idx].p_cid ||
+		    vf->vf_queues[i].cids[qid_usage_idx].b_is_tx) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d]: Incorrect Rxqs [%04x, %02x]\n",
 				   vf->relative_vf_id, req->rx_qid,
@@ -2669,12 +2762,9 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 	}
 
 	for (i = 0; i < req->num_rxqs; i++) {
-		struct ecore_vf_queue *p_queue;
 		u16 qid = req->rx_qid + i;
 
-		p_queue = &vf->vf_queues[qid];
-		handlers[i] = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
-							    p_queue);
+		handlers[i] = vf->vf_queues[qid].cids[qid_usage_idx].p_cid;
 	}
 
 	rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&handlers,
@@ -2683,7 +2773,7 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					   complete_event_flg,
 					   ECORE_SPQ_MODE_EBLOCK,
 					   OSAL_NULL);
-	if (rc)
+	if (rc != ECORE_SUCCESS)
 		goto out;
 
 	status = PFVF_STATUS_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 5eb3484..1750f0d 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -63,6 +63,10 @@ struct ecore_iov_vf_mbx {
 					 */
 };
 
+#define ECORE_IOV_LEGACY_QID_RX (0)
+#define ECORE_IOV_LEGACY_QID_TX (1)
+#define ECORE_IOV_QID_INVALID (0xFE)
+
 struct ecore_vf_queue_cid {
 	bool b_is_tx;
 	struct ecore_queue_cid *p_cid;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 7a52621..e4e2517 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -135,22 +135,36 @@ static void ecore_vf_pf_req_end(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+static void ecore_vf_pf_add_qid(struct ecore_hwfn *p_hwfn,
+				struct ecore_queue_cid *p_cid)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_qid_tlv *p_qid_tlv;
+
+	/* Only add QIDs for the queue if it was negotiated with PF */
+	if (!(p_iov->acquire_resp.pfdev_info.capabilities &
+	      PFVF_ACQUIRE_CAP_QUEUE_QIDS))
+		return;
+
+	p_qid_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+				  CHANNEL_TLV_QID, sizeof(*p_qid_tlv));
+	p_qid_tlv->qid = p_cid->qid_usage_idx;
+}
+
 #define VF_ACQUIRE_THRESH 3
 static void ecore_vf_pf_acquire_reduce_resc(struct ecore_hwfn *p_hwfn,
 					    struct vf_pf_resc_request *p_req,
 					    struct pf_vf_resc *p_resp)
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "PF unwilling to fullill resource request: rxq [%02x/%02x]"
-		   " txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x]"
-		   " vlan [%02x/%02x] mc [%02x/%02x]."
-		   " Try PF recommended amount\n",
+		   "PF unwilling to fullill resource request: rxq [%02x/%02x] txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x] vlan [%02x/%02x] mc [%02x/%02x] cids [%02x/%02x]. Try PF recommended amount\n",
 		   p_req->num_rxqs, p_resp->num_rxqs,
 		   p_req->num_rxqs, p_resp->num_txqs,
 		   p_req->num_sbs, p_resp->num_sbs,
 		   p_req->num_mac_filters, p_resp->num_mac_filters,
 		   p_req->num_vlan_filters, p_resp->num_vlan_filters,
-		   p_req->num_mc_filters, p_resp->num_mc_filters);
+		   p_req->num_mc_filters, p_resp->num_mc_filters,
+		   p_req->num_cids, p_resp->num_cids);
 
 	/* humble our request */
 	p_req->num_txqs = p_resp->num_txqs;
@@ -159,6 +173,7 @@ static void ecore_vf_pf_acquire_reduce_resc(struct ecore_hwfn *p_hwfn,
 	p_req->num_mac_filters = p_resp->num_mac_filters;
 	p_req->num_vlan_filters = p_resp->num_vlan_filters;
 	p_req->num_mc_filters = p_resp->num_mc_filters;
+	p_req->num_cids = p_resp->num_cids;
 }
 
 static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
@@ -185,6 +200,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	p_resc->num_sbs = ECORE_MAX_VF_CHAINS_PER_PF;
 	p_resc->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 	p_resc->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
+	p_resc->num_cids = ECORE_ETH_VF_DEFAULT_NUM_CIDS;
 
 	OSAL_MEMSET(&vf_sw_info, 0, sizeof(vf_sw_info));
 	OSAL_VF_FILL_ACQUIRE_RESC_REQ(p_hwfn, &req->resc_request, &vf_sw_info);
@@ -310,6 +326,15 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	    VFPF_ACQUIRE_CAP_PRE_FP_HSI)
 		p_iov->b_pre_fp_hsi = true;
 
+	/* In case PF doesn't support multi-queue Tx, update the number of
+	 * CIDs to reflect the number of queues [older PFs didn't fill that
+	 * field].
+	 */
+	if (!(resp->pfdev_info.capabilities &
+	      PFVF_ACQUIRE_CAP_QUEUE_QIDS))
+		resp->resc.num_cids = resp->resc.num_rxqs +
+				      resp->resc.num_txqs;
+
 	rc = OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(p_hwfn, &resp->resc);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true,
@@ -649,6 +674,8 @@ enum _ecore_status_t
 				  (u32 *)(&init_prod_val));
 	}
 
+	ecore_vf_pf_add_qid(p_hwfn, p_cid);
+
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
@@ -704,6 +731,8 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	req->num_rxqs = 1;
 	req->cqe_completion = cqe_completion;
 
+	ecore_vf_pf_add_qid(p_hwfn, p_cid);
+
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
@@ -748,6 +777,8 @@ enum _ecore_status_t
 	req->hw_sb = p_cid->rel.sb;
 	req->sb_index = p_cid->rel.sb_idx;
 
+	ecore_vf_pf_add_qid(p_hwfn, p_cid);
+
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
@@ -799,6 +830,8 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
 	req->tx_qid = p_cid->rel.queue_id;
 	req->num_txqs = 1;
 
+	ecore_vf_pf_add_qid(p_hwfn, p_cid);
+
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
@@ -831,32 +864,30 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 	struct vfpf_update_rxq_tlv *req;
 	enum _ecore_status_t rc;
 
-	/* TODO - API is limited to assuming continuous regions of queues,
-	 * but VF queues might not fullfil this requirement.
-	 * Need to consider whether we need new TLVs for this, or whether
-	 * simply doing it iteratively is good enough.
+	/* Starting with CHANNEL_TLV_QID and the need for additional queue
+	 * information, this API stopped supporting multiple rxqs.
+	 * TODO - remove this and change the API to accept a single queue-cid
+	 * in a follow-up patch.
 	 */
-	if (!num_rxqs)
+	if (num_rxqs != 1) {
+		DP_NOTICE(p_hwfn, true,
+			  "VFs can no longer update more than a single queue\n");
 		return ECORE_INVAL;
+	}
 
-again:
 	/* clear mailbox and prep first tlv */
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UPDATE_RXQ, sizeof(*req));
 
-	/* Find the length of the current contagious range of queues beginning
-	 * at first queue's index.
-	 */
 	req->rx_qid = (*pp_cid)->rel.queue_id;
-	for (req->num_rxqs = 1; req->num_rxqs < num_rxqs; req->num_rxqs++)
-		if (pp_cid[req->num_rxqs]->rel.queue_id !=
-		    req->rx_qid + req->num_rxqs)
-			break;
+	req->num_rxqs = 1;
 
 	if (comp_cqe_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_CQE_FLAG;
 	if (comp_event_flg)
 		req->flags |= VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG;
 
+	ecore_vf_pf_add_qid(p_hwfn, *pp_cid);
+
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
@@ -871,15 +902,6 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 		goto exit;
 	}
 
-	/* Make sure we're done with all the queues */
-	if (req->num_rxqs < num_rxqs) {
-		num_rxqs -= req->num_rxqs;
-		pp_cid += req->num_rxqs;
-		/* TODO - should we give a non-locked variant instead? */
-		ecore_vf_pf_req_end(p_hwfn, rc);
-		goto again;
-	}
-
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 	return rc;
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index f471388..4096d5d 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -14,6 +14,11 @@
 #include "ecore_l2_api.h"
 #include "ecore_vfpf_if.h"
 
+/* Default number of CIDs [total of both Rx and Tx] for the VF to
+ * request.
+ */
+#define ECORE_ETH_VF_DEFAULT_NUM_CIDS	(32)
+
 /* This data is held in the ecore_hwfn structure for VFs only. */
 struct ecore_vf_iov {
 	union vfpf_tlvs			*vf2pf_request;
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 6618442..4df5619 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -19,13 +19,14 @@
  *
  **/
 struct vf_pf_resc_request {
-	u8  num_rxqs;
-	u8  num_txqs;
-	u8  num_sbs;
-	u8  num_mac_filters;
-	u8  num_vlan_filters;
-	u8  num_mc_filters; /* No limit  so superfluous */
-	u16 padding;
+	u8 num_rxqs;
+	u8 num_txqs;
+	u8 num_sbs;
+	u8 num_mac_filters;
+	u8 num_vlan_filters;
+	u8 num_mc_filters; /* No limit, so superfluous */
+	u8 num_cids;
+	u8 padding;
 };
 
 struct hw_sb_info {
@@ -92,6 +93,14 @@ struct vfpf_acquire_tlv {
 /* VF pre-FP hsi version */
 #define VFPF_ACQUIRE_CAP_PRE_FP_HSI	(1 << 0)
 #define VFPF_ACQUIRE_CAP_100G		(1 << 1) /* VF can support 100g */
+
+	/* As a requirement for supporting multi-Tx queues on a single
+	 * queue-zone, the VF passes qids as additional information
+	 * whenever passing queue references.
+	 * TODO - due to the CID limitations in Bar0, VFs currently don't pass
+	 * this, and use the legacy CID scheme.
+	 */
+#define VFPF_ACQUIRE_CAP_QUEUE_QIDS	(1 << 2)
 		u64 capabilities;
 		u8 fw_major;
 		u8 fw_minor;
@@ -170,6 +179,9 @@ struct pfvf_acquire_resp_tlv {
 #endif
 #define PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE	(1 << 2)
 
+	/* PF expects queues to be received with additional qids */
+#define PFVF_ACQUIRE_CAP_QUEUE_QIDS		(1 << 3)
+
 		u16 db_size;
 		u8  indices_per_sb;
 		u8 os_type;
@@ -210,7 +222,8 @@ struct pfvf_acquire_resp_tlv {
 		u8      num_mac_filters;
 		u8      num_vlan_filters;
 		u8      num_mc_filters;
-		u8      padding[2];
+		u8	num_cids;
+		u8      padding;
 	} resc;
 
 	u32 bulletin_size;
@@ -223,6 +236,16 @@ struct pfvf_start_queue_resp_tlv {
 	u8 padding[4];
 };
 
+/* Extended queue information - additional index for reference inside qzone.
+ * If communicated between VF/PF, each TLV relating to queues should be
+ * extended by one such [or have a future base TLV that already contains info].
+ */
+struct vfpf_qid_tlv {
+	struct channel_tlv	tl;
+	u8			qid;
+	u8			padding[3];
+};
+
 /* Setup Queue */
 struct vfpf_start_rxq_tlv {
 	struct vfpf_first_tlv	first_tlv;
@@ -265,7 +288,15 @@ struct vfpf_stop_rxqs_tlv {
 	struct vfpf_first_tlv	first_tlv;
 
 	u16			rx_qid;
+
+	/* While the API supports multiple Rx-queues on a single TLV
+	 * message, in practice older VFs always used it as one [ecore].
+	 * And there are PFs [starting with the CHANNEL_TLV_QID] which
+	 * would start assuming this is always a '1'. So in practice this
+	 * field should be considered deprecated and *Always* set to '1'.
+	 */
 	u8			num_rxqs;
+
 	u8			cqe_completion;
 	u8			padding[4];
 };
@@ -275,6 +306,13 @@ struct vfpf_stop_txqs_tlv {
 	struct vfpf_first_tlv	first_tlv;
 
 	u16			tx_qid;
+
+	/* While the API supports multiple Tx-queues on a single TLV
+	 * message, in practice older VFs always used it as one [ecore].
+	 * And there are PFs [starting with the CHANNEL_TLV_QID] which
+	 * would start assuming this is always a '1'. So in practice this
+	 * field should be considered deprecated and *Always* set to '1'.
+	 */
 	u8			num_txqs;
 	u8			padding[5];
 };
@@ -605,6 +643,7 @@ enum {
 	CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 	CHANNEL_TLV_UPDATE_TUNN_PARAM,
 	CHANNEL_TLV_COALESCE_UPDATE,
+	CHANNEL_TLV_QID,
 	CHANNEL_TLV_MAX,
 
 	/* Required for iterating over vport-update tlvs.
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 07/53] net/qede/base: interchangeably use SB between PF and VF
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (5 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 06/53] net/qede/base: changes for VF queue zone Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 08/53] net/qede/base: add API to configure coalescing for VF queues Rasesh Mody
                   ` (22 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Status Block reallocation - allow a PF and its child VF to exchange SBs
between them using new base driver APIs.

The changes inside the base driver flows are:

New APIs ecore_int_igu_reset_cam() and ecore_int_igu_reset_cam_default()
are added to reset the IGU CAM.
 a. During hw_prepare(), the driver re-initializes the IGU CAM.
 b. During hw_stop(), the driver restores the IGU CAM to its default.

Use igu_sb_id instead of sb_idx [protocol index] to allow setting the
timer-resolution in the CAU [coalescing algorithm unit] for all SBs;
using sb_idx could limit the ability to change the timer-resolution to
SBs 0-11 only.
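
As an illustration, here is a minimal sketch of how a client could
exchange an SB via the relocation helper added in this patch. The
wrapper name example_sb_swap is hypothetical; ecore_int_igu_relocate_sb()
and its signature are taken from the diff below, and p_hwfn/p_ptt are
assumed to be a valid hwfn/PTT pair owned by the caller:

  static enum _ecore_status_t example_sb_swap(struct ecore_hwfn *p_hwfn,
                                              struct ecore_ptt *p_ptt,
                                              u16 sb_id)
  {
          enum _ecore_status_t rc;

          /* Hand the PF SB backing vector 'sb_id' over to the VF pool */
          rc = ecore_int_igu_relocate_sb(p_hwfn, p_ptt, sb_id, true);
          if (rc != ECORE_SUCCESS)
                  return rc;

          /* ... a child VF can now be initialized with the freed SB ... */

          /* Reclaim the SB for the PF */
          return ecore_int_igu_relocate_sb(p_hwfn, p_ptt, sb_id, false);
  }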

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |   13 +-
 drivers/net/qede/base/ecore_dev.c     |   79 ++--
 drivers/net/qede/base/ecore_int.c     |  757 +++++++++++++++++++++++----------
 drivers/net/qede/base/ecore_int.h     |   71 +++-
 drivers/net/qede/base/ecore_int_api.h |   41 +-
 drivers/net/qede/base/ecore_l2.c      |   24 +-
 drivers/net/qede/base/ecore_l2.h      |   26 +-
 drivers/net/qede/base/ecore_l2_api.h  |    4 +-
 drivers/net/qede/base/ecore_sriov.c   |  134 +++---
 drivers/net/qede/base/ecore_vf.c      |   35 +-
 drivers/net/qede/base/ecore_vf.h      |   17 +
 drivers/net/qede/qede_rxtx.c          |    4 +-
 12 files changed, 808 insertions(+), 397 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 10fb16a..64a3416 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -279,7 +279,6 @@ struct ecore_qm_iids {
  * is received from MFW.
  */
 enum ecore_resources {
-	ECORE_SB,
 	ECORE_L2_QUEUE,
 	ECORE_VPORT,
 	ECORE_RSS_ENG,
@@ -293,7 +292,13 @@ enum ecore_resources {
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
 	ECORE_BDQ,
-	ECORE_MAX_RESC,			/* must be last */
+
+	/* This is needed only internally for matching against the IGU.
+	 * In case of legacy MFW, would be set to `0'.
+	 */
+	ECORE_SB,
+
+	ECORE_MAX_RESC,
 };
 
 /* Features that require resources, given as input to the resource management
@@ -556,10 +561,6 @@ struct ecore_hwfn {
 	bool				b_rdma_enabled_in_prs;
 	u32				rdma_prs_search_reg;
 
-	/* Array of sb_info of all status blocks */
-	struct ecore_sb_info		*sbs_info[MAX_SB_PER_PF_MIMD];
-	u16				num_sbs;
-
 	struct ecore_cxt_mngr		*p_cxt_mngr;
 
 	/* Flag indicating whether interrupts are enabled or not*/
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 4a31d67..40b544b 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1232,7 +1232,7 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 {
 	u32 offset = CAU_REG_SB_VAR_MEMORY_RT_OFFSET;
-	int i, sb_id;
+	int i, igu_sb_id;
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1242,16 +1242,18 @@ static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 
 		p_igu_info = p_hwfn->hw_info.p_igu_info;
 
-		for (sb_id = 0; sb_id < ECORE_MAPPING_MEMORY_SIZE(p_dev);
-		     sb_id++) {
-			p_block = &p_igu_info->igu_map.igu_blocks[sb_id];
+		for (igu_sb_id = 0;
+		     igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_dev);
+		     igu_sb_id++) {
+			p_block = &p_igu_info->entry[igu_sb_id];
 
 			if (!p_block->is_pf)
 				continue;
 
 			ecore_init_cau_sb_entry(p_hwfn, &sb_entry,
 						p_block->function_id, 0, 0);
-			STORE_RT_REG_AGG(p_hwfn, offset + sb_id * 2, sb_entry);
+			STORE_RT_REG_AGG(p_hwfn, offset + igu_sb_id * 2,
+					 sb_entry);
 		}
 	}
 }
@@ -2255,6 +2257,13 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_wr(p_hwfn, p_ptt, IGU_REG_LEADING_EDGE_LATCH, 0);
 		ecore_wr(p_hwfn, p_ptt, IGU_REG_TRAILING_EDGE_LATCH, 0);
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, true);
+		rc = ecore_int_igu_reset_cam_default(p_hwfn, p_ptt);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_hwfn, true,
+				  "Failed to return IGU CAM to default\n");
+			rc2 = ECORE_UNKNOWN_ERROR;
+		}
+
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
 
@@ -2423,31 +2432,32 @@ static void get_function_id(struct ecore_hwfn *p_hwfn)
 static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 {
 	u32 *feat_num = p_hwfn->hw_info.feat_num;
+	struct ecore_sb_cnt_info sb_cnt;
 	u32 non_l2_sbs = 0;
 
+	OSAL_MEM_ZERO(&sb_cnt, sizeof(sb_cnt));
+	ecore_int_get_num_sbs(p_hwfn, &sb_cnt);
+
 	/* L2 Queues require each: 1 status block. 1 L2 queue */
 	if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
-		struct ecore_sb_cnt_info sb_cnt_info;
-
-		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
-		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-
 		/* Start by allocating VF queues, then PF's */
 		feat_num[ECORE_VF_L2_QUE] =
 			OSAL_MIN_T(u32,
 				   RESC_NUM(p_hwfn, ECORE_L2_QUEUE),
-				   sb_cnt_info.sb_iov_cnt);
+				   sb_cnt.iov_cnt);
 		feat_num[ECORE_PF_L2_QUE] =
 			OSAL_MIN_T(u32,
-				   RESC_NUM(p_hwfn, ECORE_SB) - non_l2_sbs,
+				   sb_cnt.cnt - non_l2_sbs,
 				   RESC_NUM(p_hwfn, ECORE_L2_QUEUE) -
 				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE));
 	}
 
-	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
-					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
-	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_SB),
-					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
+	feat_num[ECORE_FCOE_CQ] = OSAL_MIN_T(u32, sb_cnt.cnt,
+					     RESC_NUM(p_hwfn,
+						      ECORE_CMDQS_CQS));
+	feat_num[ECORE_ISCSI_CQ] = OSAL_MIN_T(u32, sb_cnt.cnt,
+					      RESC_NUM(p_hwfn,
+						       ECORE_CMDQS_CQS));
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
 		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
@@ -2456,14 +2466,12 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
 		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
 		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
-		   RESC_NUM(p_hwfn, ECORE_SB));
+		   (int)sb_cnt.cnt);
 }
 
 const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 {
 	switch (res_id) {
-	case ECORE_SB:
-		return "SB";
 	case ECORE_L2_QUEUE:
 		return "L2_QUEUE";
 	case ECORE_VPORT:
@@ -2490,6 +2498,8 @@ const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
 		return "BDQ";
+	case ECORE_SB:
+		return "SB";
 	default:
 		return "UNKNOWN_RESOURCE";
 	}
@@ -2565,14 +2575,8 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
-	struct ecore_sb_cnt_info sb_cnt_info;
 
 	switch (res_id) {
-	case ECORE_SB:
-		OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
-		ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
-		*p_resc_num = sb_cnt_info.sb_cnt;
-		break;
 	case ECORE_L2_QUEUE:
 		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
 				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
@@ -2629,6 +2633,12 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 		if (!*p_resc_num)
 			*p_resc_start = 0;
 		break;
+	case ECORE_SB:
+		/* Since we want its value to reflect whether MFW supports
+		 * the new scheme, have a default of 0.
+		 */
+		*p_resc_num = 0;
+		break;
 	default:
 		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
 		break;
@@ -2693,14 +2703,9 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	/* TBD - remove this when revising the handling of the SB resource */
-	if (res_id == ECORE_SB) {
-		/* Excluding the slowpath SB */
-		*p_resc_num -= 1;
-		*p_resc_start -= p_hwfn->enabled_func_idx;
-	}
-
-	if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
+	if ((*p_resc_num != dflt_resc_num ||
+	     *p_resc_start != dflt_resc_start) &&
+	    res_id != ECORE_SB) {
 		DP_INFO(p_hwfn,
 			"MFW allocation for resource %d [%s] differs from default values [%d,%d vs. %d,%d]%s\n",
 			res_id, ecore_hw_get_resc_name(res_id), *p_resc_num,
@@ -2850,6 +2855,10 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	/* This will also learn the number of SBs from MFW */
+	if (ecore_int_igu_reset_cam(p_hwfn, p_hwfn->p_main_ptt))
+		return ECORE_INVAL;
+
 	ecore_hw_set_feat(p_hwfn);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
@@ -4540,7 +4549,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 	timeset = (u8)(coalesce >> timer_res);
 
 	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
-				     p_cid->abs.sb_idx, false);
+				     p_cid->sb_igu_id, false);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
@@ -4581,7 +4590,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 	timeset = (u8)(coalesce >> timer_res);
 
 	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
-				     p_cid->abs.sb_idx, true);
+				     p_cid->sb_igu_id, true);
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index b57c510..f8b104a 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1369,6 +1369,49 @@ void ecore_init_cau_sb_entry(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(p_sb_entry->data, CAU_SB_ENTRY_STATE1, cau_state);
 }
 
+static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt,
+				   u16 igu_sb_id, u32 pi_index,
+				   enum ecore_coalescing_fsm coalescing_fsm,
+				   u8 timeset)
+{
+	struct cau_pi_entry pi_entry;
+	u32 sb_offset, pi_offset;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return;/* @@@TBD MichalK- VF CAU... */
+
+	sb_offset = igu_sb_id * PIS_PER_SB;
+	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
+
+	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
+	if (coalescing_fsm == ECORE_COAL_RX_STATE_MACHINE)
+		SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_FSM_SEL, 0);
+	else
+		SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_FSM_SEL, 1);
+
+	pi_offset = sb_offset + pi_index;
+	if (p_hwfn->hw_init_done) {
+		ecore_wr(p_hwfn, p_ptt,
+			 CAU_REG_PI_MEMORY + pi_offset * sizeof(u32),
+			 *((u32 *)&(pi_entry)));
+	} else {
+		STORE_RT_REG(p_hwfn,
+			     CAU_REG_PI_MEMORY_RT_OFFSET + pi_offset,
+			     *((u32 *)&(pi_entry)));
+	}
+}
+
+void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   struct ecore_sb_info *p_sb, u32 pi_index,
+			   enum ecore_coalescing_fsm coalescing_fsm,
+			   u8 timeset)
+{
+	_ecore_int_cau_conf_pi(p_hwfn, p_ptt, p_sb->igu_sb_id,
+			       pi_index, coalescing_fsm, timeset);
+}
+
 void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt,
 			   dma_addr_t sb_phys, u16 igu_sb_id,
@@ -1420,8 +1463,9 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
 		else
 			timer_res = 2;
 		timeset = (u8)(p_hwfn->p_dev->rx_coalesce_usecs >> timer_res);
-		ecore_int_cau_conf_pi(p_hwfn, p_ptt, igu_sb_id, RX_PI,
-				      ECORE_COAL_RX_STATE_MACHINE, timeset);
+		_ecore_int_cau_conf_pi(p_hwfn, p_ptt, igu_sb_id, RX_PI,
+				       ECORE_COAL_RX_STATE_MACHINE,
+				       timeset);
 
 		if (p_hwfn->p_dev->tx_coalesce_usecs <= 0x7F)
 			timer_res = 0;
@@ -1431,46 +1475,14 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
 			timer_res = 2;
 		timeset = (u8)(p_hwfn->p_dev->tx_coalesce_usecs >> timer_res);
 		for (i = 0; i < num_tc; i++) {
-			ecore_int_cau_conf_pi(p_hwfn, p_ptt,
-					      igu_sb_id, TX_PI(i),
-					      ECORE_COAL_TX_STATE_MACHINE,
-					      timeset);
+			_ecore_int_cau_conf_pi(p_hwfn, p_ptt,
+					       igu_sb_id, TX_PI(i),
+					       ECORE_COAL_TX_STATE_MACHINE,
+					       timeset);
 		}
 	}
 }
 
-void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   u16 igu_sb_id, u32 pi_index,
-			   enum ecore_coalescing_fsm coalescing_fsm, u8 timeset)
-{
-	struct cau_pi_entry pi_entry;
-	u32 sb_offset, pi_offset;
-
-	if (IS_VF(p_hwfn->p_dev))
-		return;		/* @@@TBD MichalK- VF CAU... */
-
-	sb_offset = igu_sb_id * PIS_PER_SB;
-	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
-
-	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
-	if (coalescing_fsm == ECORE_COAL_RX_STATE_MACHINE)
-		SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_FSM_SEL, 0);
-	else
-		SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_FSM_SEL, 1);
-
-	pi_offset = sb_offset + pi_index;
-	if (p_hwfn->hw_init_done) {
-		ecore_wr(p_hwfn, p_ptt,
-			 CAU_REG_PI_MEMORY + pi_offset * sizeof(u32),
-			 *((u32 *)&(pi_entry)));
-	} else {
-		STORE_RT_REG(p_hwfn,
-			     CAU_REG_PI_MEMORY_RT_OFFSET + pi_offset,
-			     *((u32 *)&(pi_entry)));
-	}
-}
-
 void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info)
 {
@@ -1483,16 +1495,50 @@ void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
 				      sb_info->igu_sb_id, 0, 0);
 }
 
-/**
- * @brief ecore_get_igu_sb_id - given a sw sb_id return the
- *        igu_sb_id
- *
- * @param p_hwfn
- * @param sb_id
- *
- * @return u16
- */
-static u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
+struct ecore_igu_block *
+ecore_get_igu_free_sb(struct ecore_hwfn *p_hwfn, bool b_is_pf)
+{
+	struct ecore_igu_block *p_block;
+	u16 igu_id;
+
+	for (igu_id = 0; igu_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+	     igu_id++) {
+		p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_id];
+
+		if (!(p_block->status & ECORE_IGU_STATUS_VALID) ||
+		    !(p_block->status & ECORE_IGU_STATUS_FREE))
+			continue;
+
+		if (!!(p_block->status & ECORE_IGU_STATUS_PF) ==
+		    b_is_pf)
+			return p_block;
+	}
+
+	return OSAL_NULL;
+}
+
+static u16 ecore_get_pf_igu_sb_id(struct ecore_hwfn *p_hwfn,
+				  u16 vector_id)
+{
+	struct ecore_igu_block *p_block;
+	u16 igu_id;
+
+	for (igu_id = 0; igu_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+	     igu_id++) {
+		p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_id];
+
+		if (!(p_block->status & ECORE_IGU_STATUS_VALID) ||
+		    !p_block->is_pf ||
+		    p_block->vector_number != vector_id)
+			continue;
+
+		return igu_id;
+	}
+
+	return ECORE_SB_INVALID_IDX;
+}
+
+u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
 {
 	u16 igu_sb_id;
 
@@ -1500,11 +1546,15 @@ static u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
 	if (sb_id == ECORE_SP_SB_ID)
 		igu_sb_id = p_hwfn->hw_info.p_igu_info->igu_dsb_id;
 	else if (IS_PF(p_hwfn->p_dev))
-		igu_sb_id = sb_id + p_hwfn->hw_info.p_igu_info->igu_base_sb;
+		igu_sb_id = ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1);
 	else
 		igu_sb_id = ecore_vf_get_igu_sb_id(p_hwfn, sb_id);
 
-	if (sb_id == ECORE_SP_SB_ID)
+	if (igu_sb_id == ECORE_SB_INVALID_IDX)
+		DP_NOTICE(p_hwfn, true,
+			  "Slowpath SB vector %04x doesn't exist\n",
+			  sb_id);
+	else if (sb_id == ECORE_SP_SB_ID)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
 			   "Slowpath SB index in IGU is 0x%04x\n", igu_sb_id);
 	else
@@ -1525,9 +1575,24 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 
 	sb_info->igu_sb_id = ecore_get_igu_sb_id(p_hwfn, sb_id);
 
+	if (sb_info->igu_sb_id == ECORE_SB_INVALID_IDX)
+		return ECORE_INVAL;
+
+	/* Let the igu info reference the client's SB info */
 	if (sb_id != ECORE_SP_SB_ID) {
-		p_hwfn->sbs_info[sb_id] = sb_info;
-		p_hwfn->num_sbs++;
+		if (IS_PF(p_hwfn->p_dev)) {
+			struct ecore_igu_info *p_info;
+			struct ecore_igu_block *p_block;
+
+			p_info = p_hwfn->hw_info.p_igu_info;
+			p_block = &p_info->entry[sb_info->igu_sb_id];
+
+			p_block->sb_info = sb_info;
+			p_block->status &= ~ECORE_IGU_STATUS_FREE;
+			p_info->usage.free_cnt--;
+		} else {
+			ecore_vf_set_sb_info(p_hwfn, sb_id, sb_info);
+		}
 	}
 #ifdef ECORE_CONFIG_DIRECT_HWFN
 	sb_info->p_hwfn = p_hwfn;
@@ -1559,20 +1624,35 @@ enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
 					  struct ecore_sb_info *sb_info,
 					  u16 sb_id)
 {
-	if (sb_id == ECORE_SP_SB_ID) {
-		DP_ERR(p_hwfn, "Do Not free sp sb using this function");
-		return ECORE_INVAL;
-	}
+	struct ecore_igu_info *p_info;
+	struct ecore_igu_block *p_block;
+
+	if (sb_info == OSAL_NULL)
+		return ECORE_SUCCESS;
 
 	/* zero status block and ack counter */
 	sb_info->sb_ack = 0;
 	OSAL_MEMSET(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
 
-	if (p_hwfn->sbs_info[sb_id] != OSAL_NULL) {
-		p_hwfn->sbs_info[sb_id] = OSAL_NULL;
-		p_hwfn->num_sbs--;
+	if (IS_VF(p_hwfn->p_dev)) {
+		ecore_vf_set_sb_info(p_hwfn, sb_id, OSAL_NULL);
+		return ECORE_SUCCESS;
 	}
 
+	p_info = p_hwfn->hw_info.p_igu_info;
+	p_block = &p_info->entry[sb_info->igu_sb_id];
+
+	/* Vector 0 is reserved for the Default SB */
+	if (p_block->vector_number == 0) {
+		DP_ERR(p_hwfn, "Do Not free sp sb using this function");
+		return ECORE_INVAL;
+	}
+
+	/* Lose reference to client's SB info, and fix counters */
+	p_block->sb_info = OSAL_NULL;
+	p_block->status |= ECORE_IGU_STATUS_FREE;
+	p_info->usage.free_cnt++;
+
 	return ECORE_SUCCESS;
 }
 
@@ -1778,11 +1858,13 @@ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
 
 #define IGU_CLEANUP_SLEEP_LENGTH		(1000)
 static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
-			      struct ecore_ptt *p_ptt,
-			      u32 sb_id, bool cleanup_set, u16 opaque_fid)
+				     struct ecore_ptt *p_ptt,
+				     u32 igu_sb_id,
+				     bool cleanup_set,
+				     u16 opaque_fid)
 {
 	u32 cmd_ctrl = 0, val = 0, sb_bit = 0, sb_bit_addr = 0, data = 0;
-	u32 pxp_addr = IGU_CMD_INT_ACK_BASE + sb_id;
+	u32 pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id;
 	u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH;
 	u8 type = 0;		/* FIXME MichalS type??? */
 
@@ -1813,8 +1895,8 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 	OSAL_MMIOWB(p_hwfn->p_dev);
 
 	/* calculate where to read the status bit from */
-	sb_bit = 1 << (sb_id % 32);
-	sb_bit_addr = sb_id / 32 * sizeof(u32);
+	sb_bit = 1 << (igu_sb_id % 32);
+	sb_bit_addr = igu_sb_id / 32 * sizeof(u32);
 
 	sb_bit_addr += IGU_REG_CLEANUP_STATUS_0 + (0x80 * type);
 
@@ -1829,21 +1911,28 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 	if (!sleep_cnt)
 		DP_NOTICE(p_hwfn, true,
 			  "Timeout waiting for clear status 0x%08x [for sb %d]\n",
-			  val, sb_id);
+			  val, igu_sb_id);
 }
 
 void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
-				       u32 sb_id, u16 opaque, bool b_set)
+				       u16 igu_sb_id, u16 opaque, bool b_set)
 {
+	struct ecore_igu_block *p_block;
 	int pi, i;
 
+	p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_sb_id];
+	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+		   "Cleaning SB [%04x]: func_id= %d is_pf = %d vector_num = 0x%0x\n",
+		   igu_sb_id, p_block->function_id, p_block->is_pf,
+		   p_block->vector_number);
+
 	/* Set */
 	if (b_set)
-		ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, sb_id, 1, opaque);
+		ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1, opaque);
 
 	/* Clear */
-	ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, sb_id, 0, opaque);
+	ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 0, opaque);
 
 	/* Wait for the IGU SB to cleanup */
 	for (i = 0; i < IGU_CLEANUP_SLEEP_LENGTH; i++) {
@@ -1851,8 +1940,8 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 
 		val = ecore_rd(p_hwfn, p_ptt,
 			       IGU_REG_WRITE_DONE_PENDING +
-			       ((sb_id / 32) * 4));
-		if (val & (1 << (sb_id % 32)))
+			       ((igu_sb_id / 32) * 4));
+		if (val & (1 << (igu_sb_id % 32)))
 			OSAL_UDELAY(10);
 		else
 			break;
@@ -1860,21 +1949,22 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 	if (i == IGU_CLEANUP_SLEEP_LENGTH)
 		DP_NOTICE(p_hwfn, true,
 			  "Failed SB[0x%08x] still appearing in WRITE_DONE_PENDING\n",
-			  sb_id);
+			  igu_sb_id);
 
 	/* Clear the CAU for the SB */
 	for (pi = 0; pi < 12; pi++)
 		ecore_wr(p_hwfn, p_ptt,
-			 CAU_REG_PI_MEMORY + (sb_id * 12 + pi) * 4, 0);
+			 CAU_REG_PI_MEMORY + (igu_sb_id * 12 + pi) * 4, 0);
 }
 
 void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
 				bool b_set, bool b_slowpath)
 {
-	u32 igu_base_sb = p_hwfn->hw_info.p_igu_info->igu_base_sb;
-	u32 igu_sb_cnt = p_hwfn->hw_info.p_igu_info->igu_sb_cnt;
-	u32 sb_id = 0, val = 0;
+	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
+	struct ecore_igu_block *p_block;
+	u16 igu_sb_id = 0;
+	u32 val = 0;
 
 	/* @@@TBD MichalK temporary... should be moved to init-tool... */
 	val = ecore_rd(p_hwfn, p_ptt, IGU_REG_BLOCK_CONFIGURATION);
@@ -1883,53 +1973,204 @@ void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, IGU_REG_BLOCK_CONFIGURATION, val);
 	/* end temporary */
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
-		   "IGU cleaning SBs [%d,...,%d]\n",
-		   igu_base_sb, igu_base_sb + igu_sb_cnt - 1);
+	for (igu_sb_id = 0;
+	     igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+	     igu_sb_id++) {
+		p_block = &p_info->entry[igu_sb_id];
 
-	for (sb_id = igu_base_sb; sb_id < igu_base_sb + igu_sb_cnt; sb_id++)
-		ecore_int_igu_init_pure_rt_single(p_hwfn, p_ptt, sb_id,
+		if (!(p_block->status & ECORE_IGU_STATUS_VALID) ||
+		    !p_block->is_pf ||
+		    (p_block->status & ECORE_IGU_STATUS_DSB))
+			continue;
+
+		ecore_int_igu_init_pure_rt_single(p_hwfn, p_ptt, igu_sb_id,
 						  p_hwfn->hw_info.opaque_fid,
 						  b_set);
+	}
 
-	if (!b_slowpath)
-		return;
+	if (b_slowpath)
+		ecore_int_igu_init_pure_rt_single(p_hwfn, p_ptt,
+						  p_info->igu_dsb_id,
+						  p_hwfn->hw_info.opaque_fid,
+						  b_set);
+}
 
-	sb_id = p_hwfn->hw_info.p_igu_info->igu_dsb_id;
-	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
-		   "IGU cleaning slowpath SB [%d]\n", sb_id);
-	ecore_int_igu_init_pure_rt_single(p_hwfn, p_ptt, sb_id,
-					  p_hwfn->hw_info.opaque_fid, b_set);
+int ecore_int_igu_reset_cam(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt)
+{
+	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
+	struct ecore_igu_block *p_block;
+	int pf_sbs, vf_sbs;
+	u16 igu_sb_id;
+	u32 val, rval;
+
+	if (!RESC_NUM(p_hwfn, ECORE_SB)) {
+		/* We're using an old MFW - have to prevent any switching
+		 * of SBs between PF and VFs as later driver wouldn't be
+		 * able to tell which belongs to which.
+		 */
+		p_info->b_allow_pf_vf_change = false;
+	} else {
+		/* Use the numbers the MFW has provided -
+		 * don't forget MFW accounts for the default SB as well.
+		 */
+		p_info->b_allow_pf_vf_change = true;
+
+		if (p_info->usage.cnt != RESC_NUM(p_hwfn, ECORE_SB) - 1) {
+			DP_INFO(p_hwfn,
+				"MFW notifies of 0x%04x PF SBs; IGU indicates of only 0x%04x\n",
+				RESC_NUM(p_hwfn, ECORE_SB) - 1,
+				p_info->usage.cnt);
+			p_info->usage.cnt = RESC_NUM(p_hwfn, ECORE_SB) - 1;
+		}
+
+		/* TODO - how do we learn about VF SBs from MFW? */
+		if (IS_PF_SRIOV(p_hwfn)) {
+			u16 vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
+
+			if (vfs != p_info->usage.iov_cnt)
+				DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+					   "0x%04x VF SBs in IGU CAM != PCI configuration 0x%04x\n",
+					   p_info->usage.iov_cnt, vfs);
+
+			/* At this point we know the total number of SBs in
+			 * IGU and the number of PF SBs, so we can validate
+			 * there would be enough left for the VFs.
+			 */
+			if (vfs > p_info->usage.free_cnt +
+				  p_info->usage.free_cnt_iov -
+				  p_info->usage.cnt) {
+				DP_NOTICE(p_hwfn, true,
+					  "Not enough SBs for VFs - 0x%04x SBs, from which %04x PFs and %04x are required\n",
+					  p_info->usage.free_cnt +
+					  p_info->usage.free_cnt_iov,
+					  p_info->usage.cnt, vfs);
+				return ECORE_INVAL;
+			}
+		}
+	}
+
+	/* Cap the number of VFs SBs by the number of VFs */
+	if (IS_PF_SRIOV(p_hwfn))
+		p_info->usage.iov_cnt = p_hwfn->p_dev->p_iov_info->total_vfs;
+
+	/* Mark all SBs as free, now in the right PF/VFs division */
+	p_info->usage.free_cnt = p_info->usage.cnt;
+	p_info->usage.free_cnt_iov = p_info->usage.iov_cnt;
+	p_info->usage.orig = p_info->usage.cnt;
+	p_info->usage.iov_orig = p_info->usage.iov_cnt;
+
+	/* We now proceed to re-configure the IGU CAM to reflect the initial
+	 * configuration. We can start with the Default SB.
+	 */
+	pf_sbs = p_info->usage.cnt;
+	vf_sbs = p_info->usage.iov_cnt;
+
+	for (igu_sb_id = p_info->igu_dsb_id;
+	     igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+	     igu_sb_id++) {
+		p_block = &p_info->entry[igu_sb_id];
+		val = 0;
+
+		if (!(p_block->status & ECORE_IGU_STATUS_VALID))
+			continue;
+
+		if (p_block->status & ECORE_IGU_STATUS_DSB) {
+			p_block->function_id = p_hwfn->rel_pf_id;
+			p_block->is_pf = 1;
+			p_block->vector_number = 0;
+			p_block->status = ECORE_IGU_STATUS_VALID |
+					  ECORE_IGU_STATUS_PF |
+					  ECORE_IGU_STATUS_DSB;
+		} else if (pf_sbs) {
+			pf_sbs--;
+			p_block->function_id = p_hwfn->rel_pf_id;
+			p_block->is_pf = 1;
+			p_block->vector_number = p_info->usage.cnt - pf_sbs;
+			p_block->status = ECORE_IGU_STATUS_VALID |
+					  ECORE_IGU_STATUS_PF |
+					  ECORE_IGU_STATUS_FREE;
+		} else if (vf_sbs) {
+			p_block->function_id =
+				p_hwfn->p_dev->p_iov_info->first_vf_in_pf +
+				p_info->usage.iov_cnt - vf_sbs;
+			p_block->is_pf = 0;
+			p_block->vector_number = 0;
+			p_block->status = ECORE_IGU_STATUS_VALID |
+					  ECORE_IGU_STATUS_FREE;
+			vf_sbs--;
+		} else {
+			p_block->function_id = 0;
+			p_block->is_pf = 0;
+			p_block->vector_number = 0;
+		}
+
+		SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER,
+			  p_block->function_id);
+		SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, p_block->is_pf);
+		SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER,
+			  p_block->vector_number);
+
+		/* VF entries would be enabled when VF is initialized */
+		SET_FIELD(val, IGU_MAPPING_LINE_VALID, p_block->is_pf);
+
+		rval = ecore_rd(p_hwfn, p_ptt,
+				IGU_REG_MAPPING_MEMORY +
+				sizeof(u32) * igu_sb_id);
+
+		if (rval != val) {
+			ecore_wr(p_hwfn, p_ptt,
+				 IGU_REG_MAPPING_MEMORY +
+				 sizeof(u32) * igu_sb_id,
+				 val);
+
+			DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+				   "IGU reset: [SB 0x%04x] func_id = %d is_pf = %d vector_num = 0x%x [%08x -> %08x]\n",
+				   igu_sb_id, p_block->function_id,
+				   p_block->is_pf, p_block->vector_number,
+				   rval, val);
+		}
+	}
+
+	return 0;
+}
+
+int ecore_int_igu_reset_cam_default(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt)
+{
+	struct ecore_sb_cnt_info *p_cnt = &p_hwfn->hw_info.p_igu_info->usage;
+
+	/* Return all the usage indications to default prior to the reset;
+	 * The reset expects the !orig to reflect the initial status of the
+	 * SBs, and would re-calculate the originals based on those.
+	 */
+	p_cnt->cnt = p_cnt->orig;
+	p_cnt->free_cnt = p_cnt->orig;
+	p_cnt->iov_cnt = p_cnt->iov_orig;
+	p_cnt->free_cnt_iov = p_cnt->iov_orig;
+	p_cnt->orig = 0;
+	p_cnt->iov_orig = 0;
+
+	/* TODO - we probably need to re-configure the CAU as well... */
+	return ecore_int_igu_reset_cam(p_hwfn, p_ptt);
 }
 
-static u32 ecore_int_igu_read_cam_block(struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt, u16 sb_id)
+static void ecore_int_igu_read_cam_block(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 u16 igu_sb_id)
 {
 	u32 val = ecore_rd(p_hwfn, p_ptt,
-			   IGU_REG_MAPPING_MEMORY + sizeof(u32) * sb_id);
+			   IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_sb_id);
 	struct ecore_igu_block *p_block;
 
-	p_block = &p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks[sb_id];
-
-	/* stop scanning when hit first invalid PF entry */
-	if (!GET_FIELD(val, IGU_MAPPING_LINE_VALID) &&
-	    GET_FIELD(val, IGU_MAPPING_LINE_PF_VALID))
-		goto out;
+	p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_sb_id];
 
 	/* Fill the block information */
-	p_block->status = ECORE_IGU_STATUS_VALID;
 	p_block->function_id = GET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER);
 	p_block->is_pf = GET_FIELD(val, IGU_MAPPING_LINE_PF_VALID);
 	p_block->vector_number = GET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER);
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
-		   "IGU_BLOCK: [SB 0x%04x, Value in CAM 0x%08x] func_id = %d"
-		   " is_pf = %d vector_num = 0x%x\n",
-		   sb_id, val, p_block->function_id, p_block->is_pf,
-		   p_block->vector_number);
-
-out:
-	return val;
+	p_block->igu_sb_id = igu_sb_id;
 }
 
 enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
@@ -1937,140 +2178,217 @@ enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_igu_info *p_igu_info;
 	struct ecore_igu_block *p_block;
-	u32 min_vf = 0, max_vf = 0, val;
-	u16 sb_id, last_iov_sb_id = 0;
-	u16 prev_sb_id = 0xFF;
+	u32 min_vf = 0, max_vf = 0;
+	u16 igu_sb_id;
 
-	p_hwfn->hw_info.p_igu_info = OSAL_ALLOC(p_hwfn->p_dev,
-						GFP_KERNEL,
-						sizeof(*p_igu_info));
+	p_hwfn->hw_info.p_igu_info = OSAL_ZALLOC(p_hwfn->p_dev,
+						 GFP_KERNEL,
+						 sizeof(*p_igu_info));
 	if (!p_hwfn->hw_info.p_igu_info)
 		return ECORE_NOMEM;
-
-	OSAL_MEMSET(p_hwfn->hw_info.p_igu_info, 0, sizeof(*p_igu_info));
-
 	p_igu_info = p_hwfn->hw_info.p_igu_info;
 
-	/* Initialize base sb / sb cnt for PFs and VFs */
-	p_igu_info->igu_base_sb = 0xffff;
-	p_igu_info->igu_sb_cnt = 0;
-	p_igu_info->igu_dsb_id = 0xffff;
-	p_igu_info->igu_base_sb_iov = 0xffff;
+	/* Distinguish between existent and non-existent default SB */
+	p_igu_info->igu_dsb_id = ECORE_SB_INVALID_IDX;
 
+	/* Find the range of VF ids whose SB belong to this PF */
 	if (p_hwfn->p_dev->p_iov_info) {
 		struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
 
 		min_vf = p_iov->first_vf_in_pf;
 		max_vf = p_iov->first_vf_in_pf + p_iov->total_vfs;
 	}
-	for (sb_id = 0;
-	     sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
-	     sb_id++) {
-		p_block = &p_igu_info->igu_map.igu_blocks[sb_id];
-		val = ecore_int_igu_read_cam_block(p_hwfn, p_ptt, sb_id);
-		if (!GET_FIELD(val, IGU_MAPPING_LINE_VALID) &&
-		    GET_FIELD(val, IGU_MAPPING_LINE_PF_VALID))
-			break;
 
-		if (p_block->is_pf) {
-			if (p_block->function_id == p_hwfn->rel_pf_id) {
-				p_block->status |= ECORE_IGU_STATUS_PF;
-
-				if (p_block->vector_number == 0) {
-					if (p_igu_info->igu_dsb_id == 0xffff)
-						p_igu_info->igu_dsb_id = sb_id;
-				} else {
-					if (p_igu_info->igu_base_sb == 0xffff) {
-						p_igu_info->igu_base_sb = sb_id;
-					} else if (prev_sb_id != sb_id - 1) {
-						DP_NOTICE(p_hwfn->p_dev, false,
-							  "consecutive igu"
-							  " vectors for HWFN"
-							  " %x broken",
-							  p_hwfn->rel_pf_id);
-						break;
-					}
-					prev_sb_id = sb_id;
-					/* we don't count the default */
-					(p_igu_info->igu_sb_cnt)++;
-				}
-			}
-		} else {
-			if ((p_block->function_id >= min_vf) &&
-			    (p_block->function_id < max_vf)) {
-				/* Available for VFs of this PF */
-				if (p_igu_info->igu_base_sb_iov == 0xffff) {
-					p_igu_info->igu_base_sb_iov = sb_id;
-				} else if (last_iov_sb_id != sb_id - 1) {
-					if (!val)
-						DP_VERBOSE(p_hwfn->p_dev,
-							   ECORE_MSG_INTR,
-							   "First uninited IGU"
-							   " CAM entry at"
-							   " index 0x%04x\n",
-							   sb_id);
-					else
-						DP_NOTICE(p_hwfn->p_dev, false,
-							  "Consecutive igu"
-							  " vectors for HWFN"
-							  " %x vfs is broken"
-							  " [jumps from %04x"
-							  " to %04x]\n",
-							  p_hwfn->rel_pf_id,
-							  last_iov_sb_id,
-							  sb_id);
-					break;
-				}
-				p_block->status |= ECORE_IGU_STATUS_FREE;
-				p_hwfn->hw_info.p_igu_info->free_blks++;
-				last_iov_sb_id = sb_id;
-			}
+	for (igu_sb_id = 0;
+	     igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+	     igu_sb_id++) {
+		/* Read current entry; Notice it might not belong to this PF */
+		ecore_int_igu_read_cam_block(p_hwfn, p_ptt, igu_sb_id);
+		p_block = &p_igu_info->entry[igu_sb_id];
+
+		if ((p_block->is_pf) &&
+		    (p_block->function_id == p_hwfn->rel_pf_id)) {
+			p_block->status = ECORE_IGU_STATUS_PF |
+					  ECORE_IGU_STATUS_VALID |
+					  ECORE_IGU_STATUS_FREE;
+
+			if (p_igu_info->igu_dsb_id != ECORE_SB_INVALID_IDX)
+				p_igu_info->usage.cnt++;
+		} else if (!(p_block->is_pf) &&
+			   (p_block->function_id >= min_vf) &&
+			   (p_block->function_id < max_vf)) {
+			/* Available for VFs of this PF */
+			p_block->status = ECORE_IGU_STATUS_VALID |
+					  ECORE_IGU_STATUS_FREE;
+
+			if (p_igu_info->igu_dsb_id != ECORE_SB_INVALID_IDX)
+				p_igu_info->usage.iov_cnt++;
+		}
+
+		/* Mark the first entry belonging to the PF or its VFs
+		 * as the default SB [we'll reset IGU prior to first usage].
+		 */
+		if ((p_block->status & ECORE_IGU_STATUS_VALID) &&
+		    (p_igu_info->igu_dsb_id == ECORE_SB_INVALID_IDX)) {
+			p_igu_info->igu_dsb_id = igu_sb_id;
+			p_block->status |= ECORE_IGU_STATUS_DSB;
 		}
+
+		/* While this isn't suitable for all clients, limit number
+		 * of prints by having each PF print only its entries with the
+		 * exception of PF0 which would print everything.
+		 */
+		if ((p_block->status & ECORE_IGU_STATUS_VALID) ||
+		    (p_hwfn->abs_pf_id == 0))
+			DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+				   "IGU_BLOCK: [SB 0x%04x] func_id = %d is_pf = %d vector_num = 0x%x\n",
+				   igu_sb_id, p_block->function_id,
+				   p_block->is_pf, p_block->vector_number);
+	}
+
+	if (p_igu_info->igu_dsb_id == ECORE_SB_INVALID_IDX) {
+		DP_NOTICE(p_hwfn, true,
+			  "IGU CAM returned invalid values igu_dsb_id=0x%x\n",
+			  p_igu_info->igu_dsb_id);
+		return ECORE_INVAL;
+	}
+
+	/* All non default SB are considered free at this point */
+	p_igu_info->usage.free_cnt = p_igu_info->usage.cnt;
+	p_igu_info->usage.free_cnt_iov = p_igu_info->usage.iov_cnt;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+		   "igu_dsb_id=0x%x, num Free SBs - PF: %04x VF: %04x [might change after resource allocation]\n",
+		   p_igu_info->igu_dsb_id, p_igu_info->usage.cnt,
+		   p_igu_info->usage.iov_cnt);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			  u16 sb_id, bool b_to_vf)
+{
+	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
+	struct ecore_igu_block *p_block = OSAL_NULL;
+	u16 igu_sb_id = 0, vf_num = 0;
+	u32 val = 0;
+
+	if (IS_VF(p_hwfn->p_dev) || !IS_PF_SRIOV(p_hwfn))
+		return ECORE_INVAL;
+
+	if (sb_id == ECORE_SP_SB_ID)
+		return ECORE_INVAL;
+
+	if (!p_info->b_allow_pf_vf_change) {
+		DP_INFO(p_hwfn, "Can't relocate SBs as MFW is too old.\n");
+		return ECORE_INVAL;
 	}
 
-	/* There's a possibility the igu_sb_cnt_iov doesn't properly reflect
-	 * the number of VF SBs [especially for first VF on engine, as we can't
-	 * diffrentiate between empty entries and its entries].
-	 * Since we don't really support more SBs than VFs today, prevent any
-	 * such configuration by sanitizing the number of SBs to equal the
-	 * number of VFs.
+	/* If we're moving an SB from PF to VF, the client had to specify
+	 * which vector it wants to move.
 	 */
-	if (IS_PF_SRIOV(p_hwfn)) {
-		u16 total_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
-
-		if (total_vfs < p_igu_info->free_blks) {
-			DP_VERBOSE(p_hwfn, (ECORE_MSG_INTR | ECORE_MSG_IOV),
-				   "Limiting number of SBs for IOV - %04x --> %04x\n",
-				   p_igu_info->free_blks,
-				   p_hwfn->p_dev->p_iov_info->total_vfs);
-			p_igu_info->free_blks = total_vfs;
-		} else if (total_vfs > p_igu_info->free_blks) {
-			DP_NOTICE(p_hwfn, true,
-				  "IGU has only %04x SBs for VFs while the device has %04x VFs\n",
-				  p_igu_info->free_blks, total_vfs);
+	if (b_to_vf) {
+		igu_sb_id = ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1);
+		if (igu_sb_id == ECORE_SB_INVALID_IDX)
 			return ECORE_INVAL;
-		}
 	}
 
-	p_igu_info->igu_sb_cnt_iov = p_igu_info->free_blks;
+	/* If we're moving a SB from VF to PF, we need to validate there isn't
+	 * already a line configured for that vector.
+	 */
+	if (!b_to_vf) {
+		if (ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1) !=
+		    ECORE_SB_INVALID_IDX)
+			return ECORE_INVAL;
+	}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
-		   "IGU igu_base_sb=0x%x [IOV 0x%x] igu_sb_cnt=%d [IOV 0x%x] "
-		   "igu_dsb_id=0x%x\n",
-		   p_igu_info->igu_base_sb, p_igu_info->igu_base_sb_iov,
-		   p_igu_info->igu_sb_cnt, p_igu_info->igu_sb_cnt_iov,
-		   p_igu_info->igu_dsb_id);
-
-	if (p_igu_info->igu_base_sb == 0xffff ||
-	    p_igu_info->igu_dsb_id == 0xffff || p_igu_info->igu_sb_cnt == 0) {
-		DP_NOTICE(p_hwfn, true,
-			  "IGU CAM returned invalid values igu_base_sb=0x%x "
-			  "igu_sb_cnt=%d igu_dsb_id=0x%x\n",
-			  p_igu_info->igu_base_sb, p_igu_info->igu_sb_cnt,
-			  p_igu_info->igu_dsb_id);
+	/* We need to validate that the SB can actually be relocated.
+	 * This would also handle the previous case where we've explicitly
+	 * stated which IGU SB needs to move.
+	 */
+	for (; igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+	     igu_sb_id++) {
+		p_block = &p_info->entry[igu_sb_id];
+
+		if (!(p_block->status & ECORE_IGU_STATUS_VALID) ||
+		    !(p_block->status & ECORE_IGU_STATUS_FREE) ||
+		    (!!(p_block->status & ECORE_IGU_STATUS_PF) != b_to_vf)) {
+			if (b_to_vf)
+				return ECORE_INVAL;
+			else
+				continue;
+		}
+
+		break;
+	}
+
+	if (igu_sb_id == ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev)) {
+		DP_VERBOSE(p_hwfn, (ECORE_MSG_INTR | ECORE_MSG_IOV),
+			   "Failed to find a free SB to move\n");
 		return ECORE_INVAL;
 	}
 
+	/* At this point, p_block points to the SB we want to relocate */
+	if (b_to_vf) {
+		p_block->status &= ~ECORE_IGU_STATUS_PF;
+
+		/* It doesn't matter which VF number we choose, since we're
+		 * going to disable the line; but let's keep it in range.
+		 */
+		vf_num = (u16)p_hwfn->p_dev->p_iov_info->first_vf_in_pf;
+
+		p_block->function_id = (u8)vf_num;
+		p_block->is_pf = 0;
+		p_block->vector_number = 0;
+
+		p_info->usage.cnt--;
+		p_info->usage.free_cnt--;
+		p_info->usage.iov_cnt++;
+		p_info->usage.free_cnt_iov++;
+
+		/* TODO - if SBs aren't really the limiting factor,
+		 * then it might not be accurate [in the sense that
+		 * we might not need to decrement the feature].
+		 */
+		p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]--;
+		p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]++;
+	} else {
+		p_block->status |= ECORE_IGU_STATUS_PF;
+		p_block->function_id = p_hwfn->rel_pf_id;
+		p_block->is_pf = 1;
+		p_block->vector_number = sb_id + 1;
+
+		p_info->usage.cnt++;
+		p_info->usage.free_cnt++;
+		p_info->usage.iov_cnt--;
+		p_info->usage.free_cnt_iov--;
+
+		p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]++;
+		p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]--;
+	}
+
+	/* Update the IGU and CAU with the new configuration */
+	SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER,
+		  p_block->function_id);
+	SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, p_block->is_pf);
+	SET_FIELD(val, IGU_MAPPING_LINE_VALID, p_block->is_pf);
+	SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER,
+		  p_block->vector_number);
+
+	ecore_wr(p_hwfn, p_ptt,
+		 IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_sb_id,
+		 val);
+
+	ecore_int_cau_conf_sb(p_hwfn, p_ptt, 0,
+			      igu_sb_id, vf_num,
+			      p_block->is_pf ? 0 : 1);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+		   "Relocation: [SB 0x%04x] func_id = %d is_pf = %d vector_num = 0x%x\n",
+		   igu_sb_id, p_block->function_id,
+		   p_block->is_pf, p_block->vector_number);
+
 	return ECORE_SUCCESS;
 }
 
@@ -2170,14 +2488,13 @@ void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 			   struct ecore_sb_cnt_info *p_sb_cnt_info)
 {
-	struct ecore_igu_info *info = p_hwfn->hw_info.p_igu_info;
+	struct ecore_igu_info *p_igu_info = p_hwfn->hw_info.p_igu_info;
 
-	if (!info || !p_sb_cnt_info)
+	if (!p_igu_info || !p_sb_cnt_info)
 		return;
 
-	p_sb_cnt_info->sb_cnt = info->igu_sb_cnt;
-	p_sb_cnt_info->sb_iov_cnt = info->igu_sb_cnt_iov;
-	p_sb_cnt_info->sb_free_blk = info->free_blks;
+	OSAL_MEMCPY(p_sb_cnt_info, &p_igu_info->usage,
+		    sizeof(*p_sb_cnt_info));
 }
 
 void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 067ed60..b655685 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -21,31 +21,76 @@
 #define SB_ALIGNED_SIZE(p_hwfn)					\
 	ALIGNED_TYPE_SIZE(struct status_block, p_hwfn)
 
+#define ECORE_SB_INVALID_IDX	0xffff
+
 struct ecore_igu_block {
 	u8 status;
 #define ECORE_IGU_STATUS_FREE	0x01
 #define ECORE_IGU_STATUS_VALID	0x02
 #define ECORE_IGU_STATUS_PF	0x04
+#define ECORE_IGU_STATUS_DSB	0x08
 
 	u8 vector_number;
 	u8 function_id;
 	u8 is_pf;
-};
 
-struct ecore_igu_map {
-	struct ecore_igu_block igu_blocks[MAX_TOT_SB_PER_PATH];
+	/* Index inside IGU [meant for back reference] */
+	u16 igu_sb_id;
+
+	struct ecore_sb_info *sb_info;
 };
 
 struct ecore_igu_info {
-	struct ecore_igu_map igu_map;
+	struct ecore_igu_block entry[MAX_TOT_SB_PER_PATH];
 	u16 igu_dsb_id;
-	u16 igu_base_sb;
-	u16 igu_base_sb_iov;
-	u16 igu_sb_cnt;
-	u16 igu_sb_cnt_iov;
-	u16 free_blks;
+
+	/* The numbers can shift when using APIs to switch SBs between PF and
+	 * VF.
+	 */
+	struct ecore_sb_cnt_info usage;
+
+	/* Determine whether we can shift SBs between VFs and PFs */
+	bool b_allow_pf_vf_change;
 };
 
+/**
+ * @brief - Make sure the IGU CAM reflects the resources provided by MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+int ecore_int_igu_reset_cam(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt);
+
+/**
+ * @brief - Make sure IGU CAM reflects the default resources once again,
+ *          starting with a 'dirty' SW database.
+ * @param p_hwfn
+ * @param p_ptt
+ */
+int ecore_int_igu_reset_cam_default(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Translate the weakly-defined client sb-id into an IGU sb-id
+ *
+ * @param p_hwfn
+ * @param sb_id - user provided sb_id
+ *
+ * @return an index inside IGU CAM where the SB resides
+ */
+u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
+
+/**
+ * @brief return a pointer to an unused valid SB
+ *
+ * @param p_hwfn
+ * @param b_is_pf - true iff we want a SB belonging to a PF
+ *
+ * @return pointer to an igu_block, OSAL_NULL if none is available
+ */
+struct ecore_igu_block *
+ecore_get_igu_free_sb(struct ecore_hwfn *p_hwfn, bool b_is_pf);
 /* TODO Names of function may change... */
 void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
@@ -125,9 +170,11 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
  * @param opaque	- opaque fid of the sb owner.
  * @param cleanup_set	- set(1) / clear(0)
  */
-void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
-				       struct ecore_ptt *p_ptt,
-				       u32 sb_id, u16 opaque, bool b_set);
+void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn	*p_hwfn,
+				       struct ecore_ptt		*p_ptt,
+				       u16			sb_id,
+				       u16			opaque,
+				       bool			b_set);
 
 /**
  * @brief ecore_int_cau_conf - configure cau for a given status
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index 799fbe8..49d0fac 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -48,9 +48,15 @@ struct ecore_sb_info_dbg {
 };
 
 struct ecore_sb_cnt_info {
-	int sb_cnt;
-	int sb_iov_cnt;
-	int sb_free_blk;
+	/* Original, current, and free SBs for PF */
+	int orig;
+	int cnt;
+	int free_cnt;
+
+	/* Original, current, and free SBs for child VFs */
+	int iov_orig;
+	int iov_cnt;
+	int free_cnt_iov;
 };
 
 static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
@@ -173,17 +179,17 @@ enum ecore_coalescing_fsm {
  *
  * @param p_hwfn
  * @param p_ptt
- * @param igu_sb_id
+ * @param p_sb
  * @param pi_index
  * @param state
  * @param timeset
  */
-void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   u16 igu_sb_id,
-			   u32 pi_index,
-			   enum ecore_coalescing_fsm coalescing_fsm,
-			   u8 timeset);
+void ecore_int_cau_conf_pi(struct ecore_hwfn		*p_hwfn,
+			   struct ecore_ptt		*p_ptt,
+			   struct ecore_sb_info		*p_sb,
+			   u32				pi_index,
+			   enum ecore_coalescing_fsm	coalescing_fsm,
+			   u8				timeset);
 
 /**
  *
@@ -219,6 +225,7 @@ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
 u64 ecore_int_igu_read_sisr_reg(struct ecore_hwfn *p_hwfn);
 
 #define ECORE_SP_SB_ID 0xffff
+
 /**
  * @brief ecore_int_sb_init - Initializes the sb_info structure.
  *
@@ -324,4 +331,18 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 					  struct ecore_sb_info *p_sb,
 					  struct ecore_sb_info_dbg *p_info);
 
+/**
+ * @brief - Move a free Status block between PF and child VF
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param sb_id - The PF fastpath vector to be moved [re-assigned if claiming
+ *                from VF, given-up if moving to VF]
+ * @param b_to_vf - PF->VF == true, VF->PF == false
+ *
+ * @return ECORE_SUCCESS if SB successfully moved.
+ */
+enum _ecore_status_t
+ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			  u16 sb_id, bool b_to_vf);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 839bd46..3140fdd 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -207,9 +207,15 @@ void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 
 	p_cid->opaque_fid = opaque_fid;
 	p_cid->cid = cid;
-	p_cid->rel = *p_params;
 	p_cid->p_owner = p_hwfn;
 
+	/* Fill in parameters */
+	p_cid->rel.vport_id = p_params->vport_id;
+	p_cid->rel.queue_id = p_params->queue_id;
+	p_cid->rel.stats_id = p_params->stats_id;
+	p_cid->sb_igu_id = p_params->p_sb->igu_sb_id;
+	p_cid->sb_idx = p_params->sb_idx;
+
 	/* Fill-in bits related to VFs' queues if information was provided */
 	if (p_vf_params != OSAL_NULL) {
 		p_cid->vfid = p_vf_params->vfid;
@@ -251,10 +257,6 @@ void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 		p_cid->abs.stats_id = p_cid->rel.stats_id;
 	}
 
-	/* SBs relevant information was already provided as absolute */
-	p_cid->abs.sb = p_cid->rel.sb;
-	p_cid->abs.sb_idx = p_cid->rel.sb_idx;
-
 out:
 	/* VF-images have provided the qid_usage_idx on their own.
 	 * Otherwise, we need to allocate a unique one.
@@ -273,7 +275,7 @@ void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 		   p_cid->rel.queue_id,	p_cid->qid_usage_idx,
 		   p_cid->abs.queue_id,
 		   p_cid->rel.stats_id, p_cid->abs.stats_id,
-		   p_cid->abs.sb, p_cid->abs.sb_idx);
+		   p_cid->sb_igu_id, p_cid->sb_idx);
 
 	return p_cid;
 
@@ -901,7 +903,7 @@ enum _ecore_status_t
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
 		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
-		   p_cid->abs.vport_id, p_cid->abs.sb);
+		   p_cid->abs.vport_id, p_cid->sb_igu_id);
 
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
@@ -917,8 +919,8 @@ enum _ecore_status_t
 
 	p_ramrod = &p_ent->ramrod.rx_queue_start;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
-	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->sb_igu_id);
+	p_ramrod->sb_index = p_cid->sb_idx;
 	p_ramrod->vport_id = p_cid->abs.vport_id;
 	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
@@ -1153,8 +1155,8 @@ enum _ecore_status_t
 	p_ramrod = &p_ent->ramrod.tx_queue_start;
 	p_ramrod->vport_id = p_cid->abs.vport_id;
 
-	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->abs.sb);
-	p_ramrod->sb_index = p_cid->abs.sb_idx;
+	p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_cid->sb_igu_id);
+	p_ramrod->sb_index = p_cid->sb_idx;
 	p_ramrod->stats_counter_id = p_cid->abs.stats_id;
 
 	p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 33f1fad..02aa5e8 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -18,7 +18,16 @@
 #define MAX_QUEUES_PER_QZONE	(sizeof(unsigned long) * 8)
 #define ECORE_QUEUE_CID_PF	(0xff)
 
-/* Additional parameters required for initialization of the queue_cid
+/* Almost identical to the ecore_queue_start_common_params,
+ * but here we maintain the SB index in IGU CAM.
+ */
+struct ecore_queue_cid_params {
+	u8 vport_id;
+	u16 queue_id;
+	u8 stats_id;
+};
+
+/* Additional parameters required for initialization of the queue_cid
  * and are relevant only for a PF initializing one for its VFs.
  */
 struct ecore_queue_cid_vf_params {
@@ -44,13 +53,14 @@ struct ecore_queue_cid_vf_params {
 };
 
 struct ecore_queue_cid {
-	/* 'Relative' is a relative term ;-). Usually the indices [not counting
-	 * SBs] would be PF-relative, but there are some cases where that isn't
-	 * the case - specifically for a PF configuring its VF indices it's
-	 * possible some fields [E.g., stats-id] in 'rel' would already be abs.
-	 */
-	struct ecore_queue_start_common_params rel;
-	struct ecore_queue_start_common_params abs;
+	/* For stats-id, the `rel' is actually absolute as well */
+	struct ecore_queue_cid_params rel;
+	struct ecore_queue_cid_params abs;
+
+	/* These have no 'relative' meaning */
+	u16 sb_igu_id;
+	u8 sb_idx;
+
 	u32 cid;
 	u16 opaque_fid;
 
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index d09f3c4..a6740d5 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -11,6 +11,7 @@
 
 #include "ecore_status.h"
 #include "ecore_sp_api.h"
+#include "ecore_int_api.h"
 
 #ifndef __EXTRACT__LINUX__
 enum ecore_rss_caps {
@@ -35,8 +36,7 @@ struct ecore_queue_start_common_params {
 	/* Relative, but relevant only for PFs */
 	u8 stats_id;
 
-	/* These are always absolute */
-	u16 sb;
+	struct ecore_sb_info *p_sb;
 	u8 sb_idx;
 };
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 0886560..1ec6451 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -446,33 +446,6 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
 	return ECORE_SUCCESS;
 }
 
-static void ecore_iov_clear_vf_igu_blocks(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt)
-{
-	struct ecore_igu_block *p_sb;
-	u16 sb_id;
-	u32 val;
-
-	if (!p_hwfn->hw_info.p_igu_info) {
-		DP_ERR(p_hwfn,
-		       "ecore_iov_clear_vf_igu_blocks IGU Info not inited\n");
-		return;
-	}
-
-	for (sb_id = 0;
-	     sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev); sb_id++) {
-		p_sb = &p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks[sb_id];
-		if ((p_sb->status & ECORE_IGU_STATUS_FREE) &&
-		    !(p_sb->status & ECORE_IGU_STATUS_PF)) {
-			val = ecore_rd(p_hwfn, p_ptt,
-				       IGU_REG_MAPPING_MEMORY + sb_id * 4);
-			SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
-			ecore_wr(p_hwfn, p_ptt,
-				 IGU_REG_MAPPING_MEMORY + 4 * sb_id, val);
-		}
-	}
-}
-
 static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
@@ -634,7 +607,6 @@ void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		return;
 
 	ecore_iov_setup_vfdb(p_hwfn);
-	ecore_iov_clear_vf_igu_blocks(p_hwfn, p_ptt);
 }
 
 void ecore_iov_free(struct ecore_hwfn *p_hwfn)
@@ -938,46 +910,38 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 				     struct ecore_vf_info *vf,
 				     u16 num_rx_queues)
 {
-	struct ecore_igu_block *igu_blocks;
-	int qid = 0, igu_id = 0;
+	struct ecore_igu_block *p_block;
+	struct cau_sb_entry sb_entry;
+	int qid = 0;
 	u32 val = 0;
 
-	igu_blocks = p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks;
-
-	if (num_rx_queues > p_hwfn->hw_info.p_igu_info->free_blks)
-		num_rx_queues = p_hwfn->hw_info.p_igu_info->free_blks;
-
-	p_hwfn->hw_info.p_igu_info->free_blks -= num_rx_queues;
+	if (num_rx_queues > p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov)
+		num_rx_queues =
+		(u16)p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov;
+	p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov -= num_rx_queues;
 
 	SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER, vf->abs_vf_id);
 	SET_FIELD(val, IGU_MAPPING_LINE_VALID, 1);
 	SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, 0);
 
-	while ((qid < num_rx_queues) &&
-	       (igu_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev))) {
-		if (igu_blocks[igu_id].status & ECORE_IGU_STATUS_FREE) {
-			struct cau_sb_entry sb_entry;
-
-			vf->igu_sbs[qid] = (u16)igu_id;
-			igu_blocks[igu_id].status &= ~ECORE_IGU_STATUS_FREE;
-
-			SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER, qid);
-
-			ecore_wr(p_hwfn, p_ptt,
-				 IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id,
-				 val);
-
-			/* Configure igu sb in CAU which were marked valid */
-			ecore_init_cau_sb_entry(p_hwfn, &sb_entry,
-						p_hwfn->rel_pf_id,
-						vf->abs_vf_id, 1);
-			ecore_dmae_host2grc(p_hwfn, p_ptt,
-					    (u64)(osal_uintptr_t)&sb_entry,
-					    CAU_REG_SB_VAR_MEMORY +
-					    igu_id * sizeof(u64), 2, 0);
-			qid++;
-		}
-		igu_id++;
+	for (qid = 0; qid < num_rx_queues; qid++) {
+		p_block = ecore_get_igu_free_sb(p_hwfn, false);
+		vf->igu_sbs[qid] = p_block->igu_sb_id;
+		p_block->status &= ~ECORE_IGU_STATUS_FREE;
+		SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER, qid);
+
+		ecore_wr(p_hwfn, p_ptt,
+			 IGU_REG_MAPPING_MEMORY +
+			 sizeof(u32) * p_block->igu_sb_id, val);
+
+		/* Configure igu sb in CAU which were marked valid */
+		ecore_init_cau_sb_entry(p_hwfn, &sb_entry,
+					p_hwfn->rel_pf_id,
+					vf->abs_vf_id, 1);
+		ecore_dmae_host2grc(p_hwfn, p_ptt,
+				    (u64)(osal_uintptr_t)&sb_entry,
+				    CAU_REG_SB_VAR_MEMORY +
+				    p_block->igu_sb_id * sizeof(u64), 2, 0);
 	}
 
 	vf->num_sbs = (u8)num_rx_queues;
@@ -1013,10 +977,8 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 		SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
 		ecore_wr(p_hwfn, p_ptt, addr, val);
 
-		p_info->igu_map.igu_blocks[igu_id].status |=
-		    ECORE_IGU_STATUS_FREE;
-
-		p_hwfn->hw_info.p_igu_info->free_blks++;
+		p_info->entry[igu_id].status |= ECORE_IGU_STATUS_FREE;
+		p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov++;
 	}
 
 	vf->num_sbs = 0;
@@ -1114,34 +1076,28 @@ enum _ecore_status_t
 	vf->vport_id = p_params->vport_id;
 	vf->rss_eng_id = p_params->rss_eng_id;
 
-	/* Perform sanity checking on the requested queue_id */
+	/* Since it's possible to relocate SBs, it's a bit difficult to check
+	 * things here. Simply check whether the index falls in the range
+	 * belonging to the PF.
+	 */
 	for (i = 0; i < p_params->num_queues; i++) {
-		u16 min_vf_qzone = (u16)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE);
-		u16 max_vf_qzone = min_vf_qzone +
-				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) - 1;
-
 		qid = p_params->req_rx_queue[i];
-		if (qid < min_vf_qzone || qid > max_vf_qzone) {
+		if (qid > (u16)RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
 			DP_NOTICE(p_hwfn, true,
-				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0x%04x,...,0x%04x] available\n",
+				  "Can't enable Rx qid [%04x] for VF[%d]: qids [0,,...,0x%04x] available\n",
 				  qid, p_params->rel_vf_id,
-				  min_vf_qzone, max_vf_qzone);
+				  (u16)RESC_NUM(p_hwfn, ECORE_L2_QUEUE));
 			return ECORE_INVAL;
 		}
 
 		qid = p_params->req_tx_queue[i];
-		if (qid > max_vf_qzone) {
+		if (qid > (u16)RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
 			DP_NOTICE(p_hwfn, true,
-				  "Can't enable Tx qid [%04x] for VF[%d]: max qid 0x%04x\n",
-				  qid, p_params->rel_vf_id, max_vf_qzone);
+				  "Can't enable Tx qid [%04x] for VF[%d]: qids [0,,...,0x%04x] available\n",
+				  qid, p_params->rel_vf_id,
+				  (u16)RESC_NUM(p_hwfn, ECORE_L2_QUEUE));
 			return ECORE_INVAL;
 		}
-
-		/* If client *really* wants, Tx qid can be shared with PF */
-		if (qid < min_vf_qzone)
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d] is using PF qid [0x%04x] for Txq[0x%02x]\n",
-				   p_params->rel_vf_id, qid, i);
 	}
 
 	/* Limit number of queues according to number of CIDs */
@@ -2233,6 +2189,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_rxq_tlv *req;
 	struct ecore_queue_cid *p_cid;
+	struct ecore_sb_info sb_dummy;
 	enum _ecore_status_t rc;
 
 	req = &mbx->req_virt->start_rxq;
@@ -2257,7 +2214,11 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
-	params.sb = req->hw_sb;
+
+	/* Since IGU index is passed via sb_info, construct a dummy one */
+	OSAL_MEM_ZERO(&sb_dummy, sizeof(sb_dummy));
+	sb_dummy.igu_sb_id = req->hw_sb;
+	params.p_sb = &sb_dummy;
 	params.sb_idx = req->sb_index;
 
 	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
@@ -2500,6 +2461,7 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	struct ecore_vf_queue *p_queue;
 	struct vfpf_start_txq_tlv *req;
 	struct ecore_queue_cid *p_cid;
+	struct ecore_sb_info sb_dummy;
 	u8 qid_usage_idx, vf_legacy;
 	u32 cid = 0;
 	enum _ecore_status_t rc;
@@ -2527,7 +2489,11 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
 	params.stats_id = vf->abs_vf_id + 0x10;
-	params.sb = req->hw_sb;
+
+	/* Since IGU index is passed via sb_info, construct a dummy one */
+	OSAL_MEM_ZERO(&sb_dummy, sizeof(sb_dummy));
+	sb_dummy.igu_sb_id = req->hw_sb;
+	params.p_sb = &sb_dummy;
 	params.sb_idx = req->sb_index;
 
 	OSAL_MEM_ZERO(&vf_params, sizeof(vf_params));
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index e4e2517..0a26141 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -652,8 +652,8 @@ enum _ecore_status_t
 	req->cqe_pbl_addr = cqe_pbl_addr;
 	req->cqe_pbl_size = cqe_pbl_size;
 	req->rxq_addr = bd_chain_phys_addr;
-	req->hw_sb = p_cid->rel.sb;
-	req->sb_index = p_cid->rel.sb_idx;
+	req->hw_sb = p_cid->sb_igu_id;
+	req->sb_index = p_cid->sb_idx;
 	req->bd_max_bytes = bd_max_bytes;
 	req->stat_id = -1; /* Keep initialized, for future compatibility */
 
@@ -774,8 +774,8 @@ enum _ecore_status_t
 	/* Tx */
 	req->pbl_addr = pbl_addr;
 	req->pbl_size = pbl_size;
-	req->hw_sb = p_cid->rel.sb;
-	req->sb_index = p_cid->rel.sb_idx;
+	req->hw_sb = p_cid->sb_igu_id;
+	req->sb_index = p_cid->sb_idx;
 
 	ecore_vf_pf_add_qid(p_hwfn, p_cid);
 
@@ -930,9 +930,12 @@ enum _ecore_status_t
 	req->only_untagged = only_untagged;
 
 	/* status blocks */
-	for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; i++)
-		if (p_hwfn->sbs_info[i])
-			req->sb_addr[i] = p_hwfn->sbs_info[i]->sb_phys;
+	for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; i++) {
+		struct ecore_sb_info *p_sb = p_hwfn->vf_iov_info->sbs_info[i];
+
+		if (p_sb)
+			req->sb_addr[i] = p_sb->sb_phys;
+	}
 
 	/* add list termination tlv */
 	ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -1501,6 +1504,24 @@ u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 	return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id;
 }
 
+void ecore_vf_set_sb_info(struct ecore_hwfn *p_hwfn,
+			  u16 sb_id, struct ecore_sb_info *p_sb)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+
+	if (!p_iov) {
+		DP_NOTICE(p_hwfn, true, "vf_sriov_info isn't initialized\n");
+		return;
+	}
+
+	if (sb_id >= PFVF_MAX_SBS_PER_VF) {
+		DP_NOTICE(p_hwfn, true, "Can't configure SB %04x\n", sb_id);
+		return;
+	}
+
+	p_iov->sbs_info[sb_id] = p_sb;
+}
+
 enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
 					    u8 *p_change)
 {
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 4096d5d..d9ee96b 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -41,6 +41,14 @@ struct ecore_vf_iov {
 	 * this has to be propagated as it affects the fastpath.
 	 */
 	bool b_pre_fp_hsi;
+
+	/* Current day VFs are passing the SBs physical address on vport
+	 * start, and as they lack an IGU mapping they need to store the
+	 * addresses of previously registered SBs.
+	 * Even if we were to change configuration flow, due to backward
+	 * compatibility [with older PFs] we'd still need to store these.
+	 */
+	struct ecore_sb_info *sbs_info[PFVF_MAX_SBS_PER_VF];
 };
 
 
@@ -205,6 +213,15 @@ enum _ecore_status_t
 u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id);
 
+/**
+ * @brief Stores [or removes] a configured sb_info.
+ *
+ * @param p_hwfn
+ * @param sb_id - zero-based SB index [for fastpath]
+ * @param p_sb - may be OSAL_NULL [during removal].
+ */
+void ecore_vf_set_sb_info(struct ecore_hwfn *p_hwfn,
+			  u16 sb_id, struct ecore_sb_info *p_sb);
 
 /**
  * @brief ecore_vf_pf_vport_start - perform vport start for VF.
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 5c3613c..8ce89e5 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -555,7 +555,7 @@ void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
 		params.queue_id = rx_queue_id / edev->num_hwfns;
 		params.vport_id = 0;
 		params.stats_id = params.vport_id;
-		params.sb = fp->sb_info->igu_sb_id;
+		params.p_sb = fp->sb_info;
 		DP_INFO(edev, "rxq %u igu_sb_id 0x%x\n",
 				fp->rxq->queue_id, fp->sb_info->igu_sb_id);
 		params.sb_idx = RX_PI;
@@ -614,7 +614,7 @@ void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
 		params.queue_id = tx_queue_id / edev->num_hwfns;
 		params.vport_id = 0;
 		params.stats_id = params.vport_id;
-		params.sb = fp->sb_info->igu_sb_id;
+		params.p_sb = fp->sb_info;
 		DP_INFO(edev, "txq %u igu_sb_id 0x%x\n",
 				fp->txq->queue_id, fp->sb_info->igu_sb_id);
 		params.sb_idx = TX_PI(0); /* tc = 0 */
-- 
1.7.10.3

* [PATCH 08/53] net/qede/base: add API to configure coalescing for VF queues
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add API for PF to configure coalescing for VF queues.
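
A minimal usage sketch (illustrative, not part of the patch): the vf_id
and qid values here are assumptions, and the API acquires and releases
its own PTT internally:

	enum _ecore_status_t rc;

	/* Set 64us Rx / 128us Tx coalescing on queue 0 of VF 3 */
	rc = ecore_iov_pf_configure_vf_queue_coalesce(p_hwfn,
						      64 /* rx_coal [usec] */,
						      128 /* tx_coal [usec] */,
						      3 /* vf_id */,
						      0 /* qid */);
	if (rc != ECORE_SUCCESS)
		DP_NOTICE(p_hwfn, false,
			  "Failed to set coalescing for VF 3 queue 0\n");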

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |   21 ++++++++
 drivers/net/qede/base/ecore_sriov.c   |   89 +++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_sriov.h   |    3 ++
 3 files changed, 113 insertions(+)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 50cb3f2..4fce6b6 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -693,9 +693,30 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
  * @return - rate in Mbps
  */
 int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
+
 #endif
 
 /**
+ * @brief ecore_pf_configure_vf_queue_coalesce - PF configure coalesce
+ *    parameters of VFs for Rx and Tx queue.
+ *    While the API allows setting coalescing per-qid, all queues sharing a SB
+ *    should be in the same range [i.e., either 0-0x7f, 0x80-0xff or
+ *    0x100-0x1ff]; otherwise the configuration would break.
+ *
+ * @param p_hwfn
+ * @param rx_coal - Rx coalesce value in microseconds.
+ * @param tx_coal - Tx coalesce value in microseconds.
+ * @param vf_id
+ * @param qid
+ *
+ * @return enum _ecore_status_t
+ **/
+enum _ecore_status_t
+ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					 u16 rx_coal, u16 tx_coal,
+					 u16 vf_id, u16 qid);
+
+/**
  * @brief - Given a VF index, return index of next [including that] active VF.
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 1ec6451..3f500d3 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3475,6 +3475,7 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
 			goto out;
 		}
+		vf->rx_coal = rx_coal;
 	}
 
 	/* TODO - in future, it might be possible to pass this in a per-cid
@@ -3499,6 +3500,7 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 				goto out;
 			}
 		}
+		vf->tx_coal = tx_coal;
 	}
 
 	status = PFVF_STATUS_SUCCESS;
@@ -3507,6 +3509,93 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			       sizeof(struct pfvf_def_resp_tlv), status);
 }
 
+enum _ecore_status_t
+ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
+					 u16 rx_coal, u16 tx_coal,
+					 u16 vf_id, u16 qid)
+{
+	struct ecore_queue_cid *p_cid;
+	struct ecore_vf_info *vf;
+	struct ecore_ptt *p_ptt;
+	int i, rc = 0;
+
+	if (!ecore_iov_is_valid_vfid(p_hwfn, vf_id, true, true)) {
+		DP_NOTICE(p_hwfn, true,
+			  "VF[%d] - Can not set coalescing: VF is not active\n",
+			  vf_id);
+		return ECORE_INVAL;
+	}
+
+	vf = &p_hwfn->pf_iov_info->vfs_array[vf_id];
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    rx_coal) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
+				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
+	    tx_coal) {
+		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
+		       vf->abs_vf_id, qid);
+		goto out;
+	}
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
+		   vf->abs_vf_id, rx_coal, tx_coal, qid);
+
+	if (rx_coal) {
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
+						      &vf->vf_queues[qid]);
+
+		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
+		if (rc != ECORE_SUCCESS) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
+				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
+			goto out;
+		}
+		vf->rx_coal = rx_coal;
+	}
+
+	/* TODO - in future, it might be possible to pass this in a per-cid
+	 * granularity. For now, do this for all Tx queues.
+	 */
+	if (tx_coal) {
+		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
+
+		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
+			if (p_queue->cids[i].p_cid == OSAL_NULL)
+				continue;
+
+			if (!p_queue->cids[i].b_is_tx)
+				continue;
+
+			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
+						    p_queue->cids[i].p_cid);
+			if (rc != ECORE_SUCCESS) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+					   "VF[%d]: Unable to set tx queue coalesce\n",
+					   vf->abs_vf_id);
+				goto out;
+			}
+		}
+		vf->tx_coal = tx_coal;
+	}
+
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 1750f0d..ade74c9 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -133,6 +133,9 @@ struct ecore_vf_info {
 	u8			num_rxqs;
 	u8			num_txqs;
 
+	u16			rx_coal;
+	u16			tx_coal;
+
 	u8			num_sbs;
 
 	u8			num_mac_filters;
-- 
1.7.10.3

* [PATCH 09/53] net/qede/base: restrict cache line size register padding
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Restrict the pad-to-cache-line-size register configuration: the padding
programmed is capped at the device's maximum write burst size (wr_mbs)
read from PSWRQ2_REG_WR_MBS0.
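
As a worked example (illustrative, not part of the patch): on an x86
host RTE_CACHE_LINE_SIZE is typically 64, so with a maximum write burst
size of 128 bytes the padding is clamped to min(64, 128) = 64 bytes,
which the switch below encodes as register value 1:

	/* Sketch of the clamping done by ecore_init_cache_line_size() */
	u32 wr_mbs = 128;	/* PSWRQ2_REG_WR_MBS0 read as 0 */
	u32 cls = OSAL_MIN_T(u32, OSAL_CACHE_LINE_SIZE, wr_mbs); /* 64 */
	u32 val = (cls == 32) ? 0 : (cls == 64) ? 1 :
		  (cls == 128) ? 2 : 3;	/* -> 1 */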

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h  |    1 +
 drivers/net/qede/base/ecore_dev.c |   60 +++++++++++++++++++++++++++++++++++++
 drivers/net/qede/base/reg_addr.h  |    3 ++
 3 files changed, 64 insertions(+)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 6148982..bd07724 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -414,6 +414,7 @@ void qede_hw_err_notify(struct ecore_hwfn *p_hwfn,
 #define OSAL_REG_ADDR(_p_hwfn, _offset) \
 		(void *)((u8 *)(uintptr_t)(_p_hwfn->regview) + (_offset))
 #define OSAL_PAGE_SIZE 4096
+#define OSAL_CACHE_LINE_SIZE RTE_CACHE_LINE_SIZE
 #define OSAL_IOMEM volatile
 #define OSAL_UNLIKELY(x)  __builtin_expect(!!(x), 0)
 #define OSAL_MIN_T(type, __min1, __min2)	\
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 40b544b..73949e8 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1258,6 +1258,61 @@ static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 	}
 }
 
+static void ecore_init_cache_line_size(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt)
+{
+	u32 val, wr_mbs, cache_line_size;
+
+	val = ecore_rd(p_hwfn, p_ptt, PSWRQ2_REG_WR_MBS0);
+	switch (val) {
+	case 0:
+		wr_mbs = 128;
+		break;
+	case 1:
+		wr_mbs = 256;
+		break;
+	case 2:
+		wr_mbs = 512;
+		break;
+	default:
+		DP_INFO(p_hwfn,
+			"Unexpected value of PSWRQ2_REG_WR_MBS0 [0x%x]. Avoid configuring PGLUE_B_REG_CACHE_LINE_SIZE.\n",
+			val);
+		return;
+	}
+
+	cache_line_size = OSAL_MIN_T(u32, OSAL_CACHE_LINE_SIZE, wr_mbs);
+	switch (cache_line_size) {
+	case 32:
+		val = 0;
+		break;
+	case 64:
+		val = 1;
+		break;
+	case 128:
+		val = 2;
+		break;
+	case 256:
+		val = 3;
+		break;
+	default:
+		DP_INFO(p_hwfn,
+			"Unexpected value of cache line size [0x%x]. Avoid configuring PGLUE_B_REG_CACHE_LINE_SIZE.\n",
+			cache_line_size);
+	}
+
+	if (wr_mbs < OSAL_CACHE_LINE_SIZE)
+		DP_INFO(p_hwfn,
+			"The cache line size for padding is suboptimal for performance [OS cache line size 0x%x, wr mbs 0x%x]\n",
+			OSAL_CACHE_LINE_SIZE, wr_mbs);
+
+	STORE_RT_REG(p_hwfn, PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET, val);
+	if (val > 0) {
+		STORE_RT_REG(p_hwfn, PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET, val);
+		STORE_RT_REG(p_hwfn, PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET, val);
+	}
+}
+
 static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 int hw_mode)
@@ -1298,6 +1353,8 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
+	ecore_init_cache_line_size(p_hwfn, p_ptt);
+
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_ENGINE, ANY_PHASE_ID, hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -1686,6 +1743,9 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 			    hw_mode);
 	if (rc != ECORE_SUCCESS)
 		return rc;
+
+	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_WRITE_PAD_ENABLE, 0);
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
 		return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 6028654..116fe78 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1205,3 +1205,6 @@
 #define NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 0x501a80UL
 #define NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 0x501ac0UL
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 0x501b00UL
+
+#define PSWRQ2_REG_WR_MBS0 0x240400UL
+#define PGLUE_B_REG_MASTER_WRITE_PAD_ENABLE 0x2aae30UL
-- 
1.7.10.3

* [PATCH 10/53] net/qede/base: fix to use a passed ptt handle
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev, stable

Fix ecore_configure_vp_wfq_on_link_change() to use a provided ptt
[PF translation table] handle instead of directly using p_dpc_ptt.
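
With the new signature, a caller that does not already hold a PTT would
acquire one explicitly (an illustrative sketch; min_pf_rate is a
placeholder value):

	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);

	if (p_ptt) {
		ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev, p_ptt,
						      min_pf_rate);
		ecore_ptt_release(p_hwfn, p_ptt);
	}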

Fixes: ec94dbc57362 ("qede: add base driver")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h     |    1 +
 drivers/net/qede/base/ecore_dev.c |    4 ++--
 drivers/net/qede/base/ecore_mcp.c |    2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 64a3416..a3edcb0 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -832,6 +832,7 @@ struct ecore_dev {
 
 int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate);
 void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
+					   struct ecore_ptt *p_ptt,
 					   u32 min_pf_rate);
 
 int ecore_configure_pf_max_bandwidth(struct ecore_dev *p_dev, u8 max_bw);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 73949e8..01e20ac 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -4903,6 +4903,7 @@ int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate)
 
 /* API to configure WFQ from mcp link change */
 void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
+					   struct ecore_ptt *p_ptt,
 					   u32 min_pf_rate)
 {
 	int i;
@@ -4917,8 +4918,7 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
-		__ecore_configure_vp_wfq_on_link_change(p_hwfn,
-							p_hwfn->p_dpc_ptt,
+		__ecore_configure_vp_wfq_on_link_change(p_hwfn, p_ptt,
 							min_pf_rate);
 	}
 }
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 88c5ceb..cdb5cf2 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1116,7 +1116,7 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	/* Mintz bandwidth configuration */
 	__ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt,
 					   p_link, min_bw);
-	ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev,
+	ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev, p_ptt,
 					      p_link->min_pf_rate);
 
 	p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
-- 
1.7.10.3

* [PATCH 11/53] net/qede/base: add a sanity check
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a sanity check that the offset being used to access the runtime array
is not greater than or equal to RUNTIME_ARRAY_SIZE.
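
A short sketch of the new behavior (illustrative values):

	/* An offset at or beyond RUNTIME_ARRAY_SIZE is now rejected with
	 * a DP_ERR() log instead of writing past rt_data.init_val[] and
	 * rt_data.b_valid[].
	 */
	ecore_init_store_rt_reg(p_hwfn, RUNTIME_ARRAY_SIZE, 0x1);     /* dropped */
	ecore_init_store_rt_reg(p_hwfn, RUNTIME_ARRAY_SIZE - 1, 0x1); /* stored */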

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_init_ops.c |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index b907a95..80a52ca 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -40,6 +40,13 @@ void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn)
 
 void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val)
 {
+	if (rt_offset >= RUNTIME_ARRAY_SIZE) {
+		DP_ERR(p_hwfn,
+		       "Avoid storing %u in rt_data at index %u since RUNTIME_ARRAY_SIZE is %u!\n",
+		       val, rt_offset, RUNTIME_ARRAY_SIZE);
+		return;
+	}
+
 	p_hwfn->rt_data.init_val[rt_offset] = val;
 	p_hwfn->rt_data.b_valid[rt_offset] = true;
 }
@@ -49,6 +56,14 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
 {
 	osal_size_t i;
 
+	if ((rt_offset + size - 1) >= RUNTIME_ARRAY_SIZE) {
+		DP_ERR(p_hwfn,
+		       "Avoid storing values in rt_data at indices %u-%u since RUNTIME_ARRAY_SIZE is %u!\n",
+		       rt_offset, (u32)(rt_offset + size - 1),
+		       RUNTIME_ARRAY_SIZE);
+		return;
+	}
+
 	for (i = 0; i < size / sizeof(u32); i++) {
 		p_hwfn->rt_data.init_val[rt_offset + i] = p_val[i];
 		p_hwfn->rt_data.b_valid[rt_offset + i] = true;
-- 
1.7.10.3

* [PATCH 12/53] net/qede/base: add SmartAN support
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add the SmartAN feature, which automatically detects peer switch
capabilities and relieves users from fumbling with adapter and switch
configuration. Add a new mailbox command,
DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT, and a new SmartLinQ config method
using NVM cfg option 239.
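
From the driver's perspective the negotiation is a three-step handshake
(a sketch assembled from the hunks below; error handling trimmed):

	/* 1. Early init: learn which features the MFW supports */
	ecore_mcp_get_capabilities(p_hwfn, p_ptt);

	/* 2. Within the LOAD_REQ context: advertise driver support */
	ecore_mcp_set_capabilities(p_hwfn, p_ptt);

	/* 3. Later, e.g. when filling qed_dev_info: query the result */
	dev_info->smart_an = ecore_mcp_is_smart_an_supported(p_hwfn);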

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   16 +++++++------
 drivers/net/qede/base/ecore_mcp.c     |   42 ++++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_mcp.h     |   22 +++++++++++++++++
 drivers/net/qede/base/ecore_mcp_api.h |    9 +++++++
 drivers/net/qede/base/mcp_public.h    |   22 ++++++++++-------
 drivers/net/qede/base/nvm_cfg.h       |   27 +++++++++++----------
 drivers/net/qede/qede_if.h            |    2 ++
 drivers/net/qede/qede_main.c          |    3 +++
 8 files changed, 112 insertions(+), 31 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 01e20ac..b206b44 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2040,6 +2040,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			   "Load request was sent. Load code: 0x%x\n",
 			   load_code);
 
+		ecore_mcp_set_capabilities(p_hwfn, p_hwfn->p_main_ptt);
+
 		/* CQ75580:
 		 * When coming back from hiberbate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
@@ -2943,6 +2945,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 {
 	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
+	struct ecore_mcp_link_capabilities *p_caps;
 	struct ecore_mcp_link_params *link;
 	enum _ecore_status_t rc;
 
@@ -3032,6 +3035,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 
 	/* Read default link configuration */
 	link = &p_hwfn->mcp_info->link_input;
+	p_caps = &p_hwfn->mcp_info->link_capabilities;
 	port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
 	    OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
 	link_temp = ecore_rd(p_hwfn, p_ptt,
@@ -3039,9 +3043,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			     OFFSETOF(struct nvm_cfg1_port, speed_cap_mask));
 	link_temp &= NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK;
 	link->speed.advertised_speeds = link_temp;
-
-	link_temp = link->speed.advertised_speeds;
-	p_hwfn->mcp_info->link_capabilities.speed_capabilities = link_temp;
+	p_caps->speed_capabilities = link->speed.advertised_speeds;
 
 	link_temp = ecore_rd(p_hwfn, p_ptt,
 			     port_cfg_addr +
@@ -3073,10 +3075,8 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 		DP_NOTICE(p_hwfn, true, "Unknown Speed in 0x%08x\n", link_temp);
 	}
 
-	p_hwfn->mcp_info->link_capabilities.default_speed =
-	    link->speed.forced_speed;
-	p_hwfn->mcp_info->link_capabilities.default_speed_autoneg =
-	    link->speed.autoneg;
+	p_caps->default_speed = link->speed.forced_speed;
+	p_caps->default_speed_autoneg = link->speed.autoneg;
 
 	link_temp &= NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK;
 	link_temp >>= NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET;
@@ -3326,6 +3326,8 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	 */
 	ecore_hw_info_port_num(p_hwfn, p_ptt);
 
+	ecore_mcp_get_capabilities(p_hwfn, p_ptt);
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
 #endif
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cdb5cf2..1e616ad 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -225,7 +225,11 @@ static enum _ecore_status_t ecore_mcp_mb_lock(struct ecore_hwfn *p_hwfn,
 	if (cmd == DRV_MSG_CODE_LOAD_DONE || cmd == DRV_MSG_CODE_UNLOAD_DONE)
 		p_hwfn->mcp_info->block_mb_sending = false;
 
-	if (p_hwfn->mcp_info->block_mb_sending) {
+	/* There's at least a single command that is sent by ecore during the
+	 * load sequence [expectation of MFW].
+	 */
+	if ((p_hwfn->mcp_info->block_mb_sending) &&
+	    (cmd != DRV_MSG_CODE_FEATURE_SUPPORT)) {
 		DP_NOTICE(p_hwfn, false,
 			  "Trying to send a MFW mailbox command [0x%x]"
 			  " in parallel to [UN]LOAD_REQ. Aborting.\n",
@@ -1202,8 +1206,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-			   "Configuring Link: Speed 0x%08x, Pause 0x%08x,"
-			   " adv_speed 0x%08x, loopback 0x%08x\n",
+			   "Configuring Link: Speed 0x%08x, Pause 0x%08x, adv_speed 0x%08x, loopback 0x%08x\n",
 			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
 			   phy_cfg.loopback_mode);
 	else
@@ -3250,3 +3253,36 @@ enum _ecore_status_t
 
 	return ECORE_SUCCESS;
 }
+
+bool ecore_mcp_is_smart_an_supported(struct ecore_hwfn *p_hwfn)
+{
+	return !!(p_hwfn->mcp_info->capabilities &
+		  FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ);
+}
+
+enum _ecore_status_t ecore_mcp_get_capabilities(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt)
+{
+	u32 mcp_resp;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT,
+			   0, &mcp_resp, &p_hwfn->mcp_info->capabilities);
+	if (rc == ECORE_SUCCESS)
+		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_PROBE),
+			   "MFW supported features: %08x\n",
+			   p_hwfn->mcp_info->capabilities);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt)
+{
+	u32 mcp_resp, mcp_param, features;
+
+	features = DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ;
+
+	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_FEATURE_SUPPORT,
+			     features, &mcp_resp, &mcp_param);
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 77fb5a3..5d2f4e5 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -60,6 +60,9 @@ struct ecore_mcp_info {
 	u8 *mfw_mb_shadow;
 	u16 mfw_mb_length;
 	u16 mcp_hist;
+
+	/* Capabilities negotiated with the MFW */
+	u32 capabilities;
 };
 
 struct ecore_mcp_mb_params {
@@ -493,4 +496,23 @@ enum _ecore_status_t
 ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		      struct ecore_resc_unlock_params *p_params);
 
+/**
+ * @brief Learn of supported MFW features; to be done during early init.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t ecore_mcp_get_capabilities(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Inform MFW of the set of features supported by the driver. Should
+ * be done inside the context of the LOAD_REQ.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt);
+
 #endif /* __ECORE_MCP_H__ */
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index abc190c..86fa0cb 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -1134,4 +1134,13 @@ enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+
+/**
+ * @brief - Return whether the management firmware supports SmartAN
+ *
+ * @param p_hwfn
+ *
+ * @return bool - true iff feature is supported.
+ */
+bool ecore_mcp_is_smart_an_supported(struct ecore_hwfn *p_hwfn);
 #endif
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index f41d4e6..41711cc 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -99,7 +99,7 @@ struct eth_phy_cfg {
 #define EEE_TX_TIMER_USEC_LATENCY_TIME		(0x6000)
 
 	u32 link_modes; /* Additional link modes */
-#define LINK_MODE_SMARTLINQ_ENABLE		0x1
+#define LINK_MODE_SMARTLINQ_ENABLE		0x1  /* XXX Deprecate */
 };
 
 struct port_mf_cfg {
@@ -688,6 +688,7 @@ struct public_port {
 #define LFA_FLOW_CTRL_MISMATCH				(1 << 4)
 #define LFA_ADV_SPEED_MISMATCH				(1 << 5)
 #define LFA_EEE_MISMATCH				(1 << 6)
+#define LFA_LINK_MODES_MISMATCH			(1 << 7)
 #define LINK_FLAP_AVOIDANCE_COUNT_OFFSET	8
 #define LINK_FLAP_AVOIDANCE_COUNT_MASK		0x0000ff00
 #define LINK_FLAP_COUNT_OFFSET			16
@@ -1364,10 +1365,10 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
-/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_*,
- * return FW_MB_PARAM_FEATURE_SUPPORT_*
- */
+/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_* */
 #define DRV_MSG_CODE_FEATURE_SUPPORT            0x00300000
+/* return FW_MB_PARAM_FEATURE_SUPPORT_*  */
+#define DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT	0x00310000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
 
@@ -1516,10 +1517,14 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT      8
 #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK       0x0000FF00
 
-/* driver supports SmartLinQ */
-#define DRV_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ  0x00000001
-/* driver support EEE */
-#define DRV_MB_PARAM_FEATURE_SUPPORT_EEE        0x00000002
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_MASK      0x0000FFFF
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SHIFT     0
+/* driver supports SmartLinQ parameter */
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ 0x00000001
+/* driver supports EEE parameter */
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE       0x00000002
+#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_MASK      0xFFFF0000
+#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_SHIFT     16
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
@@ -1636,6 +1641,7 @@ struct public_drv_mb {
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT		0
 
+/* get MFW feature support response */
 /* MFW supports SmartLinQ */
 #define FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ   0x00000001
 /* MFW supports EEE */
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index ccd9286..ed024f2 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -1165,7 +1165,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070
 		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4
 		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG 0x1
@@ -1181,7 +1180,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800
 		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11
 		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1
@@ -1213,6 +1211,14 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
 		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
 		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_MASK 0x00800000
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_OFFSET 23
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_DISABLED 0x0
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_ENABLED 0x1
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_MASK 0x01000000
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET 24
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED 0x0
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED 0x1
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1291,10 +1297,15 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK \
 			0x00E00000
 		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_MASK \
+			0x01000000
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_OFFSET 24
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_DISABLED \
+			0x0
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_ENABLED 0x1
 	u32 mba_cfg2; /* 0x2C */
 		#define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_RESERVED65_OFFSET 0
@@ -1457,7 +1468,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_MASK 0x000000F0
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_OFFSET 4
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG 0x0
@@ -1468,7 +1478,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_SMARTLINQ 0x8
 	/*  This field defines the board technology
 	 * (backpane,transceiver,external PHY)
 	*/
@@ -1539,7 +1548,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_MASK 0x000000F0
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_OFFSET 4
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG 0x0
@@ -1550,7 +1558,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_SMARTLINQ 0x8
 	/*  This field defines the board technology
 	 * (backpane,transceiver,external PHY)
 	*/
@@ -1621,7 +1628,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_MASK 0x000000F0
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_OFFSET 4
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG 0x0
@@ -1632,7 +1638,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_SMARTLINQ 0x8
 	/*  This field defines the board technology
 	 * (backpane,transceiver,external PHY)
 	*/
@@ -1705,7 +1710,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_MASK 0x000000F0
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_OFFSET 4
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG 0x0
@@ -1716,7 +1720,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_SMARTLINQ 0x8
 	/*  This field defines the board technology
 	 * (backpane,transceiver,external PHY)
 	*/
@@ -1784,7 +1787,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_SMARTLINQ 0x8
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_MASK 0x000000F0
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_OFFSET 4
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG 0x0
@@ -1795,7 +1797,6 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G 0x5
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G 0x6
 		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_SMARTLINQ 0x8
 	/*  This field defines the board technology
 	 * (backpane,transceiver,external PHY)
 	*/
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 8e0c999..42560d5 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -44,6 +44,8 @@ struct qed_dev_info {
 	bool tx_switching;
 	u16 mtu;
 
+	bool smart_an;
+
 	/* Out param for qede */
 	bool vxlan_enable;
 	bool gre_enable;
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 2447ffb..42b556f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -335,6 +335,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 static int
 qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(edev);
 	struct ecore_ptt *ptt = NULL;
 	struct ecore_tunnel_info *tun = &edev->tunnel;
 
@@ -371,6 +372,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 		dev_info->mf_mode = edev->mf_mode;
 		dev_info->tx_switching = false;
 
+		dev_info->smart_an = ecore_mcp_is_smart_an_supported(p_hwfn);
+
 		ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
 		if (ptt) {
 			ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
-- 
1.7.10.3


* [PATCH 13/53] net/qede/base: alter driver's force load behavior
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (11 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 12/53] net/qede/base: add SmartAN support Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 14/53] net/qede/base: add mdump sub-commands Rasesh Mody
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

 - Add an option to override the default force load behavior.
 - The PMD sets the override force load parameter to
   ECORE_OVERRIDE_FORCE_LOAD_ALWAYS (see the sketch below).
 - Modify the printout emitted when a force load is required to include
   both the loading and the existing driver/firmware values.
 - Drop the 'default' cases when switching over enums whose values are
   all covered.
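
A minimal client-side sketch of the new knob, mirroring the qede_main.c
hunk below; the surrounding slowpath-start context is assumed:

	/* Opt in to forcing a load regardless of the role of the
	 * currently loaded driver. Leaving p_drv_load_params as
	 * OSAL_NULL keeps the zeroed default, i.e.
	 * ECORE_OVERRIDE_FORCE_LOAD_NONE.
	 */
	struct ecore_drv_load_params drv_load_params;
	struct ecore_hw_init_params hw_init_params;

	memset(&hw_init_params, 0, sizeof(hw_init_params));
	memset(&drv_load_params, 0, sizeof(drv_load_params));
	drv_load_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
	drv_load_params.override_force_load =
		ECORE_OVERRIDE_FORCE_LOAD_ALWAYS;
	hw_init_params.p_drv_load_params = &drv_load_params;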

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   32 +++++++++---
 drivers/net/qede/base/ecore_dev_api.h |   44 ++++++++++------
 drivers/net/qede/base/ecore_mcp.c     |   93 ++++++++++++++++-----------------
 drivers/net/qede/base/ecore_mcp.h     |    5 ++
 drivers/net/qede/qede_main.c          |   14 +++--
 5 files changed, 114 insertions(+), 74 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index b206b44..938834b 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1980,6 +1980,30 @@ static void ecore_pglueb_clear_err(struct ecore_hwfn *p_hwfn,
 		 1 << p_hwfn->abs_pf_id);
 }
 
+static void
+ecore_fill_load_req_params(struct ecore_load_req_params *p_load_req,
+			   struct ecore_drv_load_params *p_drv_load)
+{
+	/* Make sure that if ecore-client didn't provide inputs, all the
+	 * expected defaults are indeed zero.
+	 */
+	OSAL_BUILD_BUG_ON(ECORE_DRV_ROLE_OS != 0);
+	OSAL_BUILD_BUG_ON(ECORE_LOAD_REQ_LOCK_TO_DEFAULT != 0);
+	OSAL_BUILD_BUG_ON(ECORE_OVERRIDE_FORCE_LOAD_NONE != 0);
+
+	OSAL_MEM_ZERO(p_load_req, sizeof(*p_load_req));
+
+	if (p_drv_load != OSAL_NULL) {
+		p_load_req->drv_role = p_drv_load->is_crash_kernel ?
+				       ECORE_DRV_ROLE_KDUMP :
+				       ECORE_DRV_ROLE_OS;
+		p_load_req->timeout_val = p_drv_load->mfw_timeout_val;
+		p_load_req->avoid_eng_reset = p_drv_load->avoid_eng_reset;
+		p_load_req->override_force_load =
+			p_drv_load->override_force_load;
+	}
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
@@ -2021,12 +2045,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		OSAL_MEM_ZERO(&load_req_params, sizeof(load_req_params));
-		load_req_params.drv_role = p_params->is_crash_kernel ?
-					   ECORE_DRV_ROLE_KDUMP :
-					   ECORE_DRV_ROLE_OS;
-		load_req_params.timeout_val = p_params->mfw_timeout_val;
-		load_req_params.avoid_eng_reset = p_params->avoid_eng_reset;
+		ecore_fill_load_req_params(&load_req_params,
+					   p_params->p_drv_load_params);
 		rc = ecore_mcp_load_req(p_hwfn, p_hwfn->p_main_ptt,
 					&load_req_params);
 		if (rc != ECORE_SUCCESS) {
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 9126cf9..99a9c49 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -57,22 +57,13 @@ void ecore_init_dp(struct ecore_dev *p_dev,
  */
 void ecore_resc_setup(struct ecore_dev *p_dev);
 
-struct ecore_hw_init_params {
-	/* Tunnelling parameters */
-	struct ecore_tunnel_info *p_tunn;
-
-	bool b_hw_start;
-
-	/* Interrupt mode [msix, inta, etc.] to use */
-	enum ecore_int_mode int_mode;
-
-	/* NPAR tx switching to be used for vports configured for tx-switching
-	 */
-	bool allow_npar_tx_switch;
-
-	/* Binary fw data pointer in binary fw file */
-	const u8 *bin_fw_data;
+enum ecore_override_force_load {
+	ECORE_OVERRIDE_FORCE_LOAD_NONE,
+	ECORE_OVERRIDE_FORCE_LOAD_ALWAYS,
+	ECORE_OVERRIDE_FORCE_LOAD_NEVER,
+};
 
+struct ecore_drv_load_params {
 	/* Indicates whether the driver is running over a crash kernel.
 	 * As part of the load request, this will be used for providing the
 	 * driver role to the MFW.
@@ -90,6 +81,29 @@ struct ecore_hw_init_params {
 
 	/* Avoid engine reset when first PF loads on it */
 	bool avoid_eng_reset;
+
+	/* Allow overriding the default force load behavior */
+	enum ecore_override_force_load override_force_load;
+};
+
+struct ecore_hw_init_params {
+	/* Tunneling parameters */
+	struct ecore_tunnel_info *p_tunn;
+
+	bool b_hw_start;
+
+	/* Interrupt mode [msix, inta, etc.] to use */
+	enum ecore_int_mode int_mode;
+
+	/* NPAR tx switching to be used for vports configured for tx-switching
+	 */
+	bool allow_npar_tx_switch;
+
+	/* Binary fw data pointer in binary fw file */
+	const u8 *bin_fw_data;
+
+	/* Driver load parameters */
+	struct ecore_drv_load_params *p_drv_load_params;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 1e616ad..868b075 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -538,11 +538,28 @@ static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static bool ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role)
+static bool
+ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role,
+			 enum ecore_override_force_load override_force_load)
 {
-	return (drv_role == DRV_ROLE_OS &&
-		exist_drv_role == DRV_ROLE_PREBOOT) ||
-	       (drv_role == DRV_ROLE_KDUMP && exist_drv_role == DRV_ROLE_OS);
+	bool can_force_load = false;
+
+	switch (override_force_load) {
+	case ECORE_OVERRIDE_FORCE_LOAD_ALWAYS:
+		can_force_load = true;
+		break;
+	case ECORE_OVERRIDE_FORCE_LOAD_NEVER:
+		can_force_load = false;
+		break;
+	default:
+		can_force_load = (drv_role == DRV_ROLE_OS &&
+				  exist_drv_role == DRV_ROLE_PREBOOT) ||
+				 (drv_role == DRV_ROLE_KDUMP &&
+				  exist_drv_role == DRV_ROLE_OS);
+		break;
+	}
+
+	return can_force_load;
 }
 
 static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
@@ -713,9 +730,9 @@ struct ecore_load_req_out_params {
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t eocre_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
-						   enum ecore_drv_role drv_role,
-						   u8 *p_mfw_drv_role)
+static void ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
+				   enum ecore_drv_role drv_role,
+				   u8 *p_mfw_drv_role)
 {
 	switch (drv_role) {
 	case ECORE_DRV_ROLE_OS:
@@ -724,12 +741,7 @@ static enum _ecore_status_t eocre_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
 	case ECORE_DRV_ROLE_KDUMP:
 		*p_mfw_drv_role = DRV_ROLE_KDUMP;
 		break;
-	default:
-		DP_ERR(p_hwfn, "Unexpected driver role %d\n", drv_role);
-		return ECORE_INVAL;
 	}
-
-	return ECORE_SUCCESS;
 }
 
 enum ecore_load_req_force {
@@ -738,10 +750,9 @@ enum ecore_load_req_force {
 	ECORE_LOAD_REQ_FORCE_ALL,
 };
 
-static enum _ecore_status_t
-ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
-			enum ecore_load_req_force force_cmd,
-			u8 *p_mfw_force_cmd)
+static void ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
+				    enum ecore_load_req_force force_cmd,
+				    u8 *p_mfw_force_cmd)
 {
 	switch (force_cmd) {
 	case ECORE_LOAD_REQ_FORCE_NONE:
@@ -753,12 +764,7 @@ enum ecore_load_req_force {
 	case ECORE_LOAD_REQ_FORCE_ALL:
 		*p_mfw_force_cmd = LOAD_REQ_FORCE_ALL;
 		break;
-	default:
-		DP_ERR(p_hwfn, "Unexpected force value %d\n", force_cmd);
-		return ECORE_INVAL;
 	}
-
-	return ECORE_SUCCESS;
 }
 
 enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
@@ -767,7 +773,7 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_load_req_out_params out_params;
 	struct ecore_load_req_in_params in_params;
-	u8 mfw_drv_role, mfw_force_cmd;
+	u8 mfw_drv_role = 0, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
 #ifndef ASIC_ONLY
@@ -782,17 +788,11 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	in_params.drv_ver_0 = ECORE_VERSION;
 	in_params.drv_ver_1 = ecore_get_config_bitmap();
 	in_params.fw_ver = STORM_FW_VERSION;
-	rc = eocre_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
+	ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
 	in_params.drv_role = mfw_drv_role;
 	in_params.timeout_val = p_params->timeout_val;
-	rc = ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
-				     &mfw_force_cmd);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
+	ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
+				&mfw_force_cmd);
 	in_params.force_cmd = mfw_force_cmd;
 	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
@@ -824,19 +824,20 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 		p_hwfn->mcp_info->block_mb_sending = false;
 
 		if (ecore_mcp_can_force_load(in_params.drv_role,
-					     out_params.exist_drv_role)) {
+					     out_params.exist_drv_role,
+					     p_params->override_force_load)) {
 			DP_INFO(p_hwfn,
-				"A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Sending a force load request.\n",
+				"A force load is required [{role, fw_ver, drv_ver}: loading={%d, 0x%08x, 0x%08x_%08x}, existing={%d, 0x%08x, 0x%08x_%08x}]\n",
+				in_params.drv_role, in_params.fw_ver,
+				in_params.drv_ver_0, in_params.drv_ver_1,
 				out_params.exist_drv_role,
 				out_params.exist_fw_ver,
 				out_params.exist_drv_ver_0,
 				out_params.exist_drv_ver_1);
 
-			rc = ecore_get_mfw_force_cmd(p_hwfn,
-						     ECORE_LOAD_REQ_FORCE_ALL,
-						     &mfw_force_cmd);
-			if (rc != ECORE_SUCCESS)
-				return rc;
+			ecore_get_mfw_force_cmd(p_hwfn,
+						ECORE_LOAD_REQ_FORCE_ALL,
+						&mfw_force_cmd);
 
 			in_params.force_cmd = mfw_force_cmd;
 			OSAL_MEM_ZERO(&out_params, sizeof(out_params));
@@ -846,7 +847,9 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 				return rc;
 		} else {
 			DP_NOTICE(p_hwfn, false,
-				  "A force load is required [existing: role %d, fw_ver 0x%08x, drv_ver 0x%08x_0x%08x]. Avoiding to prevent disruption of active PFs.\n",
+				  "A force load is required [{role, fw_ver, drv_ver}: loading={%d, 0x%08x, x%08x_0x%08x}, existing={%d, 0x%08x, 0x%08x_0x%08x}] - Avoid\n",
+				  in_params.drv_role, in_params.fw_ver,
+				  in_params.drv_ver_0, in_params.drv_ver_1,
 				  out_params.exist_drv_role,
 				  out_params.exist_fw_ver,
 				  out_params.exist_drv_ver_0,
@@ -877,19 +880,11 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 			return ECORE_INVAL;
 		}
 		break;
-	case FW_MSG_CODE_DRV_LOAD_REFUSED_PDA:
-	case FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG:
-	case FW_MSG_CODE_DRV_LOAD_REFUSED_HSI:
-	case FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT:
-		DP_NOTICE(p_hwfn, false,
-			  "MFW refused a load request [resp 0x%08x]. Aborting.\n",
-			  out_params.load_code);
-		return ECORE_BUSY;
 	default:
 		DP_NOTICE(p_hwfn, false,
-			  "Unexpected response to load request [resp 0x%08x]. Aborting.\n",
+			  "Unexpected refusal to load request [resp 0x%08x]. Aborting.\n",
 			  out_params.load_code);
-		break;
+		return ECORE_BUSY;
 	}
 
 	p_params->load_code = out_params.load_code;
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 5d2f4e5..9b6a9b4 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -13,6 +13,7 @@
 #include "mcp_public.h"
 #include "ecore.h"
 #include "ecore_mcp_api.h"
+#include "ecore_dev_api.h"
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
  * same pf_num may be used by two different hwfn
@@ -153,9 +154,13 @@ enum ecore_drv_role {
 };
 
 struct ecore_load_req_params {
+	/* Input params */
 	enum ecore_drv_role drv_role;
 	u8 timeout_val; /* 1..254, '0' - default value, '255' - no timeout */
 	bool avoid_eng_reset;
+	enum ecore_override_force_load override_force_load;
+
+	/* Output params */
 	u32 load_code;
 };
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 42b556f..9be8f80 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -221,10 +221,11 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 static int qed_slowpath_start(struct ecore_dev *edev,
 			      struct qed_slowpath_params *params)
 {
+	struct ecore_drv_load_params drv_load_params;
+	struct ecore_hw_init_params hw_init_params;
+	struct ecore_mcp_drv_version drv_version;
 	const uint8_t *data = NULL;
 	struct ecore_hwfn *hwfn;
-	struct ecore_mcp_drv_version drv_version;
-	struct ecore_hw_init_params hw_init_params;
 	struct ecore_ptt *p_ptt;
 	int rc;
 
@@ -280,8 +281,13 @@ static int qed_slowpath_start(struct ecore_dev *edev,
 	hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
 	hw_init_params.allow_npar_tx_switch = true;
 	hw_init_params.bin_fw_data = data;
-	hw_init_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
-	hw_init_params.avoid_eng_reset = false;
+
+	memset(&drv_load_params, 0, sizeof(drv_load_params));
+	drv_load_params.mfw_timeout_val = ECORE_LOAD_REQ_LOCK_TO_DEFAULT;
+	drv_load_params.avoid_eng_reset = false;
+	drv_load_params.override_force_load = ECORE_OVERRIDE_FORCE_LOAD_ALWAYS;
+	hw_init_params.p_drv_load_params = &drv_load_params;
+
 	rc = ecore_hw_init(edev, &hw_init_params);
 	if (rc) {
 		DP_ERR(edev, "ecore_hw_init failed\n");
-- 
1.7.10.3


* [PATCH 14/53] net/qede/base: add mdump sub-commands
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (12 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 13/53] net/qede/base: alter driver's force load behavior Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 15/53] net/qede/base: add EEE support Rasesh Mody
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

 - Add support to retain/clear crash dump data by introducing the mdump
   GET_RETAIN/CLR_RETAIN sub-commands and the new APIs
   ecore_mcp_mdump_get_retain() and ecore_mcp_mdump_clr_retain()
   (a usage sketch follows below).
 - Avoid checking for mdump logs and data when running on an emulator.
 - Fix the "deadbeaf" value returned when the PCIe status command read
   fails, to prevent false error detection.
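
A hedged sketch of how a caller might exercise the new APIs; the names
come from this patch, and the hwfn/ptt context is assumed to be the
usual ecore one:

	struct ecore_mdump_retain_data mdump_retain;
	enum _ecore_status_t rc;

	/* Read back the retained crash-dump data and, when valid,
	 * report it before clearing it for the next occurrence.
	 */
	rc = ecore_mcp_mdump_get_retain(p_hwfn, p_ptt, &mdump_retain);
	if (rc == ECORE_SUCCESS && mdump_retain.valid) {
		DP_NOTICE(p_hwfn, false,
			  "mdump retain: epoch 0x%08x, pf 0x%x, status 0x%08x\n",
			  mdump_retain.epoch, mdump_retain.pf,
			  mdump_retain.status);
		ecore_mcp_mdump_clr_retain(p_hwfn, p_ptt);
	}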

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   24 +++++++--
 drivers/net/qede/base/ecore_mcp.c     |   87 +++++++++++++++++++++++++++------
 drivers/net/qede/base/ecore_mcp.h     |   21 ++++++++
 drivers/net/qede/base/ecore_mcp_api.h |   11 +++++
 drivers/net/qede/base/mcp_public.h    |   10 ++++
 5 files changed, 132 insertions(+), 21 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 938834b..93c2306 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3564,6 +3564,7 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 			void OSAL_IOMEM * p_doorbells,
 			struct ecore_hw_prepare_params *p_params)
 {
+	struct ecore_mdump_retain_data mdump_retain;
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct ecore_mdump_info mdump_info;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -3631,24 +3632,37 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	/* Sending a mailbox to the MFW should be after ecore_get_hw_info() is
 	 * called, since among others it sets the ports number in an engine.
 	 */
-	if (p_params->initiate_pf_flr && p_hwfn == ECORE_LEADING_HWFN(p_dev) &&
+	if (p_params->initiate_pf_flr && IS_LEAD_HWFN(p_hwfn) &&
 	    !p_dev->recov_in_prog) {
 		rc = ecore_mcp_initiate_pf_flr(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS)
 			DP_NOTICE(p_hwfn, false, "Failed to initiate PF FLR\n");
 	}
 
-	/* Check if mdump logs are present and update the epoch value */
-	if (p_hwfn == ECORE_LEADING_HWFN(p_hwfn->p_dev)) {
+	/* Check if mdump logs/data are present and update the epoch value */
+	if (IS_LEAD_HWFN(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (!CHIP_REV_IS_EMUL(p_dev)) {
+#endif
 		rc = ecore_mcp_mdump_get_info(p_hwfn, p_hwfn->p_main_ptt,
 					      &mdump_info);
-		if (rc == ECORE_SUCCESS && mdump_info.num_of_logs > 0) {
+		if (rc == ECORE_SUCCESS && mdump_info.num_of_logs)
 			DP_NOTICE(p_hwfn, false,
 				  "* * * IMPORTANT - HW ERROR register dump captured by device * * *\n");
-		}
+
+		rc = ecore_mcp_mdump_get_retain(p_hwfn, p_hwfn->p_main_ptt,
+						&mdump_retain);
+		if (rc == ECORE_SUCCESS && mdump_retain.valid)
+			DP_NOTICE(p_hwfn, false,
+				  "mdump retained data: epoch 0x%08x, pf 0x%x, status 0x%08x\n",
+				  mdump_retain.epoch, mdump_retain.pf,
+				  mdump_retain.status);
 
 		ecore_mcp_mdump_set_values(p_hwfn, p_hwfn->p_main_ptt,
 					   p_params->epoch);
+#ifndef ASIC_ONLY
+		}
+#endif
 	}
 
 	/* Allocate the init RT array and initialize the init-ops engine */
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 868b075..462fcc9 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1434,11 +1434,16 @@ struct ecore_mdump_cmd_params {
 		return rc;
 
 	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+
 	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
-		DP_NOTICE(p_hwfn, false,
-			  "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
-			  p_mdump_cmd_params->cmd);
-		rc = ECORE_INVAL;
+		DP_INFO(p_hwfn,
+			"The mdump sub command is unsupported by the MFW [mdump_cmd 0x%x]\n",
+			p_mdump_cmd_params->cmd);
+		rc = ECORE_NOTIMPL;
+	} else if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The mdump command is not supported by the MFW\n");
+		rc = ECORE_NOTIMPL;
 	}
 
 	return rc;
@@ -1496,16 +1501,10 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
-		DP_INFO(p_hwfn,
-			"The mdump command is not supported by the MFW\n");
-		return ECORE_NOTIMPL;
-	}
-
 	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
-		DP_NOTICE(p_hwfn, false,
-			  "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
-			  mdump_cmd_params.mcp_resp);
+		DP_INFO(p_hwfn,
+			"Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
+			mdump_cmd_params.mcp_resp);
 		rc = ECORE_UNKNOWN_ERROR;
 	}
 
@@ -1566,17 +1565,71 @@ enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
+enum _ecore_status_t
+ecore_mcp_mdump_get_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   struct ecore_mdump_retain_data *p_mdump_retain)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+	struct mdump_retain_data_stc mfw_mdump_retain;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GET_RETAIN;
+	mdump_cmd_params.p_data_dst = &mfw_mdump_retain;
+	mdump_cmd_params.data_dst_size = sizeof(mfw_mdump_retain);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"Failed to get the mdump retained data [mcp_resp 0x%x]\n",
+			mdump_cmd_params.mcp_resp);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	p_mdump_retain->valid = mfw_mdump_retain.valid;
+	p_mdump_retain->epoch = mfw_mdump_retain.epoch;
+	p_mdump_retain->pf = mfw_mdump_retain.pf;
+	p_mdump_retain->status = mfw_mdump_retain.status;
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLR_RETAIN;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+}
+
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt)
 {
+	struct ecore_mdump_retain_data mdump_retain;
+	enum _ecore_status_t rc;
+
 	/* In CMT mode - no need for more than a single acknowledgment to the
 	 * MFW, and no more than a single notification to the upper driver.
 	 */
 	if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
 		return;
 
-	DP_NOTICE(p_hwfn, false,
-		  "Received a critical error notification from the MFW!\n");
+	rc = ecore_mcp_mdump_get_retain(p_hwfn, p_ptt, &mdump_retain);
+	if (rc == ECORE_SUCCESS && mdump_retain.valid) {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW notified that a critical error occurred in the device [epoch 0x%08x, pf 0x%x, status 0x%08x]\n",
+			  mdump_retain.epoch, mdump_retain.pf,
+			  mdump_retain.status);
+	} else {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW notified that a critical error occurred in the device\n");
+	}
 
 	if (p_hwfn->p_dev->allow_mdump) {
 		DP_NOTICE(p_hwfn, false,
@@ -1584,6 +1637,8 @@ static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
 		return;
 	}
 
+	DP_NOTICE(p_hwfn, false,
+		  "Acknowledging the notification to not allow the MFW crash dump [driver debug data collection is preferable]\n");
 	ecore_mcp_mdump_ack(p_hwfn, p_ptt);
 	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
 }
@@ -2245,8 +2300,8 @@ enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt,
 					     u32 mask_parities)
 {
-	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MASK_PARITIES,
 			   mask_parities, &resp, &param);
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 9b6a9b4..b84f0d1 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -376,12 +376,33 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
  *
  * @param p_hwfn
  * @param p_ptt
+ * @param epoch
  *
  * @param return ECORE_SUCCESS upon success.
  */
 enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt);
 
+struct ecore_mdump_retain_data {
+	u32 valid;
+	u32 epoch;
+	u32 pf;
+	u32 status;
+};
+
+/**
+ * @brief - Gets the mdump retained data from the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_mdump_retain
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t
+ecore_mcp_mdump_get_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   struct ecore_mdump_retain_data *p_mdump_retain);
+
 /**
  * @brief - Sets the MFW's max value for the given resource
  *
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 86fa0cb..059b55e 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -1123,6 +1123,17 @@ enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
 /**
+ * @brief - Clear the mdump retained data.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
+						struct ecore_ptt *p_ptt);
+
+/**
  * @brief - Processes the TLV request from MFW i.e., get the required TLV info
  *          from the ecore client and send it to the MFW.
  *
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 41711cc..f934c17 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1108,6 +1108,13 @@ struct load_rsp_stc {
 #define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
 };
 
+struct mdump_retain_data_stc {
+	u32 valid;
+	u32 epoch;
+	u32 pf;
+	u32 status;
+};
+
 union drv_union_data {
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
@@ -1138,6 +1145,7 @@ struct load_rsp_stc {
 
 	struct load_req_stc load_req;
 	struct load_rsp_stc load_rsp;
+	struct mdump_retain_data_stc mdump_retain;
 	/* ... */
 };
 
@@ -1350,6 +1358,8 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
 /* Clear all logs */
 #define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define DRV_MSG_CODE_MDUMP_GET_RETAIN		0x07 /* Get retained data */
+#define DRV_MSG_CODE_MDUMP_CLR_RETAIN		0x08 /* Clear retain data */
 #define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
 /* Param: [0:15] - gpio number */
 #define DRV_MSG_CODE_GPIO_INFO			0x00270000
-- 
1.7.10.3


* [PATCH 15/53] net/qede/base: add EEE support
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (13 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 14/53] net/qede/base: add mdump sub-commands Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 16/53] net/qede/base: use passed ptt handler Rasesh Mody
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

 - Add base driver EEE (Energy Efficient Ethernet) support (a client
   sketch follows below).
 - Provide the supported-speed mask to the driver through shared memory.
 - Read/use the EEE-supported capabilities value from the shared memory.
 - Update qed_fill_link() to advertise the EEE capabilities.
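
For illustration, a minimal sketch of how a client could request EEE
through the extended qed_link_params; the field names are taken from
this patch, the edev handle is assumed, and the direct call stands in
for however the client actually reaches qed_set_link():

	struct qed_link_params link_params;
	int rc;

	memset(&link_params, 0, sizeof(link_params));
	link_params.link_up = true;
	link_params.override_flags |= QED_LINK_OVERRIDE_EEE_CONFIG;
	link_params.eee.enable = true;
	link_params.eee.tx_lpi_enable = true;
	link_params.eee.adv_caps = ECORE_EEE_1G_ADV | ECORE_EEE_10G_ADV;

	/* When the override flag is set, qed_set_link() memcpy()s .eee
	 * into the ecore link params (see the qede_main.c hunk below).
	 */
	rc = qed_set_link(edev, &link_params);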

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c     |   59 +++++++++++++++++++++++++++++++--
 drivers/net/qede/base/ecore_mcp.c     |   50 +++++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_mcp_api.h |   25 ++++++++++++++
 drivers/net/qede/base/mcp_public.h    |    6 ++++
 drivers/net/qede/qede_if.h            |    8 +++++
 drivers/net/qede/qede_main.c          |   19 +++++++++++
 6 files changed, 164 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 93c2306..5c37e1c 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3108,10 +3108,42 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 				    NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX);
 	link->loopback_mode = 0;
 
+	if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE) {
+		link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr +
+				     OFFSETOF(struct nvm_cfg1_port, ext_phy));
+		link_temp &= NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK;
+		link_temp >>= NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET;
+		p_caps->default_eee = ECORE_MCP_EEE_ENABLED;
+		link->eee.enable = true;
+		switch (link_temp) {
+		case NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_DISABLED:
+			p_caps->default_eee = ECORE_MCP_EEE_DISABLED;
+			link->eee.enable = false;
+			break;
+		case NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_BALANCED:
+			p_caps->eee_lpi_timer = EEE_TX_TIMER_USEC_BALANCED_TIME;
+			break;
+		case NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_AGGRESSIVE:
+			p_caps->eee_lpi_timer =
+				EEE_TX_TIMER_USEC_AGGRESSIVE_TIME;
+			break;
+		case NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_LOW_LATENCY:
+			p_caps->eee_lpi_timer = EEE_TX_TIMER_USEC_LATENCY_TIME;
+			break;
+		}
+
+		link->eee.tx_lpi_timer = p_caps->eee_lpi_timer;
+		link->eee.tx_lpi_enable = link->eee.enable;
+		link->eee.adv_caps = ECORE_EEE_1G_ADV | ECORE_EEE_10G_ADV;
+	} else {
+		p_caps->default_eee = ECORE_MCP_EEE_UNSUPPORTED;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-		   "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x\n",
+		   "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x\n EEE: %02x [%08x usec]",
 		   link->speed.forced_speed, link->speed.advertised_speeds,
-		   link->speed.autoneg, link->pause.autoneg);
+		   link->speed.autoneg, link->pause.autoneg,
+		   p_caps->default_eee, p_caps->eee_lpi_timer);
 
 	/* Read Multi-function information from shmem */
 	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
@@ -3317,6 +3349,27 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
 }
 
+static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_link_capabilities *p_caps;
+	u32 eee_status;
+
+	p_caps = &p_hwfn->mcp_info->link_capabilities;
+	if (p_caps->default_eee == ECORE_MCP_EEE_UNSUPPORTED)
+		return;
+
+	p_caps->eee_speed_caps = 0;
+	eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
+			      OFFSETOF(struct public_port, eee_status));
+	eee_status = (eee_status & EEE_SUPPORTED_SPEED_MASK) >>
+			EEE_SUPPORTED_SPEED_OFFSET;
+	if (eee_status & EEE_1G_SUPPORTED)
+		p_caps->eee_speed_caps |= ECORE_EEE_1G_ADV;
+	if (eee_status & EEE_10G_ADV)
+		p_caps->eee_speed_caps |= ECORE_EEE_10G_ADV;
+}
+
 static enum _ecore_status_t
 ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		  enum ecore_pci_personality personality,
@@ -3386,6 +3439,8 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 			    p_hwfn->mcp_info->func_info.ovlan;
 
 		ecore_mcp_cmd_port_init(p_hwfn, p_ptt);
+
+		ecore_mcp_get_eee_caps(p_hwfn, p_ptt);
 	}
 
 	if (personality != ECORE_PCI_DEFAULT) {
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 462fcc9..3be23ba 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1036,6 +1036,29 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
 		DP_NOTICE(p_hwfn, false, "Transceiver is unplugged.\n");
 }
 
+static void ecore_mcp_read_eee_config(struct ecore_hwfn *p_hwfn,
+				      struct ecore_ptt *p_ptt,
+				      struct ecore_mcp_link_state *p_link)
+{
+	u32 eee_status, val;
+
+	p_link->eee_adv_caps = 0;
+	p_link->eee_lp_adv_caps = 0;
+	eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
+				     OFFSETOF(struct public_port, eee_status));
+	p_link->eee_active = !!(eee_status & EEE_ACTIVE_BIT);
+	val = (eee_status & EEE_LD_ADV_STATUS_MASK) >> EEE_LD_ADV_STATUS_SHIFT;
+	if (val & EEE_1G_ADV)
+		p_link->eee_adv_caps |= ECORE_EEE_1G_ADV;
+	if (val & EEE_10G_ADV)
+		p_link->eee_adv_caps |= ECORE_EEE_10G_ADV;
+	val = (eee_status & EEE_LP_ADV_STATUS_MASK) >> EEE_LP_ADV_STATUS_SHIFT;
+	if (val & EEE_1G_ADV)
+		p_link->eee_lp_adv_caps |= ECORE_EEE_1G_ADV;
+	if (val & EEE_10G_ADV)
+		p_link->eee_lp_adv_caps |= ECORE_EEE_10G_ADV;
+}
+
 static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 bool b_reset)
@@ -1170,6 +1193,9 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 
 	p_link->sfp_tx_fault = !!(status & LINK_STATUS_SFP_TX_FAULT);
 
+	if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE)
+		ecore_mcp_read_eee_config(p_hwfn, p_ptt, p_link);
+
 	OSAL_LINK_UPDATE(p_hwfn);
 }
 
@@ -1197,6 +1223,27 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 	phy_cfg.pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
 	phy_cfg.adv_speed = params->speed.advertised_speeds;
 	phy_cfg.loopback_mode = params->loopback_mode;
+
+	/* There are MFWs that share this capability regardless of whether
+	 * this is feasible or not. And given that at the very least adv_caps
+	 * would be set internally by ecore, we want to make sure LFA would
+	 * still work.
+	 */
+	if ((p_hwfn->mcp_info->capabilities &
+	     FW_MB_PARAM_FEATURE_SUPPORT_EEE) &&
+	    params->eee.enable) {
+		phy_cfg.eee_cfg |= EEE_CFG_EEE_ENABLED;
+		if (params->eee.tx_lpi_enable)
+			phy_cfg.eee_cfg |= EEE_CFG_TX_LPI;
+		if (params->eee.adv_caps & ECORE_EEE_1G_ADV)
+			phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_1G;
+		if (params->eee.adv_caps & ECORE_EEE_10G_ADV)
+			phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_10G;
+		phy_cfg.eee_cfg |= (params->eee.tx_lpi_timer <<
+				    EEE_TX_TIMER_USEC_SHIFT) &
+					EEE_TX_TIMER_USEC_MASK;
+	}
+
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
@@ -3331,7 +3378,8 @@ enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
 {
 	u32 mcp_resp, mcp_param, features;
 
-	features = DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ;
+	features = DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ |
+		   DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE;
 
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_FEATURE_SUPPORT,
 			     features, &mcp_resp, &mcp_param);
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 059b55e..a3d6bc1 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -23,16 +23,37 @@ struct ecore_mcp_link_pause_params {
 	bool forced_tx;
 };
 
+enum ecore_mcp_eee_mode {
+	ECORE_MCP_EEE_DISABLED,
+	ECORE_MCP_EEE_ENABLED,
+	ECORE_MCP_EEE_UNSUPPORTED
+};
+
+struct ecore_link_eee_params {
+	u32 tx_lpi_timer;
+#define ECORE_EEE_1G_ADV	(1 << 0)
+#define ECORE_EEE_10G_ADV	(1 << 1)
+	/* Capabilities are represented using ECORE_EEE_*_ADV values */
+	u8 adv_caps;
+	u8 lp_adv_caps;
+	bool enable;
+	bool tx_lpi_enable;
+};
+
 struct ecore_mcp_link_params {
 	struct ecore_mcp_link_speed_params speed;
 	struct ecore_mcp_link_pause_params pause;
 	u32 loopback_mode; /* in PMM_LOOPBACK values */
+	struct ecore_link_eee_params eee;
 };
 
 struct ecore_mcp_link_capabilities {
 	u32 speed_capabilities;
 	bool default_speed_autoneg; /* In Mb/s */
 	u32 default_speed; /* In Mb/s */
+	enum ecore_mcp_eee_mode default_eee;
+	u32 eee_lpi_timer;
+	u8 eee_speed_caps;
 };
 
 struct ecore_mcp_link_state {
@@ -67,6 +88,10 @@ struct ecore_mcp_link_state {
 	u8 partner_adv_pause;
 
 	bool sfp_tx_fault;
+
+	bool eee_active;
+	u8 eee_adv_caps;
+	u8 eee_lp_adv_caps;
 };
 
 struct ecore_mcp_function_info {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index f934c17..af6a45e 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -792,6 +792,12 @@ struct public_port {
 #define	EEE_LP_ADV_STATUS_MASK	0x00000f00
 #define EEE_LP_ADV_STATUS_SHIFT	8
 
+/* Supported speeds for EEE */
+#define EEE_SUPPORTED_SPEED_MASK	0x0000f000
+#define EEE_SUPPORTED_SPEED_OFFSET	12
+	#define EEE_1G_SUPPORTED	(1 << 1)
+	#define EEE_10G_SUPPORTED	(1 << 2)
+
 	u32 eee_remote;	/* Used for EEE in LLDP */
 #define EEE_REMOTE_TW_TX_MASK	0x0000ffff
 #define EEE_REMOTE_TW_TX_SHIFT	0
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 42560d5..02af2ee 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -83,6 +83,7 @@ struct qed_link_params {
 #define QED_LINK_OVERRIDE_SPEED_ADV_SPEEDS      (1 << 1)
 #define QED_LINK_OVERRIDE_SPEED_FORCED_SPEED    (1 << 2)
 #define QED_LINK_OVERRIDE_PAUSE_CONFIG          (1 << 3)
+#define QED_LINK_OVERRIDE_EEE_CONFIG		(1 << 5)
 	uint32_t override_flags;
 	bool autoneg;
 	uint32_t adv_speeds;
@@ -91,6 +92,7 @@ struct qed_link_params {
 #define QED_LINK_PAUSE_RX_ENABLE                (1 << 1)
 #define QED_LINK_PAUSE_TX_ENABLE                (1 << 2)
 	uint32_t pause_config;
+	struct ecore_link_eee_params eee;
 };
 
 struct qed_link_output {
@@ -104,6 +106,12 @@ struct qed_link_output {
 	uint8_t port;		/* In PORT defs */
 	bool autoneg;
 	uint32_t pause_config;
+
+	/* EEE - capability & param */
+	bool eee_supported;
+	bool eee_active;
+	u8 sup_caps;
+	struct ecore_link_eee_params eee;
 };
 
 struct qed_slowpath_params {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 9be8f80..13b321d 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -539,6 +539,21 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
 
 	if (params.pause.forced_tx)
 		if_link->pause_config |= QED_LINK_PAUSE_TX_ENABLE;
+
+	if (link_caps.default_eee == ECORE_MCP_EEE_UNSUPPORTED) {
+		if_link->eee_supported = false;
+	} else {
+		if_link->eee_supported = true;
+		if_link->eee_active = link.eee_active;
+		if_link->sup_caps = link_caps.eee_speed_caps;
+		/* MFW clears adv_caps on eee disable; use configured value */
+		if_link->eee.adv_caps = link.eee_adv_caps ? link.eee_adv_caps :
+					params.eee.adv_caps;
+		if_link->eee.lp_adv_caps = link.eee_lp_adv_caps;
+		if_link->eee.enable = params.eee.enable;
+		if_link->eee.tx_lpi_enable = params.eee.tx_lpi_enable;
+		if_link->eee.tx_lpi_timer = params.eee.tx_lpi_timer;
+	}
 }
 
 static void
@@ -588,6 +603,10 @@ static int qed_set_link(struct ecore_dev *edev, struct qed_link_params *params)
 			link_params->pause.forced_tx = false;
 	}
 
+	if (params->override_flags & QED_LINK_OVERRIDE_EEE_CONFIG)
+		memcpy(&link_params->eee, &params->eee,
+		       sizeof(link_params->eee));
+
 	rc = ecore_mcp_set_link(hwfn, ptt, params->link_up);
 
 	ecore_ptt_release(hwfn, ptt);
-- 
1.7.10.3


* [PATCH 16/53] net/qede/base: use passed ptt handler
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (14 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 15/53] net/qede/base: add EEE support Rasesh Mody
@ 2017-09-19  1:29 ` Rasesh Mody
  2017-09-19  1:29 ` [PATCH 17/53] net/qede/base: prevent re-assertions of parity errors Rasesh Mody
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Use the ptt [PF translation table] handle that is passed in rather than
the main ptt from the HW function (the acquire/release pattern is
sketched below).
In ecore_hw_get_resc()'s error flow, release the MFW generic resource lock
only if needed.
Change the verbosity level of the GRC timeout print from DP_INFO() to
DP_NOTICE().
Reduce the verbosity of the print in ecore_hw_bar_size().
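
Where no ptt can be passed in, the patch switches to acquiring a
dedicated one; a sketch of the pattern, taken from the
ecore_hw_start_fastpath() hunk below with error handling trimmed:

	struct ecore_ptt *p_ptt;

	/* Acquire a private ptt window instead of borrowing
	 * p_hwfn->p_main_ptt, and release it once the access is done.
	 */
	p_ptt = ecore_ptt_acquire(p_hwfn);
	if (!p_ptt)
		return ECORE_AGAIN;

	ecore_wr(p_hwfn, p_ptt,
		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);

	ecore_ptt_release(p_hwfn, p_ptt);
	return ECORE_SUCCESS;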

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h          |    5 +-
 drivers/net/qede/base/ecore_cxt.c         |    8 +-
 drivers/net/qede/base/ecore_cxt.h         |    6 +-
 drivers/net/qede/base/ecore_dev.c         |  120 +++++++++++++++++------------
 drivers/net/qede/base/ecore_dev_api.h     |    8 +-
 drivers/net/qede/base/ecore_hw.h          |    4 +-
 drivers/net/qede/base/ecore_init_ops.c    |    7 +-
 drivers/net/qede/base/ecore_init_ops.h    |    3 +-
 drivers/net/qede/base/ecore_int.c         |   28 +++----
 drivers/net/qede/base/ecore_mcp.c         |   28 ++++---
 drivers/net/qede/base/ecore_mcp_api.h     |    6 +-
 drivers/net/qede/base/ecore_sp_api.h      |    2 +
 drivers/net/qede/base/ecore_sp_commands.c |   14 ++--
 drivers/net/qede/base/ecore_sp_commands.h |    2 +
 drivers/net/qede/base/ecore_spq.c         |    8 +-
 drivers/net/qede/base/ecore_sriov.c       |    4 +-
 drivers/net/qede/base/ecore_sriov.h       |    4 +-
 drivers/net/qede/qede_ethdev.c            |   21 ++++-
 drivers/net/qede/qede_main.c              |   27 +++++--
 19 files changed, 186 insertions(+), 119 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index bd07724..29edfb2 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -23,13 +23,14 @@
 /* Forward declaration */
 struct ecore_dev;
 struct ecore_hwfn;
+struct ecore_ptt;
 struct ecore_vf_acquire_sw_info;
 struct vf_pf_resc_request;
 enum ecore_mcp_protocol_type;
 union ecore_mcp_protocol_stats;
 enum ecore_hw_err_type;
 
-void qed_link_update(struct ecore_hwfn *hwfn);
+void qed_link_update(struct ecore_hwfn *hwfn, struct ecore_ptt *ptt);
 
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 #undef __BIG_ENDIAN
@@ -327,7 +328,7 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *, dma_addr_t *,
 
 #define OSAL_BITMAP_WEIGHT(bitmap, count) 0
 
-#define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
+#define OSAL_LINK_UPDATE(hwfn, ptt) qed_link_update(hwfn, ptt)
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
 /* SR-IOV channel */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 8c45315..08a616e 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1422,7 +1422,7 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn)
 	}
 }
 
-void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 	struct ecore_qm_iids iids;
@@ -1430,7 +1430,7 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn)
 	OSAL_MEM_ZERO(&iids, sizeof(iids));
 	ecore_cxt_qm_iids(p_hwfn, &iids);
 
-	ecore_qm_pf_rt_init(p_hwfn, p_hwfn->p_main_ptt, p_hwfn->port_id,
+	ecore_qm_pf_rt_init(p_hwfn, p_ptt, p_hwfn->port_id,
 			    p_hwfn->rel_pf_id, qm_info->max_phys_tcs_per_port,
 			    p_hwfn->first_on_engine,
 			    iids.cids, iids.vf_cids, iids.tids,
@@ -1785,9 +1785,9 @@ void ecore_cxt_hw_init_common(struct ecore_hwfn *p_hwfn)
 	ecore_cdu_init_common(p_hwfn);
 }
 
-void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
+void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
-	ecore_qm_init_pf(p_hwfn);
+	ecore_qm_init_pf(p_hwfn, p_ptt);
 	ecore_cm_init_pf(p_hwfn);
 	ecore_dq_init_pf(p_hwfn);
 	ecore_cdu_init_pf(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 6ff823a..54761e4 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -98,15 +98,17 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
  * @brief ecore_cxt_hw_init_pf - Initailze ILT and DQ, PF phase, per path.
  *
  * @param p_hwfn
+ * @param p_ptt
  */
-void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn);
+void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
 /**
  * @brief ecore_qm_init_pf - Initailze the QM PF phase, per path
  *
  * @param p_hwfn
+ * @param p_ptt
  */
-void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn);
+void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
  /**
  * @brief Reconfigures QM pf on the fly
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 5c37e1c..9af6348 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -56,7 +56,9 @@ enum BAR_ID {
 	BAR_ID_1		/* Used for doorbells */
 };
 
-static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id)
+static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     enum BAR_ID bar_id)
 {
 	u32 bar_reg = (bar_id == BAR_ID_0 ?
 		       PGLUE_B_REG_PF_BAR0_SIZE : PGLUE_B_REG_PF_BAR1_SIZE);
@@ -70,7 +72,7 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id)
 		return 1 << 17;
 	}
 
-	val = ecore_rd(p_hwfn, p_hwfn->p_main_ptt, bar_reg);
+	val = ecore_rd(p_hwfn, p_ptt, bar_reg);
 	if (val)
 		return 1 << (val + 15);
 
@@ -79,14 +81,12 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id)
 	 * In older MFW versions they are set to 0 which means disabled.
 	 */
 	if (p_hwfn->p_dev->num_hwfns > 1) {
-		DP_NOTICE(p_hwfn, false,
-			  "BAR size not configured. Assuming BAR size of 256kB"
-			  " for GRC and 512kB for DB\n");
+		DP_INFO(p_hwfn,
+			"BAR size not configured. Assuming BAR size of 256kB for GRC and 512kB for DB\n");
 		val = BAR_ID_0 ? 256 * 1024 : 512 * 1024;
 	} else {
-		DP_NOTICE(p_hwfn, false,
-			  "BAR size not configured. Assuming BAR size of 512kB"
-			  " for GRC and 512kB for DB\n");
+		DP_INFO(p_hwfn,
+			"BAR size not configured. Assuming BAR size of 512kB for GRC and 512kB for DB\n");
 		val = 512 * 1024;
 	}
 
@@ -777,7 +777,7 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	ecore_init_clear_rt_data(p_hwfn);
 
 	/* prepare QM portion of runtime array */
-	ecore_qm_init_pf(p_hwfn);
+	ecore_qm_init_pf(p_hwfn, p_ptt);
 
 	/* activate init tool on runtime array */
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_QM_PF, p_hwfn->rel_pf_id,
@@ -1036,7 +1036,7 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 		ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
 
 		ecore_l2_setup(p_hwfn);
-		ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
+		ecore_iov_setup(p_hwfn);
 	}
 }
 
@@ -1327,11 +1327,11 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	ecore_init_cau_rt_data(p_dev);
 
 	/* Program GTT windows */
-	ecore_gtt_init(p_hwfn);
+	ecore_gtt_init(p_hwfn, p_ptt);
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_dev)) {
-		rc = ecore_hw_init_chip(p_hwfn, p_hwfn->p_main_ptt);
+		rc = ecore_hw_init_chip(p_hwfn, p_ptt);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
@@ -1637,7 +1637,7 @@ enum ECORE_ROCE_EDPM_MODE {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
-	db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
+	db_bar_size = ecore_hw_bar_size(p_hwfn, p_ptt, BAR_ID_1);
 	if (p_hwfn->p_dev->num_hwfns > 1)
 		db_bar_size /= 2;
 
@@ -1808,7 +1808,7 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 		/* Update rate limit once we'll actually have a link */
 		p_hwfn->qm_info.pf_rl = 100000;
 	}
-	ecore_cxt_hw_init_pf(p_hwfn);
+	ecore_cxt_hw_init_pf(p_hwfn, p_ptt);
 
 	ecore_int_igu_init_rt(p_hwfn);
 
@@ -1877,7 +1877,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 			return rc;
 
 		/* send function start command */
-		rc = ecore_sp_pf_start(p_hwfn, p_tunn, p_hwfn->p_dev->mf_mode,
+		rc = ecore_sp_pf_start(p_hwfn, p_ptt, p_tunn,
+				       p_hwfn->p_dev->mf_mode,
 				       allow_npar_tx_switch);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
@@ -2394,18 +2395,21 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 	return rc2;
 }
 
-void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
+enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
 {
 	int j;
 
 	for_each_hwfn(p_dev, j) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		struct ecore_ptt *p_ptt;
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_int_cleanup(p_hwfn);
 			continue;
 		}
+		p_ptt = ecore_ptt_acquire(p_hwfn);
+		if (!p_ptt)
+			return ECORE_AGAIN;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
 			   "Shutting down the fastpath\n");
@@ -2427,15 +2431,22 @@ void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, false);
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
+		ecore_ptt_release(p_hwfn, p_ptt);
 	}
+
+	return ECORE_SUCCESS;
 }
 
-void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 {
-	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+	struct ecore_ptt *p_ptt;
 
 	if (IS_VF(p_hwfn->p_dev))
-		return;
+		return ECORE_SUCCESS;
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
 
 	/* If roce info is allocated it means roce is initialized and should
 	 * be enabled in searcher.
@@ -2448,8 +2459,11 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 	}
 
 	/* Re-open incoming traffic */
-	ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+	ecore_wr(p_hwfn, p_ptt,
 		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return ECORE_SUCCESS;
 }
 
 /* Free hwfn memory and resources acquired in hw_hwfn_prepare */
@@ -2589,12 +2603,14 @@ const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 
 static enum _ecore_status_t
 __ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
-			      enum ecore_resources res_id, u32 resc_max_val,
+			      struct ecore_ptt *p_ptt,
+			      enum ecore_resources res_id,
+			      u32 resc_max_val,
 			      u32 *p_mcp_resp)
 {
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_hwfn->p_main_ptt, res_id,
+	rc = ecore_mcp_set_resc_max_val(p_hwfn, p_ptt, res_id,
 					resc_max_val, p_mcp_resp);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true,
@@ -2612,7 +2628,8 @@ const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 }
 
 static enum _ecore_status_t
-ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn)
+ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt)
 {
 	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	u32 resc_max_val, mcp_resp;
@@ -2632,7 +2649,7 @@ const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 			continue;
 		}
 
-		rc = __ecore_hw_set_soft_resc_size(p_hwfn, res_id,
+		rc = __ecore_hw_set_soft_resc_size(p_hwfn, p_ptt, res_id,
 						   resc_max_val, &mcp_resp);
 		if (rc != ECORE_SUCCESS)
 			return rc;
@@ -2821,6 +2838,7 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 #define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
 
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
 					      bool drv_resc_alloc)
 {
 	struct ecore_resc_unlock_params resc_unlock_params;
@@ -2858,7 +2876,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
 	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
 
-	rc = ecore_mcp_resc_lock(p_hwfn, p_hwfn->p_main_ptt, &resc_lock_params);
+	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
 	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
 		return rc;
 	} else if (rc == ECORE_NOTIMPL) {
@@ -2870,7 +2888,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 		rc = ECORE_BUSY;
 		goto unlock_and_exit;
 	} else {
-		rc = ecore_hw_set_soft_resc_size(p_hwfn);
+		rc = ecore_hw_set_soft_resc_size(p_hwfn, p_ptt);
 		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to set the max values of the soft resources\n");
@@ -2878,7 +2896,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 		} else if (rc == ECORE_NOTIMPL) {
 			DP_INFO(p_hwfn,
 				"Skip the max values setting of the soft resources since it is not supported by the MFW\n");
-			rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+			rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt,
 						   &resc_unlock_params);
 			if (rc != ECORE_SUCCESS)
 				DP_INFO(p_hwfn,
@@ -2891,7 +2909,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 		goto unlock_and_exit;
 
 	if (resc_lock_params.b_granted && !resc_unlock_params.b_released) {
-		rc = ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt,
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt,
 					   &resc_unlock_params);
 		if (rc != ECORE_SUCCESS)
 			DP_INFO(p_hwfn,
@@ -2938,7 +2956,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* This will also learn the number of SBs from MFW */
-	if (ecore_int_igu_reset_cam(p_hwfn, p_hwfn->p_main_ptt))
+	if (ecore_int_igu_reset_cam(p_hwfn, p_ptt))
 		return ECORE_INVAL;
 
 	ecore_hw_set_feat(p_hwfn);
@@ -2954,7 +2972,9 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 
 unlock_and_exit:
-	ecore_mcp_resc_unlock(p_hwfn, p_hwfn->p_main_ptt, &resc_unlock_params);
+	if (resc_lock_params.b_granted && !resc_unlock_params.b_released)
+		ecore_mcp_resc_unlock(p_hwfn, p_ptt,
+				      &resc_unlock_params);
 	return rc;
 }
 
@@ -3486,7 +3506,7 @@ static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn,
 	 * the resources/features depends on them.
 	 * This order is not harmful if not forcing.
 	 */
-	rc = ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
+	rc = ecore_hw_get_resc(p_hwfn, p_ptt, drv_resc_alloc);
 	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
 		rc = ECORE_SUCCESS;
 		p_params->p_relaxed_res = ECORE_HW_PREPARE_BAD_MCP;
@@ -3495,9 +3515,10 @@ static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
+static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u16 device_id_mask;
 	u32 tmp;
 
@@ -3522,16 +3543,15 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 		return ECORE_ABORTED;
 	}
 
-	p_dev->chip_num = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
-					 MISCS_REG_CHIP_NUM);
-	p_dev->chip_rev = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
-					 MISCS_REG_CHIP_REV);
+	p_dev->chip_num = (u16)ecore_rd(p_hwfn, p_ptt,
+						MISCS_REG_CHIP_NUM);
+	p_dev->chip_rev = (u16)ecore_rd(p_hwfn, p_ptt,
+						MISCS_REG_CHIP_REV);
 
 	MASK_FIELD(CHIP_REV, p_dev->chip_rev);
 
 	/* Learn number of HW-functions */
-	tmp = ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
-		       MISCS_REG_CMT_ENABLED_FOR_PAIR);
+	tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_CMT_ENABLED_FOR_PAIR);
 
 	if (tmp & (1 << p_hwfn->rel_pf_id)) {
 		DP_NOTICE(p_dev->hwfns, false, "device in CMT mode\n");
@@ -3551,10 +3571,10 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 	}
 #endif
 
-	p_dev->chip_bond_id = ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
+	p_dev->chip_bond_id = ecore_rd(p_hwfn, p_ptt,
 				       MISCS_REG_CHIP_TEST_REG) >> 4;
 	MASK_FIELD(CHIP_BOND_ID, p_dev->chip_bond_id);
-	p_dev->chip_metal = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
+	p_dev->chip_metal = (u16)ecore_rd(p_hwfn, p_ptt,
 					   MISCS_REG_CHIP_METAL);
 	MASK_FIELD(CHIP_METAL, p_dev->chip_metal);
 	DP_INFO(p_dev->hwfns,
@@ -3571,12 +3591,10 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
 	}
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 MISCS_REG_PLL_MAIN_CTRL_4, 0x1);
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_PLL_MAIN_CTRL_4, 0x1);
 
 	if (CHIP_REV_IS_EMUL(p_dev)) {
-		tmp = ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
-			       MISCS_REG_ECO_RESERVED);
+		tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
 		if (tmp & (1 << 29)) {
 			DP_NOTICE(p_hwfn, false,
 				  "Emulation: Running on a FULL build\n");
@@ -3656,7 +3674,7 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 
 	/* First hwfn learns basic information, e.g., number of hwfns */
 	if (!p_hwfn->my_id) {
-		rc = ecore_get_dev_info(p_dev);
+		rc = ecore_get_dev_info(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			if (p_params->b_relaxed_probe)
 				p_params->p_relaxed_res =
@@ -3785,11 +3803,15 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 		/* adjust bar offset for second engine */
 		addr = (u8 OSAL_IOMEM *)p_dev->regview +
-		    ecore_hw_bar_size(p_hwfn, BAR_ID_0) / 2;
+					ecore_hw_bar_size(p_hwfn,
+							  p_hwfn->p_main_ptt,
+							  BAR_ID_0) / 2;
 		p_regview = (void OSAL_IOMEM *)addr;
 
 		addr = (u8 OSAL_IOMEM *)p_dev->doorbells +
-		    ecore_hw_bar_size(p_hwfn, BAR_ID_1) / 2;
+					ecore_hw_bar_size(p_hwfn,
+							  p_hwfn->p_main_ptt,
+							  BAR_ID_1) / 2;
 		p_doorbell = (void OSAL_IOMEM *)addr;
 
 		/* prepare second hw function */
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 99a9c49..b3c9f89 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -142,8 +142,9 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
  *
  * @param p_dev
  *
+ * @return enum _ecore_status_t
  */
-void ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
+enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
 
 #ifndef LINUX_REMOVE
 /**
@@ -160,10 +161,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
  * @brief ecore_hw_start_fastpath -restart fastpath traffic,
  *        only if hw_stop_fastpath was called
 
- * @param p_dev
+ * @param p_hwfn
  *
+ * @return enum _ecore_status_t
  */
-void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
 
 enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_SUCCESS,
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 0750b2e..726bc18 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -71,8 +71,10 @@ enum _dmae_cmd_crc_mask {
 * @brief ecore_gtt_init - Initialize GTT windows
 *
 * @param p_hwfn
+* @param p_ptt
 */
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn);
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt);
 
 /**
  * @brief ecore_ptt_invalidate - Forces all ptt entries to be re-configured
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 80a52ca..c76cc07 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -525,7 +525,8 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn)
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt)
 {
 	u32 gtt_base;
 	u32 i;
@@ -543,7 +544,7 @@ void ecore_gtt_init(struct ecore_hwfn *p_hwfn)
 
 		/* initialize PTT/GTT (poll for completion) */
 		if (!initialized) {
-			ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+			ecore_wr(p_hwfn, p_ptt,
 				 PGLUE_B_REG_START_INIT_PTT_GTT, 1);
 			initialized = true;
 		}
@@ -552,7 +553,7 @@ void ecore_gtt_init(struct ecore_hwfn *p_hwfn)
 			/* ptt might be overrided by HW until this is done */
 			OSAL_UDELAY(10);
 			ecore_ptt_invalidate(p_hwfn);
-			val = ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
+			val = ecore_rd(p_hwfn, p_ptt,
 				       PGLUE_B_REG_INIT_DONE_PTT_GTT);
 		} while ((val != 1) && --poll_cnt);
 
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index d58c7d6..e293a4a 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -107,5 +107,6 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
  *
  * @param p_hwfn
  */
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn);
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt);
 #endif /* __ECORE_INIT_OPS__ */
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index f8b104a..e7dfe04 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -248,21 +248,21 @@ static enum _ecore_status_t ecore_grc_attn_cb(struct ecore_hwfn *p_hwfn)
 	tmp2 = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 			GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_1);
 
-	DP_INFO(p_hwfn->p_dev,
-		"GRC timeout [%08x:%08x] - %s Address [%08x] [Master %s]"
-		" [PF: %02x %s %02x]\n",
-		tmp2, tmp,
-		(tmp & ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to" : "Read from",
-		(tmp & ECORE_GRC_ATTENTION_ADDRESS_MASK) << 2,
-		grc_timeout_attn_master_to_str((tmp &
-					ECORE_GRC_ATTENTION_MASTER_MASK) >>
-				       ECORE_GRC_ATTENTION_MASTER_SHIFT),
-		(tmp2 & ECORE_GRC_ATTENTION_PF_MASK),
-		(((tmp2 & ECORE_GRC_ATTENTION_PRIV_MASK) >>
+	DP_NOTICE(p_hwfn->p_dev, false,
+		  "GRC timeout [%08x:%08x] - %s Address [%08x] [Master %s] [PF: %02x %s %02x]\n",
+		  tmp2, tmp,
+		  (tmp & ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to"
+						       : "Read from",
+		  (tmp & ECORE_GRC_ATTENTION_ADDRESS_MASK) << 2,
+		  grc_timeout_attn_master_to_str(
+			(tmp & ECORE_GRC_ATTENTION_MASTER_MASK) >>
+			 ECORE_GRC_ATTENTION_MASTER_SHIFT),
+		  (tmp2 & ECORE_GRC_ATTENTION_PF_MASK),
+		  (((tmp2 & ECORE_GRC_ATTENTION_PRIV_MASK) >>
 		  ECORE_GRC_ATTENTION_PRIV_SHIFT) ==
-		 ECORE_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant:)",
-		(tmp2 & ECORE_GRC_ATTENTION_VF_MASK) >>
-		ECORE_GRC_ATTENTION_VF_SHIFT);
+		  ECORE_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant:)",
+		  (tmp2 & ECORE_GRC_ATTENTION_VF_MASK) >>
+		  ECORE_GRC_ATTENTION_VF_SHIFT);
 
 out:
 	/* Regardles of anything else, clean the validity bit */
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 3be23ba..b334997 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1196,7 +1196,7 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE)
 		ecore_mcp_read_eee_config(p_hwfn, p_ptt, p_link);
 
-	OSAL_LINK_UPDATE(p_hwfn);
+	OSAL_LINK_UPDATE(p_hwfn, p_ptt);
 }
 
 enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
@@ -1832,14 +1832,13 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
+enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
 					      u32 *p_media_type)
 {
-	struct ecore_hwfn *p_hwfn = &p_dev->hwfns[0];
-	struct ecore_ptt *p_ptt;
 
 	/* TODO - Add support for VFs */
-	if (IS_VF(p_dev))
+	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
@@ -1847,16 +1846,15 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
 		return ECORE_BUSY;
 	}
 
-	*p_media_type = MEDIA_UNSPECIFIED;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	*p_media_type = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
-				 OFFSETOF(struct public_port, media_type));
-
-	ecore_ptt_release(p_hwfn, p_ptt);
+	if (!p_ptt) {
+		*p_media_type = MEDIA_UNSPECIFIED;
+		return ECORE_INVAL;
+	} else {
+		*p_media_type = ecore_rd(p_hwfn, p_ptt,
+					 p_hwfn->mcp_info->port_addr +
+					 OFFSETOF(struct public_port,
+						  media_type));
+	}
 
 	return ECORE_SUCCESS;
 }
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index a3d6bc1..ac889f9 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -608,14 +608,16 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
  * @brief Get media type value of the port.
  *
  * @param p_dev      - ecore dev pointer
+ * @param p_ptt
  * @param mfw_ver    - media type value
  *
  * @return enum _ecore_status_t -
  *      ECORE_SUCCESS - Operation was successful.
  *      ECORE_BUSY - Operation failed
  */
-enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
-					   u32 *media_type);
+enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u32 *media_type);
 
 /**
  * @brief - Sends a command to the MCP mailbox.
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index c8e564f..86e8496 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -49,6 +49,7 @@ enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
  * for a physical function (PF).
  *
  * @param p_hwfn
+ * @param p_ptt
  * @param p_tunn - pf update tunneling parameters
  * @param comp_mode - completion mode
  * @param p_comp_data - callback function
@@ -58,6 +59,7 @@ enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
 			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data);
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index d6e4b9e..abfdfbf 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -232,6 +232,7 @@ static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
 }
 
 static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
+					struct ecore_ptt  *p_ptt,
 					struct ecore_tunnel_info *p_tunn)
 {
 	if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
@@ -241,14 +242,14 @@ static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (p_tunn->vxlan_port.b_update_port)
-		ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+		ecore_set_vxlan_dest_port(p_hwfn, p_ptt,
 					  p_tunn->vxlan_port.port);
 
 	if (p_tunn->geneve_port.b_update_port)
-		ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+		ecore_set_geneve_dest_port(p_hwfn, p_ptt,
 					   p_tunn->geneve_port.port);
 
-	ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn);
+	ecore_set_hw_tunn_mode(p_hwfn, p_ptt, p_tunn);
 }
 
 static void
@@ -294,6 +295,7 @@ static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
 }
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt,
 				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch)
@@ -390,7 +392,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 
 	if (p_tunn)
-		ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
+		ecore_set_hw_tunn_mode_port(p_hwfn, p_ptt,
+					    &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
@@ -465,6 +468,7 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
 			    struct ecore_tunnel_info *p_tunn,
 			    enum spq_mode comp_mode,
 			    struct ecore_spq_comp_cb *p_comp_data)
@@ -505,7 +509,7 @@ enum _ecore_status_t
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	ecore_set_hw_tunn_mode_port(p_hwfn, &p_hwfn->p_dev->tunnel);
+	ecore_set_hw_tunn_mode_port(p_hwfn, p_ptt, &p_hwfn->p_dev->tunnel);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 33e31e4..b9f40b7 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -59,6 +59,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  * to the internal RAM of the UStorm by the Function Start Ramrod.
  *
  * @param p_hwfn
+ * @param p_ptt
  * @param p_tunn - pf start tunneling configuration
  * @param mode
  * @param allow_npar_tx_switch - npar tx switching to be used
@@ -68,6 +69,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
  */
 
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
+				       struct ecore_ptt *p_ptt,
 				       struct ecore_tunnel_info *p_tunn,
 				       enum ecore_mf_mode mode,
 				       bool allow_npar_tx_switch);
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 3c1d05b..25d573e 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -87,6 +87,7 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
 					    u8 *p_fw_ret, bool skip_quick_poll)
 {
 	struct ecore_spq_comp_done *comp_done;
+	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc;
 
 	/* A relatively short polling period w/o sleeping, to allow the FW to
@@ -103,8 +104,13 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
 	if (rc == ECORE_SUCCESS)
 		return ECORE_SUCCESS;
 
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
 	DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n");
-	rc = ecore_mcp_drain(p_hwfn, p_hwfn->p_main_ptt);
+	rc = ecore_mcp_drain(p_hwfn, p_ptt);
+	ecore_ptt_release(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "MCP drain failed\n");
 		goto err;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 3f500d3..b8e03d0 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -601,7 +601,7 @@ enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn)
 	return ecore_iov_allocate_vfdb(p_hwfn);
 }
 
-void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+void ecore_iov_setup(struct ecore_hwfn *p_hwfn)
 {
 	if (!IS_PF_SRIOV(p_hwfn) || !IS_PF_SRIOV_ALLOC(p_hwfn))
 		return;
@@ -2387,7 +2387,7 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	if (b_update_required) {
 		u16 geneve_port;
 
-		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, &tunn,
 						 ECORE_SPQ_MODE_EBLOCK,
 						 OSAL_NULL);
 		if (rc != ECORE_SUCCESS)
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index ade74c9..d190126 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -238,10 +238,8 @@ void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn,
  * @brief ecore_iov_setup - setup sriov related resources
  *
  * @param p_hwfn
- * @param p_ptt
  */
-void ecore_iov_setup(struct ecore_hwfn	*p_hwfn,
-		     struct ecore_ptt	*p_ptt);
+void ecore_iov_setup(struct ecore_hwfn	*p_hwfn);
 
 /**
  * @brief ecore_iov_free - free sriov related resources
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4e9e89f..1af0427 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2167,11 +2167,15 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 						  QEDE_VXLAN_DEF_PORT;
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
+			struct ecore_ptt *p_ptt = IS_PF(edev) ?
+			       ecore_ptt_acquire(p_hwfn) : NULL;
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, &tunn,
 						ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Unable to config UDP port %u\n",
 				       tunn.vxlan_port.port);
+				if (IS_PF(edev))
+					ecore_ptt_release(p_hwfn, p_ptt);
 				return rc;
 			}
 		}
@@ -2318,11 +2322,15 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 		qede_set_cmn_tunn_param(&tunn, clss, true, true);
 		for_each_hwfn(edev, i) {
 			p_hwfn = &edev->hwfns[i];
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn,
+			struct ecore_ptt *p_ptt = IS_PF(edev) ?
+			       ecore_ptt_acquire(p_hwfn) : NULL;
+			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
 				&tunn, ECORE_SPQ_MODE_CB, NULL);
 			if (rc != ECORE_SUCCESS) {
 				DP_ERR(edev, "Failed to update tunn_clss %u\n",
 				       tunn.vxlan.tun_cls);
+				if (IS_PF(edev))
+					ecore_ptt_release(p_hwfn, p_ptt);
 			}
 		}
 		qdev->num_tunn_filters++; /* Filter added successfully */
@@ -2352,12 +2360,17 @@ static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
 			qede_set_cmn_tunn_param(&tunn, clss, false, true);
 			for_each_hwfn(edev, i) {
 				p_hwfn = &edev->hwfns[i];
-				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, &tunn,
-					ECORE_SPQ_MODE_CB, NULL);
+				struct ecore_ptt *p_ptt = IS_PF(edev) ?
+				       ecore_ptt_acquire(p_hwfn) : NULL;
+				rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
+					&tunn, ECORE_SPQ_MODE_CB, NULL);
 				if (rc != ECORE_SUCCESS) {
 					DP_ERR(edev,
 						"Failed to update tunn_clss %u\n",
 						tunn.vxlan.tun_cls);
+					if (IS_PF(edev))
+						ecore_ptt_release(p_hwfn,
+								  p_ptt);
 					break;
 				}
 			}
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 13b321d..71b3a39 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -178,7 +178,7 @@ static void qed_handle_bulletin_change(struct ecore_hwfn *hwfn)
 		rte_memcpy(hwfn->hw_info.hw_mac_addr, mac, ETH_ALEN);
 
 	/* Always update link configuration according to bulletin */
-	qed_link_update(hwfn);
+	qed_link_update(hwfn, NULL);
 }
 
 static void qede_vf_task(void *arg)
@@ -489,6 +489,7 @@ static void qed_set_name(struct ecore_dev *edev, char name[NAME_SIZE])
 }
 
 static void qed_fill_link(struct ecore_hwfn *hwfn,
+			  __rte_unused struct ecore_ptt *ptt,
 			  struct qed_link_output *if_link)
 {
 	struct ecore_mcp_link_params params;
@@ -559,12 +560,22 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
 static void
 qed_get_current_link(struct ecore_dev *edev, struct qed_link_output *if_link)
 {
-	qed_fill_link(&edev->hwfns[0], if_link);
+	struct ecore_hwfn *hwfn;
+	struct ecore_ptt *ptt;
 
-#ifdef CONFIG_QED_SRIOV
-	for_each_hwfn(cdev, i)
-		qed_inform_vf_link_state(&cdev->hwfns[i]);
-#endif
+	hwfn = &edev->hwfns[0];
+	if (IS_PF(edev)) {
+		ptt = ecore_ptt_acquire(hwfn);
+		if (!ptt)
+			DP_NOTICE(hwfn, true, "Failed to fill link; No PTT\n");
+
+			qed_fill_link(hwfn, ptt, if_link);
+
+		if (ptt)
+			ecore_ptt_release(hwfn, ptt);
+	} else {
+		qed_fill_link(hwfn, NULL, if_link);
+	}
 }
 
 static int qed_set_link(struct ecore_dev *edev, struct qed_link_params *params)
@@ -614,11 +625,11 @@ static int qed_set_link(struct ecore_dev *edev, struct qed_link_params *params)
 	return rc;
 }
 
-void qed_link_update(struct ecore_hwfn *hwfn)
+void qed_link_update(struct ecore_hwfn *hwfn, struct ecore_ptt *ptt)
 {
 	struct qed_link_output if_link;
 
-	qed_fill_link(hwfn, &if_link);
+	qed_fill_link(hwfn, ptt, &if_link);
 }
 
 static int qed_drain(struct ecore_dev *edev)
-- 
1.7.10.3

* [PATCH 17/53] net/qede/base: prevent re-assertions of parity errors
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Prevent parity errors from being re-asserted. Mask any parity error, even
if it is not associated with a HW block.
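
In outline, the masking is a read-modify-write of the relevant AEU enable
register. A minimal sketch of that pattern follows; reg_rd()/reg_wr() are
hypothetical MMIO helpers standing in for the driver's ecore_rd()/ecore_wr()
accessors:

  #include <stdint.h>

  /* Hypothetical MMIO accessors standing in for ecore_rd()/ecore_wr() */
  uint32_t reg_rd(uint32_t addr);
  void reg_wr(uint32_t addr, uint32_t val);

  /* Disable a single parity source so it cannot be re-asserted:
   * clear bit 'bit_index' of the AEU enable register 'aeu_en_reg'.
   */
  static void mask_parity_source(uint32_t aeu_en_reg, uint8_t bit_index)
  {
          uint32_t val = reg_rd(aeu_en_reg);

          reg_wr(aeu_en_reg, val & ~(1u << bit_index));
  }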

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_int.c |   72 ++++++++++++++++++++-----------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e7dfe04..acf8759 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -851,32 +851,38 @@ static void ecore_int_attn_print(struct ecore_hwfn *p_hwfn,
  * @brief ecore_int_deassertion_parity - handle a single parity AEU source
  *
  * @param p_hwfn
- * @param p_aeu - descriptor of an AEU bit which caused the
- *              parity
+ * @param p_aeu - descriptor of an AEU bit which caused the parity
+ * @param aeu_en_reg - address of the AEU enable register
  * @param bit_index
  */
 static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
 					 struct aeu_invert_reg_bit *p_aeu,
-					 u8 bit_index)
+					 u32 aeu_en_reg, u8 bit_index)
 {
-	u32 block_id = p_aeu->block_index;
+	u32 block_id = p_aeu->block_index, mask, val;
 
-	DP_INFO(p_hwfn->p_dev, "%s[%d] parity attention is set\n",
-		p_aeu->bit_name, bit_index);
-
-	if (block_id == MAX_BLOCK_ID)
-		return;
-
-	ecore_int_attn_print(p_hwfn, block_id,
-			     ATTN_TYPE_PARITY, false);
-
-	/* In A0, there's a single parity bit for several blocks */
-	if (block_id == BLOCK_BTB) {
-		ecore_int_attn_print(p_hwfn, BLOCK_OPTE,
-				     ATTN_TYPE_PARITY, false);
-		ecore_int_attn_print(p_hwfn, BLOCK_MCP,
-				     ATTN_TYPE_PARITY, false);
+	DP_NOTICE(p_hwfn->p_dev, false,
+		  "%s parity attention is set [address 0x%08x, bit %d]\n",
+		  p_aeu->bit_name, aeu_en_reg, bit_index);
+
+	if (block_id != MAX_BLOCK_ID) {
+		ecore_int_attn_print(p_hwfn, block_id, ATTN_TYPE_PARITY, false);
+
+		/* In A0, there's a single parity bit for several blocks */
+		if (block_id == BLOCK_BTB) {
+			ecore_int_attn_print(p_hwfn, BLOCK_OPTE,
+					     ATTN_TYPE_PARITY, false);
+			ecore_int_attn_print(p_hwfn, BLOCK_MCP,
+					     ATTN_TYPE_PARITY, false);
+		}
 	}
+
+	/* Prevent this parity error from being re-asserted */
+	mask = ~(0x1 << bit_index);
+	val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg);
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg, val & mask);
+	DP_INFO(p_hwfn, "`%s' - Disabled future parity errors\n",
+		p_aeu->bit_name);
 }
 
 /**
@@ -891,8 +897,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 						  u16 deasserted_bits)
 {
 	struct ecore_sb_attn_info *sb_attn_sw = p_hwfn->p_sb_attn;
-	u32 aeu_inv_arr[NUM_ATTN_REGS], aeu_mask;
-	bool b_parity = false;
+	u32 aeu_inv_arr[NUM_ATTN_REGS], aeu_mask, aeu_en, en;
 	u8 i, j, k, bit_idx;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
@@ -908,11 +913,11 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 	/* Handle parity attentions first */
 	for (i = 0; i < NUM_ATTN_REGS; i++) {
 		struct aeu_invert_reg *p_aeu = &sb_attn_sw->p_aeu_desc[i];
-		u32 en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
-				  MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
-				  i * sizeof(u32));
+		u32 parities;
 
-		u32 parities = sb_attn_sw->parity_mask[i] & aeu_inv_arr[i] & en;
+		aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 + i * sizeof(u32);
+		en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
+		parities = sb_attn_sw->parity_mask[i] & aeu_inv_arr[i] & en;
 
 		/* Skip register in which no parity bit is currently set */
 		if (!parities)
@@ -922,11 +927,9 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 			struct aeu_invert_reg_bit *p_bit = &p_aeu->bits[j];
 
 			if (ecore_int_is_parity_flag(p_hwfn, p_bit) &&
-			    !!(parities & (1 << bit_idx))) {
+			    !!(parities & (1 << bit_idx)))
 				ecore_int_deassertion_parity(p_hwfn, p_bit,
-							     bit_idx);
-				b_parity = true;
-			}
+							     aeu_en, bit_idx);
 
 			bit_idx += ATTENTION_LENGTH(p_bit->flags);
 		}
@@ -941,10 +944,13 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 			continue;
 
 		for (i = 0; i < NUM_ATTN_REGS; i++) {
-			u32 aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
-			    i * sizeof(u32) + k * sizeof(u32) * NUM_ATTN_REGS;
-			u32 en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
-			u32 bits = aeu_inv_arr[i] & en;
+			u32 bits;
+
+			aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
+				 i * sizeof(u32) +
+				 k * sizeof(u32) * NUM_ATTN_REGS;
+			en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
+			bits = aeu_inv_arr[i] & en;
 
 			/* Skip if no bit from this group is currently set */
 			if (!bits)
-- 
1.7.10.3

* [PATCH 18/53] net/qede/base: avoid possible race condition
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

There's a possible race in multiple VF scenarios for base driver users
that use the optional APIs ecore_iov_pf_get_and_clear_pending_events,
ecore_iov_pf_add_pending_events. If the client doesn't synchronize the two
calls, it's possible for the PF to clear a VF pending message indication
without ever getting it [as 'get & clear' isn't atomic], leading to a VF
timeout on the command.

The solution is to switch to a per-VF indication rather than having a
bitfield for the various VFs with pending events. As part of the solution,
the setting/clearing of the indications is done internally by the base driver.
As a result, ecore_iov_pf_add_pending_events is no longer needed and
ecore_iov_pf_get_and_clear_pending_events loses the 'and_clear' from its
name, as it's now a proper getter.

A VF would be considered 'pending' [i.e., get_pending_events() should
have '1' for it in its bitfield] beginning with the PF's base driver
recognizing a message sent by that VF [in SP_DPC] and ending only when
that VF message is processed.
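
A minimal sketch of the resulting getter semantics, using simplified
stand-in types (MAX_VFS and struct vf_info are illustrative only; the real
flag lives in struct ecore_iov_vf_mbx inside struct ecore_vf_info):

  #include <stdbool.h>
  #include <stdint.h>

  #define MAX_VFS 64 /* illustrative only */

  struct vf_info {
          /* Set when the PF recognizes a VF message, cleared only once
           * that message is actually processed.
           */
          bool b_pending_msg;
  };

  static struct vf_info vfs[MAX_VFS];

  /* A proper getter: build the bitmask from per-VF state without
   * clearing anything, so no event can be lost in a window between
   * 'get' and 'clear' as with the old non-atomic scheme.
   */
  static void get_pending_events(uint64_t *events)
  {
          int i;

          *events = 0;
          for (i = 0; i < MAX_VFS; i++)
                  if (vfs[i].b_pending_msg)
                          *events |= 1ULL << i;
  }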

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_iov_api.h |   16 +++--------
 drivers/net/qede/base/ecore_sriov.c   |   47 ++++++++++++++++++---------------
 drivers/net/qede/base/ecore_sriov.h   |    4 ++-
 3 files changed, 33 insertions(+), 34 deletions(-)

diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 4fce6b6..4ec6217 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -345,21 +345,13 @@ struct ecore_public_vf_info*
 			     u16 vfid, bool b_enabled_only);
 
 /**
- * @brief Set pending events bitmap for given @vfid
- *
- * @param p_hwfn
- * @param vfid
- */
-void ecore_iov_pf_add_pending_events(struct ecore_hwfn *p_hwfn, u8 vfid);
-
-/**
- * @brief Copy pending events bitmap in @events and clear
- *	  original copy of events
+ * @brief fills a bitmask of all VFs which have pending unhandled
+ *        messages.
  *
  * @param p_hwfn
  */
-void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
-					       u64 *events);
+void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
+				     u64 *events);
 
 /**
  * @brief Copy VF's message to PF's buffer
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b8e03d0..7ac533e 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -3755,8 +3755,7 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
 		ack_vfs[vfid / 32] |= (1 << (vfid % 32));
 		p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &=
 		    ~(1ULL << (rel_vf_id % 64));
-		p_hwfn->pf_iov_info->pending_events[rel_vf_id / 64] &=
-		    ~(1ULL << (rel_vf_id % 64));
+		p_vf->vf_mbx.b_pending_msg = false;
 	}
 
 	return rc;
@@ -3886,12 +3885,22 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 	mbx = &p_vf->vf_mbx;
 
 	/* ecore_iov_process_mbx_request */
-	DP_VERBOSE(p_hwfn,
-		   ECORE_MSG_IOV,
-		   "VF[%02x]: Processing mailbox message\n", p_vf->abs_vf_id);
+#ifndef CONFIG_ECORE_SW_CHANNEL
+	if (!mbx->b_pending_msg) {
+		DP_NOTICE(p_hwfn, true,
+			  "VF[%02x]: Trying to process mailbox message when none is pending\n",
+			  p_vf->abs_vf_id);
+		return;
+	}
+	mbx->b_pending_msg = false;
+#endif
 
 	mbx->first_tlv = mbx->req_virt->first_tlv;
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "VF[%02x]: Processing mailbox message [type %04x]\n",
+		   p_vf->abs_vf_id, mbx->first_tlv.tl.type);
+
 	OSAL_IOV_VF_MSG_TYPE(p_hwfn,
 			     p_vf->relative_vf_id,
 			     mbx->first_tlv.tl.type);
@@ -4016,26 +4025,20 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 #endif
 }
 
-void ecore_iov_pf_add_pending_events(struct ecore_hwfn *p_hwfn, u8 vfid)
+void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
+				     u64 *events)
 {
-	u64 add_bit = 1ULL << (vfid % 64);
+	int i;
 
-	/* TODO - add locking mechanisms [no atomics in ecore, so we can't
-	* add the lock inside the ecore_pf_iov struct].
-	*/
-	p_hwfn->pf_iov_info->pending_events[vfid / 64] |= add_bit;
-}
+	OSAL_MEM_ZERO(events, sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
 
-void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
-					       u64 *events)
-{
-	u64 *p_pending_events = p_hwfn->pf_iov_info->pending_events;
+	ecore_for_each_vf(p_hwfn, i) {
+		struct ecore_vf_info *p_vf;
 
-	/* TODO - Take a lock */
-	OSAL_MEMCPY(events, p_pending_events,
-		    sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
-	OSAL_MEMSET(p_pending_events, 0,
-		    sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
+		p_vf = &p_hwfn->pf_iov_info->vfs_array[i];
+		if (p_vf->vf_mbx.b_pending_msg)
+			events[i / 64] |= 1ULL << (i % 64);
+	}
 }
 
 static struct ecore_vf_info *
@@ -4069,6 +4072,8 @@ static enum _ecore_status_t ecore_sriov_vfpf_msg(struct ecore_hwfn *p_hwfn,
 	 */
 	p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) | vf_msg->lo;
 
+	p_vf->vf_mbx.b_pending_msg = true;
+
 	return OSAL_PF_VF_MSG(p_hwfn, p_vf->relative_vf_id);
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index d190126..8923730 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -45,6 +45,9 @@ struct ecore_iov_vf_mbx {
 	/* Address in VF where a pending message is located */
 	dma_addr_t		pending_req;
 
+	/* Message from VF awaits handling */
+	bool			b_pending_msg;
+
 	u8 *offset;
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
@@ -168,7 +171,6 @@ struct ecore_vf_info {
  */
 struct ecore_pf_iov {
 	struct ecore_vf_info	vfs_array[E4_MAX_NUM_VFS];
-	u64			pending_events[ECORE_VF_ARRAY_LENGTH];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 
 #ifndef REMOVE_DBG
-- 
1.7.10.3

* [PATCH 19/53] net/qede/base: revise management FW mbox access scheme
From: Rasesh Mody @ 2017-09-19  1:29 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Revise the locking scheme for access to the management FW (MFW) mailbox
(a simplified sketch follows this list):
 - add a new linked list called cmd_list to ecore_mcp_info that tracks all
   the mailbox commands sent to the management FW, including those still
   waiting for a response.
 - add a spinlock called cmd_lock to ecore_mcp_info, used to serialize
   the access to this cmd_list and to make sure that no mailbox command
   is still pending before a new mbox request is sent. It protects the
   access to the mailbox commands list and the sending of the commands.
 - add ecore_mcp_cmd_add|del|get_elem() APIs for new access scheme
 - remove ecore_mcp_mb_lock() and ecore_mcp_mb_unlock()
 - add a spinlock called link_lock to ecore_mcp_info, used for syncing
   SW link-changes and link-changes originating from attention context.
   This locking scheme prevents race conditions that may otherwise occur,
   such as during link status reporting.
 - Surround OSAL_{MUTEX,SPIN_LOCK}_{ALLOC,DEALLOC} with
   '#ifdef CONFIG_ECORE_LOCK_ALLOC'. If memory has to be allocated for the
   lock primitives, compile the driver with the CONFIG_ECORE_LOCK_ALLOC flag.
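
A rough sketch of the resulting send path, heavily simplified: a plain
pthread mutex stands in for the OSAL spinlock, hw_send()/hw_read_seq() are
hypothetical callbacks standing in for the real mailbox register accesses,
and retry limits and error paths are elided.

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdlib.h>

  struct cmd_elem {
          struct cmd_elem *next;
          uint16_t expected_seq_num;
          bool b_is_completed;
  };

  static pthread_mutex_t cmd_lock = PTHREAD_MUTEX_INITIALIZER;
  static struct cmd_elem *cmd_list; /* pending command sits at the head */
  static uint16_t drv_mb_seq;

  /* Track the command on the list under the lock, send it, then poll
   * for a response carrying the matching sequence number.
   */
  static int mcp_send_cmd(void (*hw_send)(uint16_t seq),
                          uint16_t (*hw_read_seq)(void))
  {
          struct cmd_elem *elem = calloc(1, sizeof(*elem));

          if (!elem)
                  return -1;

          pthread_mutex_lock(&cmd_lock);
          elem->expected_seq_num = ++drv_mb_seq;
          elem->next = cmd_list;
          cmd_list = elem;
          hw_send(elem->expected_seq_num);        /* mailbox write */
          pthread_mutex_unlock(&cmd_lock);

          for (;;) {                              /* retries elided */
                  pthread_mutex_lock(&cmd_lock);
                  if (hw_read_seq() == elem->expected_seq_num) {
                          elem->b_is_completed = true;
                          cmd_list = elem->next;  /* del from the list */
                          pthread_mutex_unlock(&cmd_lock);
                          free(elem);
                          return 0;
                  }
                  pthread_mutex_unlock(&cmd_lock);
          }
  }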

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h  |    6 +-
 drivers/net/qede/base/ecore_cxt.c |    4 +
 drivers/net/qede/base/ecore_dev.c |    4 +
 drivers/net/qede/base/ecore_hw.c  |    4 +
 drivers/net/qede/base/ecore_mcp.c |  461 +++++++++++++++++++++++++------------
 drivers/net/qede/base/ecore_mcp.h |   18 +-
 drivers/net/qede/base/ecore_spq.c |    5 +
 drivers/net/qede/base/ecore_vf.c  |    6 +
 8 files changed, 347 insertions(+), 161 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 29edfb2..f4c7028 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -345,8 +345,8 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *, dma_addr_t *,
 #define OSAL_IOV_VF_VPORT_UPDATE(hwfn, vfid, p_params, p_mask) 0
 #define OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(_dev_p, _resc_resp) 0
 #define OSAL_IOV_GET_OS_TYPE() 0
-#define OSAL_IOV_VF_MSG_TYPE(hwfn, vfid, vf_msg_type) 0
-#define OSAL_IOV_PF_RESP_TYPE(hwfn, vfid, pf_resp_type) 0
+#define OSAL_IOV_VF_MSG_TYPE(hwfn, vfid, vf_msg_type) nothing
+#define OSAL_IOV_PF_RESP_TYPE(hwfn, vfid, pf_resp_type) nothing
 
 u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
 		   u8 *input_buf, u32 max_size, u8 *unzip_buf);
@@ -434,7 +434,7 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
 #define OSAL_CRC32(crc, buf, length) qede_crc32(crc, buf, length)
 #define OSAL_CRC8_POPULATE(table, polynomial) nothing
 #define OSAL_CRC8(table, pdata, nbytes, crc) 0
-#define OSAL_MFW_TLV_REQ(p_hwfn) (0)
+#define OSAL_MFW_TLV_REQ(p_hwfn) nothing
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
 #define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 #endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 08a616e..73dc7cb 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1170,7 +1170,9 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 		p_mngr->vf_count = p_hwfn->p_dev->p_iov_info->total_vfs;
 
 	/* Initialize the dynamic ILT allocation mutex */
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_MUTEX_ALLOC(p_hwfn, &p_mngr->mutex);
+#endif
 	OSAL_MUTEX_INIT(&p_mngr->mutex);
 
 	/* Set the cxt mangr pointer priori to further allocations */
@@ -1219,7 +1221,9 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 	ecore_cid_map_free(p_hwfn);
 	ecore_cxt_src_t2_free(p_hwfn);
 	ecore_ilt_shadow_free(p_hwfn);
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_MUTEX_DEALLOC(&p_hwfn->p_cxt_mngr->mutex);
+#endif
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_cxt_mngr);
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 9af6348..1608b19 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -121,7 +121,9 @@ void ecore_init_struct(struct ecore_dev *p_dev)
 		p_hwfn->my_id = i;
 		p_hwfn->b_active = false;
 
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 		OSAL_MUTEX_ALLOC(p_hwfn, &p_hwfn->dmae_info.mutex);
+#endif
 		OSAL_MUTEX_INIT(&p_hwfn->dmae_info.mutex);
 	}
 
@@ -3862,7 +3864,9 @@ void ecore_hw_remove(struct ecore_dev *p_dev)
 		ecore_hw_hwfn_free(p_hwfn);
 		ecore_mcp_free(p_hwfn);
 
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 		OSAL_MUTEX_DEALLOC(&p_hwfn->dmae_info.mutex);
+#endif
 	}
 
 	ecore_iov_free_hw_info(p_dev);
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 2bcc32d..31e2776 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -64,7 +64,9 @@ enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn)
 	}
 
 	p_hwfn->p_ptt_pool = p_pool;
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_pool->lock);
+#endif
 	OSAL_SPIN_LOCK_INIT(&p_pool->lock);
 
 	return ECORE_SUCCESS;
@@ -83,8 +85,10 @@ void ecore_ptt_invalidate(struct ecore_hwfn *p_hwfn)
 
 void ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn)
 {
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	if (p_hwfn->p_ptt_pool)
 		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->p_ptt_pool->lock);
+#endif
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_ptt_pool);
 }
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index b334997..db44aa3 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -96,13 +96,80 @@ void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	}
 }
 
+struct ecore_mcp_cmd_elem {
+	osal_list_entry_t list;
+	struct ecore_mcp_mb_params *p_mb_params;
+	u16 expected_seq_num;
+	bool b_is_completed;
+};
+
+/* Must be called while cmd_lock is acquired */
+static struct ecore_mcp_cmd_elem *
+ecore_mcp_cmd_add_elem(struct ecore_hwfn *p_hwfn,
+		       struct ecore_mcp_mb_params *p_mb_params,
+		       u16 expected_seq_num)
+{
+	struct ecore_mcp_cmd_elem *p_cmd_elem = OSAL_NULL;
+
+	p_cmd_elem = OSAL_ZALLOC(p_hwfn->p_dev, GFP_ATOMIC,
+				 sizeof(*p_cmd_elem));
+	if (!p_cmd_elem) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate `struct ecore_mcp_cmd_elem'\n");
+		goto out;
+	}
+
+	p_cmd_elem->p_mb_params = p_mb_params;
+	p_cmd_elem->expected_seq_num = expected_seq_num;
+	OSAL_LIST_PUSH_HEAD(&p_cmd_elem->list, &p_hwfn->mcp_info->cmd_list);
+out:
+	return p_cmd_elem;
+}
+
+/* Must be called while cmd_lock is acquired */
+static void ecore_mcp_cmd_del_elem(struct ecore_hwfn *p_hwfn,
+				   struct ecore_mcp_cmd_elem *p_cmd_elem)
+{
+	OSAL_LIST_REMOVE_ENTRY(&p_cmd_elem->list, &p_hwfn->mcp_info->cmd_list);
+	OSAL_FREE(p_hwfn->p_dev, p_cmd_elem);
+}
+
+/* Must be called while cmd_lock is acquired */
+static struct ecore_mcp_cmd_elem *
+ecore_mcp_cmd_get_elem(struct ecore_hwfn *p_hwfn, u16 seq_num)
+{
+	struct ecore_mcp_cmd_elem *p_cmd_elem = OSAL_NULL;
+
+	OSAL_LIST_FOR_EACH_ENTRY(p_cmd_elem, &p_hwfn->mcp_info->cmd_list, list,
+				 struct ecore_mcp_cmd_elem) {
+		if (p_cmd_elem->expected_seq_num == seq_num)
+			return p_cmd_elem;
+	}
+
+	return OSAL_NULL;
+}
+
 enum _ecore_status_t ecore_mcp_free(struct ecore_hwfn *p_hwfn)
 {
 	if (p_hwfn->mcp_info) {
+		struct ecore_mcp_cmd_elem *p_cmd_elem = OSAL_NULL, *p_tmp;
+
+		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
+		OSAL_LIST_FOR_EACH_ENTRY_SAFE(p_cmd_elem, p_tmp,
+					      &p_hwfn->mcp_info->cmd_list, list,
+					      struct ecore_mcp_cmd_elem) {
+			ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+		}
+		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
+
 		OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info->mfw_mb_cur);
 		OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info->mfw_mb_shadow);
-		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->lock);
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->cmd_lock);
+		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->link_lock);
+#endif
 	}
+
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info);
 
 	return ECORE_SUCCESS;
@@ -157,8 +224,7 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
 	p_info->drv_pulse_seq = DRV_MB_RD(p_hwfn, p_ptt, drv_pulse_mb) &
 	    DRV_PULSE_SEQ_MASK;
 
-	p_info->mcp_hist = (u16)ecore_rd(p_hwfn, p_ptt,
-					  MISCS_REG_GENERIC_POR_0);
+	p_info->mcp_hist = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
 
 	return ECORE_SUCCESS;
 }
@@ -190,9 +256,15 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn,
 	if (!p_info->mfw_mb_shadow || !p_info->mfw_mb_addr)
 		goto err;
 
-	/* Initialize the MFW spinlock */
-	OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->lock);
-	OSAL_SPIN_LOCK_INIT(&p_info->lock);
+	/* Initialize the MFW spinlocks */
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->cmd_lock);
+	OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->link_lock);
+#endif
+	OSAL_SPIN_LOCK_INIT(&p_info->cmd_lock);
+	OSAL_SPIN_LOCK_INIT(&p_info->link_lock);
+
+	OSAL_LIST_INIT(&p_info->cmd_list);
 
 	return ECORE_SUCCESS;
 
@@ -202,62 +274,28 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn,
 	return ECORE_NOMEM;
 }
 
-/* Locks the MFW mailbox of a PF to ensure a single access.
- * The lock is achieved in most cases by holding a spinlock, causing other
- * threads to wait till a previous access is done.
- * In some cases (currently when a [UN]LOAD_REQ commands are sent), the single
- * access is achieved by setting a blocking flag, which will fail other
- * competing contexts to send their mailboxes.
- */
-static enum _ecore_status_t ecore_mcp_mb_lock(struct ecore_hwfn *p_hwfn,
-					      u32 cmd)
-{
-	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
-
-	/* The spinlock shouldn't be acquired when the mailbox command is
-	 * [UN]LOAD_REQ, since the engine is locked by the MFW, and a parallel
-	 * pending [UN]LOAD_REQ command of another PF together with a spinlock
-	 * (i.e. interrupts are disabled) - can lead to a deadlock.
-	 * It is assumed that for a single PF, no other mailbox commands can be
-	 * sent from another context while sending LOAD_REQ, and that any
-	 * parallel commands to UNLOAD_REQ can be cancelled.
-	 */
-	if (cmd == DRV_MSG_CODE_LOAD_DONE || cmd == DRV_MSG_CODE_UNLOAD_DONE)
-		p_hwfn->mcp_info->block_mb_sending = false;
+static void ecore_mcp_reread_offsets(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	u32 generic_por_0 = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
 
-	/* There's at least a single command that is sent by ecore during the
-	 * load sequence [expectation of MFW].
+	/* Use MCP history register to check if MCP reset occurred between init
+	 * time and now.
 	 */
-	if ((p_hwfn->mcp_info->block_mb_sending) &&
-	    (cmd != DRV_MSG_CODE_FEATURE_SUPPORT)) {
-		DP_NOTICE(p_hwfn, false,
-			  "Trying to send a MFW mailbox command [0x%x]"
-			  " in parallel to [UN]LOAD_REQ. Aborting.\n",
-			  cmd);
-		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
-		return ECORE_BUSY;
-	}
+	if (p_hwfn->mcp_info->mcp_hist != generic_por_0) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Rereading MCP offsets [mcp_hist 0x%08x, generic_por_0 0x%08x]\n",
+			   p_hwfn->mcp_info->mcp_hist, generic_por_0);
 
-	if (cmd == DRV_MSG_CODE_LOAD_REQ || cmd == DRV_MSG_CODE_UNLOAD_REQ) {
-		p_hwfn->mcp_info->block_mb_sending = true;
-		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
+		ecore_load_mcp_offsets(p_hwfn, p_ptt);
+		ecore_mcp_cmd_port_init(p_hwfn, p_ptt);
 	}
-
-	return ECORE_SUCCESS;
-}
-
-static void ecore_mcp_mb_unlock(struct ecore_hwfn *p_hwfn, u32 cmd)
-{
-	if (cmd != DRV_MSG_CODE_LOAD_REQ && cmd != DRV_MSG_CODE_UNLOAD_REQ)
-		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
 }
 
 enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
-	u32 seq = ++p_hwfn->mcp_info->drv_mb_seq;
-	u32 delay = CHIP_MCP_RESP_ITER_US;
-	u32 org_mcp_reset_seq, cnt = 0;
+	u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 #ifndef ASIC_ONLY
@@ -265,15 +303,14 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 		delay = EMUL_MCP_RESP_ITER_US;
 #endif
 
-	/* Ensure that only a single thread is accessing the mailbox at a
-	 * certain time.
-	 */
-	rc = ecore_mcp_mb_lock(p_hwfn, DRV_MSG_CODE_MCP_RESET);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	/* Ensure that only a single thread is accessing the mailbox */
+	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	/* Set drv command along with the updated sequence */
 	org_mcp_reset_seq = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
+
+	/* Set drv command along with the updated sequence */
+	ecore_mcp_reread_offsets(p_hwfn, p_ptt);
+	seq = ++p_hwfn->mcp_info->drv_mb_seq;
 	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (DRV_MSG_CODE_MCP_RESET | seq));
 
 	do {
@@ -293,73 +330,207 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 		rc = ECORE_AGAIN;
 	}
 
-	ecore_mcp_mb_unlock(p_hwfn, DRV_MSG_CODE_MCP_RESET);
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
 	return rc;
 }
 
-static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     u32 cmd, u32 param,
-					     u32 *o_mcp_resp,
-					     u32 *o_mcp_param)
+/* Must be called while cmd_lock is acquired */
+static bool ecore_mcp_has_pending_cmd(struct ecore_hwfn *p_hwfn)
 {
-	u32 delay = CHIP_MCP_RESP_ITER_US;
-	u32 max_retries = ECORE_DRV_MB_MAX_RETRIES;
-	u32 seq, cnt = 1, actual_mb_seq;
+	struct ecore_mcp_cmd_elem *p_cmd_elem = OSAL_NULL;
+
+	/* There is at most one pending command at a certain time, and if it
+	 * exists - it is placed at the HEAD of the list.
+	 */
+	if (!OSAL_LIST_IS_EMPTY(&p_hwfn->mcp_info->cmd_list)) {
+		p_cmd_elem = OSAL_LIST_FIRST_ENTRY(&p_hwfn->mcp_info->cmd_list,
+						   struct ecore_mcp_cmd_elem,
+						   list);
+		return !p_cmd_elem->b_is_completed;
+	}
+
+	return false;
+}
+
+/* Must be called while cmd_lock is acquired */
+static enum _ecore_status_t
+ecore_mcp_update_pending_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	struct ecore_mcp_mb_params *p_mb_params;
+	struct ecore_mcp_cmd_elem *p_cmd_elem;
+	u32 mcp_resp;
+	u16 seq_num;
+
+	mcp_resp = DRV_MB_RD(p_hwfn, p_ptt, fw_mb_header);
+	seq_num = (u16)(mcp_resp & FW_MSG_SEQ_NUMBER_MASK);
+
+	/* Return if no new non-handled response has been received */
+	if (seq_num != p_hwfn->mcp_info->drv_mb_seq)
+		return ECORE_AGAIN;
+
+	p_cmd_elem = ecore_mcp_cmd_get_elem(p_hwfn, seq_num);
+	if (!p_cmd_elem) {
+		DP_ERR(p_hwfn,
+		       "Failed to find a pending mailbox cmd that expects sequence number %d\n",
+		       seq_num);
+		return ECORE_UNKNOWN_ERROR;
+	}
+
+	p_mb_params = p_cmd_elem->p_mb_params;
+
+	/* Get the MFW response along with the sequence number */
+	p_mb_params->mcp_resp = mcp_resp;
+
+	/* Get the MFW param */
+	p_mb_params->mcp_param = DRV_MB_RD(p_hwfn, p_ptt, fw_mb_param);
+
+	/* Get the union data */
+	if (p_mb_params->p_data_dst != OSAL_NULL &&
+	    p_mb_params->data_dst_size) {
+		u32 union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
+				      OFFSETOF(struct public_drv_mb,
+					       union_data);
+		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
+				  union_data_addr, p_mb_params->data_dst_size);
+	}
+
+	p_cmd_elem->b_is_completed = true;
+
+	return ECORE_SUCCESS;
+}
+
+/* Must be called while cmd_lock is acquired */
+static void __ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
+				      struct ecore_ptt *p_ptt,
+				      struct ecore_mcp_mb_params *p_mb_params,
+				      u16 seq_num)
+{
+	union drv_union_data union_data;
+	u32 union_data_addr;
+
+	/* Set the union data */
+	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
+			  OFFSETOF(struct public_drv_mb, union_data);
+	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
+	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
+		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
+			    p_mb_params->data_src_size);
+	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
+			sizeof(union_data));
+
+	/* Set the drv param */
+	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_param, p_mb_params->param);
+
+	/* Set the drv command along with the sequence number */
+	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (p_mb_params->cmd | seq_num));
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "MFW mailbox: command 0x%08x param 0x%08x\n",
+		   (p_mb_params->cmd | seq_num), p_mb_params->param);
+}
+
+static enum _ecore_status_t
+_ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 struct ecore_mcp_mb_params *p_mb_params,
+			 u32 max_retries, u32 delay)
+{
+	struct ecore_mcp_cmd_elem *p_cmd_elem;
+	u32 cnt = 0;
+	u16 seq_num;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-		delay = EMUL_MCP_RESP_ITER_US;
-	/* There is a built-in delay of 100usec in each MFW response read */
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		max_retries /= 10;
-#endif
+	/* Wait until the mailbox is non-occupied */
+	do {
+		/* Exit the loop if there is no pending command, or if the
+		 * pending command is completed during this iteration.
+		 * The spinlock stays locked until the command is sent.
+		 */
 
-	/* Get actual driver mailbox sequence */
-	actual_mb_seq = DRV_MB_RD(p_hwfn, p_ptt, drv_mb_header) &
-	    DRV_MSG_SEQ_NUMBER_MASK;
+		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	/* Use MCP history register to check if MCP reset occurred between
-	 * init time and now.
-	 */
-	if (p_hwfn->mcp_info->mcp_hist !=
-	    ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0)) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Rereading MCP offsets\n");
-		ecore_load_mcp_offsets(p_hwfn, p_ptt);
-		ecore_mcp_cmd_port_init(p_hwfn, p_ptt);
+		if (!ecore_mcp_has_pending_cmd(p_hwfn))
+			break;
+
+		rc = ecore_mcp_update_pending_cmd(p_hwfn, p_ptt);
+		if (rc == ECORE_SUCCESS)
+			break;
+		else if (rc != ECORE_AGAIN)
+			goto err;
+
+		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
+		OSAL_UDELAY(delay);
+	} while (++cnt < max_retries);
+
+	if (cnt >= max_retries) {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW mailbox is occupied by an uncompleted command. Failed to send command 0x%08x [param 0x%08x].\n",
+			  p_mb_params->cmd, p_mb_params->param);
+		return ECORE_AGAIN;
 	}
-	seq = ++p_hwfn->mcp_info->drv_mb_seq;
 
-	/* Set drv param */
-	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_param, param);
+	/* Send the mailbox command */
+	ecore_mcp_reread_offsets(p_hwfn, p_ptt);
+	seq_num = ++p_hwfn->mcp_info->drv_mb_seq;
+	p_cmd_elem = ecore_mcp_cmd_add_elem(p_hwfn, p_mb_params, seq_num);
+	if (!p_cmd_elem) {
+		rc = ECORE_NOMEM;
+		goto err;
+	}
 
-	/* Set drv command along with the updated sequence */
-	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (cmd | seq));
+	__ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, seq_num);
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
+	/* Wait for the MFW response */
 	do {
-		/* Wait for MFW response */
+		/* Exit the loop if the command is already completed, or if the
+		 * command is completed during this iteration.
+		 * The spinlock stays locked until the list element is removed.
+		 */
+
 		OSAL_UDELAY(delay);
-		*o_mcp_resp = DRV_MB_RD(p_hwfn, p_ptt, fw_mb_header);
+		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
-		/* Give the FW up to 5 second (500*10ms) */
-	} while ((seq != (*o_mcp_resp & FW_MSG_SEQ_NUMBER_MASK)) &&
-		 (cnt++ < max_retries));
+		if (p_cmd_elem->b_is_completed)
+			break;
+
+		rc = ecore_mcp_update_pending_cmd(p_hwfn, p_ptt);
+		if (rc == ECORE_SUCCESS)
+			break;
+		else if (rc != ECORE_AGAIN)
+			goto err;
+
+		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
+	} while (++cnt < max_retries);
+
+	if (cnt >= max_retries) {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW failed to respond to command 0x%08x [param 0x%08x].\n",
+			  p_mb_params->cmd, p_mb_params->param);
+
+		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
+		ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	/* Is this a reply to our command? */
-	if (seq == (*o_mcp_resp & FW_MSG_SEQ_NUMBER_MASK)) {
-		*o_mcp_resp &= FW_MSG_CODE_MASK;
-		/* Get the MCP param */
-		*o_mcp_param = DRV_MB_RD(p_hwfn, p_ptt, fw_mb_param);
-	} else {
-		/* FW BUG! */
-		DP_ERR(p_hwfn, "MFW failed to respond [cmd 0x%x param 0x%x]\n",
-		       cmd, param);
-		*o_mcp_resp = 0;
-		rc = ECORE_AGAIN;
 		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_MFW_RESP_FAIL);
+		return ECORE_AGAIN;
 	}
+
+	ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
+		   p_mb_params->mcp_resp, p_mb_params->mcp_param,
+		   (cnt * delay) / 1000, (cnt * delay) % 1000);
+
+	/* Clear the sequence number from the MFW response */
+	p_mb_params->mcp_resp &= FW_MSG_CODE_MASK;
+
+	return ECORE_SUCCESS;
+
+err:
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 	return rc;
 }
 
@@ -368,9 +539,17 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
 			struct ecore_mcp_mb_params *p_mb_params)
 {
-	union drv_union_data union_data;
-	u32 union_data_addr;
-	enum _ecore_status_t rc;
+	osal_size_t union_data_size = sizeof(union drv_union_data);
+	u32 max_retries = ECORE_DRV_MB_MAX_RETRIES;
+	u32 delay = CHIP_MCP_RESP_ITER_US;
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		delay = EMUL_MCP_RESP_ITER_US;
+	/* There is a built-in delay of 100usec in each MFW response read */
+	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
+		max_retries /= 10;
+#endif
 
 	/* MCP not initialized */
 	if (!ecore_mcp_is_init(p_hwfn)) {
@@ -378,44 +557,17 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
-	if (p_mb_params->data_src_size > sizeof(union_data) ||
-	    p_mb_params->data_dst_size > sizeof(union_data)) {
+	if (p_mb_params->data_src_size > union_data_size ||
+	    p_mb_params->data_dst_size > union_data_size) {
 		DP_ERR(p_hwfn,
 		       "The provided size is larger than the union data size [src_size %u, dst_size %u, union_data_size %zu]\n",
 		       p_mb_params->data_src_size, p_mb_params->data_dst_size,
-		       sizeof(union_data));
+		       union_data_size);
 		return ECORE_INVAL;
 	}
 
-	union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
-			  OFFSETOF(struct public_drv_mb, union_data);
-
-	/* Ensure that only a single thread is accessing the mailbox at a
-	 * certain time.
-	 */
-	rc = ecore_mcp_mb_lock(p_hwfn, p_mb_params->cmd);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	OSAL_MEM_ZERO(&union_data, sizeof(union_data));
-	if (p_mb_params->p_data_src != OSAL_NULL && p_mb_params->data_src_size)
-		OSAL_MEMCPY(&union_data, p_mb_params->p_data_src,
-			    p_mb_params->data_src_size);
-	ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, &union_data,
-			sizeof(union_data));
-
-	rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
-			      p_mb_params->param, &p_mb_params->mcp_resp,
-			      &p_mb_params->mcp_param);
-
-	if (p_mb_params->p_data_dst != OSAL_NULL &&
-	    p_mb_params->data_dst_size)
-		ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
-				  union_data_addr, p_mb_params->data_dst_size);
-
-	ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
-
-	return rc;
+	return _ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
+					delay);
 }
 
 enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
@@ -809,9 +961,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 		DP_INFO(p_hwfn,
 			"MFW refused a load request due to HSI > 1. Resending with HSI = 1.\n");
 
-		/* The previous load request set the mailbox blocking */
-		p_hwfn->mcp_info->block_mb_sending = false;
-
 		in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_1;
 		OSAL_MEM_ZERO(&out_params, sizeof(out_params));
 		rc = __ecore_mcp_load_req(p_hwfn, p_ptt, &in_params,
@@ -820,9 +969,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 			return rc;
 	} else if (out_params.load_code ==
 		   FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE) {
-		/* The previous load request set the mailbox blocking */
-		p_hwfn->mcp_info->block_mb_sending = false;
-
 		if (ecore_mcp_can_force_load(in_params.drv_role,
 					     out_params.exist_drv_role,
 					     p_params->override_force_load)) {
@@ -1067,6 +1213,9 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	u8 max_bw, min_bw;
 	u32 status = 0;
 
+	/* Prevent SW/attentions from doing this at the same time */
+	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->link_lock);
+
 	p_link = &p_hwfn->mcp_info->link_output;
 	OSAL_MEMSET(p_link, 0, sizeof(*p_link));
 	if (!b_reset) {
@@ -1082,7 +1231,7 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	} else {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Resetting link indications\n");
-		return;
+		goto out;
 	}
 
 	if (p_hwfn->b_drv_link_init)
@@ -1197,6 +1346,8 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 		ecore_mcp_read_eee_config(p_hwfn, p_ptt, p_link);
 
 	OSAL_LINK_UPDATE(p_hwfn, p_ptt);
+out:
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->link_lock);
 }
 
 enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
@@ -1266,9 +1417,13 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 		return rc;
 	}
 
-	/* Reset the link status if needed */
-	if (!b_up)
-		ecore_mcp_handle_link_change(p_hwfn, p_ptt, true);
+	/* Mimic link-change attention, done for several reasons:
+	 *  - On reset, there's no guarantee MFW would trigger
+	 *    an attention.
+	 *  - On initialization, older MFWs might not indicate link change
+	 *    during LFA, so we'll never get an UP indication.
+	 */
+	ecore_mcp_handle_link_change(p_hwfn, p_ptt, !b_up);
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index b84f0d1..6c91046 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -32,10 +32,18 @@
 				  ecore_device_num_engines((_p_hwfn)->p_dev)))
 
 struct ecore_mcp_info {
-	/* Spinlock used for protecting the access to the MFW mailbox */
-	osal_spinlock_t lock;
-	/* Flag to indicate whether sending a MFW mailbox is forbidden */
-	bool block_mb_sending;
+	/* List for mailbox commands which were sent and wait for a response */
+	osal_list_t cmd_list;
+
+	/* Spinlock used for protecting the access to the mailbox commands list
+	 * and the sending of the commands.
+	 */
+	osal_spinlock_t cmd_lock;
+
+	/* Spinlock used for syncing SW link-changes and link-changes
+	 * originating from attention context.
+	 */
+	osal_spinlock_t link_lock;
 
 	/* Address of the MCP public area */
 	u32 public_base;
@@ -60,7 +68,7 @@ struct ecore_mcp_info {
 	u8 *mfw_mb_cur;
 	u8 *mfw_mb_shadow;
 	u16 mfw_mb_length;
-	u16 mcp_hist;
+	u32 mcp_hist;
 
 	/* Capabilities negotiated with the MFW */
 	u32 capabilities;
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 25d573e..29ba660 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -536,7 +536,9 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
 	p_spq->p_virt = p_virt;
 	p_spq->p_phys = p_phys;
 
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_spq->lock);
+#endif
 
 	p_hwfn->p_spq = p_spq;
 	return ECORE_SUCCESS;
@@ -565,7 +567,10 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn)
 	}
 
 	ecore_chain_free(p_hwfn->p_dev, &p_spq->chain);
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_SPIN_LOCK_DEALLOC(&p_spq->lock);
+#endif
+
 	OSAL_FREE(p_hwfn->p_dev, p_spq);
 }
 
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 0a26141..5002ada 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -453,7 +453,9 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
 		   p_iov->bulletin.p_virt, (unsigned long)p_iov->bulletin.phys,
 		   p_iov->bulletin.size);
 
+#ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_MUTEX_ALLOC(p_hwfn, &p_iov->mutex);
+#endif
 	OSAL_MUTEX_INIT(&p_iov->mutex);
 
 	p_hwfn->vf_iov_info = p_iov;
@@ -1349,6 +1351,10 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 				       p_iov->bulletin.phys, size);
 	}
 
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_MUTEX_DEALLOC(&p_iov->mutex);
+#endif
+
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->vf_iov_info);
 
 	return rc;
-- 
1.7.10.3


* [PATCH 20/53] net/qede/base: remove helper functions/structures
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (18 preceding siblings ...)
  2017-09-19  1:29 ` [PATCH 19/53] net/qede/base: revise management FW mbox access scheme Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 21/53] net/qede/base: initialize resc lock/unlock params Rasesh Mody
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

 - Remove the additional wrapper function ecore_mcp_nvm_command() and
   instead use the ecore_mcp_nvm_wr_cmd(), ecore_mcp_nvm_rd_cmd() or
   ecore_mcp_cmd() APIs directly, as appropriate (see the sketch below).
 - Remove struct ecore_mcp_nvm_params.
 - Add the new NVM command ECORE_EXT_PHY_FW_UPGRADE and fix the expected
   management FW responses in ecore_mcp_nvm_write().
 - Fail the NVM write process on any failing partial write.

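As a minimal usage sketch (not part of the patch; p_hwfn, p_ptt and
nvm_offset are assumed to already be in scope), a read that previously
went through the wrapper now invokes the command API directly:

	u32 resp, param, buf_size;
	u32 buf[MCP_DRV_NVM_BUF_LEN / sizeof(u32)];
	enum _ecore_status_t rc;

	/* Issue the NVM read directly; there is no longer a
	 * struct ecore_mcp_nvm_params to zero and fill, and no
	 * ECORE_MCP_NVM_RD type to select.
	 */
	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_READ_NVRAM,
				  nvm_offset, &resp, &param, &buf_size, buf);
	if (rc != ECORE_SUCCESS || resp != FW_MSG_CODE_NVM_OK)
		rc = ECORE_UNKNOWN_ERROR;
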
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h         |    1 +
 drivers/net/qede/base/ecore_mcp.c     |  319 ++++++++++++++-------------------
 drivers/net/qede/base/ecore_mcp.h     |   50 ------
 drivers/net/qede/base/ecore_mcp_api.h |  119 ++++++------
 drivers/net/qede/base/mcp_public.h    |    1 +
 5 files changed, 186 insertions(+), 304 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index a3edcb0..818a34b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -66,6 +66,7 @@ enum ecore_nvm_cmd {
 	ECORE_NVM_READ_NVRAM = DRV_MSG_CODE_NVM_READ_NVRAM,
 	ECORE_NVM_WRITE_NVRAM = DRV_MSG_CODE_NVM_WRITE_NVRAM,
 	ECORE_NVM_DEL_FILE = DRV_MSG_CODE_NVM_DEL_FILE,
+	ECORE_EXT_PHY_FW_UPGRADE = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE,
 	ECORE_NVM_SET_SECURE_MODE = DRV_MSG_CODE_SET_SECURE_MODE,
 	ECORE_PHY_RAW_READ = DRV_MSG_CODE_PHY_RAW_READ,
 	ECORE_PHY_RAW_WRITE = DRV_MSG_CODE_PHY_RAW_WRITE,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index db44aa3..24f65cf 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2181,42 +2181,6 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 	return &p_hwfn->mcp_info->func_info;
 }
 
-enum _ecore_status_t ecore_mcp_nvm_command(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   struct ecore_mcp_nvm_params *params)
-{
-	enum _ecore_status_t rc;
-
-	switch (params->type) {
-	case ECORE_MCP_NVM_RD:
-		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, params->nvm_common.cmd,
-					  params->nvm_common.offset,
-					  &params->nvm_common.resp,
-					  &params->nvm_common.param,
-					  params->nvm_rd.buf_size,
-					  params->nvm_rd.buf);
-		break;
-	case ECORE_MCP_CMD:
-		rc = ecore_mcp_cmd(p_hwfn, p_ptt, params->nvm_common.cmd,
-				   params->nvm_common.offset,
-				   &params->nvm_common.resp,
-				   &params->nvm_common.param);
-		break;
-	case ECORE_MCP_NVM_WR:
-		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, params->nvm_common.cmd,
-					  params->nvm_common.offset,
-					  &params->nvm_common.resp,
-					  &params->nvm_common.param,
-					  params->nvm_wr.buf_size,
-					  params->nvm_wr.buf);
-		break;
-	default:
-		rc = ECORE_NOTIMPL;
-		break;
-	}
-	return rc;
-}
-
 int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt, u32 personalities)
 {
@@ -2523,7 +2487,7 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	u32 bytes_left, offset, bytes_to_copy, buf_size;
-	struct ecore_mcp_nvm_params params;
+	u32 nvm_offset, resp, param;
 	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
@@ -2531,22 +2495,29 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 	if (!p_ptt)
 		return ECORE_BUSY;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
 	bytes_left = len;
 	offset = 0;
-	params.type = ECORE_MCP_NVM_RD;
-	params.nvm_rd.buf_size = &buf_size;
-	params.nvm_common.cmd = DRV_MSG_CODE_NVM_READ_NVRAM;
 	while (bytes_left > 0) {
 		bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
 					   MCP_DRV_NVM_BUF_LEN);
-		params.nvm_common.offset = (addr + offset) |
-		    (bytes_to_copy << DRV_MB_PARAM_NVM_LEN_SHIFT);
-		params.nvm_rd.buf = (u32 *)(p_buf + offset);
-		rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-		if (rc != ECORE_SUCCESS || (params.nvm_common.resp !=
-					    FW_MSG_CODE_NVM_OK)) {
-			DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
+		nvm_offset = (addr + offset) | (bytes_to_copy <<
+						DRV_MB_PARAM_NVM_LEN_SHIFT);
+		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
+					  DRV_MSG_CODE_NVM_READ_NVRAM,
+					  nvm_offset, &resp, &param, &buf_size,
+					  (u32 *)(p_buf + offset));
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_dev, false,
+				  "ecore_mcp_nvm_rd_cmd() failed, rc = %d\n",
+				  rc);
+			resp = FW_MSG_CODE_ERROR;
+			break;
+		}
+
+		if (resp != FW_MSG_CODE_NVM_OK) {
+			DP_NOTICE(p_dev, false,
+				  "nvm read failed, resp = 0x%08x\n", resp);
+			rc = ECORE_UNKNOWN_ERROR;
 			break;
 		}
 
@@ -2554,14 +2525,14 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
 		 */
 		if (bytes_left % 0x1000 <
-		    (bytes_left - *params.nvm_rd.buf_size) % 0x1000)
+		    (bytes_left - buf_size) % 0x1000)
 			OSAL_MSLEEP(1);
 
-		offset += *params.nvm_rd.buf_size;
-		bytes_left -= *params.nvm_rd.buf_size;
+		offset += buf_size;
+		bytes_left -= buf_size;
 	}
 
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2571,26 +2542,23 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 					u32 addr, u8 *p_buf, u32 len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
+	u32 resp, param;
 	enum _ecore_status_t rc;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.type = ECORE_MCP_NVM_RD;
-	params.nvm_rd.buf_size = &len;
-	params.nvm_common.cmd = (cmd == ECORE_PHY_CORE_READ) ?
-	    DRV_MSG_CODE_PHY_CORE_READ : DRV_MSG_CODE_PHY_RAW_READ;
-	params.nvm_common.offset = addr;
-	params.nvm_rd.buf = (u32 *)p_buf;
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
+	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
+				  (cmd == ECORE_PHY_CORE_READ) ?
+				  DRV_MSG_CODE_PHY_CORE_READ :
+				  DRV_MSG_CODE_PHY_RAW_READ,
+				  addr, &resp, &param, &len, (u32 *)p_buf);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
 
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2599,14 +2567,12 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
 	OSAL_MEMCPY(p_buf, &p_dev->mcp_nvm_resp, sizeof(p_dev->mcp_nvm_resp));
 	ecore_ptt_release(p_hwfn, p_ptt);
 
@@ -2616,19 +2582,16 @@ enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf)
 enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
+	u32 resp, param;
 	enum _ecore_status_t rc;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.type = ECORE_MCP_CMD;
-	params.nvm_common.cmd = DRV_MSG_CODE_NVM_DEL_FILE;
-	params.nvm_common.offset = addr;
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_DEL_FILE, addr,
+			   &resp, &param);
+	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2638,19 +2601,16 @@ enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
 						  u32 addr)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
+	u32 resp, param;
 	enum _ecore_status_t rc;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.type = ECORE_MCP_CMD;
-	params.nvm_common.cmd = DRV_MSG_CODE_NVM_PUT_FILE_BEGIN;
-	params.nvm_common.offset = addr;
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_PUT_FILE_BEGIN, addr,
+			   &resp, &param);
+	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2662,37 +2622,57 @@ enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
 enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len)
 {
+	u32 buf_idx, buf_size, nvm_cmd, nvm_offset, resp, param;
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	enum _ecore_status_t rc = ECORE_INVAL;
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
-	u32 buf_idx, buf_size;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.type = ECORE_MCP_NVM_WR;
-	if (cmd == ECORE_PUT_FILE_DATA)
-		params.nvm_common.cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA;
-	else
-		params.nvm_common.cmd = DRV_MSG_CODE_NVM_WRITE_NVRAM;
+	switch (cmd) {
+	case ECORE_PUT_FILE_DATA:
+		nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA;
+		break;
+	case ECORE_NVM_WRITE_NVRAM:
+		nvm_cmd = DRV_MSG_CODE_NVM_WRITE_NVRAM;
+		break;
+	case ECORE_EXT_PHY_FW_UPGRADE:
+		nvm_cmd = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, true, "Invalid nvm write command 0x%x\n",
+			  cmd);
+		rc = ECORE_INVAL;
+		goto out;
+	}
+
 	buf_idx = 0;
 	while (buf_idx < len) {
 		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
 				      MCP_DRV_NVM_BUF_LEN);
-		params.nvm_common.offset = ((buf_size <<
-					     DRV_MB_PARAM_NVM_LEN_SHIFT)
-					    | addr) + buf_idx;
-		params.nvm_wr.buf_size = buf_size;
-		params.nvm_wr.buf = (u32 *)&p_buf[buf_idx];
-		rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-		if (rc != ECORE_SUCCESS ||
-		    ((params.nvm_common.resp != FW_MSG_CODE_NVM_OK) &&
-		     (params.nvm_common.resp !=
-		      FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK)))
-			DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
+		nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_SHIFT) | addr) +
+				buf_idx;
+		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset,
+					  &resp, &param, buf_size,
+					  (u32 *)&p_buf[buf_idx]);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_dev, false,
+				  "ecore_mcp_nvm_write() failed, rc = %d\n",
+				  rc);
+			resp = FW_MSG_CODE_ERROR;
+			break;
+		}
+
+		if (resp != FW_MSG_CODE_OK &&
+		    resp != FW_MSG_CODE_NVM_OK &&
+		    resp != FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK) {
+			DP_NOTICE(p_dev, false,
+				  "nvm write failed, resp = 0x%08x\n", resp);
+			rc = ECORE_UNKNOWN_ERROR;
+			break;
+		}
 
 		/* This can be a lengthy process, and it's possible scheduler
 		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
@@ -2704,7 +2684,8 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 		buf_idx += buf_size;
 	}
 
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	p_dev->mcp_nvm_resp = resp;
+out:
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2714,25 +2695,21 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
+	u32 resp, param, nvm_cmd;
 	enum _ecore_status_t rc;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.type = ECORE_MCP_NVM_WR;
-	params.nvm_wr.buf_size = len;
-	params.nvm_common.cmd = (cmd == ECORE_PHY_CORE_WRITE) ?
-	    DRV_MSG_CODE_PHY_CORE_WRITE : DRV_MSG_CODE_PHY_RAW_WRITE;
-	params.nvm_common.offset = addr;
-	params.nvm_wr.buf = (u32 *)p_buf;
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
+	nvm_cmd = (cmd == ECORE_PHY_CORE_WRITE) ?  DRV_MSG_CODE_PHY_CORE_WRITE :
+			DRV_MSG_CODE_PHY_RAW_WRITE;
+	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, addr,
+				  &resp, &param, len, (u32 *)p_buf);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2742,20 +2719,17 @@ enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
 						   u32 addr)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_mcp_nvm_params params;
 	struct ecore_ptt *p_ptt;
+	u32 resp, param;
 	enum _ecore_status_t rc;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.type = ECORE_MCP_CMD;
-	params.nvm_common.cmd = DRV_MSG_CODE_SET_SECURE_MODE;
-	params.nvm_common.offset = addr;
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-	p_dev->mcp_nvm_resp = params.nvm_common.resp;
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_SECURE_MODE, addr,
+			   &resp, &param);
+	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -2766,42 +2740,37 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 					    u32 port, u32 addr, u32 offset,
 					    u32 len, u8 *p_buf)
 {
-	struct ecore_mcp_nvm_params params;
+	u32 bytes_left, bytes_to_copy, buf_size, nvm_offset;
+	u32 resp, param;
 	enum _ecore_status_t rc;
-	u32 bytes_left, bytes_to_copy, buf_size;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.nvm_common.offset =
-		(port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
-		(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
+	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
+			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
 	addr = offset;
 	offset = 0;
 	bytes_left = len;
-	params.type = ECORE_MCP_NVM_RD;
-	params.nvm_rd.buf_size = &buf_size;
-	params.nvm_common.cmd = DRV_MSG_CODE_TRANSCEIVER_READ;
 	while (bytes_left > 0) {
 		bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
 					   MAX_I2C_TRANSACTION_SIZE);
-		params.nvm_rd.buf = (u32 *)(p_buf + offset);
-		params.nvm_common.offset &=
-			(DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
-			 DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		params.nvm_common.offset |=
-			((addr + offset) <<
-			 DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
-		params.nvm_common.offset |=
-			(bytes_to_copy << DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
-		rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-		if ((params.nvm_common.resp & FW_MSG_CODE_MASK) ==
+		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
+			       DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
+		nvm_offset |= ((addr + offset) <<
+				DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
+		nvm_offset |= (bytes_to_copy <<
+			       DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
+		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
+					  DRV_MSG_CODE_TRANSCEIVER_READ,
+					  nvm_offset, &resp, &param, &buf_size,
+					  (u32 *)(p_buf + offset));
+		if ((resp & FW_MSG_CODE_MASK) ==
 		    FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT) {
 			return ECORE_NODEV;
-		} else if ((params.nvm_common.resp & FW_MSG_CODE_MASK) !=
+		} else if ((resp & FW_MSG_CODE_MASK) !=
 			   FW_MSG_CODE_TRANSCEIVER_DIAG_OK)
 			return ECORE_UNKNOWN_ERROR;
 
-		offset += *params.nvm_rd.buf_size;
-		bytes_left -= *params.nvm_rd.buf_size;
+		offset += buf_size;
+		bytes_left -= buf_size;
 	}
 
 	return ECORE_SUCCESS;
@@ -2812,35 +2781,28 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 					     u32 port, u32 addr, u32 offset,
 					     u32 len, u8 *p_buf)
 {
-	struct ecore_mcp_nvm_params params;
+	u32 buf_idx, buf_size, nvm_offset, resp, param;
 	enum _ecore_status_t rc;
-	u32 buf_idx, buf_size;
-
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.nvm_common.offset =
-		(port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
-		(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
-	params.type = ECORE_MCP_NVM_WR;
-	params.nvm_common.cmd = DRV_MSG_CODE_TRANSCEIVER_WRITE;
+
+	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
+			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
 	buf_idx = 0;
 	while (buf_idx < len) {
 		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
 				      MAX_I2C_TRANSACTION_SIZE);
-		params.nvm_common.offset &=
-			(DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
-			 DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		params.nvm_common.offset |=
-			((offset + buf_idx) <<
-			 DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
-		params.nvm_common.offset |=
-			(buf_size << DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
-		params.nvm_wr.buf_size = buf_size;
-		params.nvm_wr.buf = (u32 *)&p_buf[buf_idx];
-		rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
-		if ((params.nvm_common.resp & FW_MSG_CODE_MASK) ==
+		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
+				 DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
+		nvm_offset |= ((offset + buf_idx) <<
+				 DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
+		nvm_offset |= (buf_size << DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
+		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
+					  DRV_MSG_CODE_TRANSCEIVER_WRITE,
+					  nvm_offset, &resp, &param, buf_size,
+					  (u32 *)&p_buf[buf_idx]);
+		if ((resp & FW_MSG_CODE_MASK) ==
 		    FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT) {
 			return ECORE_NODEV;
-		} else if ((params.nvm_common.resp & FW_MSG_CODE_MASK) !=
+		} else if ((resp & FW_MSG_CODE_MASK) !=
 			   FW_MSG_CODE_TRANSCEIVER_DIAG_OK)
 			return ECORE_UNKNOWN_ERROR;
 
@@ -2988,26 +2950,19 @@ enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
 	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	struct bist_nvm_image_att *p_image_att, u32 image_index)
 {
-	struct ecore_mcp_nvm_params params;
+	u32 buf_size, nvm_offset, resp, param;
 	enum _ecore_status_t rc;
-	u32 buf_size;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
-	params.nvm_common.offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
+	nvm_offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
 				    DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
-	params.nvm_common.offset |= (image_index <<
-				    DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT);
-
-	params.type = ECORE_MCP_NVM_RD;
-	params.nvm_rd.buf_size = &buf_size;
-	params.nvm_common.cmd = DRV_MSG_CODE_BIST_TEST;
-	params.nvm_rd.buf = (u32 *)p_image_att;
-
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
+	nvm_offset |= (image_index << DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT);
+	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+				  nvm_offset, &resp, &param, &buf_size,
+				  (u32 *)p_image_att);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (((params.nvm_common.resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+	if (((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
 	    (p_image_att->return_code != 1))
 		rc = ECORE_UNKNOWN_ERROR;
 
@@ -3058,23 +3013,17 @@ enum _ecore_status_t ecore_mcp_get_mba_versions(
 	struct ecore_ptt *p_ptt,
 	struct ecore_mba_vers *p_mba_vers)
 {
-	struct ecore_mcp_nvm_params params;
+	u32 buf_size, resp, param;
 	enum _ecore_status_t rc;
-	u32 buf_size;
 
-	OSAL_MEM_ZERO(&params, sizeof(params));
-	params.type = ECORE_MCP_NVM_RD;
-	params.nvm_common.cmd = DRV_MSG_CODE_GET_MBA_VERSION;
-	params.nvm_common.offset = 0;
-	params.nvm_rd.buf = &p_mba_vers->mba_vers[0];
-	params.nvm_rd.buf_size = &buf_size;
-	rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
+	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MBA_VERSION,
+				  0, &resp, &param, &buf_size,
+				  &p_mba_vers->mba_vers[0]);
 
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if ((params.nvm_common.resp & FW_MSG_CODE_MASK) !=
-	    FW_MSG_CODE_NVM_OK)
+	if ((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_NVM_OK)
 		rc = ECORE_UNKNOWN_ERROR;
 
 	if (buf_size != MCP_DRV_NVM_BUF_LEN)
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 6c91046..dae0720 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -263,56 +263,6 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
 /**
- * @brief - Sends an NVM write command request to the MFW with
- *          payload.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param cmd - Command: Either DRV_MSG_CODE_NVM_WRITE_NVRAM or
- *            DRV_MSG_CODE_NVM_PUT_FILE_DATA
- * @param param - [0:23] - Offset [24:31] - Size
- * @param o_mcp_resp - MCP response
- * @param o_mcp_param - MCP response param
- * @param i_txn_size -  Buffer size
- * @param i_buf - Pointer to the buffer
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt,
-					  u32 cmd,
-					  u32 param,
-					  u32 *o_mcp_resp,
-					  u32 *o_mcp_param,
-					  u32 i_txn_size,
-					  u32 *i_buf);
-
-/**
- * @brief - Sends an NVM read command request to the MFW to get
- *        a buffer.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param cmd - Command: DRV_MSG_CODE_NVM_GET_FILE_DATA or
- *            DRV_MSG_CODE_NVM_READ_NVRAM commands
- * @param param - [0:23] - Offset [24:31] - Size
- * @param o_mcp_resp - MCP response
- * @param o_mcp_param - MCP response param
- * @param o_txn_size -  Buffer size output
- * @param o_buf - Pointer to the buffer returned by the MFW.
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt,
-					  u32 cmd,
-					  u32 param,
-					  u32 *o_mcp_resp,
-					  u32 *o_mcp_param,
-					  u32 *o_txn_size,
-					  u32 *o_buf);
-
-/**
  * @brief indicates whether the MFW objects [under mcp_info] are accessible
  *
  * @param p_hwfn
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index ac889f9..cc5a43e 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -113,37 +113,6 @@ struct ecore_mcp_function_info {
 	u16 mtu;
 };
 
-struct ecore_mcp_nvm_common {
-	u32 offset;
-	u32 param;
-	u32 resp;
-	u32 cmd;
-};
-
-struct ecore_mcp_nvm_rd {
-	u32 *buf_size;
-	u32 *buf;
-};
-
-struct ecore_mcp_nvm_wr {
-	u32 buf_size;
-	u32 *buf;
-};
-
-struct ecore_mcp_nvm_params {
-#define ECORE_MCP_CMD		(1 << 0)
-#define ECORE_MCP_NVM_RD	(1 << 1)
-#define ECORE_MCP_NVM_WR	(1 << 2)
-	u8 type;
-
-	struct ecore_mcp_nvm_common nvm_common;
-
-	union {
-		struct ecore_mcp_nvm_rd nvm_rd;
-		struct ecore_mcp_nvm_wr nvm_wr;
-	};
-};
-
 #ifndef __EXTRACT__LINUX__
 enum ecore_nvm_images {
 	ECORE_NVM_IMAGE_ISCSI_CFG,
@@ -659,44 +628,6 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 *ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn);
 #endif
 
-/**
- * @brief - Function for reading/manipulating the nvram. Following are supported
- *          functionalities.
- *          1. Read: Read the specified nvram offset.
- *             input values:
- *               type   - ECORE_MCP_NVM_RD
- *               cmd    - command code (e.g. DRV_MSG_CODE_NVM_READ_NVRAM)
- *               offset - nvm offset
- *
- *             output values:
- *               buf      - buffer
- *               buf_size - buffer size
- *
- *          2. Write: Write the data at the specified nvram offset
- *             input values:
- *               type     - ECORE_MCP_NVM_WR
- *               cmd      - command code (e.g. DRV_MSG_CODE_NVM_WRITE_NVRAM)
- *               offset   - nvm offset
- *               buf      - buffer
- *               buf_size - buffer size
- *
- *          3. Command: Send the NVM command to MCP.
- *             input values:
- *               type   - ECORE_MCP_CMD
- *               cmd    - command code (e.g. DRV_MSG_CODE_NVM_DEL_FILE)
- *               offset - nvm offset
- *
- *
- * @param p_hwfn
- * @param p_ptt
- * @param params
- *
- * @return ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_command(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   struct ecore_mcp_nvm_params *params);
-
 #ifndef LINUX_REMOVE
 /**
  * @brief - count number of function with a matching personality on engine.
@@ -940,6 +871,56 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 			   u8 *p_buf, u32 len);
 
 /**
+ * @brief - Sends an NVM write command request to the MFW with
+ *          payload.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param cmd - Command: Either DRV_MSG_CODE_NVM_WRITE_NVRAM or
+ *            DRV_MSG_CODE_NVM_PUT_FILE_DATA
+ * @param param - [0:23] - Offset [24:31] - Size
+ * @param o_mcp_resp - MCP response
+ * @param o_mcp_param - MCP response param
+ * @param i_txn_size -  Buffer size
+ * @param i_buf - Pointer to the buffer
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  u32 cmd,
+					  u32 param,
+					  u32 *o_mcp_resp,
+					  u32 *o_mcp_param,
+					  u32 i_txn_size,
+					  u32 *i_buf);
+
+/**
+ * @brief - Sends an NVM read command request to the MFW to get
+ *        a buffer.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param cmd - Command: DRV_MSG_CODE_NVM_GET_FILE_DATA or
+ *            DRV_MSG_CODE_NVM_READ_NVRAM commands
+ * @param param - [0:23] - Offset [24:31] - Size
+ * @param o_mcp_resp - MCP response
+ * @param o_mcp_param - MCP response param
+ * @param o_txn_size -  Buffer size output
+ * @param o_buf - Pointer to the buffer returned by the MFW.
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  u32 cmd,
+					  u32 param,
+					  u32 *o_mcp_resp,
+					  u32 *o_mcp_param,
+					  u32 *o_txn_size,
+					  u32 *o_buf);
+
+/**
  * @brief Read from sfp
  *
  *  @param p_hwfn - hw function
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index af6a45e..8484704 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1616,6 +1616,7 @@ struct public_drv_mb {
 #define FW_MSG_CODE_SET_SECURE_MODE_OK		0x00140000
 #define FW_MSG_MODE_PHY_PRIVILEGE_ERROR		0x00150000
 #define FW_MSG_CODE_OK				0x00160000
+#define FW_MSG_CODE_ERROR			0x00170000
 #define FW_MSG_CODE_LED_MODE_INVALID		0x00170000
 #define FW_MSG_CODE_PHY_DIAG_OK			0x00160000
 #define FW_MSG_CODE_PHY_DIAG_ERROR		0x00170000
-- 
1.7.10.3


* [PATCH 21/53] net/qede/base: initialize resc lock/unlock params
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (19 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 20/53] net/qede/base: remove helper functions/structures Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 22/53] net/qede/base: rename MFW get/set field defines Rasesh Mody
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add a function that provides default initialization for the resource
lock/unlock parameters. Change the acquisition flows that use such
resources to call this function, as sketched below.

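A rough caller sketch, mirroring the updated ecore_hw_get_resc() call
site (p_hwfn and p_ptt are assumed to be in scope):

	struct ecore_resc_unlock_params resc_unlock_params;
	struct ecore_resc_lock_params resc_lock_params;
	enum _ecore_status_t rc;

	/* Zero both structs and fill the non-permanent defaults:
	 * ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT retries, spaced
	 * ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT usec apart, sleeping
	 * between retries.
	 */
	ecore_mcp_resc_lock_default_init(p_hwfn, &resc_lock_params,
					 &resc_unlock_params,
					 ECORE_RESC_LOCK_RESC_ALLOC, false);

	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
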
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dev.c |   13 +++----------
 drivers/net/qede/base/ecore_mcp.c |   32 ++++++++++++++++++++++++++++++++
 drivers/net/qede/base/ecore_mcp.h |   25 ++++++++++++++++++++++++-
 3 files changed, 59 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 1608b19..40959e7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2836,9 +2836,6 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-#define ECORE_RESC_ALLOC_LOCK_RETRY_CNT		10
-#define ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US	10000 /* 10 msec */
-
 static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      bool drv_resc_alloc)
@@ -2870,13 +2867,9 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	 * Old drivers that don't acquire the lock can run in parallel, and
 	 * their allocation values won't be affected by the updated max values.
 	 */
-	OSAL_MEM_ZERO(&resc_lock_params, sizeof(resc_lock_params));
-	resc_lock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
-	resc_lock_params.retry_num = ECORE_RESC_ALLOC_LOCK_RETRY_CNT;
-	resc_lock_params.retry_interval = ECORE_RESC_ALLOC_LOCK_RETRY_INTVL_US;
-	resc_lock_params.sleep_b4_retry = true;
-	OSAL_MEM_ZERO(&resc_unlock_params, sizeof(resc_unlock_params));
-	resc_unlock_params.resource = ECORE_RESC_LOCK_RESC_ALLOC;
+	ecore_mcp_resc_lock_default_init(p_hwfn, &resc_lock_params,
+					 &resc_unlock_params,
+					 ECORE_RESC_LOCK_RESC_ALLOC, false);
 
 	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
 	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 24f65cf..7169b55 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -3401,6 +3401,38 @@ enum _ecore_status_t
 	return ECORE_SUCCESS;
 }
 
+void
+ecore_mcp_resc_lock_default_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_resc_lock_params *p_lock,
+				 struct ecore_resc_unlock_params *p_unlock,
+				 enum ecore_resc_lock resource,
+				 bool b_is_permanent)
+{
+	if (p_lock != OSAL_NULL) {
+		OSAL_MEM_ZERO(p_lock, sizeof(*p_lock));
+
+		/* Permanent resources don't require aging, and there's no
+		 * point in trying to acquire them more than once since it's
+		 * unexpected another entity would release them.
+		 */
+		if (b_is_permanent) {
+			p_lock->timeout = ECORE_MCP_RESC_LOCK_TO_NONE;
+		} else {
+			p_lock->retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT;
+			p_lock->retry_interval =
+					ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT;
+			p_lock->sleep_b4_retry = true;
+		}
+
+		p_lock->resource = resource;
+	}
+
+	if (p_unlock != OSAL_NULL) {
+		OSAL_MEM_ZERO(p_unlock, sizeof(*p_unlock));
+		p_unlock->resource = resource;
+	}
+}
+
 enum _ecore_status_t
 ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		      struct ecore_resc_unlock_params *p_params)
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index dae0720..df80e11 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -413,7 +413,12 @@ enum ecore_resc_lock {
 	/* Locks that the MFW is aware of should be added here downwards */
 
 	/* Ecore only locks should be added here upwards */
-	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL
+	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL,
+
+	/* A dummy value to be used for auxiliary functions in need of
+	 * returning an 'error' value.
+	 */
+	ECORE_RESC_LOCK_RESC_INVALID,
 };
 
 struct ecore_resc_lock_params {
@@ -427,9 +432,11 @@ struct ecore_resc_lock_params {
 
 	/* Number of times to retry locking */
 	u8 retry_num;
+#define ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT	10
 
 	/* The interval in usec between retries */
 	u16 retry_interval;
+#define ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT	10000
 
 	/* Use sleep or delay between retries */
 	bool sleep_b4_retry;
@@ -481,6 +488,22 @@ enum _ecore_status_t
 		      struct ecore_resc_unlock_params *p_params);
 
 /**
+ * @brief - default initialization for lock/unlock resource structs
+ *
+ * @param p_hwfn
+ * @param p_lock - lock params struct to be initialized; Can be OSAL_NULL
+ * @param p_unlock - unlock params struct to be initialized; Can be OSAL_NULL
+ * @param resource - the requested resource
+ * @param b_is_permanent - disable retries & aging when set
+ */
+void
+ecore_mcp_resc_lock_default_init(struct ecore_hwfn *p_hwfn,
+				 struct ecore_resc_lock_params *p_lock,
+				 struct ecore_resc_unlock_params *p_unlock,
+				 enum ecore_resc_lock resource,
+				 bool b_is_permanent);
+
+/**
  * @brief Learn of supported MFW features; To be done during early init
  *
  * @param p_hwfn
-- 
1.7.10.3


* [PATCH 22/53] net/qede/base: rename MFW get/set field defines
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (20 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 21/53] net/qede/base: initialize resc lock/unlock params Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 23/53] net/qede/base: allow clients to override VF MSI-X table size Rasesh Mody
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Track the management FW change that renames the _SHIFT defines to
_OFFSET. Accordingly, rename and fix the ECORE_MFW_GET_FIELD() and
ECORE_MFW_SET_FIELD() macros to GET_MFW_FIELD()/SET_MFW_FIELD() and
update all call sites; a minimal illustration follows.

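A minimal illustration of the renamed accessors (misc0 and drv_role are
hypothetical locals; the LOAD_REQ_ROLE field is taken from the
load-request flow touched below):

	u32 misc0 = 0;
	u8 role;

	/* The macros now pair each field's _MASK define with its
	 * _OFFSET counterpart rather than _SHIFT.
	 */
	SET_MFW_FIELD(misc0, LOAD_REQ_ROLE, drv_role);
	role = GET_MFW_FIELD(misc0, LOAD_REQ_ROLE);
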
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h      |   12 +-
 drivers/net/qede/base/ecore_dcbx.c |  125 +++++++++---------
 drivers/net/qede/base/ecore_dev.c  |    2 +-
 drivers/net/qede/base/ecore_mcp.c  |  166 ++++++++++++------------
 drivers/net/qede/base/mcp_public.h |  252 ++++++++++++++++++------------------
 5 files changed, 273 insertions(+), 284 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 818a34b..71f27da 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -98,16 +98,16 @@ enum ecore_nvm_cmd {
 
 #define GET_FIELD(value, name)						\
 	(((value) >> (name##_SHIFT)) & name##_MASK)
-#endif
 
-#define ECORE_MFW_GET_FIELD(name, field)				\
-	(((name) & (field ## _MASK)) >> (field ## _SHIFT))
+#define GET_MFW_FIELD(name, field)				\
+	(((name) & (field ## _MASK)) >> (field ## _OFFSET))
 
-#define ECORE_MFW_SET_FIELD(name, field, value)				\
+#define SET_MFW_FIELD(name, field, value)				\
 do {									\
-	(name) &= ~(field ## _MASK);					\
-	(name) |= (((value) << (field ## _SHIFT)) & (field ## _MASK));	\
+	(name) &= ~((field ## _MASK));		\
+	(name) |= (((value) << (field ## _OFFSET)) & (field ## _MASK));	\
 } while (0)
+#endif
 
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 {
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 4f1b069..cce2830 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -28,13 +28,13 @@
 
 static bool ecore_dcbx_app_ethtype(u32 app_info_bitmap)
 {
-	return !!(ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
+	return !!(GET_MFW_FIELD(app_info_bitmap, DCBX_APP_SF) ==
 		  DCBX_APP_SF_ETHTYPE);
 }
 
 static bool ecore_dcbx_ieee_app_ethtype(u32 app_info_bitmap)
 {
-	u8 mfw_val = ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
+	u8 mfw_val = GET_MFW_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
 
 	/* Old MFW */
 	if (mfw_val == DCBX_APP_SF_IEEE_RESERVED)
@@ -45,13 +45,13 @@ static bool ecore_dcbx_ieee_app_ethtype(u32 app_info_bitmap)
 
 static bool ecore_dcbx_app_port(u32 app_info_bitmap)
 {
-	return !!(ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
+	return !!(GET_MFW_FIELD(app_info_bitmap, DCBX_APP_SF) ==
 		  DCBX_APP_SF_PORT);
 }
 
 static bool ecore_dcbx_ieee_app_port(u32 app_info_bitmap, u8 type)
 {
-	u8 mfw_val = ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
+	u8 mfw_val = GET_MFW_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
 
 	/* Old MFW */
 	if (mfw_val == DCBX_APP_SF_IEEE_RESERVED)
@@ -248,10 +248,9 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	ieee = (dcbx_version == DCBX_CONFIG_VERSION_IEEE);
 	/* Parse APP TLV */
 	for (i = 0; i < count; i++) {
-		protocol_id = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
-						  DCBX_APP_PROTOCOL_ID);
-		priority_map = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
-						   DCBX_APP_PRI_MAP);
+		protocol_id = GET_MFW_FIELD(p_tbl[i].entry,
+					    DCBX_APP_PROTOCOL_ID);
+		priority_map = GET_MFW_FIELD(p_tbl[i].entry, DCBX_APP_PRI_MAP);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "Id = 0x%x pri_map = %u\n",
 			   protocol_id, priority_map);
 		rc = ecore_dcbx_get_app_priority(priority_map, &priority);
@@ -313,7 +312,7 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	int num_entries;
 
 	flags = p_hwfn->p_dcbx_info->operational.flags;
-	dcbx_version = ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION);
+	dcbx_version = GET_MFW_FIELD(flags, DCBX_CONFIG_VERSION);
 
 	p_app = &p_hwfn->p_dcbx_info->operational.features.app;
 	p_tbl = p_app->app_pri_tbl;
@@ -322,16 +321,15 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	pri_tc_tbl = p_ets->pri_tc_tbl[0];
 
 	p_info = &p_hwfn->hw_info;
-	num_entries = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
+	num_entries = GET_MFW_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
 
 	rc = ecore_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,
 				    num_entries, dcbx_version);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
-						    DCBX_ETS_MAX_TCS);
-	p_hwfn->qm_info.ooo_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_OOO_TC);
+	p_info->num_active_tc = GET_MFW_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS);
+	p_hwfn->qm_info.ooo_tc = GET_MFW_FIELD(p_ets->flags, DCBX_OOO_TC);
 	data.pf_id = p_hwfn->rel_pf_id;
 	data.dcbx_enabled = !!dcbx_version;
 
@@ -414,26 +412,24 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	u8 pri_map;
 	int i;
 
-	p_params->app_willing = ECORE_MFW_GET_FIELD(p_app->flags,
-						    DCBX_APP_WILLING);
-	p_params->app_valid = ECORE_MFW_GET_FIELD(p_app->flags,
-						  DCBX_APP_ENABLED);
-	p_params->app_error = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_ERROR);
-	p_params->num_app_entries = ECORE_MFW_GET_FIELD(p_app->flags,
-							DCBX_APP_NUM_ENTRIES);
+	p_params->app_willing = GET_MFW_FIELD(p_app->flags, DCBX_APP_WILLING);
+	p_params->app_valid = GET_MFW_FIELD(p_app->flags, DCBX_APP_ENABLED);
+	p_params->app_error = GET_MFW_FIELD(p_app->flags, DCBX_APP_ERROR);
+	p_params->num_app_entries = GET_MFW_FIELD(p_app->flags,
+						  DCBX_APP_NUM_ENTRIES);
 	for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++) {
 		entry = &p_params->app_entry[i];
 		if (ieee) {
 			u8 sf_ieee;
 			u32 val;
 
-			sf_ieee = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
-						      DCBX_APP_SF_IEEE);
+			sf_ieee = GET_MFW_FIELD(p_tbl[i].entry,
+						DCBX_APP_SF_IEEE);
 			switch (sf_ieee) {
 			case DCBX_APP_SF_IEEE_RESERVED:
 				/* Old MFW */
-				val = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
-							    DCBX_APP_SF);
+				val = GET_MFW_FIELD(p_tbl[i].entry,
+						    DCBX_APP_SF);
 				entry->sf_ieee = val ?
 					ECORE_DCBX_SF_IEEE_TCP_UDP_PORT :
 					ECORE_DCBX_SF_IEEE_ETHTYPE;
@@ -453,14 +449,14 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 				break;
 			}
 		} else {
-			entry->ethtype = !(ECORE_MFW_GET_FIELD(p_tbl[i].entry,
-							       DCBX_APP_SF));
+			entry->ethtype = !(GET_MFW_FIELD(p_tbl[i].entry,
+							 DCBX_APP_SF));
 		}
 
-		pri_map = ECORE_MFW_GET_FIELD(p_tbl[i].entry, DCBX_APP_PRI_MAP);
+		pri_map = GET_MFW_FIELD(p_tbl[i].entry, DCBX_APP_PRI_MAP);
 		ecore_dcbx_get_app_priority(pri_map, &entry->prio);
-		entry->proto_id = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
-						      DCBX_APP_PROTOCOL_ID);
+		entry->proto_id = GET_MFW_FIELD(p_tbl[i].entry,
+						DCBX_APP_PROTOCOL_ID);
 		ecore_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
 						 entry->proto_id,
 						 &entry->proto_type, ieee);
@@ -478,10 +474,10 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 {
 	u8 pfc_map;
 
-	p_params->pfc.willing = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_WILLING);
-	p_params->pfc.max_tc = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_CAPS);
-	p_params->pfc.enabled = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_ENABLED);
-	pfc_map = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_PRI_EN_BITMAP);
+	p_params->pfc.willing = GET_MFW_FIELD(pfc, DCBX_PFC_WILLING);
+	p_params->pfc.max_tc = GET_MFW_FIELD(pfc, DCBX_PFC_CAPS);
+	p_params->pfc.enabled = GET_MFW_FIELD(pfc, DCBX_PFC_ENABLED);
+	pfc_map = GET_MFW_FIELD(pfc, DCBX_PFC_PRI_EN_BITMAP);
 	p_params->pfc.prio[0] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_0);
 	p_params->pfc.prio[1] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_1);
 	p_params->pfc.prio[2] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_2);
@@ -505,13 +501,10 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	u32 bw_map[2], tsa_map[2], pri_map;
 	int i;
 
-	p_params->ets_willing = ECORE_MFW_GET_FIELD(p_ets->flags,
-						    DCBX_ETS_WILLING);
-	p_params->ets_enabled = ECORE_MFW_GET_FIELD(p_ets->flags,
-						    DCBX_ETS_ENABLED);
-	p_params->ets_cbs = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_ETS_CBS);
-	p_params->max_ets_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
-						   DCBX_ETS_MAX_TCS);
+	p_params->ets_willing = GET_MFW_FIELD(p_ets->flags, DCBX_ETS_WILLING);
+	p_params->ets_enabled = GET_MFW_FIELD(p_ets->flags, DCBX_ETS_ENABLED);
+	p_params->ets_cbs = GET_MFW_FIELD(p_ets->flags, DCBX_ETS_CBS);
+	p_params->max_ets_tc = GET_MFW_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
 		   "ETS params: willing %d, enabled = %d ets_cbs %d pri_tc_tbl_0 %x max_ets_tc %d\n",
 		   p_params->ets_willing, p_params->ets_enabled,
@@ -597,7 +590,7 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	 * was successfully performed
 	 */
 	p_operational = &params->operational;
-	enabled = !!(ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION) !=
+	enabled = !!(GET_MFW_FIELD(flags, DCBX_CONFIG_VERSION) !=
 		     DCBX_CONFIG_VERSION_DISABLED);
 	if (!enabled) {
 		p_operational->enabled = enabled;
@@ -609,15 +602,15 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	p_feat = &p_hwfn->p_dcbx_info->operational.features;
 	p_results = &p_hwfn->p_dcbx_info->results;
 
-	val = !!(ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION) ==
+	val = !!(GET_MFW_FIELD(flags, DCBX_CONFIG_VERSION) ==
 		 DCBX_CONFIG_VERSION_IEEE);
 	p_operational->ieee = val;
 
-	val = !!(ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION) ==
+	val = !!(GET_MFW_FIELD(flags, DCBX_CONFIG_VERSION) ==
 		 DCBX_CONFIG_VERSION_CEE);
 	p_operational->cee = val;
 
-	val = !!(ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION) ==
+	val = !!(GET_MFW_FIELD(flags, DCBX_CONFIG_VERSION) ==
 		 DCBX_CONFIG_VERSION_STATIC);
 	p_operational->local = val;
 
@@ -632,7 +625,7 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 				     p_operational->ieee);
 	ecore_dcbx_get_priority_info(p_hwfn, &p_operational->app_prio,
 				     p_results);
-	err = ECORE_MFW_GET_FIELD(p_feat->app.flags, DCBX_APP_ERROR);
+	err = GET_MFW_FIELD(p_feat->app.flags, DCBX_APP_ERROR);
 	p_operational->err = err;
 	p_operational->enabled = enabled;
 	p_operational->valid = true;
@@ -652,8 +645,8 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 
 	p_dscp = &params->dscp;
 	p_dscp_map = &p_hwfn->p_dcbx_info->dscp_map;
-	p_dscp->enabled = ECORE_MFW_GET_FIELD(p_dscp_map->flags,
-					      DCB_DSCP_ENABLE);
+	p_dscp->enabled = GET_MFW_FIELD(p_dscp_map->flags, DCB_DSCP_ENABLE);
+
 	/* MFW encodes 64 dscp entries into 8 element array of u32 entries,
 	 * where each entry holds the 4bit priority map for 8 dscp entries.
 	 */
@@ -1010,13 +1003,13 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 		*pfc &= ~DCBX_PFC_ENABLED_MASK;
 
 	*pfc &= ~DCBX_PFC_CAPS_MASK;
-	*pfc |= (u32)p_params->pfc.max_tc << DCBX_PFC_CAPS_SHIFT;
+	*pfc |= (u32)p_params->pfc.max_tc << DCBX_PFC_CAPS_OFFSET;
 
 	for (i = 0; i < ECORE_MAX_PFC_PRIORITIES; i++)
 		if (p_params->pfc.prio[i])
 			pfc_map |= (1 << i);
 	*pfc &= ~DCBX_PFC_PRI_EN_BITMAP_MASK;
-	*pfc |= (pfc_map << DCBX_PFC_PRI_EN_BITMAP_SHIFT);
+	*pfc |= (pfc_map << DCBX_PFC_PRI_EN_BITMAP_OFFSET);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "pfc = 0x%x\n", *pfc);
 }
@@ -1046,7 +1039,7 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 		p_ets->flags &= ~DCBX_ETS_ENABLED_MASK;
 
 	p_ets->flags &= ~DCBX_ETS_MAX_TCS_MASK;
-	p_ets->flags |= (u32)p_params->max_ets_tc << DCBX_ETS_MAX_TCS_SHIFT;
+	p_ets->flags |= (u32)p_params->max_ets_tc << DCBX_ETS_MAX_TCS_OFFSET;
 
 	bw_map = (u8 *)&p_ets->tc_bw_tbl[0];
 	tsa_map = (u8 *)&p_ets->tc_tsa_tbl[0];
@@ -1092,7 +1085,7 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 
 	p_app->flags &= ~DCBX_APP_NUM_ENTRIES_MASK;
 	p_app->flags |= (u32)p_params->num_app_entries <<
-					DCBX_APP_NUM_ENTRIES_SHIFT;
+			DCBX_APP_NUM_ENTRIES_OFFSET;
 
 	for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++) {
 		entry = &p_app->app_pri_tbl[i].entry;
@@ -1102,44 +1095,44 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 			switch (p_params->app_entry[i].sf_ieee) {
 			case ECORE_DCBX_SF_IEEE_ETHTYPE:
 				*entry  |= ((u32)DCBX_APP_SF_IEEE_ETHTYPE <<
-					    DCBX_APP_SF_IEEE_SHIFT);
+					    DCBX_APP_SF_IEEE_OFFSET);
 				*entry  |= ((u32)DCBX_APP_SF_ETHTYPE <<
-					    DCBX_APP_SF_SHIFT);
+					    DCBX_APP_SF_OFFSET);
 				break;
 			case ECORE_DCBX_SF_IEEE_TCP_PORT:
 				*entry  |= ((u32)DCBX_APP_SF_IEEE_TCP_PORT <<
-					    DCBX_APP_SF_IEEE_SHIFT);
+					    DCBX_APP_SF_IEEE_OFFSET);
 				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_SHIFT);
+					    DCBX_APP_SF_OFFSET);
 				break;
 			case ECORE_DCBX_SF_IEEE_UDP_PORT:
 				*entry  |= ((u32)DCBX_APP_SF_IEEE_UDP_PORT <<
-					    DCBX_APP_SF_IEEE_SHIFT);
+					    DCBX_APP_SF_IEEE_OFFSET);
 				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_SHIFT);
+					    DCBX_APP_SF_OFFSET);
 				break;
 			case ECORE_DCBX_SF_IEEE_TCP_UDP_PORT:
 				*entry  |= (u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT <<
-					    DCBX_APP_SF_IEEE_SHIFT;
+					    DCBX_APP_SF_IEEE_OFFSET;
 				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_SHIFT);
+					    DCBX_APP_SF_OFFSET);
 				break;
 			}
 		} else {
 			*entry &= ~DCBX_APP_SF_MASK;
 			if (p_params->app_entry[i].ethtype)
 				*entry  |= ((u32)DCBX_APP_SF_ETHTYPE <<
-					    DCBX_APP_SF_SHIFT);
+					    DCBX_APP_SF_OFFSET);
 			else
 				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_SHIFT);
+					    DCBX_APP_SF_OFFSET);
 		}
 		*entry &= ~DCBX_APP_PROTOCOL_ID_MASK;
 		*entry |= ((u32)p_params->app_entry[i].proto_id <<
-				DCBX_APP_PROTOCOL_ID_SHIFT);
+			   DCBX_APP_PROTOCOL_ID_OFFSET);
 		*entry &= ~DCBX_APP_PRI_MAP_MASK;
 		*entry |= ((u32)(p_params->app_entry[i].prio) <<
-				DCBX_APP_PRI_MAP_SHIFT);
+			   DCBX_APP_PRI_MAP_OFFSET);
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "flags = 0x%x\n", p_app->flags);
@@ -1253,12 +1246,10 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_DCBX,
-			   1 << DRV_MB_PARAM_LLDP_SEND_SHIFT, &resp, &param);
-	if (rc != ECORE_SUCCESS) {
+			   1 << DRV_MB_PARAM_LLDP_SEND_OFFSET, &resp, &param);
+	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to send DCBX update request\n");
-		return rc;
-	}
 
 	return rc;
 }
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 40959e7..2fe30d7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2159,7 +2159,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			   "sending phony dcbx set command to trigger DCBx attention handling\n");
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_SET_DCBX,
-				   1 << DRV_MB_PARAM_DCBX_NOTIFY_SHIFT, &resp,
+				   1 << DRV_MB_PARAM_DCBX_NOTIFY_OFFSET, &resp,
 				   &param);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 7169b55..0f96c91 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -43,9 +43,9 @@
 		     OFFSETOF(struct public_drv_mb, _field))
 
 #define PDA_COMP (((FW_MAJOR_VERSION) + (FW_MINOR_VERSION << 8)) << \
-	DRV_ID_PDA_COMP_VER_SHIFT)
+	DRV_ID_PDA_COMP_VER_OFFSET)
 
-#define MCP_BYTES_PER_MBIT_SHIFT 17
+#define MCP_BYTES_PER_MBIT_OFFSET 17
 
 #ifndef ASIC_ONLY
 static int loaded;
@@ -804,18 +804,16 @@ struct ecore_load_req_out_params {
 	load_req.drv_ver_0 = p_in_params->drv_ver_0;
 	load_req.drv_ver_1 = p_in_params->drv_ver_1;
 	load_req.fw_ver = p_in_params->fw_ver;
-	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_ROLE,
-			    p_in_params->drv_role);
-	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
-			    p_in_params->timeout_val);
-	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_FORCE,
-			    p_in_params->force_cmd);
-	ECORE_MFW_SET_FIELD(load_req.misc0, LOAD_REQ_FLAGS0,
-			    p_in_params->avoid_eng_reset);
+	SET_MFW_FIELD(load_req.misc0, LOAD_REQ_ROLE, p_in_params->drv_role);
+	SET_MFW_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO,
+		      p_in_params->timeout_val);
+	SET_MFW_FIELD(load_req.misc0, LOAD_REQ_FORCE, p_in_params->force_cmd);
+	SET_MFW_FIELD(load_req.misc0, LOAD_REQ_FLAGS0,
+		      p_in_params->avoid_eng_reset);
 
 	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
 		  DRV_ID_MCP_HSI_VER_CURRENT :
-		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_SHIFT);
+		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_OFFSET);
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
@@ -828,22 +826,20 @@ struct ecore_load_req_out_params {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
 		   mb_params.param,
-		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
-		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
-		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
-		   ECORE_MFW_GET_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
+		   GET_MFW_FIELD(mb_params.param, DRV_ID_DRV_INIT_HW),
+		   GET_MFW_FIELD(mb_params.param, DRV_ID_DRV_TYPE),
+		   GET_MFW_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER),
+		   GET_MFW_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER));
 
 	if (p_in_params->hsi_ver != ECORE_LOAD_REQ_HSI_VER_1)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Load Request: drv_ver 0x%08x_0x%08x, fw_ver 0x%08x, misc0 0x%08x [role %d, timeout %d, force %d, flags0 0x%x]\n",
 			   load_req.drv_ver_0, load_req.drv_ver_1,
 			   load_req.fw_ver, load_req.misc0,
-			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_ROLE),
-			   ECORE_MFW_GET_FIELD(load_req.misc0,
-					       LOAD_REQ_LOCK_TO),
-			   ECORE_MFW_GET_FIELD(load_req.misc0, LOAD_REQ_FORCE),
-			   ECORE_MFW_GET_FIELD(load_req.misc0,
-					       LOAD_REQ_FLAGS0));
+			   GET_MFW_FIELD(load_req.misc0, LOAD_REQ_ROLE),
+			   GET_MFW_FIELD(load_req.misc0, LOAD_REQ_LOCK_TO),
+			   GET_MFW_FIELD(load_req.misc0, LOAD_REQ_FORCE),
+			   GET_MFW_FIELD(load_req.misc0, LOAD_REQ_FLAGS0));
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -862,20 +858,19 @@ struct ecore_load_req_out_params {
 			   "Load Response: exist_drv_ver 0x%08x_0x%08x, exist_fw_ver 0x%08x, misc0 0x%08x [exist_role %d, mfw_hsi %d, flags0 0x%x]\n",
 			   load_rsp.drv_ver_0, load_rsp.drv_ver_1,
 			   load_rsp.fw_ver, load_rsp.misc0,
-			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
-			   ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
-			   ECORE_MFW_GET_FIELD(load_rsp.misc0,
-					       LOAD_RSP_FLAGS0));
+			   GET_MFW_FIELD(load_rsp.misc0, LOAD_RSP_ROLE),
+			   GET_MFW_FIELD(load_rsp.misc0, LOAD_RSP_HSI),
+			   GET_MFW_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0));
 
 		p_out_params->exist_drv_ver_0 = load_rsp.drv_ver_0;
 		p_out_params->exist_drv_ver_1 = load_rsp.drv_ver_1;
 		p_out_params->exist_fw_ver = load_rsp.fw_ver;
 		p_out_params->exist_drv_role =
-			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
+			GET_MFW_FIELD(load_rsp.misc0, LOAD_RSP_ROLE);
 		p_out_params->mfw_hsi_ver =
-			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
+			GET_MFW_FIELD(load_rsp.misc0, LOAD_RSP_HSI);
 		p_out_params->drv_exists =
-			ECORE_MFW_GET_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
+			GET_MFW_FIELD(load_rsp.misc0, LOAD_RSP_FLAGS0) &
 			LOAD_RSP_FLAGS0_DRV_EXISTS;
 	}
 
@@ -1174,7 +1169,8 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
 					    OFFSETOF(struct public_port,
 						     transceiver_data)));
 
-	transceiver_state = GET_FIELD(transceiver_state, ETH_TRANSCEIVER_STATE);
+	transceiver_state = GET_MFW_FIELD(transceiver_state,
+					  ETH_TRANSCEIVER_STATE);
 
 	if (transceiver_state == ETH_TRANSCEIVER_STATE_PRESENT)
 		DP_NOTICE(p_hwfn, false, "Transceiver is present.\n");
@@ -1193,12 +1189,12 @@ static void ecore_mcp_read_eee_config(struct ecore_hwfn *p_hwfn,
 	eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
 				     OFFSETOF(struct public_port, eee_status));
 	p_link->eee_active = !!(eee_status & EEE_ACTIVE_BIT);
-	val = (eee_status & EEE_LD_ADV_STATUS_MASK) >> EEE_LD_ADV_STATUS_SHIFT;
+	val = (eee_status & EEE_LD_ADV_STATUS_MASK) >> EEE_LD_ADV_STATUS_OFFSET;
 	if (val & EEE_1G_ADV)
 		p_link->eee_adv_caps |= ECORE_EEE_1G_ADV;
 	if (val & EEE_10G_ADV)
 		p_link->eee_adv_caps |= ECORE_EEE_10G_ADV;
-	val = (eee_status & EEE_LP_ADV_STATUS_MASK) >> EEE_LP_ADV_STATUS_SHIFT;
+	val = (eee_status & EEE_LP_ADV_STATUS_MASK) >> EEE_LP_ADV_STATUS_OFFSET;
 	if (val & EEE_1G_ADV)
 		p_link->eee_lp_adv_caps |= ECORE_EEE_1G_ADV;
 	if (val & EEE_10G_ADV)
@@ -1391,7 +1387,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 		if (params->eee.adv_caps & ECORE_EEE_10G_ADV)
 			phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_10G;
 		phy_cfg.eee_cfg |= (params->eee.tx_lpi_timer <<
-				    EEE_TX_TIMER_USEC_SHIFT) &
+				    EEE_TX_TIMER_USEC_OFFSET) &
 					EEE_TX_TIMER_USEC_MASK;
 	}
 
@@ -1531,7 +1527,7 @@ static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
 	 */
 	p_info->bandwidth_min = (p_shmem_info->config &
 				 FUNC_MF_CFG_MIN_BW_MASK) >>
-	    FUNC_MF_CFG_MIN_BW_SHIFT;
+	    FUNC_MF_CFG_MIN_BW_OFFSET;
 	if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) {
 		DP_INFO(p_hwfn,
 			"bandwidth minimum out of bounds [%02x]. Set to 1\n",
@@ -1541,7 +1537,7 @@ static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
 
 	p_info->bandwidth_max = (p_shmem_info->config &
 				 FUNC_MF_CFG_MAX_BW_MASK) >>
-	    FUNC_MF_CFG_MAX_BW_SHIFT;
+	    FUNC_MF_CFG_MAX_BW_OFFSET;
 	if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) {
 		DP_INFO(p_hwfn,
 			"bandwidth maximum out of bounds [%02x]. Set to 100\n",
@@ -2226,8 +2222,8 @@ enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
 
 	flash_size = ecore_rd(p_hwfn, p_ptt, MCP_REG_NVM_CFG4);
 	flash_size = (flash_size & MCP_REG_NVM_CFG4_FLASH_SIZE) >>
-	    MCP_REG_NVM_CFG4_FLASH_SIZE_SHIFT;
-	flash_size = (1 << (flash_size + MCP_BYTES_PER_MBIT_SHIFT));
+		     MCP_REG_NVM_CFG4_FLASH_SIZE_SHIFT;
+	flash_size = (1 << (flash_size + MCP_BYTES_PER_MBIT_OFFSET));
 
 	*p_flash_size = flash_size;
 
@@ -2265,9 +2261,9 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
 		return ECORE_SUCCESS;
 	num *= p_hwfn->p_dev->num_hwfns;
 
-	param |= (vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT) &
+	param |= (vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET) &
 	    DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK;
-	param |= (num << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_SHIFT) &
+	param |= (num << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET) &
 	    DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CFG_VF_MSIX, param,
@@ -2501,7 +2497,7 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 		bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
 					   MCP_DRV_NVM_BUF_LEN);
 		nvm_offset = (addr + offset) | (bytes_to_copy <<
-						DRV_MB_PARAM_NVM_LEN_SHIFT);
+						DRV_MB_PARAM_NVM_LEN_OFFSET);
 		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_NVM_READ_NVRAM,
 					  nvm_offset, &resp, &param, &buf_size,
@@ -2652,8 +2648,9 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 	while (buf_idx < len) {
 		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
 				      MCP_DRV_NVM_BUF_LEN);
-		nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_SHIFT) | addr) +
-				buf_idx;
+		nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
+			      addr) +
+			     buf_idx;
 		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset,
 					  &resp, &param, buf_size,
 					  (u32 *)&p_buf[buf_idx]);
@@ -2744,8 +2741,8 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 	u32 resp, param;
 	enum _ecore_status_t rc;
 
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
+	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
+			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
 	addr = offset;
 	offset = 0;
 	bytes_left = len;
@@ -2755,9 +2752,9 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
 			       DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
 		nvm_offset |= ((addr + offset) <<
-				DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
+				DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
 		nvm_offset |= (bytes_to_copy <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
+			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
 		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_TRANSCEIVER_READ,
 					  nvm_offset, &resp, &param, &buf_size,
@@ -2784,8 +2781,8 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 	u32 buf_idx, buf_size, nvm_offset, resp, param;
 	enum _ecore_status_t rc;
 
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
+	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
+			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
 	buf_idx = 0;
 	while (buf_idx < len) {
 		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
@@ -2793,8 +2790,9 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
 				 DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
 		nvm_offset |= ((offset + buf_idx) <<
-				 DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
-		nvm_offset |= (buf_size << DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
+				 DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
+		nvm_offset |= (buf_size <<
+			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
 		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_TRANSCEIVER_WRITE,
 					  nvm_offset, &resp, &param, buf_size,
@@ -2819,7 +2817,7 @@ enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 drv_mb_param = 0, rsp;
 
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_SHIFT);
+	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_READ,
 			   drv_mb_param, &rsp, gpio_val);
@@ -2840,8 +2838,8 @@ enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 drv_mb_param = 0, param, rsp;
 
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_SHIFT) |
-		(gpio_val << DRV_MB_PARAM_GPIO_VALUE_SHIFT);
+	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET) |
+		(gpio_val << DRV_MB_PARAM_GPIO_VALUE_OFFSET);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
 			   drv_mb_param, &rsp, &param);
@@ -2863,7 +2861,7 @@ enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
 	u32 drv_mb_param = 0, rsp, val = 0;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_SHIFT;
+	drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_INFO,
 			   drv_mb_param, &rsp, &val);
@@ -2871,9 +2869,9 @@ enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	*gpio_direction = (val & DRV_MB_PARAM_GPIO_DIRECTION_MASK) >>
-			   DRV_MB_PARAM_GPIO_DIRECTION_SHIFT;
+			   DRV_MB_PARAM_GPIO_DIRECTION_OFFSET;
 	*gpio_ctrl = (val & DRV_MB_PARAM_GPIO_CTRL_MASK) >>
-		      DRV_MB_PARAM_GPIO_CTRL_SHIFT;
+		      DRV_MB_PARAM_GPIO_CTRL_OFFSET;
 
 	if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
 		return ECORE_UNKNOWN_ERROR;
@@ -2888,7 +2886,7 @@ enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, &param);
@@ -2910,7 +2908,7 @@ enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, &param);
@@ -2932,7 +2930,7 @@ enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	drv_mb_param = (DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, num_images);
@@ -2954,8 +2952,9 @@ enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
 	enum _ecore_status_t rc;
 
 	nvm_offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
-				    DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
-	nvm_offset |= (image_index << DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT);
+				    DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
+	nvm_offset |= (image_index <<
+		       DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET);
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 				  nvm_offset, &resp, &param, &buf_size,
 				  (u32 *)p_image_att);
@@ -2996,13 +2995,13 @@ enum _ecore_status_t
 		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
 		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
-						 SENSOR_LOCATION_SHIFT;
+						 SENSOR_LOCATION_OFFSET;
 		p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
-						THRESHOLD_HIGH_SHIFT;
+						THRESHOLD_HIGH_OFFSET;
 		p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
-					  CRITICAL_TEMPERATURE_SHIFT;
+					  CRITICAL_TEMPERATURE_OFFSET;
 		p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
-					      CURRENT_TEMP_SHIFT;
+					      CURRENT_TEMP_OFFSET;
 	}
 
 	return ECORE_SUCCESS;
@@ -3099,9 +3098,9 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
 #define ECORE_RESC_ALLOC_VERSION				\
 	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
-	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT) |	\
+	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET) |	\
 	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
-	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
+	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET))
 
 struct ecore_resc_alloc_in_params {
 	u32 cmd;
@@ -3185,10 +3184,10 @@ enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
 		   "Resource message request: cmd 0x%08x, res_id %d [%s], hsi_version %d.%d, val 0x%x\n",
 		   p_in_params->cmd, p_in_params->res_id,
 		   ecore_hw_get_resc_name(p_in_params->res_id),
-		   ECORE_MFW_GET_FIELD(mb_params.param,
-			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
-		   ECORE_MFW_GET_FIELD(mb_params.param,
-			   DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   GET_MFW_FIELD(mb_params.param,
+				 DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   GET_MFW_FIELD(mb_params.param,
+				 DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
 		   p_in_params->resc_max_val);
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
@@ -3205,10 +3204,10 @@ enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource message response: mfw_hsi_version %d.%d, num 0x%x, start 0x%x, vf_num 0x%x, vf_start 0x%x, flags 0x%08x\n",
-		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
-			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
-		   ECORE_MFW_GET_FIELD(p_out_params->mcp_param,
-			   FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
+		   GET_MFW_FIELD(p_out_params->mcp_param,
+				 FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR),
+		   GET_MFW_FIELD(p_out_params->mcp_param,
+				 FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR),
 		   p_out_params->resc_num, p_out_params->resc_start,
 		   p_out_params->vf_resc_num, p_out_params->vf_resc_start,
 		   p_out_params->flags);
@@ -3296,7 +3295,7 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (*p_mcp_param == RESOURCE_OPCODE_UNKNOWN_CMD) {
-		u8 opcode = ECORE_MFW_GET_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
+		u8 opcode = GET_MFW_FIELD(param, RESOURCE_CMD_REQ_OPCODE);
 
 		DP_NOTICE(p_hwfn, false,
 			  "The resource command is unknown to the MFW [param 0x%08x, opcode %d]\n",
@@ -3329,9 +3328,9 @@ enum _ecore_status_t
 		break;
 	}
 
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
@@ -3344,9 +3343,8 @@ enum _ecore_status_t
 		return rc;
 
 	/* Analyze the response */
-	p_params->owner = ECORE_MFW_GET_FIELD(mcp_param,
-					     RESOURCE_CMD_RSP_OWNER);
-	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+	p_params->owner = GET_MFW_FIELD(mcp_param, RESOURCE_CMD_RSP_OWNER);
+	opcode = GET_MFW_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock response: mcp_param 0x%08x [opcode %d, owner %d]\n",
@@ -3443,8 +3441,8 @@ enum _ecore_status_t
 
 	opcode = p_params->b_force ? RESOURCE_OPCODE_FORCE_RELEASE
 				   : RESOURCE_OPCODE_RELEASE;
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
-	ECORE_MFW_SET_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource unlock request: param 0x%08x [opcode %d, resource %d]\n",
@@ -3457,7 +3455,7 @@ enum _ecore_status_t
 		return rc;
 
 	/* Analyze the response */
-	opcode = ECORE_MFW_GET_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
+	opcode = GET_MFW_FIELD(mcp_param, RESOURCE_CMD_RSP_OPCODE);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource unlock response: mcp_param 0x%08x [opcode %d]\n",
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 8484704..ff9ce9e 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -28,19 +28,19 @@
 
 typedef u32 offsize_t;      /* In DWORDS !!! */
 /* Offset from the beginning of the MCP scratchpad */
-#define OFFSIZE_OFFSET_SHIFT	0
+#define OFFSIZE_OFFSET_OFFSET	0
 #define OFFSIZE_OFFSET_MASK	0x0000ffff
 /* Size of specific element (not the whole array if any) */
-#define OFFSIZE_SIZE_SHIFT	16
+#define OFFSIZE_SIZE_OFFSET	16
 #define OFFSIZE_SIZE_MASK	0xffff0000
 
 /* SECTION_OFFSET is calculating the offset in bytes out of offsize */
 #define SECTION_OFFSET(_offsize)	\
-	((((_offsize & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_SHIFT) << 2))
+	((((_offsize & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_OFFSET) << 2))
 
 /* SECTION_SIZE is calculating the size in bytes out of offsize */
 #define SECTION_SIZE(_offsize)		\
-	(((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_SHIFT) << 2)
+	(((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_OFFSET) << 2)
 
 /* SECTION_ADDR returns the GRC addr of a section, given offsize and index
  * within section
@@ -93,7 +93,7 @@ struct eth_phy_cfg {
 #define EEE_CFG_ADV_SPEED_1G	(1 << 2)
 #define EEE_CFG_ADV_SPEED_10G	(1 << 3)
 #define EEE_TX_TIMER_USEC_MASK	(0xfffffff0)
-#define EEE_TX_TIMER_USEC_SHIFT	4
+#define EEE_TX_TIMER_USEC_OFFSET	4
 #define EEE_TX_TIMER_USEC_BALANCED_TIME		(0xa00)
 #define EEE_TX_TIMER_USEC_AGGRESSIVE_TIME	(0x100)
 #define EEE_TX_TIMER_USEC_LATENCY_TIME		(0x6000)
@@ -105,7 +105,7 @@ struct eth_phy_cfg {
 struct port_mf_cfg {
 	u32 dynamic_cfg;    /* device control channel */
 #define PORT_MF_CFG_OV_TAG_MASK              0x0000ffff
-#define PORT_MF_CFG_OV_TAG_SHIFT             0
+#define PORT_MF_CFG_OV_TAG_OFFSET             0
 #define PORT_MF_CFG_OV_TAG_DEFAULT         PORT_MF_CFG_OV_TAG_MASK
 
 	u32 reserved[1];
@@ -279,15 +279,15 @@ struct couple_mode_teaming {
 struct lldp_config_params_s {
 	u32 config;
 #define LLDP_CONFIG_TX_INTERVAL_MASK        0x000000ff
-#define LLDP_CONFIG_TX_INTERVAL_SHIFT       0
+#define LLDP_CONFIG_TX_INTERVAL_OFFSET       0
 #define LLDP_CONFIG_HOLD_MASK               0x00000f00
-#define LLDP_CONFIG_HOLD_SHIFT              8
+#define LLDP_CONFIG_HOLD_OFFSET              8
 #define LLDP_CONFIG_MAX_CREDIT_MASK         0x0000f000
-#define LLDP_CONFIG_MAX_CREDIT_SHIFT        12
+#define LLDP_CONFIG_MAX_CREDIT_OFFSET        12
 #define LLDP_CONFIG_ENABLE_RX_MASK          0x40000000
-#define LLDP_CONFIG_ENABLE_RX_SHIFT         30
+#define LLDP_CONFIG_ENABLE_RX_OFFSET         30
 #define LLDP_CONFIG_ENABLE_TX_MASK          0x80000000
-#define LLDP_CONFIG_ENABLE_TX_SHIFT         31
+#define LLDP_CONFIG_ENABLE_TX_OFFSET         31
 	/* Holds local Chassis ID TLV header, subtype and 9B of payload.
 	 * If firtst byte is 0, then we will use default chassis ID
 	 */
@@ -311,17 +311,17 @@ struct lldp_status_params_s {
 struct dcbx_ets_feature {
 	u32 flags;
 #define DCBX_ETS_ENABLED_MASK                   0x00000001
-#define DCBX_ETS_ENABLED_SHIFT                  0
+#define DCBX_ETS_ENABLED_OFFSET                  0
 #define DCBX_ETS_WILLING_MASK                   0x00000002
-#define DCBX_ETS_WILLING_SHIFT                  1
+#define DCBX_ETS_WILLING_OFFSET                  1
 #define DCBX_ETS_ERROR_MASK                     0x00000004
-#define DCBX_ETS_ERROR_SHIFT                    2
+#define DCBX_ETS_ERROR_OFFSET                    2
 #define DCBX_ETS_CBS_MASK                       0x00000008
-#define DCBX_ETS_CBS_SHIFT                      3
+#define DCBX_ETS_CBS_OFFSET                      3
 #define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
-#define DCBX_ETS_MAX_TCS_SHIFT                  4
+#define DCBX_ETS_MAX_TCS_OFFSET                  4
 #define DCBX_OOO_TC_MASK                        0x00000f00
-#define DCBX_OOO_TC_SHIFT                       8
+#define DCBX_OOO_TC_OFFSET                       8
 /* Entries in tc table are orginized that the left most is pri 0, right most is
  * prio 7
  */
@@ -353,7 +353,7 @@ struct dcbx_ets_feature {
 struct dcbx_app_priority_entry {
 	u32 entry;
 #define DCBX_APP_PRI_MAP_MASK       0x000000ff
-#define DCBX_APP_PRI_MAP_SHIFT      0
+#define DCBX_APP_PRI_MAP_OFFSET      0
 #define DCBX_APP_PRI_0              0x01
 #define DCBX_APP_PRI_1              0x02
 #define DCBX_APP_PRI_2              0x04
@@ -363,11 +363,11 @@ struct dcbx_app_priority_entry {
 #define DCBX_APP_PRI_6              0x40
 #define DCBX_APP_PRI_7              0x80
 #define DCBX_APP_SF_MASK            0x00000300
-#define DCBX_APP_SF_SHIFT           8
+#define DCBX_APP_SF_OFFSET           8
 #define DCBX_APP_SF_ETHTYPE         0
 #define DCBX_APP_SF_PORT            1
 #define DCBX_APP_SF_IEEE_MASK       0x0000f000
-#define DCBX_APP_SF_IEEE_SHIFT      12
+#define DCBX_APP_SF_IEEE_OFFSET      12
 #define DCBX_APP_SF_IEEE_RESERVED   0
 #define DCBX_APP_SF_IEEE_ETHTYPE    1
 #define DCBX_APP_SF_IEEE_TCP_PORT   2
@@ -375,7 +375,7 @@ struct dcbx_app_priority_entry {
 #define DCBX_APP_SF_IEEE_TCP_UDP_PORT 4
 
 #define DCBX_APP_PROTOCOL_ID_MASK   0xffff0000
-#define DCBX_APP_PROTOCOL_ID_SHIFT  16
+#define DCBX_APP_PROTOCOL_ID_OFFSET  16
 };
 
 
@@ -383,19 +383,19 @@ struct dcbx_app_priority_entry {
 struct dcbx_app_priority_feature {
 	u32 flags;
 #define DCBX_APP_ENABLED_MASK           0x00000001
-#define DCBX_APP_ENABLED_SHIFT          0
+#define DCBX_APP_ENABLED_OFFSET          0
 #define DCBX_APP_WILLING_MASK           0x00000002
-#define DCBX_APP_WILLING_SHIFT          1
+#define DCBX_APP_WILLING_OFFSET          1
 #define DCBX_APP_ERROR_MASK             0x00000004
-#define DCBX_APP_ERROR_SHIFT            2
+#define DCBX_APP_ERROR_OFFSET            2
 	/* Not in use
 	#define DCBX_APP_DEFAULT_PRI_MASK       0x00000f00
-	#define DCBX_APP_DEFAULT_PRI_SHIFT      8
+	#define DCBX_APP_DEFAULT_PRI_OFFSET      8
 	*/
 #define DCBX_APP_MAX_TCS_MASK           0x0000f000
-#define DCBX_APP_MAX_TCS_SHIFT          12
+#define DCBX_APP_MAX_TCS_OFFSET          12
 #define DCBX_APP_NUM_ENTRIES_MASK       0x00ff0000
-#define DCBX_APP_NUM_ENTRIES_SHIFT      16
+#define DCBX_APP_NUM_ENTRIES_OFFSET      16
 	struct dcbx_app_priority_entry  app_pri_tbl[DCBX_MAX_APP_PROTOCOL];
 };
 
@@ -406,7 +406,7 @@ struct dcbx_features {
 	/* PFC feature */
 	u32 pfc;
 #define DCBX_PFC_PRI_EN_BITMAP_MASK             0x000000ff
-#define DCBX_PFC_PRI_EN_BITMAP_SHIFT            0
+#define DCBX_PFC_PRI_EN_BITMAP_OFFSET            0
 #define DCBX_PFC_PRI_EN_BITMAP_PRI_0            0x01
 #define DCBX_PFC_PRI_EN_BITMAP_PRI_1            0x02
 #define DCBX_PFC_PRI_EN_BITMAP_PRI_2            0x04
@@ -417,17 +417,17 @@ struct dcbx_features {
 #define DCBX_PFC_PRI_EN_BITMAP_PRI_7            0x80
 
 #define DCBX_PFC_FLAGS_MASK                     0x0000ff00
-#define DCBX_PFC_FLAGS_SHIFT                    8
+#define DCBX_PFC_FLAGS_OFFSET                    8
 #define DCBX_PFC_CAPS_MASK                      0x00000f00
-#define DCBX_PFC_CAPS_SHIFT                     8
+#define DCBX_PFC_CAPS_OFFSET                     8
 #define DCBX_PFC_MBC_MASK                       0x00004000
-#define DCBX_PFC_MBC_SHIFT                      14
+#define DCBX_PFC_MBC_OFFSET                      14
 #define DCBX_PFC_WILLING_MASK                   0x00008000
-#define DCBX_PFC_WILLING_SHIFT                  15
+#define DCBX_PFC_WILLING_OFFSET                  15
 #define DCBX_PFC_ENABLED_MASK                   0x00010000
-#define DCBX_PFC_ENABLED_SHIFT                  16
+#define DCBX_PFC_ENABLED_OFFSET                  16
 #define DCBX_PFC_ERROR_MASK                     0x00020000
-#define DCBX_PFC_ERROR_SHIFT                    17
+#define DCBX_PFC_ERROR_OFFSET                    17
 
 	/* APP feature */
 	struct dcbx_app_priority_feature app;
@@ -436,7 +436,7 @@ struct dcbx_features {
 struct dcbx_local_params {
 	u32 config;
 #define DCBX_CONFIG_VERSION_MASK            0x00000007
-#define DCBX_CONFIG_VERSION_SHIFT           0
+#define DCBX_CONFIG_VERSION_OFFSET           0
 #define DCBX_CONFIG_VERSION_DISABLED        0
 #define DCBX_CONFIG_VERSION_IEEE            1
 #define DCBX_CONFIG_VERSION_CEE             2
@@ -451,7 +451,7 @@ struct dcbx_mib {
 	u32 flags;
 	/*
 	#define DCBX_CONFIG_VERSION_MASK            0x00000007
-	#define DCBX_CONFIG_VERSION_SHIFT           0
+	#define DCBX_CONFIG_VERSION_OFFSET           0
 	#define DCBX_CONFIG_VERSION_DISABLED        0
 	#define DCBX_CONFIG_VERSION_IEEE            1
 	#define DCBX_CONFIG_VERSION_CEE             2
@@ -470,7 +470,7 @@ struct lldp_system_tlvs_buffer_s {
 struct dcb_dscp_map {
 	u32 flags;
 #define DCB_DSCP_ENABLE_MASK			0x1
-#define DCB_DSCP_ENABLE_SHIFT			0
+#define DCB_DSCP_ENABLE_OFFSET			0
 #define DCB_DSCP_ENABLE				1
 	u32 dscp_pri_map[8];
 };
@@ -502,12 +502,12 @@ struct public_global {
 #define MDUMP_REASON_DUMP_AGED		(1 << 2)
 	u32 ext_phy_upgrade_fw;
 #define EXT_PHY_FW_UPGRADE_STATUS_MASK		(0x0000ffff)
-#define EXT_PHY_FW_UPGRADE_STATUS_SHIFT		(0)
+#define EXT_PHY_FW_UPGRADE_STATUS_OFFSET		(0)
 #define EXT_PHY_FW_UPGRADE_STATUS_IN_PROGRESS	(1)
 #define EXT_PHY_FW_UPGRADE_STATUS_FAILED	(2)
 #define EXT_PHY_FW_UPGRADE_STATUS_SUCCESS	(3)
 #define EXT_PHY_FW_UPGRADE_TYPE_MASK		(0xffff0000)
-#define EXT_PHY_FW_UPGRADE_TYPE_SHIFT		(16)
+#define EXT_PHY_FW_UPGRADE_TYPE_OFFSET		(16)
 };
 
 /**************************************/
@@ -557,9 +557,9 @@ struct public_path {
 /* Reset on mcp reset, and incremented for eveny process kill event. */
 	u32 process_kill;
 #define PROCESS_KILL_COUNTER_MASK		0x0000ffff
-#define PROCESS_KILL_COUNTER_SHIFT		0
+#define PROCESS_KILL_COUNTER_OFFSET		0
 #define PROCESS_KILL_GLOB_AEU_BIT_MASK		0xffff0000
-#define PROCESS_KILL_GLOB_AEU_BIT_SHIFT		16
+#define PROCESS_KILL_GLOB_AEU_BIT_OFFSET	16
 #define GLOBAL_AEU_BIT(aeu_reg_id, aeu_bit) (aeu_reg_id * 32 + aeu_bit)
 };
 
@@ -713,13 +713,13 @@ struct public_port {
 	u32 fc_npiv_nvram_tbl_size;
 	u32 transceiver_data;
 #define ETH_TRANSCEIVER_STATE_MASK			0x000000FF
-#define ETH_TRANSCEIVER_STATE_SHIFT			0x00000000
+#define ETH_TRANSCEIVER_STATE_OFFSET			0x00000000
 #define ETH_TRANSCEIVER_STATE_UNPLUGGED			0x00000000
 #define ETH_TRANSCEIVER_STATE_PRESENT			0x00000001
 #define ETH_TRANSCEIVER_STATE_VALID			0x00000003
 #define ETH_TRANSCEIVER_STATE_UPDATING			0x00000008
 #define ETH_TRANSCEIVER_TYPE_MASK			0x0000FF00
-#define ETH_TRANSCEIVER_TYPE_SHIFT			0x00000008
+#define ETH_TRANSCEIVER_TYPE_OFFSET			0x00000008
 #define ETH_TRANSCEIVER_TYPE_NONE			0x00000000
 #define ETH_TRANSCEIVER_TYPE_UNKNOWN			0x000000FF
 /* 1G Passive copper cable */
@@ -785,12 +785,12 @@ struct public_port {
 
 /* Shows the Local Device EEE capabilities */
 #define EEE_LD_ADV_STATUS_MASK	0x000000f0
-#define EEE_LD_ADV_STATUS_SHIFT	4
+#define EEE_LD_ADV_STATUS_OFFSET	4
 	#define EEE_1G_ADV	(1 << 1)
 	#define EEE_10G_ADV	(1 << 2)
 /* Same values as in EEE_LD_ADV, but for Link Parter */
 #define	EEE_LP_ADV_STATUS_MASK	0x00000f00
-#define EEE_LP_ADV_STATUS_SHIFT	8
+#define EEE_LP_ADV_STATUS_OFFSET	8
 
 /* Supported speeds for EEE */
 #define EEE_SUPPORTED_SPEED_MASK	0x0000f000
@@ -800,9 +800,9 @@ struct public_port {
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
 #define EEE_REMOTE_TW_TX_MASK	0x0000ffff
-#define EEE_REMOTE_TW_TX_SHIFT	0
+#define EEE_REMOTE_TW_TX_OFFSET	0
 #define EEE_REMOTE_TW_RX_MASK	0xffff0000
-#define EEE_REMOTE_TW_RX_SHIFT	16
+#define EEE_REMOTE_TW_RX_OFFSET	16
 
 	u32 module_info;
 #define ETH_TRANSCEIVER_MONITORING_TYPE_MASK		0x000000FF
@@ -852,11 +852,11 @@ struct public_func {
 	/* function 0 of each port cannot be hidden */
 #define FUNC_MF_CFG_FUNC_HIDE                   0x00000001
 #define FUNC_MF_CFG_PAUSE_ON_HOST_RING          0x00000002
-#define FUNC_MF_CFG_PAUSE_ON_HOST_RING_SHIFT    0x00000001
+#define FUNC_MF_CFG_PAUSE_ON_HOST_RING_OFFSET    0x00000001
 
 
 #define FUNC_MF_CFG_PROTOCOL_MASK               0x000000f0
-#define FUNC_MF_CFG_PROTOCOL_SHIFT              4
+#define FUNC_MF_CFG_PROTOCOL_OFFSET              4
 #define FUNC_MF_CFG_PROTOCOL_ETHERNET           0x00000000
 #define FUNC_MF_CFG_PROTOCOL_ISCSI              0x00000010
 #define FUNC_MF_CFG_PROTOCOL_FCOE		0x00000020
@@ -866,10 +866,10 @@ struct public_func {
 	/* MINBW, MAXBW */
 	/* value range - 0..100, increments in 1 %  */
 #define FUNC_MF_CFG_MIN_BW_MASK                 0x0000ff00
-#define FUNC_MF_CFG_MIN_BW_SHIFT                8
+#define FUNC_MF_CFG_MIN_BW_OFFSET                8
 #define FUNC_MF_CFG_MIN_BW_DEFAULT              0x00000000
 #define FUNC_MF_CFG_MAX_BW_MASK                 0x00ff0000
-#define FUNC_MF_CFG_MAX_BW_SHIFT                16
+#define FUNC_MF_CFG_MAX_BW_OFFSET                16
 #define FUNC_MF_CFG_MAX_BW_DEFAULT              0x00640000
 
 	u32 status;
@@ -877,7 +877,7 @@ struct public_func {
 
 	u32 mac_upper;      /* MAC */
 #define FUNC_MF_CFG_UPPERMAC_MASK               0x0000ffff
-#define FUNC_MF_CFG_UPPERMAC_SHIFT              0
+#define FUNC_MF_CFG_UPPERMAC_OFFSET              0
 #define FUNC_MF_CFG_UPPERMAC_DEFAULT            FUNC_MF_CFG_UPPERMAC_MASK
 	u32 mac_lower;
 #define FUNC_MF_CFG_LOWERMAC_DEFAULT            0xffffffff
@@ -890,7 +890,7 @@ struct public_func {
 
 	u32 ovlan_stag;     /* tags */
 #define FUNC_MF_CFG_OV_STAG_MASK              0x0000ffff
-#define FUNC_MF_CFG_OV_STAG_SHIFT             0
+#define FUNC_MF_CFG_OV_STAG_OFFSET             0
 #define FUNC_MF_CFG_OV_STAG_DEFAULT           FUNC_MF_CFG_OV_STAG_MASK
 
 	u32 pf_allocation; /* vf per pf */
@@ -907,29 +907,29 @@ struct public_func {
 
 	u32 drv_id;
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
-#define DRV_ID_PDA_COMP_VER_SHIFT	0
+#define DRV_ID_PDA_COMP_VER_OFFSET	0
 
 #define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
-#define DRV_ID_MCP_HSI_VER_SHIFT	16
+#define DRV_ID_MCP_HSI_VER_OFFSET	16
 #define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
-					 DRV_ID_MCP_HSI_VER_SHIFT)
+					 DRV_ID_MCP_HSI_VER_OFFSET)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
-#define DRV_ID_DRV_TYPE_SHIFT		24
-#define DRV_ID_DRV_TYPE_UNKNOWN		(0 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_LINUX		(1 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_WINDOWS		(2 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_DIAG		(3 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_PREBOOT		(4 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_SOLARIS		(5 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_VMWARE		(6 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_FREEBSD		(7 << DRV_ID_DRV_TYPE_SHIFT)
-#define DRV_ID_DRV_TYPE_AIX		(8 << DRV_ID_DRV_TYPE_SHIFT)
+#define DRV_ID_DRV_TYPE_OFFSET		24
+#define DRV_ID_DRV_TYPE_UNKNOWN		(0 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_LINUX		(1 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_WINDOWS		(2 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_DIAG		(3 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_PREBOOT		(4 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_SOLARIS		(5 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_VMWARE		(6 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_FREEBSD		(7 << DRV_ID_DRV_TYPE_OFFSET)
+#define DRV_ID_DRV_TYPE_AIX		(8 << DRV_ID_DRV_TYPE_OFFSET)
 
 #define DRV_ID_DRV_INIT_HW_MASK		0x80000000
-#define DRV_ID_DRV_INIT_HW_SHIFT	31
-#define DRV_ID_DRV_INIT_HW_FLAG		(1 << DRV_ID_DRV_INIT_HW_SHIFT)
+#define DRV_ID_DRV_INIT_HW_OFFSET	31
+#define DRV_ID_DRV_INIT_HW_FLAG		(1 << DRV_ID_DRV_INIT_HW_OFFSET)
 };
 
 /**************************************/
@@ -1014,13 +1014,13 @@ struct ocbb_data_stc {
 #define MFW_SENSOR_LOCATION_EXTERNAL		2
 #define MFW_SENSOR_LOCATION_SFP			3
 
-#define SENSOR_LOCATION_SHIFT			0
+#define SENSOR_LOCATION_OFFSET			0
 #define SENSOR_LOCATION_MASK			0x000000ff
-#define THRESHOLD_HIGH_SHIFT			8
+#define THRESHOLD_HIGH_OFFSET			8
 #define THRESHOLD_HIGH_MASK			0x0000ff00
-#define CRITICAL_TEMPERATURE_SHIFT		16
+#define CRITICAL_TEMPERATURE_OFFSET		16
 #define CRITICAL_TEMPERATURE_MASK		0x00ff0000
-#define CURRENT_TEMP_SHIFT			24
+#define CURRENT_TEMP_OFFSET			24
 #define CURRENT_TEMP_MASK			0xff000000
 struct temperature_status_stc {
 	u32 num_of_sensors;
@@ -1085,18 +1085,18 @@ struct load_req_stc {
 	u32 fw_ver;
 	u32 misc0;
 #define LOAD_REQ_ROLE_MASK		0x000000FF
-#define LOAD_REQ_ROLE_SHIFT		0
+#define LOAD_REQ_ROLE_OFFSET		0
 #define LOAD_REQ_LOCK_TO_MASK		0x0000FF00
-#define LOAD_REQ_LOCK_TO_SHIFT		8
+#define LOAD_REQ_LOCK_TO_OFFSET		8
 #define LOAD_REQ_LOCK_TO_DEFAULT	0
 #define LOAD_REQ_LOCK_TO_NONE		255
 #define LOAD_REQ_FORCE_MASK		0x000F0000
-#define LOAD_REQ_FORCE_SHIFT		16
+#define LOAD_REQ_FORCE_OFFSET		16
 #define LOAD_REQ_FORCE_NONE		0
 #define LOAD_REQ_FORCE_PF		1
 #define LOAD_REQ_FORCE_ALL		2
 #define LOAD_REQ_FLAGS0_MASK		0x00F00000
-#define LOAD_REQ_FLAGS0_SHIFT		20
+#define LOAD_REQ_FLAGS0_OFFSET		20
 #define LOAD_REQ_FLAGS0_AVOID_RESET	(0x1 << 0)
 };
 
@@ -1106,11 +1106,11 @@ struct load_rsp_stc {
 	u32 fw_ver;
 	u32 misc0;
 #define LOAD_RSP_ROLE_MASK		0x000000FF
-#define LOAD_RSP_ROLE_SHIFT		0
+#define LOAD_RSP_ROLE_OFFSET		0
 #define LOAD_RSP_HSI_MASK		0x0000FF00
-#define LOAD_RSP_HSI_SHIFT		8
+#define LOAD_RSP_HSI_OFFSET		8
 #define LOAD_RSP_FLAGS0_MASK		0x000F0000
-#define LOAD_RSP_FLAGS0_SHIFT		16
+#define LOAD_RSP_FLAGS0_OFFSET		16
 #define LOAD_RSP_FLAGS0_DRV_EXISTS	(0x1 << 0)
 };
 
@@ -1245,7 +1245,7 @@ struct public_drv_mb {
  * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
  */
 #define DRV_MSG_CODE_GET_VMAC                   0x00120000
-#define DRV_MSG_CODE_VMAC_TYPE_SHIFT            4
+#define DRV_MSG_CODE_VMAC_TYPE_OFFSET		4
 #define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
 #define DRV_MSG_CODE_VMAC_TYPE_MAC              1
 #define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
@@ -1273,9 +1273,9 @@ struct public_drv_mb {
 /* Set function BW, params[15:8] - min, params[7:0] - max */
 #define DRV_MSG_CODE_SET_BW			0x00190000
 #define BW_MAX_MASK				0x000000ff
-#define BW_MAX_SHIFT				0
+#define BW_MAX_OFFSET				0
 #define BW_MIN_MASK				0x0000ff00
-#define BW_MIN_SHIFT				8
+#define BW_MIN_OFFSET				8
 
 /* When param is set to 1, all parities will be masked(disabled). When params
  * are set to 0, parities will be unmasked again.
@@ -1308,9 +1308,9 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
 
 #define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
-#define RESOURCE_CMD_REQ_RESC_SHIFT		0
+#define RESOURCE_CMD_REQ_RESC_OFFSET		0
 #define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
-#define RESOURCE_CMD_REQ_OPCODE_SHIFT		5
+#define RESOURCE_CMD_REQ_OPCODE_OFFSET		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
 /* request resource ownership without aging */
@@ -1321,12 +1321,12 @@ struct public_drv_mb {
 /* force resource release */
 #define RESOURCE_OPCODE_FORCE_RELEASE		5
 #define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
-#define RESOURCE_CMD_REQ_AGE_SHIFT		8
+#define RESOURCE_CMD_REQ_AGE_OFFSET		8
 
 #define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
-#define RESOURCE_CMD_RSP_OWNER_SHIFT		0
+#define RESOURCE_CMD_RSP_OWNER_OFFSET		0
 #define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
-#define RESOURCE_CMD_RSP_OPCODE_SHIFT		8
+#define RESOURCE_CMD_RSP_OPCODE_OFFSET		8
 /* resource is free and granted to requester */
 #define RESOURCE_OPCODE_GNT			1
 /* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
@@ -1373,11 +1373,11 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
 /* Value should be placed in union */
 #define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-#define DRV_MB_PARAM_ADDR_SHIFT			0
+#define DRV_MB_PARAM_ADDR_OFFSET			0
 #define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
-#define DRV_MB_PARAM_DEVAD_SHIFT		16
+#define DRV_MB_PARAM_DEVAD_OFFSET		16
 #define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-#define DRV_MB_PARAM_PORT_SHIFT			21
+#define DRV_MB_PARAM_PORT_OFFSET			21
 #define DRV_MB_PARAM_PORT_MASK			0x00600000
 #define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
 
@@ -1404,44 +1404,44 @@ struct public_drv_mb {
 
 	/* LLDP / DCBX params*/
 #define DRV_MB_PARAM_LLDP_SEND_MASK		0x00000001
-#define DRV_MB_PARAM_LLDP_SEND_SHIFT		0
+#define DRV_MB_PARAM_LLDP_SEND_OFFSET		0
 #define DRV_MB_PARAM_LLDP_AGENT_MASK		0x00000006
-#define DRV_MB_PARAM_LLDP_AGENT_SHIFT		1
+#define DRV_MB_PARAM_LLDP_AGENT_OFFSET		1
 #define DRV_MB_PARAM_DCBX_NOTIFY_MASK		0x00000008
-#define DRV_MB_PARAM_DCBX_NOTIFY_SHIFT		3
+#define DRV_MB_PARAM_DCBX_NOTIFY_OFFSET		3
 
 #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_MASK	0x000000FF
-#define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_SHIFT	0
+#define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_OFFSET	0
 
 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW	0x1
 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_IMAGE	0x2
 
-#define DRV_MB_PARAM_NVM_OFFSET_SHIFT		0
+#define DRV_MB_PARAM_NVM_OFFSET_OFFSET		0
 #define DRV_MB_PARAM_NVM_OFFSET_MASK		0x00FFFFFF
-#define DRV_MB_PARAM_NVM_LEN_SHIFT		24
+#define DRV_MB_PARAM_NVM_LEN_OFFSET		24
 #define DRV_MB_PARAM_NVM_LEN_MASK		0xFF000000
 
-#define DRV_MB_PARAM_PHY_ADDR_SHIFT		0
+#define DRV_MB_PARAM_PHY_ADDR_OFFSET		0
 #define DRV_MB_PARAM_PHY_ADDR_MASK		0x1FF0FFFF
-#define DRV_MB_PARAM_PHY_LANE_SHIFT		16
+#define DRV_MB_PARAM_PHY_LANE_OFFSET		16
 #define DRV_MB_PARAM_PHY_LANE_MASK		0x000F0000
-#define DRV_MB_PARAM_PHY_SELECT_PORT_SHIFT	29
+#define DRV_MB_PARAM_PHY_SELECT_PORT_OFFSET	29
 #define DRV_MB_PARAM_PHY_SELECT_PORT_MASK	0x20000000
-#define DRV_MB_PARAM_PHY_PORT_SHIFT		30
+#define DRV_MB_PARAM_PHY_PORT_OFFSET		30
 #define DRV_MB_PARAM_PHY_PORT_MASK		0xc0000000
 
-#define DRV_MB_PARAM_PHYMOD_LANE_SHIFT		0
+#define DRV_MB_PARAM_PHYMOD_LANE_OFFSET		0
 #define DRV_MB_PARAM_PHYMOD_LANE_MASK		0x000000FF
-#define DRV_MB_PARAM_PHYMOD_SIZE_SHIFT		8
+#define DRV_MB_PARAM_PHYMOD_SIZE_OFFSET		8
 #define DRV_MB_PARAM_PHYMOD_SIZE_MASK		0x000FFF00
 	/* configure vf MSIX params*/
-#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT	0
+#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET	0
 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK	0x000000FF
-#define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_SHIFT	8
+#define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET	8
 #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK	0x0000FF00
 
 	/* OneView configuration parametres */
-#define DRV_MB_PARAM_OV_CURR_CFG_SHIFT		0
+#define DRV_MB_PARAM_OV_CURR_CFG_OFFSET		0
 #define DRV_MB_PARAM_OV_CURR_CFG_MASK		0x0000000F
 #define DRV_MB_PARAM_OV_CURR_CFG_NONE		0
 #define DRV_MB_PARAM_OV_CURR_CFG_OS			1
@@ -1452,7 +1452,7 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_OV_CURR_CFG_DCI		6
 #define DRV_MB_PARAM_OV_CURR_CFG_HII		7
 
-#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_SHIFT				0
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OFFSET				0
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_MASK			0x000000FF
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_NONE				(1 << 0)
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_ISCSI_IP_ACQUIRED		(1 << 1)
@@ -1465,17 +1465,17 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OS_HANDOFF			(1 << 6)
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_COMPLETED				0
 
-#define DRV_MB_PARAM_OV_PCI_BUS_NUM_SHIFT				0
+#define DRV_MB_PARAM_OV_PCI_BUS_NUM_OFFSET				0
 #define DRV_MB_PARAM_OV_PCI_BUS_NUM_MASK		0x000000FF
 
-#define DRV_MB_PARAM_OV_STORM_FW_VER_SHIFT		0
+#define DRV_MB_PARAM_OV_STORM_FW_VER_OFFSET		0
 #define DRV_MB_PARAM_OV_STORM_FW_VER_MASK			0xFFFFFFFF
 #define DRV_MB_PARAM_OV_STORM_FW_VER_MAJOR_MASK		0xFF000000
 #define DRV_MB_PARAM_OV_STORM_FW_VER_MINOR_MASK		0x00FF0000
 #define DRV_MB_PARAM_OV_STORM_FW_VER_BUILD_MASK		0x0000FF00
 #define DRV_MB_PARAM_OV_STORM_FW_VER_DROP_MASK		0x000000FF
 
-#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_SHIFT		0
+#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_OFFSET		0
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_MASK		0xF
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_UNKNOWN		0x1
 /* Not Installed*/
@@ -1486,36 +1486,36 @@ struct public_drv_mb {
 /* installed and active */
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE		0x5
 
-#define DRV_MB_PARAM_OV_MTU_SIZE_SHIFT		0
+#define DRV_MB_PARAM_OV_MTU_SIZE_OFFSET		0
 #define DRV_MB_PARAM_OV_MTU_SIZE_MASK		0xFFFFFFFF
 
 #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
 #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
 #define DRV_MB_PARAM_SET_LED_MODE_OFF		0x2
 
-#define DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT		0
+#define DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET		0
 #define DRV_MB_PARAM_TRANSCEIVER_PORT_MASK		0x00000003
-#define DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT		2
+#define DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET		2
 #define DRV_MB_PARAM_TRANSCEIVER_SIZE_MASK		0x000000FC
-#define DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT	8
+#define DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET	8
 #define DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK	0x0000FF00
-#define DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT		16
+#define DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET		16
 #define DRV_MB_PARAM_TRANSCEIVER_OFFSET_MASK		0xFFFF0000
 
-#define DRV_MB_PARAM_GPIO_NUMBER_SHIFT		0
+#define DRV_MB_PARAM_GPIO_NUMBER_OFFSET		0
 #define DRV_MB_PARAM_GPIO_NUMBER_MASK		0x0000FFFF
-#define DRV_MB_PARAM_GPIO_VALUE_SHIFT		16
+#define DRV_MB_PARAM_GPIO_VALUE_OFFSET		16
 #define DRV_MB_PARAM_GPIO_VALUE_MASK		0xFFFF0000
-#define DRV_MB_PARAM_GPIO_DIRECTION_SHIFT	16
+#define DRV_MB_PARAM_GPIO_DIRECTION_OFFSET	16
 #define DRV_MB_PARAM_GPIO_DIRECTION_MASK	0x00FF0000
-#define DRV_MB_PARAM_GPIO_CTRL_SHIFT		24
+#define DRV_MB_PARAM_GPIO_CTRL_OFFSET		24
 #define DRV_MB_PARAM_GPIO_CTRL_MASK		0xFF000000
 
 	/* Resource Allocation params - Driver version support*/
 #define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
-#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET		16
 #define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
-#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT		0
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET		0
 
 #define DRV_MB_PARAM_BIST_UNKNOWN_TEST		0
 #define DRV_MB_PARAM_BIST_REGISTER_TEST		1
@@ -1528,19 +1528,19 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_BIST_RC_FAILED		2
 #define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER		3
 
-#define DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT      0
+#define DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET      0
 #define DRV_MB_PARAM_BIST_TEST_INDEX_MASK       0x000000FF
-#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT      8
+#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET      8
 #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK       0x0000FF00
 
 #define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_MASK      0x0000FFFF
-#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SHIFT     0
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_OFFSET     0
 /* driver supports SmartLinQ parameter */
 #define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ 0x00000001
 /* driver supports EEE parameter */
 #define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE       0x00000002
 #define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_MASK      0xFFFF0000
-#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_SHIFT     16
+#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_OFFSET     16
 
 	u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
@@ -1654,9 +1654,9 @@ struct public_drv_mb {
 	u32 fw_mb_param;
 /* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
-#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT		16
+#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
-#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT		0
+#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET		0
 
 /* get MFW feature support response */
 /* MFW supports SmartLinQ */
-- 
1.7.10.3

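For readers following the rename: each <FIELD>_SHIFT define becomes <FIELD>_OFFSET, and the ECORE_MFW_GET/SET_FIELD helpers become GET/SET_MFW_FIELD. Below is a minimal sketch of the MASK/OFFSET accessor convention these pairs serve, assuming the usual token-pasting implementation; the real macros live in the ecore headers and may differ in detail.

/* Illustrative sketch only; the real GET/SET_MFW_FIELD macros are
 * defined in the ecore headers. Each field FOO pairs FOO_MASK (the
 * bits it occupies) with FOO_OFFSET (how far those bits are shifted).
 */
#define GET_MFW_FIELD(name, field) \
	(((name) & (field ## _MASK)) >> (field ## _OFFSET))

#define SET_MFW_FIELD(name, field, value) \
do { \
	(name) &= ~(field ## _MASK); \
	(name) |= (((value) << (field ## _OFFSET)) & (field ## _MASK)); \
} while (0)

/* Example: pull the 8-bit MCP HSI version out of a drv_id word. */
static u32 example_get_hsi_ver(u32 drv_id)
{
	return GET_MFW_FIELD(drv_id, DRV_ID_MCP_HSI_VER);
}
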
^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 23/53] net/qede/base: allow clients to override VF MSI-X table size
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (21 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 22/53] net/qede/base: rename MFW get/set field defines Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 24/53] net/qede/base: add API to send STAG config update to FW Rasesh Mody
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

For the chip variant CHIP_NUM_AH_xxx, the MSI-X configuration for VFs is
controlled per-PF [covering all of its child VFs] instead of on a per-VF
basis. A flag called "dont_override_vf_msix" is added so the caller/client
can choose which mode to operate in. When dont_override_vf_msix is false,
as for VFs of CHIP_NUM_AH_xxx, the currently configured number is checked
first; the management FW is asked to configure the requested number only
if it is bigger than the currently configured value.

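A usage sketch with a hypothetical caller; only the b_dont_override_vf_msix flag and the early return in ecore_iov_enable_vf_access_msix() below are taken from the patch itself.

/* Hypothetical client code: a client that wants to keep sizing the
 * VF MSI-X table itself sets the new flag before enabling VFs. With
 * the flag set, ecore_iov_enable_vf_access_msix() below returns
 * ECORE_SUCCESS without asking the management FW to resize anything.
 */
static void client_keep_vf_msix_control(struct ecore_dev *p_dev)
{
	p_dev->b_dont_override_vf_msix = true;
}
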
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore.h       |    1 +
 drivers/net/qede/base/ecore_mcp.c   |   40 +++++++++++++++++++++++++++++---
 drivers/net/qede/base/ecore_sriov.c |   43 +++++++++++++++++++++++++++++++----
 drivers/net/qede/base/mcp_public.h  |   10 +++++++-
 4 files changed, 86 insertions(+), 8 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 71f27da..2d2f6f3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -754,6 +754,7 @@ struct ecore_dev {
 #define IS_ECORE_SRIOV(p_dev)		(!!(p_dev)->p_iov_info)
 	struct ecore_tunnel_info	tunnel;
 	bool				b_is_vf;
+	bool				b_dont_override_vf_msix;
 
 	u32				drv_type;
 
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 0f96c91..733852c 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2248,9 +2248,10 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u8 vf_id, u8 num)
+static enum _ecore_status_t
+ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
+			    u8 vf_id, u8 num)
 {
 	u32 resp = 0, param = 0, rc_param = 0;
 	enum _ecore_status_t rc;
@@ -2282,6 +2283,39 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+static enum _ecore_status_t
+ecore_mcp_config_vf_msix_ah(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
+			    u8 num)
+{
+	u32 resp = 0, param = num, rc_param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CFG_PF_VFS_MSIX,
+			   param, &resp, &rc_param);
+
+	if (resp != FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_DONE) {
+		DP_NOTICE(p_hwfn, true, "MFW failed to set MSI-X for VFs\n");
+		rc = ECORE_INVAL;
+	} else {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Requested 0x%02x MSI-x interrupts for VFs\n",
+			   num);
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u8 vf_id, u8 num)
+{
+	if (ECORE_IS_BB(p_hwfn->p_dev))
+		return ecore_mcp_config_vf_msix_bb(p_hwfn, p_ptt, vf_id, num);
+	else
+		return ecore_mcp_config_vf_msix_ah(p_hwfn, p_ptt, num);
+}
+
 enum _ecore_status_t
 ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mcp_drv_version *p_ver)
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7ac533e..a70ca30 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -819,11 +819,47 @@ static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn,
 }
 
 static enum _ecore_status_t
+ecore_iov_enable_vf_access_msix(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u8 abs_vf_id,
+				u8 num_sbs)
+{
+	u8 current_max = 0;
+	int i;
+
+	/* If client overrides this, don't do anything */
+	if (p_hwfn->p_dev->b_dont_override_vf_msix)
+		return ECORE_SUCCESS;
+
+	/* For AH onward, configuration is per-PF. Find maximum of all
+	 * the currently enabled child VFs, and set the number to be that.
+	 */
+	if (!ECORE_IS_BB(p_hwfn->p_dev)) {
+		ecore_for_each_vf(p_hwfn, i) {
+			struct ecore_vf_info *p_vf;
+
+			p_vf  = ecore_iov_get_vf_info(p_hwfn, (u16)i, true);
+			if (!p_vf)
+				continue;
+
+			current_max = OSAL_MAX_T(u8, current_max,
+						 p_vf->num_sbs);
+		}
+	}
+
+	if (num_sbs > current_max)
+		return ecore_mcp_config_vf_msix(p_hwfn, p_ptt,
+						abs_vf_id, num_sbs);
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
 ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt, struct ecore_vf_info *vf)
 {
 	u32 igu_vf_conf = IGU_VF_CONF_FUNC_EN;
-	enum _ecore_status_t rc;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (vf->to_disable)
 		return ECORE_SUCCESS;
@@ -839,9 +875,8 @@ static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn,
 
 	/* It's possible VF was previously considered malicious */
 	vf->b_malicious = false;
-
-	rc = ecore_mcp_config_vf_msix(p_hwfn, p_ptt,
-				      vf->abs_vf_id, vf->num_sbs);
+	rc = ecore_iov_enable_vf_access_msix(p_hwfn, p_ptt,
+					     vf->abs_vf_id, vf->num_sbs);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index ff9ce9e..7ac2820 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1192,6 +1192,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_INITIATE_PF_FLR            0x02010000
 #define DRV_MSG_CODE_VF_DISABLED_DONE           0xc0000000
 #define DRV_MSG_CODE_CFG_VF_MSIX                0xc0010000
+#define DRV_MSG_CODE_CFG_PF_VFS_MSIX            0xc0020000
 /* Param is either DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW/IMAGE */
 #define DRV_MSG_CODE_NVM_PUT_FILE_BEGIN		0x00010000
 /* Param should be set to the transaction size (up to 64 bytes) */
@@ -1434,11 +1435,14 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_PHYMOD_LANE_MASK		0x000000FF
 #define DRV_MB_PARAM_PHYMOD_SIZE_OFFSET		8
 #define DRV_MB_PARAM_PHYMOD_SIZE_MASK		0x000FFF00
-	/* configure vf MSIX params*/
+	/* configure vf MSIX params BB */
 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET	0
 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK	0x000000FF
 #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET	8
 #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK	0x0000FF00
+	/* configure vf MSIX for PF params AH */
+#define DRV_MB_PARAM_CFG_PF_VFS_MSIX_SB_NUM_OFFSET	0
+#define DRV_MB_PARAM_CFG_PF_VFS_MSIX_SB_NUM_MASK	0x000000FF
 
 	/* OneView configuration parametres */
 #define DRV_MB_PARAM_OV_CURR_CFG_OFFSET		0
@@ -1648,6 +1652,10 @@ struct public_drv_mb {
 #define FW_MSG_CODE_MDUMP_IN_PROGRESS		0x00040000
 #define FW_MSG_CODE_MDUMP_WRITE_FAILED		0x00050000
 
+
+#define FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_DONE     0x00870000
+#define FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_BAD_ASIC 0x00880000
+
 #define FW_MSG_SEQ_NUMBER_MASK                  0x0000ffff
 
 
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 24/53] net/qede/base: add API to send STAG config update to FW
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (22 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 23/53] net/qede/base: allow clients to override VF MSI-X table size Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 25/53] net/qede/base: add support for doorbell overflow recovery Rasesh Mody
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Send the updated STAG configuration to the firmware.

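A caller sketch with a hypothetical wrapper; ecore_sp_pf_update_stag() and the hw_info.ovlan source are from the patch below.

/* Hypothetical caller: once the management FW reports a new S-tag,
 * stored by the driver in p_hwfn->hw_info.ovlan, push it to the
 * storm FW through the new PF-update ramrod.
 */
static enum _ecore_status_t client_apply_stag(struct ecore_hwfn *p_hwfn)
{
	return ecore_sp_pf_update_stag(p_hwfn);
}
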
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_dcbx.c        |    2 +-
 drivers/net/qede/base/ecore_mcp.c         |    1 +
 drivers/net/qede/base/ecore_sp_commands.c |   27 ++++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_sp_commands.h |   12 +++++++++++-
 4 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index cce2830..e7848c7 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -882,7 +882,7 @@ enum _ecore_status_t
 			ecore_qm_reconf(p_hwfn, p_ptt);
 
 			/* update storm FW with negotiation results */
-			ecore_sp_pf_update(p_hwfn);
+			ecore_sp_pf_update_dcbx(p_hwfn);
 
 			/* set eagle enigne 1 flow control workaround
 			 * according to negotiation results
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 733852c..21eea49 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -20,6 +20,7 @@
 #include "ecore_gtt_reg_addr.h"
 #include "ecore_iro.h"
 #include "ecore_dcbx.h"
+#include "ecore_sp_commands.h"
 
 #define CHIP_MCP_RESP_ITER_US 10
 #define EMUL_MCP_RESP_ITER_US (1000 * 1000)
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index abfdfbf..d67805c 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -398,7 +398,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_sp_pf_update(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_sp_pf_update_dcbx(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
@@ -555,3 +555,28 @@ enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
+
+enum _ecore_status_t ecore_sp_pf_update_stag(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+	/* Get SPQ entry */
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.cid = ecore_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = ECORE_SPQ_MODE_CB;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   COMMON_RAMROD_PF_UPDATE, PROTOCOLID_COMMON,
+				   &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ent->ramrod.pf_update.update_mf_vlan_flag = true;
+	p_ent->ramrod.pf_update.mf_vlan =
+				OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan);
+
+	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index b9f40b7..34d5a76 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -87,7 +87,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
  * @return enum _ecore_status_t
  */
 
-enum _ecore_status_t ecore_sp_pf_update(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_sp_pf_update_dcbx(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_sp_pf_stop - PF Function Stop Ramrod
@@ -145,4 +145,14 @@ struct ecore_rl_update_params {
 enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 					struct ecore_rl_update_params *params);
 
+/**
+ * @brief ecore_sp_pf_update_stag - PF STAG value update Ramrod
+ *
+ * @param p_hwfn
+ *
+ * @return enum _ecore_status_t
+ */
+
+enum _ecore_status_t ecore_sp_pf_update_stag(struct ecore_hwfn *p_hwfn);
+
 #endif /*__ECORE_SP_COMMANDS_H__*/
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 25/53] net/qede/base: add support for doorbell overflow recovery
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (23 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 24/53] net/qede/base: add API to send STAG config update to FW Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 26/53] net/qede/base: block mbox command to unresponsive MFW Rasesh Mody
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Add support for the doorbell overflow recovery mechanism:
The doorbell recovery mechanism consists of a list of entries which
represent doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath
spq, etc.). Each entity needs to register with the mechanism and provide
the parameters describing its doorbell, including a location where the
last used doorbell data can be found. The doorbell execute function
traverses the list and doorbells all of the registered entries.

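A registration sketch, with the assumptions stated up front: the ecore_db_recovery_add() entry point and the DB_REC_KERNEL space value are not visible in the excerpt below and are assumed here for illustration; DB_REC_WIDTH_32B does appear in the patch.

/* Sketch only. ecore_db_recovery_add() is assumed to be the
 * registration entry point added by the full patch in ecore_dev.c
 * (not shown in this excerpt), and DB_REC_KERNEL is an assumed
 * ecore_db_rec_space value. Each doorbelling entity hands over its
 * doorbell address plus a shadow copy of the last doorbell data, so
 * ecore_db_recovery_execute() can replay the doorbell later.
 */
struct my_l2_queue {
	void OSAL_IOMEM *db_addr; /* doorbell BAR address */
	u32 db_data;              /* last value written to the doorbell */
};

static void my_l2_queue_register_db(struct ecore_dev *p_dev,
				    struct my_l2_queue *q)
{
	ecore_db_recovery_add(p_dev, q->db_addr, &q->db_data,
			      DB_REC_WIDTH_32B, DB_REC_KERNEL);
}
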
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h      |    3 +
 drivers/net/qede/base/ecore.h         |   24 ++-
 drivers/net/qede/base/ecore_dev.c     |  322 ++++++++++++++++++++++++++++++++-
 drivers/net/qede/base/ecore_dev_api.h |   39 ++++
 drivers/net/qede/base/ecore_int.c     |  141 +++++++++++++--
 drivers/net/qede/base/ecore_spq.c     |   51 ++++--
 drivers/net/qede/base/ecore_spq.h     |    3 +
 drivers/net/qede/base/reg_addr.h      |   10 +
 drivers/net/qede/qede_main.c          |    1 +
 9 files changed, 557 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index f4c7028..70b1a7f 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -148,6 +148,9 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *, dma_addr_t *,
 			      ((u8 *)(uintptr_t)(_p_hwfn->doorbells) +	\
 			      (_db_addr)), (u32)_val)
 
+#define DIRECT_REG_WR64(hwfn, addr, value) nothing
+#define DIRECT_REG_RD64(hwfn, addr) 0
+
 /* Mutexes */
 
 typedef pthread_mutex_t osal_mutex_t;
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 2d2f6f3..d921d9e 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -351,6 +351,12 @@ enum ecore_hw_err_type {
 };
 #endif
 
+enum ecore_db_rec_exec {
+	DB_REC_DRY_RUN,
+	DB_REC_REAL_DEAL,
+	DB_REC_ONCE,
+};
+
 struct ecore_hw_info {
 	/* PCI personality */
 	enum ecore_pci_personality personality;
@@ -479,6 +485,12 @@ struct ecore_qm_info {
 	u8			num_pf_rls;
 };
 
+struct ecore_db_recovery_info {
+	osal_list_t list;
+	osal_spinlock_t lock;
+	u32 db_recovery_counter;
+};
+
 struct storm_stats {
 	u32 address;
 	u32 len;
@@ -605,6 +617,9 @@ struct ecore_hwfn {
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
 
+	/* Mechanism for recovering from doorbell drop */
+	struct ecore_db_recovery_info	db_recovery_info;
+
 	/* @DPDK */
 	struct ecore_ptt		*p_arfs_ptt;
 };
@@ -860,6 +875,13 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
 u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u8 qpid);
 
+const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
+
+/* doorbell recovery mechanism */
+void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn);
+void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
+			       enum ecore_db_rec_exec);
+
 /* amount of resources used in qm init */
 u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
 u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
@@ -869,6 +891,4 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 
 #define ECORE_LEADING_HWFN(dev)	(&dev->hwfns[0])
 
-const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
-
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2fe30d7..711a824 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -42,6 +42,318 @@
 static osal_spinlock_t qm_lock;
 static bool qm_lock_init;
 
+/******************** Doorbell Recovery *******************/
+/* The doorbell recovery mechanism consists of a list of entries which represent
+ * doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath spq, etc). Each
+ * entity needs to register with the mechanism and provide the parameters
+ * describing its doorbell, including a location where last used doorbell data
+ * can be found. The doorbell execute function will traverse the list and
+ * doorbell all of the registered entries.
+ */
+struct ecore_db_recovery_entry {
+	osal_list_entry_t	list_entry;
+	void OSAL_IOMEM		*db_addr;
+	void			*db_data;
+	enum ecore_db_rec_width	db_width;
+	enum ecore_db_rec_space	db_space;
+	u8			hwfn_idx;
+};
+
+/* display a single doorbell recovery entry */
+void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn,
+				struct ecore_db_recovery_entry *db_entry,
+				const char *action)
+{
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "(%s: db_entry %p, addr %p, data %p, width %s, %s space, hwfn %d)\n",
+		   action, db_entry, db_entry->db_addr, db_entry->db_data,
+		   db_entry->db_width == DB_REC_WIDTH_32B ? "32b" : "64b",
+		   db_entry->db_space == DB_REC_USER ? "user" : "kernel",
+		   db_entry->hwfn_idx);
+}
+
+/* doorbell address sanity (address within doorbell bar range) */
+bool ecore_db_rec_sanity(struct ecore_dev *p_dev, void OSAL_IOMEM *db_addr,
+			 void *db_data)
+{
+	/* make sure doorbell address is within the doorbell bar */
+	if (db_addr < p_dev->doorbells || (u8 *)db_addr >
+			(u8 *)p_dev->doorbells + p_dev->db_size) {
+		OSAL_WARN(true,
+			  "Illegal doorbell address: %p. Legal range for doorbell addresses is [%p..%p]\n",
+			  db_addr, p_dev->doorbells,
+			  (u8 *)p_dev->doorbells + p_dev->db_size);
+		return false;
+	}
+
+	/* make sure doorbell data pointer is not null */
+	if (!db_data) {
+		OSAL_WARN(true, "Illegal doorbell data pointer: %p", db_data);
+		return false;
+	}
+
+	return true;
+}
+
+/* find hwfn according to the doorbell address */
+struct ecore_hwfn *ecore_db_rec_find_hwfn(struct ecore_dev *p_dev,
+					  void OSAL_IOMEM *db_addr)
+{
+	struct ecore_hwfn *p_hwfn;
+
+	/* In CMT doorbell bar is split down the middle between engine 0 and
+	 * engine 1
+	 */
+	if (p_dev->num_hwfns > 1)
+		p_hwfn = db_addr < p_dev->hwfns[1].doorbells ?
+			&p_dev->hwfns[0] : &p_dev->hwfns[1];
+	else
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+
+	return p_hwfn;
+}
+
+/* add a new entry to the doorbell recovery mechanism */
+enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev,
+					   void OSAL_IOMEM *db_addr,
+					   void *db_data,
+					   enum ecore_db_rec_width db_width,
+					   enum ecore_db_rec_space db_space)
+{
+	struct ecore_db_recovery_entry *db_entry;
+	struct ecore_hwfn *p_hwfn;
+
+	/* short-circuit VFs, for now */
+	if (IS_VF(p_dev)) {
+		DP_VERBOSE(p_dev, ECORE_MSG_IOV, "db recovery - skipping VF doorbell\n");
+		return ECORE_SUCCESS;
+	}
+
+	/* sanitize doorbell address */
+	if (!ecore_db_rec_sanity(p_dev, db_addr, db_data))
+		return ECORE_INVAL;
+
+	/* obtain hwfn from doorbell address */
+	p_hwfn = ecore_db_rec_find_hwfn(p_dev, db_addr);
+
+	/* create entry */
+	db_entry = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*db_entry));
+	if (!db_entry) {
+		DP_NOTICE(p_dev, false, "Failed to allocate a db recovery entry\n");
+		return ECORE_NOMEM;
+	}
+
+	/* populate entry */
+	db_entry->db_addr = db_addr;
+	db_entry->db_data = db_data;
+	db_entry->db_width = db_width;
+	db_entry->db_space = db_space;
+	db_entry->hwfn_idx = p_hwfn->my_id;
+
+	/* display */
+	ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Adding");
+
+	/* protect the list */
+	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
+	OSAL_LIST_PUSH_TAIL(&db_entry->list_entry,
+			    &p_hwfn->db_recovery_info.list);
+	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
+
+	return ECORE_SUCCESS;
+}
+
+/* remove an entry from the doorbell recovery mechanism */
+enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
+					   void OSAL_IOMEM *db_addr,
+					   void *db_data)
+{
+	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_INVAL;
+	struct ecore_hwfn *p_hwfn;
+
+	/* short-circuit VFs, for now */
+	if (IS_VF(p_dev)) {
+		DP_VERBOSE(p_dev, ECORE_MSG_IOV, "db recovery - skipping VF doorbell\n");
+		return ECORE_SUCCESS;
+	}
+
+	/* sanitize doorbell address */
+	if (!ecore_db_rec_sanity(p_dev, db_addr, db_data))
+		return ECORE_INVAL;
+
+	/* obtain hwfn from doorbell address */
+	p_hwfn = ecore_db_rec_find_hwfn(p_dev, db_addr);
+
+	/* protect the list */
+	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
+	OSAL_LIST_FOR_EACH_ENTRY(db_entry,
+				 &p_hwfn->db_recovery_info.list,
+				 list_entry,
+				 struct ecore_db_recovery_entry) {
+		/* search according to db_data addr since db_addr is not unique
+		 * (roce)
+		 */
+		if (db_entry->db_data == db_data) {
+			ecore_db_recovery_dp_entry(p_hwfn, db_entry,
+						   "Deleting");
+			OSAL_LIST_REMOVE_ENTRY(&db_entry->list_entry,
+					       &p_hwfn->db_recovery_info.list);
+			rc = ECORE_SUCCESS;
+			break;
+		}
+	}
+
+	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
+
+	if (rc == ECORE_INVAL)
+		/*OSAL_WARN(true,*/
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to find element in list. Key (db_data addr) was %p. db_addr was %p\n",
+			  db_data, db_addr);
+	else
+		OSAL_FREE(p_dev, db_entry);
+
+	return rc;
+}
+
+/* initialize the doorbell recovery mechanism */
+enum _ecore_status_t ecore_db_recovery_setup(struct ecore_hwfn *p_hwfn)
+{
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Setting up db recovery\n");
+
+	/* make sure db_size was set in p_dev */
+	if (!p_hwfn->p_dev->db_size) {
+		DP_ERR(p_hwfn->p_dev, "db_size not set\n");
+		return ECORE_INVAL;
+	}
+
+	OSAL_LIST_INIT(&p_hwfn->db_recovery_info.list);
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->db_recovery_info.lock);
+#endif
+	OSAL_SPIN_LOCK_INIT(&p_hwfn->db_recovery_info.lock);
+	p_hwfn->db_recovery_info.db_recovery_counter = 0;
+
+	return ECORE_SUCCESS;
+}
+
+/* destroy the doorbell recovery mechanism */
+void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Tearing down db recovery\n");
+	if (!OSAL_LIST_IS_EMPTY(&p_hwfn->db_recovery_info.list)) {
+		DP_VERBOSE(p_hwfn, false, "Doorbell Recovery teardown found the doorbell recovery list was not empty (Expected in disorderly driver unload (e.g. recovery) otherwise this probably means some flow forgot to db_recovery_del). Prepare to purge doorbell recovery list...\n");
+		while (!OSAL_LIST_IS_EMPTY(&p_hwfn->db_recovery_info.list)) {
+			db_entry = OSAL_LIST_FIRST_ENTRY(
+						&p_hwfn->db_recovery_info.list,
+						struct ecore_db_recovery_entry,
+						list_entry);
+			ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Purging");
+			OSAL_LIST_REMOVE_ENTRY(&db_entry->list_entry,
+					       &p_hwfn->db_recovery_info.list);
+			OSAL_FREE(p_hwfn->p_dev, db_entry);
+		}
+	}
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->db_recovery_info.lock);
+#endif
+	p_hwfn->db_recovery_info.db_recovery_counter = 0;
+}
+
+/* print the content of the doorbell recovery mechanism */
+void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
+
+	DP_NOTICE(p_hwfn, false,
+		  "Dispalying doorbell recovery database. Counter was %d\n",
+		  p_hwfn->db_recovery_info.db_recovery_counter);
+
+	/* protect the list */
+	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
+	OSAL_LIST_FOR_EACH_ENTRY(db_entry,
+				 &p_hwfn->db_recovery_info.list,
+				 list_entry,
+				 struct ecore_db_recovery_entry) {
+		ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Printing");
+	}
+
+	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
+}
+
+/* ring the doorbell of a single doorbell recovery entry */
+void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
+			    struct ecore_db_recovery_entry *db_entry,
+			    enum ecore_db_rec_exec db_exec)
+{
+	/* Print according to width */
+	if (db_entry->db_width == DB_REC_WIDTH_32B)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "%s doorbell address %p data %x\n",
+			   db_exec == DB_REC_DRY_RUN ? "would have rung" : "ringing",
+			   db_entry->db_addr, *(u32 *)db_entry->db_data);
+	else
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "%s doorbell address %p data %lx\n",
+			   db_exec == DB_REC_DRY_RUN ? "would have rung" : "ringing",
+			   db_entry->db_addr,
+			   *(unsigned long *)(db_entry->db_data));
+
+	/* Sanity */
+	if (!ecore_db_rec_sanity(p_hwfn->p_dev, db_entry->db_addr,
+				 db_entry->db_data))
+		return;
+
+	/* Flush the write-combining buffer. Since there are multiple doorbelling
+	 * entities using the same address, if we don't flush, a transaction
+	 * could be lost.
+	 */
+	OSAL_WMB(p_hwfn->p_dev);
+
+	/* Ring the doorbell */
+	if (db_exec == DB_REC_REAL_DEAL || db_exec == DB_REC_ONCE) {
+		if (db_entry->db_width == DB_REC_WIDTH_32B)
+			DIRECT_REG_WR(p_hwfn, db_entry->db_addr,
+				      *(u32 *)(db_entry->db_data));
+		else
+			DIRECT_REG_WR64(p_hwfn, db_entry->db_addr,
+					*(u64 *)(db_entry->db_data));
+	}
+
+	/* Flush the write-combining buffer. The next doorbell may come from a
+	 * different entity to the same address...
+	 */
+	OSAL_WMB(p_hwfn->p_dev);
+}
+
+/* traverse the doorbell recovery entry list and ring all the doorbells */
+void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
+			       enum ecore_db_rec_exec db_exec)
+{
+	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
+
+	if (db_exec != DB_REC_ONCE) {
+		DP_NOTICE(p_hwfn, false, "Executing doorbell recovery. Counter was %d\n",
+			  p_hwfn->db_recovery_info.db_recovery_counter);
+
+		/* track the number of times recovery was executed */
+		p_hwfn->db_recovery_info.db_recovery_counter++;
+	}
+
+	/* protect the list */
+	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
+	OSAL_LIST_FOR_EACH_ENTRY(db_entry,
+				 &p_hwfn->db_recovery_info.list,
+				 list_entry,
+				 struct ecore_db_recovery_entry) {
+		ecore_db_recovery_ring(p_hwfn, db_entry, db_exec);
+		if (db_exec == DB_REC_ONCE)
+			break;
+	}
+
+	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
+}
+/******************** Doorbell Recovery end ****************/
+
 /* Configurable */
 #define ECORE_MIN_DPIS		(4)	/* The minimal num of DPIs required to
 					 * load the driver. The number was
@@ -172,6 +484,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
 		/* @@@TBD Flush work-queue ? */
+
+		/* destroy doorbell recovery mechanism */
+		ecore_db_recovery_teardown(p_hwfn);
 	}
 }
 
@@ -863,12 +1178,17 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		u32 n_eqes, num_cons;
 
+		/* initialize the doorbell recovery mechanism */
+		rc = ecore_db_recovery_setup(p_hwfn);
+		if (rc)
+			goto alloc_err;
+
 		/* First allocate the context manager structure */
 		rc = ecore_cxt_mngr_alloc(p_hwfn);
 		if (rc)
 			goto alloc_err;
 
-		/* Set the HW cid/tid numbers (in the contest manager)
+		/* Set the HW cid/tid numbers (in the context manager)
 		 * Must be done prior to any further computations.
 		 */
 		rc = ecore_cxt_set_pf_params(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index b3c9f89..8b28af9 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -155,6 +155,45 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
  *
  */
 void ecore_prepare_hibernate(struct ecore_dev *p_dev);
+
+enum ecore_db_rec_width {
+	DB_REC_WIDTH_32B,
+	DB_REC_WIDTH_64B,
+};
+
+enum ecore_db_rec_space {
+	DB_REC_KERNEL,
+	DB_REC_USER,
+};
+
+/**
+ * @brief db_recovery_add - add doorbell information to the doorbell
+ * recovery mechanism.
+ *
+ * @param p_dev
+ * @param db_addr - doorbell address
+ * @param db_data - address of where db_data is stored
+ * @param db_width - doorbell is 32b or 64b
+ * @param db_space - doorbell recovery addresses are user or kernel space
+ */
+enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev,
+					   void OSAL_IOMEM *db_addr,
+					   void *db_data,
+					   enum ecore_db_rec_width db_width,
+					   enum ecore_db_rec_space db_space);
+
+/**
+ * @brief db_recovery_del - remove doorbell information from the doorbell
+ * recovery mechanism. db_data serves as key (db_addr is not unique).
+ *
+ * @param p_dev
+ * @param db_addr - doorbell address
+ * @param db_data - address where db_data is stored. Serves as key for the
+ *                  entry to delete.
+ */
+enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
+					   void OSAL_IOMEM *db_addr,
+					   void *db_data);
 #endif
 
 /**
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index acf8759..d86f56e 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -414,31 +414,136 @@ static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-#define ECORE_DORQ_ATTENTION_REASON_MASK (0xfffff)
-#define ECORE_DORQ_ATTENTION_OPAQUE_MASK (0xffff)
-#define ECORE_DORQ_ATTENTION_SIZE_MASK	 (0x7f0000)
-#define ECORE_DORQ_ATTENTION_SIZE_SHIFT	 (16)
+#define ECORE_DORQ_ATTENTION_REASON_MASK	(0xfffff)
+#define ECORE_DORQ_ATTENTION_OPAQUE_MASK	(0xffff)
+#define ECORE_DORQ_ATTENTION_OPAQUE_SHIFT	(0x0)
+#define ECORE_DORQ_ATTENTION_SIZE_MASK		(0x7f)
+#define ECORE_DORQ_ATTENTION_SIZE_SHIFT		(16)
+
+#define ECORE_DB_REC_COUNT			10
+#define ECORE_DB_REC_INTERVAL			100
+
+/* assumes sticky overflow indication was set for this PF */
+static enum _ecore_status_t ecore_db_rec_attn(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt)
+{
+	u8 count = ECORE_DB_REC_COUNT;
+	u32 usage = 1;
+
+	/* wait for usage to zero or count to run out. This is necessary since
+	 * EDPM doorbell transactions can take multiple 64b cycles, and as such
+	 * can "split" over the pci. Possibly, the doorbell drop can happen with
+	 * half an EDPM in the queue and other half dropped. Another EDPM
+	 * doorbell to the same address (from doorbell recovery mechanism or
+	 * from the doorbelling entity) could have first half dropped and second
+	 * half interperted as continuation of the first. To prevent such
+	 * malformed doorbells from reaching the device, flush the queue before
+	 * releaseing the overflow sticky indication.
+	 */
+	while (count-- && usage) {
+		usage = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_USAGE_CNT);
+		OSAL_UDELAY(ECORE_DB_REC_INTERVAL);
+	}
+
+	/* should have been depleted by now */
+	if (usage) {
+		DP_NOTICE(p_hwfn->p_dev, false,
+			  "DB recovery: doorbell usage failed to zero after %d usec. usage was %x\n",
+			  ECORE_DB_REC_INTERVAL * ECORE_DB_REC_COUNT, usage);
+		return ECORE_TIMEOUT;
+	}
+
+	/* flush any pending (e)dpm as they may never arrive */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
+
+	/* release overflow sticky indication (stop silently dropping
+	 * everything)
+	 */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0);
+
+	/* repeat all last doorbells (doorbell drop recovery) */
+	ecore_db_recovery_execute(p_hwfn, DB_REC_REAL_DEAL);
+
+	return ECORE_SUCCESS;
+}
 
 static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
 {
-	u32 reason;
+	u32 int_sts, first_drop_reason, details, address, overflow,
+		all_drops_reason;
+	struct ecore_ptt *p_ptt = p_hwfn->p_dpc_ptt;
+	enum _ecore_status_t rc;
 
-	reason = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, DORQ_REG_DB_DROP_REASON) &
-	    ECORE_DORQ_ATTENTION_REASON_MASK;
-	if (reason) {
-		u32 details = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
-				       DORQ_REG_DB_DROP_DETAILS);
+	int_sts = ecore_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS);
+	DP_NOTICE(p_hwfn->p_dev, false, "DORQ attention. int_sts was %x\n",
+		  int_sts);
 
-		DP_INFO(p_hwfn->p_dev,
-			"DORQ db_drop: address 0x%08x Opaque FID 0x%04x"
-			" Size [bytes] 0x%08x Reason: 0x%08x\n",
-			ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
-				 DORQ_REG_DB_DROP_DETAILS_ADDRESS),
-			(u16)(details & ECORE_DORQ_ATTENTION_OPAQUE_MASK),
-			((details & ECORE_DORQ_ATTENTION_SIZE_MASK) >>
-			 ECORE_DORQ_ATTENTION_SIZE_SHIFT) * 4, reason);
+	/* int_sts may be zero since all PFs were interrupted for doorbell
+	 * overflow but another one already handled it. We can abort here; if
+	 * this PF also requires overflow recovery, we will be interrupted again.
+	 */
+	if (!int_sts)
+		return ECORE_SUCCESS;
+
+	/* check if db_drop or overflow happened */
+	if (int_sts & (DORQ_REG_INT_STS_DB_DROP |
+		       DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR)) {
+		/* obtain data about db drop/overflow */
+		first_drop_reason = ecore_rd(p_hwfn, p_ptt,
+				  DORQ_REG_DB_DROP_REASON) &
+				  ECORE_DORQ_ATTENTION_REASON_MASK;
+		details = ecore_rd(p_hwfn, p_ptt,
+				   DORQ_REG_DB_DROP_DETAILS);
+		address = ecore_rd(p_hwfn, p_ptt,
+				   DORQ_REG_DB_DROP_DETAILS_ADDRESS);
+		overflow = ecore_rd(p_hwfn, p_ptt,
+				    DORQ_REG_PF_OVFL_STICKY);
+		all_drops_reason = ecore_rd(p_hwfn, p_ptt,
+					    DORQ_REG_DB_DROP_DETAILS_REASON);
+
+		/* log info */
+		DP_NOTICE(p_hwfn->p_dev, false,
+			  "Doorbell drop occurred\n"
+			  "Address\t\t0x%08x\t(second BAR address)\n"
+			  "FID\t\t0x%04x\t\t(Opaque FID)\n"
+			  "Size\t\t0x%04x\t\t(in bytes)\n"
+			  "1st drop reason\t0x%08x\t(details on first drop since last handling)\n"
+			  "Sticky reasons\t0x%08x\t(all drop reasons since last handling)\n"
+			  "Overflow\t0x%x\t\t(a per PF indication)\n",
+			  address,
+			  GET_FIELD(details, ECORE_DORQ_ATTENTION_OPAQUE),
+			  GET_FIELD(details, ECORE_DORQ_ATTENTION_SIZE) * 4,
+			  first_drop_reason, all_drops_reason, overflow);
+
+		/* if this PF caused overflow, initiate recovery */
+		if (overflow) {
+			rc = ecore_db_rec_attn(p_hwfn, p_ptt);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+		}
+
+		/* clear the doorbell drop details and prepare for next drop */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0);
+
+		/* mark interrupt as handled (note: even if the drop was due to
+		 * a reason other than overflow, we mark it as handled)
+		 */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_INT_STS_WR,
+			 DORQ_REG_INT_STS_DB_DROP |
+			 DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR);
+
+		/* if there are no indications other than drop indications,
+		 * success
+		 */
+		if ((int_sts & ~(DORQ_REG_INT_STS_DB_DROP |
+				 DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR |
+				 DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) == 0)
+			return ECORE_SUCCESS;
 	}
 
+	/* some other indication was present - non-recoverable */
+	DP_INFO(p_hwfn, "DORQ fatal attention\n");
+
 	return ECORE_INVAL;
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 29ba660..716799a 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -231,9 +231,9 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
 					      struct ecore_spq_entry *p_ent)
 {
 	struct ecore_chain *p_chain = &p_hwfn->p_spq->chain;
+	struct core_db_data *p_db_data = &p_spq->db_data;
 	u16 echo = ecore_chain_get_prod_idx(p_chain);
 	struct slow_path_element *elem;
-	struct core_db_data db;
 
 	p_ent->elem.hdr.echo = OSAL_CPU_TO_LE16(echo);
 	elem = ecore_chain_produce(p_chain);
@@ -242,31 +242,24 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	*elem = p_ent->elem;	/* struct assignment */
+	*elem = p_ent->elem;	/* Struct assignment */
 
-	/* send a doorbell on the slow hwfn session */
-	OSAL_MEMSET(&db, 0, sizeof(db));
-	SET_FIELD(db.params, CORE_DB_DATA_DEST, DB_DEST_XCM);
-	SET_FIELD(db.params, CORE_DB_DATA_AGG_CMD, DB_AGG_CMD_SET);
-	SET_FIELD(db.params, CORE_DB_DATA_AGG_VAL_SEL,
-		  DQ_XCM_CORE_SPQ_PROD_CMD);
-	db.agg_flags = DQ_XCM_CORE_DQ_CF_CMD;
-	db.spq_prod = OSAL_CPU_TO_LE16(ecore_chain_get_prod_idx(p_chain));
+	p_db_data->spq_prod =
+		OSAL_CPU_TO_LE16(ecore_chain_get_prod_idx(p_chain));
 
-	/* make sure the SPQE is updated before the doorbell */
+	/* Make sure the SPQE is updated before the doorbell */
 	OSAL_WMB(p_hwfn->p_dev);
 
-	DOORBELL(p_hwfn, DB_ADDR(p_spq->cid, DQ_DEMS_LEGACY),
-		 *(u32 *)&db);
+	DOORBELL(p_hwfn, p_spq->db_addr_offset, *(u32 *)p_db_data);
 
-	/* make sure doorbell is rang */
+	/* Make sure the doorbell was rung */
 	OSAL_WMB(p_hwfn->p_dev);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
 		   "Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x"
 		   " agg_params: %02x, prod: %04x\n",
-		   DB_ADDR(p_spq->cid, DQ_DEMS_LEGACY), p_spq->cid, db.params,
-		   db.agg_flags, ecore_chain_get_prod_idx(p_chain));
+		   p_spq->db_addr_offset, p_spq->cid, p_db_data->params,
+		   p_db_data->agg_flags, ecore_chain_get_prod_idx(p_chain));
 
 	return ECORE_SUCCESS;
 }
@@ -456,8 +449,11 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_spq *p_spq = p_hwfn->p_spq;
 	struct ecore_spq_entry *p_virt = OSAL_NULL;
+	struct core_db_data *p_db_data;
+	void OSAL_IOMEM *db_addr;
 	dma_addr_t p_phys = 0;
 	u32 i, capacity;
+	enum _ecore_status_t rc;
 
 	OSAL_LIST_INIT(&p_spq->pending);
 	OSAL_LIST_INIT(&p_spq->completion_pending);
@@ -495,6 +491,24 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
 
 	/* reset the chain itself */
 	ecore_chain_reset(&p_spq->chain);
+
+	/* Initialize the address/data of the SPQ doorbell */
+	p_spq->db_addr_offset = DB_ADDR(p_spq->cid, DQ_DEMS_LEGACY);
+	p_db_data = &p_spq->db_data;
+	OSAL_MEM_ZERO(p_db_data, sizeof(*p_db_data));
+	SET_FIELD(p_db_data->params, CORE_DB_DATA_DEST, DB_DEST_XCM);
+	SET_FIELD(p_db_data->params, CORE_DB_DATA_AGG_CMD, DB_AGG_CMD_MAX);
+	SET_FIELD(p_db_data->params, CORE_DB_DATA_AGG_VAL_SEL,
+		  DQ_XCM_CORE_SPQ_PROD_CMD);
+	p_db_data->agg_flags = DQ_XCM_CORE_DQ_CF_CMD;
+
+	/* Register the SPQ doorbell with the doorbell recovery mechanism */
+	db_addr = (void *)((u8 *)p_hwfn->doorbells + p_spq->db_addr_offset);
+	rc = ecore_db_recovery_add(p_hwfn->p_dev, db_addr, &p_spq->db_data,
+				   DB_REC_WIDTH_32B, DB_REC_KERNEL);
+	if (rc != ECORE_SUCCESS)
+		DP_INFO(p_hwfn,
+			"Failed to register the SPQ doorbell with the doorbell recovery mechanism\n");
 }
 
 enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
@@ -552,11 +566,16 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
 void ecore_spq_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_spq *p_spq = p_hwfn->p_spq;
+	void OSAL_IOMEM *db_addr;
 	u32 capacity;
 
 	if (!p_spq)
 		return;
 
+	/* Delete the SPQ doorbell from the doorbell recovery mechanism */
+	db_addr = (void *)((u8 *)p_hwfn->doorbells + p_spq->db_addr_offset);
+	ecore_db_recovery_del(p_hwfn->p_dev, db_addr, &p_spq->db_data);
+
 	if (p_spq->p_virt) {
 		capacity = ecore_chain_get_capacity(&p_spq->chain);
 		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index e530f83..31d8a3e 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -124,6 +124,9 @@ struct ecore_spq {
 	u32				comp_count;
 
 	u32				cid;
+
+	u32				db_addr_offset;
+	struct core_db_data		db_data;
 };
 
 struct ecore_port;
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 116fe78..9048581 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1208,3 +1208,13 @@
 
 #define PSWRQ2_REG_WR_MBS0 0x240400UL
 #define PGLUE_B_REG_MASTER_WRITE_PAD_ENABLE 0x2aae30UL
+#define DORQ_REG_PF_USAGE_CNT 0x1009c0UL
+#define DORQ_REG_DPM_FORCE_ABORT 0x1009d8UL
+#define DORQ_REG_PF_OVFL_STICKY 0x1009d0UL
+#define DORQ_REG_INT_STS 0x100180UL
+  #define DORQ_REG_INT_STS_DB_DROP (0x1 << 1)
+  #define DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR (0x1 << 2)
+  #define DORQ_REG_INT_STS_DORQ_FIFO_AFULL (0x1 << 3)
+#define DORQ_REG_DB_DROP_DETAILS_REL 0x100a28UL
+#define DORQ_REG_INT_STS_WR 0x100188UL
+#define DORQ_REG_DB_DROP_DETAILS_REASON 0x100a20UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 71b3a39..e6d2351 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -36,6 +36,7 @@ static void qed_init_pci(struct ecore_dev *edev, struct rte_pci_device *pci_dev)
 {
 	edev->regview = pci_dev->mem_resource[0].addr;
 	edev->doorbells = pci_dev->mem_resource[2].addr;
+	edev->db_size = pci_dev->mem_resource[2].len;
 }
 
 static int
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 26/53] net/qede/base: block mbox command to unresponsive MFW
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (24 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 25/53] net/qede/base: add support for doorbell overflow recovery Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 27/53] net/qede/base: prevent stop vport assert by malicious VF Rasesh Mody
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Block the sending of mailbox commands to the management FW when it is not
responsive. Use the MCP_REG_CPU_STATE_SOFT_HALTED register to verify that
the MCP is actually halted after the halt command is sent and before
proceeding further.
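
A condensed sketch of the resulting flow (illustrative only; the halt
command is reused below as a stand-in for any mailbox command a client
might send while the MCP is halted):

u32 resp = 0, param = 0;
enum _ecore_status_t rc;

rc = ecore_mcp_halt(p_hwfn, p_ptt);	/* sets b_block_cmd on success */

/* While the MCP is halted, any further mailbox command fails fast with
 * ECORE_ABORTED instead of waiting out the response timeout.
 */
rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MCP_HALT, 0, &resp, &param);

rc = ecore_mcp_resume(p_hwfn, p_ptt);	/* clears b_block_cmd on success */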

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_mcp.c |   74 +++++++++++++++++++++++++++++++++----
 drivers/net/qede/base/ecore_mcp.h |    3 ++
 drivers/net/qede/base/reg_addr.h  |    2 +
 3 files changed, 71 insertions(+), 8 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 21eea49..f1010ee 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -304,6 +304,12 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 		delay = EMUL_MCP_RESP_ITER_US;
 #endif
 
+	if (p_hwfn->mcp_info->b_block_cmd) {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
+		return ECORE_ABORTED;
+	}
+
 	/* Ensure that only a single thread is accessing the mailbox */
 	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
@@ -431,6 +437,15 @@ static void __ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		   (p_mb_params->cmd | seq_num), p_mb_params->param);
 }
 
+static void ecore_mcp_cmd_set_blocking(struct ecore_hwfn *p_hwfn,
+				       bool block_cmd)
+{
+	p_hwfn->mcp_info->b_block_cmd = block_cmd;
+
+	DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n",
+		block_cmd ? "Block" : "Unblock");
+}
+
 static enum _ecore_status_t
 _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			 struct ecore_mcp_mb_params *p_mb_params,
@@ -513,6 +528,7 @@ static void __ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
 		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
+		ecore_mcp_cmd_set_blocking(p_hwfn, true);
 		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_MFW_RESP_FAIL);
 		return ECORE_AGAIN;
 	}
@@ -567,6 +583,13 @@ static void __ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	if (p_hwfn->mcp_info->b_block_cmd) {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
+			  p_mb_params->cmd, p_mb_params->param);
+		return ECORE_ABORTED;
+	}
+
 	return _ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
 					delay);
 }
@@ -2354,33 +2377,68 @@ enum _ecore_status_t
 	return rc;
 }
 
+/* A maximum of 100 msec waiting time for the MCP to halt */
+#define ECORE_MCP_HALT_SLEEP_MS		10
+#define ECORE_MCP_HALT_MAX_RETRIES	10
+
 enum _ecore_status_t ecore_mcp_halt(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt)
 {
+	u32 resp = 0, param = 0, cpu_state, cnt = 0;
 	enum _ecore_status_t rc;
-	u32 resp = 0, param = 0;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MCP_HALT, 0, &resp,
 			   &param);
-	if (rc != ECORE_SUCCESS)
+	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
+		return rc;
+	}
 
-	return rc;
+	do {
+		OSAL_MSLEEP(ECORE_MCP_HALT_SLEEP_MS);
+		cpu_state = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
+		if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED)
+			break;
+	} while (++cnt < ECORE_MCP_HALT_MAX_RETRIES);
+
+	if (cnt == ECORE_MCP_HALT_MAX_RETRIES) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to halt the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
+			  ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE), cpu_state);
+		return ECORE_BUSY;
+	}
+
+	ecore_mcp_cmd_set_blocking(p_hwfn, true);
+
+	return ECORE_SUCCESS;
 }
 
+#define ECORE_MCP_RESUME_SLEEP_MS	10
+
 enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt)
 {
-	u32 value, cpu_mode;
+	u32 cpu_mode, cpu_state;
 
 	ecore_wr(p_hwfn, p_ptt, MCP_REG_CPU_STATE, 0xffffffff);
 
-	value = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
-	value &= ~MCP_REG_CPU_MODE_SOFT_HALT;
-	ecore_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, value);
 	cpu_mode = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
+	cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
+	ecore_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
+
+	OSAL_MSLEEP(ECORE_MCP_RESUME_SLEEP_MS);
+	cpu_state = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
+
+	if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to resume the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
+			  cpu_mode, cpu_state);
+		return ECORE_BUSY;
+	}
 
-	return (cpu_mode & MCP_REG_CPU_MODE_SOFT_HALT) ? -1 : 0;
+	ecore_mcp_cmd_set_blocking(p_hwfn, false);
+
+	return ECORE_SUCCESS;
 }
 
 enum _ecore_status_t
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index df80e11..f69b425 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -40,6 +40,9 @@ struct ecore_mcp_info {
 	 */
 	osal_spinlock_t cmd_lock;
 
+	/* Flag to indicate whether sending an MFW mailbox command is blocked */
+	bool b_block_cmd;
+
 	/* Spinlock used for syncing SW link-changes and link-changes
 	 * originating from attention context.
 	 */
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 9048581..299efbc 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1218,3 +1218,5 @@
 #define DORQ_REG_DB_DROP_DETAILS_REL 0x100a28UL
 #define DORQ_REG_INT_STS_WR 0x100188UL
 #define DORQ_REG_DB_DROP_DETAILS_REASON 0x100a20UL
+#define MCP_REG_CPU_PROGRAM_COUNTER 0xe0501cUL
+  #define MCP_REG_CPU_STATE_SOFT_HALTED (0x1 << 10)
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 27/53] net/qede/base: prevent stop vport assert by malicious VF
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (25 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 26/53] net/qede/base: block mbox command to unresponsive MFW Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 28/53] net/qede/base: remove unused parameters Rasesh Mody
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

Upon a stop-vport request from a VF, the PF checks whether the request is
legal, but even when it is not, it would STILL send the request to the FW,
which might cause the FW to assert.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/ecore_sriov.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index a70ca30..792cf75 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2121,6 +2121,8 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 			  "VF [%02x] - considered malicious;"
 			  " Unable to stop RX/TX queuess\n",
 			  vf->abs_vf_id);
+		status = PFVF_STATUS_MALICIOUS;
+		goto out;
 	}
 
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
@@ -2134,6 +2136,7 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->configured_features = 0;
 	OSAL_MEMSET(&vf->shadow_config, 0, sizeof(vf->shadow_config));
 
+out:
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_TEARDOWN,
 			       sizeof(struct pfvf_def_resp_tlv), status);
 }
-- 
1.7.10.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 28/53] net/qede/base: remove unused parameters
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (26 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 27/53] net/qede/base: prevent stop vport assert by malicious VF Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-19  1:30 ` [PATCH 29/53] net/qede/base: fix macros to check chip revision/metal Rasesh Mody
  2017-09-20 11:00 ` [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Ferruh Yigit
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev

This is an attempt to clean up many unused API parameters across the base
code. Most of the changes are related to removing unused p_hwfn or p_ptt
handles. The warnings were generated using the 'unused-parameter' cflags.
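
The newly added OSAL_UNUSED annotation gives a portable way to keep a
parameter in a prototype while silencing the warning; a hypothetical
example of its intended use (the callback below is illustrative; the
real user is ecore_init_cmd_cb in ecore_init_ops.c):

static void example_init_cb(struct ecore_hwfn OSAL_UNUSED *p_hwfn,
			    struct ecore_ptt OSAL_UNUSED *p_ptt)
{
	/* parameters are kept for interface compatibility but are not
	 * referenced yet, so they are annotated instead of removed
	 */
}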

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
 drivers/net/qede/base/bcm_osal.h            |    1 +
 drivers/net/qede/base/ecore.h               |    3 +-
 drivers/net/qede/base/ecore_cxt.c           |    7 +-
 drivers/net/qede/base/ecore_dcbx.c          |   45 +++++------
 drivers/net/qede/base/ecore_dev.c           |   16 ++--
 drivers/net/qede/base/ecore_hw.c            |    6 +-
 drivers/net/qede/base/ecore_hw.h            |    4 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c |   62 +++++-----------
 drivers/net/qede/base/ecore_init_fw_funcs.h |   78 +++++++------------
 drivers/net/qede/base/ecore_init_ops.c      |   39 +++++-----
 drivers/net/qede/base/ecore_l2.c            |   34 +++------
 drivers/net/qede/base/ecore_l2.h            |   28 -------
 drivers/net/qede/base/ecore_l2_api.h        |   26 +++++++
 drivers/net/qede/base/ecore_mcp.c           |   29 +++-----
 drivers/net/qede/base/ecore_mcp.h           |   11 +--
 drivers/net/qede/base/ecore_mng_tlv.c       |    9 +--
 drivers/net/qede/base/ecore_spq.c           |    5 +-
 drivers/net/qede/base/ecore_sriov.c         |  107 ++++++++++++---------------
 drivers/net/qede/base/ecore_sriov.h         |    8 +-
 drivers/net/qede/base/ecore_vf.c            |   66 ++++++++---------
 drivers/net/qede/base/ecore_vf.h            |   12 +--
 drivers/net/qede/qede_fdir.c                |    2 +-
 22 files changed, 243 insertions(+), 355 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 70b1a7f..6368030 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -420,6 +420,7 @@ void qede_hw_err_notify(struct ecore_hwfn *p_hwfn,
 #define OSAL_PAGE_SIZE 4096
 #define OSAL_CACHE_LINE_SIZE RTE_CACHE_LINE_SIZE
 #define OSAL_IOMEM volatile
+#define OSAL_UNUSED    __attribute__((unused))
 #define OSAL_UNLIKELY(x)  __builtin_expect(!!(x), 0)
 #define OSAL_MIN_T(type, __min1, __min2)	\
 	((type)(__min1) < (type)(__min2) ? (type)(__min1) : (type)(__min2))
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d921d9e..73024da 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -828,8 +828,7 @@ struct ecore_dev {
  *
  * @return OSAL_INLINE u8
  */
-static OSAL_INLINE u8
-ecore_concrete_to_sw_fid(__rte_unused struct ecore_dev *p_dev, u32 concrete_fid)
+static OSAL_INLINE u8 ecore_concrete_to_sw_fid(u32 concrete_fid)
 {
 	u8 vfid     = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_VFID);
 	u8 pfid     = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_PFID);
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 73dc7cb..24aeda9 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -297,8 +297,8 @@ struct ecore_tm_iids {
 	u32 per_vf_tids;
 };
 
-static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
-					  struct ecore_tm_iids *iids)
+static void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
+			      struct ecore_tm_iids *iids)
 {
 	bool tm_vf_required = false;
 	bool tm_required = false;
@@ -687,7 +687,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	p_blk = &p_cli->pf_blks[0];
 
 	ecore_cxt_qm_iids(p_hwfn, &qm_iids);
-	total = ecore_qm_pf_mem_size(p_hwfn->rel_pf_id, qm_iids.cids,
+	total = ecore_qm_pf_mem_size(qm_iids.cids,
 				     qm_iids.vf_cids, qm_iids.tids,
 				     p_hwfn->qm_info.num_pqs,
 				     p_hwfn->qm_info.num_vf_pqs);
@@ -1436,7 +1436,6 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	ecore_qm_pf_rt_init(p_hwfn, p_ptt, p_hwfn->port_id,
 			    p_hwfn->rel_pf_id, qm_info->max_phys_tcs_per_port,
-			    p_hwfn->first_on_engine,
 			    iids.cids, iids.vf_cids, iids.tids,
 			    qm_info->start_pq,
 			    qm_info->num_pqs - qm_info->num_vf_pqs,
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index e7848c7..25ae21c 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -545,7 +545,6 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 
 static void
 ecore_dcbx_get_local_params(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt,
 			    struct ecore_dcbx_get *params)
 {
 	struct dcbx_features *p_feat;
@@ -559,7 +558,6 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 
 static void
 ecore_dcbx_get_remote_params(struct ecore_hwfn *p_hwfn,
-			     struct ecore_ptt *p_ptt,
 			     struct ecore_dcbx_get *params)
 {
 	struct dcbx_features *p_feat;
@@ -574,7 +572,6 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 
 static enum _ecore_status_t
 ecore_dcbx_get_operational_params(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
 				  struct ecore_dcbx_get *params)
 {
 	struct ecore_dcbx_operational_params *p_operational;
@@ -633,10 +630,8 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	return ECORE_SUCCESS;
 }
 
-static void
-ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   struct ecore_dcbx_get *params)
+static void  ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn,
+					struct ecore_dcbx_get *params)
 {
 	struct ecore_dcbx_dscp_params *p_dscp;
 	struct dcb_dscp_map *p_dscp_map;
@@ -660,10 +655,8 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 	}
 }
 
-static void
-ecore_dcbx_get_local_lldp_params(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt,
-				 struct ecore_dcbx_get *params)
+static void ecore_dcbx_get_local_lldp_params(struct ecore_hwfn *p_hwfn,
+					     struct ecore_dcbx_get *params)
 {
 	struct lldp_config_params_s *p_local;
 
@@ -676,10 +669,8 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 		    OSAL_ARRAY_SIZE(p_local->local_port_id));
 }
 
-static void
-ecore_dcbx_get_remote_lldp_params(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  struct ecore_dcbx_get *params)
+static void ecore_dcbx_get_remote_lldp_params(struct ecore_hwfn *p_hwfn,
+					      struct ecore_dcbx_get *params)
 {
 	struct lldp_status_params_s *p_remote;
 
@@ -693,34 +684,32 @@ static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 }
 
 static enum _ecore_status_t
-ecore_dcbx_get_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ecore_dcbx_get_params(struct ecore_hwfn *p_hwfn,
 		      struct ecore_dcbx_get *p_params,
 		      enum ecore_mib_read_type type)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
 	switch (type) {
 	case ECORE_DCBX_REMOTE_MIB:
-		ecore_dcbx_get_remote_params(p_hwfn, p_ptt, p_params);
+		ecore_dcbx_get_remote_params(p_hwfn, p_params);
 		break;
 	case ECORE_DCBX_LOCAL_MIB:
-		ecore_dcbx_get_local_params(p_hwfn, p_ptt, p_params);
+		ecore_dcbx_get_local_params(p_hwfn, p_params);
 		break;
 	case ECORE_DCBX_OPERATIONAL_MIB:
-		ecore_dcbx_get_operational_params(p_hwfn, p_ptt, p_params);
+		ecore_dcbx_get_operational_params(p_hwfn, p_params);
 		break;
 	case ECORE_DCBX_REMOTE_LLDP_MIB:
-		ecore_dcbx_get_remote_lldp_params(p_hwfn, p_ptt, p_params);
+		ecore_dcbx_get_remote_lldp_params(p_hwfn, p_params);
 		break;
 	case ECORE_DCBX_LOCAL_LLDP_MIB:
-		ecore_dcbx_get_local_lldp_params(p_hwfn, p_ptt, p_params);
+		ecore_dcbx_get_local_lldp_params(p_hwfn, p_params);
 		break;
 	default:
 		DP_ERR(p_hwfn, "MIB read err, unknown mib type %d\n", type);
 		return ECORE_INVAL;
 	}
 
-	return rc;
+	return ECORE_SUCCESS;
 }
 
 static enum _ecore_status_t
@@ -869,8 +858,7 @@ enum _ecore_status_t
 		return rc;
 
 	if (type == ECORE_DCBX_OPERATIONAL_MIB) {
-		ecore_dcbx_get_dscp_params(p_hwfn, p_ptt,
-					   &p_hwfn->p_dcbx_info->get);
+		ecore_dcbx_get_dscp_params(p_hwfn, &p_hwfn->p_dcbx_info->get);
 
 		rc = ecore_dcbx_process_mib_info(p_hwfn);
 		if (!rc) {
@@ -890,7 +878,8 @@ enum _ecore_status_t
 			enabled = p_hwfn->p_dcbx_info->results.dcbx_enabled;
 		}
 	}
-	ecore_dcbx_get_params(p_hwfn, p_ptt, &p_hwfn->p_dcbx_info->get, type);
+
+	ecore_dcbx_get_params(p_hwfn, &p_hwfn->p_dcbx_info->get, type);
 
 	/* Update the DSCP to TC mapping bit if required */
 	if ((type == ECORE_DCBX_OPERATIONAL_MIB) &&
@@ -978,7 +967,7 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		goto out;
 
-	rc = ecore_dcbx_get_params(p_hwfn, p_ptt, p_get, type);
+	rc = ecore_dcbx_get_params(p_hwfn, p_get, type);
 
 out:
 	ecore_ptt_release(p_hwfn, p_ptt);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 711a824..c185323 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3187,8 +3187,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	 * Old drivers that don't acquire the lock can run in parallel, and
 	 * their allocation values won't be affected by the updated max values.
 	 */
-	ecore_mcp_resc_lock_default_init(p_hwfn, &resc_lock_params,
-					 &resc_unlock_params,
+	ecore_mcp_resc_lock_default_init(&resc_lock_params, &resc_unlock_params,
 					 ECORE_RESC_LOCK_RESC_ALLOC, false);
 
 	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
@@ -5117,8 +5116,7 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-static void
-ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn, u32 min_pf_rate)
+static void ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn)
 {
 	int i;
 
@@ -5127,8 +5125,7 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 }
 
 static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     u32 min_pf_rate)
+					     struct ecore_ptt *p_ptt)
 {
 	struct init_qm_vport_params *vport_params;
 	int i;
@@ -5136,7 +5133,7 @@ static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 	vport_params = p_hwfn->qm_info.qm_vport_params;
 
 	for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
-		ecore_init_wfq_default_param(p_hwfn, min_pf_rate);
+		ecore_init_wfq_default_param(p_hwfn);
 		ecore_init_vport_wfq(p_hwfn, p_ptt,
 				     vport_params[i].first_tx_pq_id,
 				     vport_params[i].vport_wfq);
@@ -5290,7 +5287,7 @@ static int __ecore_configure_vp_wfq_on_link_change(struct ecore_hwfn *p_hwfn,
 	if (rc == ECORE_SUCCESS && use_wfq)
 		ecore_configure_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
 	else
-		ecore_disable_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
+		ecore_disable_wfq_for_all_vports(p_hwfn, p_ptt);
 
 	return rc;
 }
@@ -5493,8 +5490,7 @@ void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	p_link = &p_hwfn->mcp_info->link_output;
 
 	if (p_link->min_pf_rate)
-		ecore_disable_wfq_for_all_vports(p_hwfn, p_ptt,
-						 p_link->min_pf_rate);
+		ecore_disable_wfq_for_all_vports(p_hwfn, p_ptt);
 
 	OSAL_MEMSET(p_hwfn->qm_info.wfq_data, 0,
 		    sizeof(*p_hwfn->qm_info.wfq_data) *
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 31e2776..36457ac 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -136,7 +136,7 @@ void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	OSAL_SPIN_UNLOCK(&p_hwfn->p_ptt_pool->lock);
 }
 
-u32 ecore_ptt_get_hw_addr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+u32 ecore_ptt_get_hw_addr(struct ecore_ptt *p_ptt)
 {
 	/* The HW is using DWORDS and we need to translate it to Bytes */
 	return OSAL_LE32_TO_CPU(p_ptt->pxp.offset) << 2;
@@ -159,7 +159,7 @@ void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
 {
 	u32 prev_hw_addr;
 
-	prev_hw_addr = ecore_ptt_get_hw_addr(p_hwfn, p_ptt);
+	prev_hw_addr = ecore_ptt_get_hw_addr(p_ptt);
 
 	if (new_hw_addr == prev_hw_addr)
 		return;
@@ -181,7 +181,7 @@ void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
 static u32 ecore_set_ptt(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt, u32 hw_addr)
 {
-	u32 win_hw_addr = ecore_ptt_get_hw_addr(p_hwfn, p_ptt);
+	u32 win_hw_addr = ecore_ptt_get_hw_addr(p_ptt);
 	u32 offset;
 
 	offset = hw_addr - win_hw_addr;
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 726bc18..0f3e88b 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -102,13 +102,11 @@ void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_ptt_get_hw_addr - Get PTT's GRC/HW address
  *
- * @param p_hwfn
  * @param p_ptt
  *
  * @return u32
  */
-u32 ecore_ptt_get_hw_addr(struct ecore_hwfn	*p_hwfn,
-			  struct ecore_ptt	*p_ptt);
+u32 ecore_ptt_get_hw_addr(struct ecore_ptt *p_ptt);
 
 /**
  * @brief ecore_ptt_get_bar_addr - Get PPT's external BAR address
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index b5ef173..ad697ad 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -361,7 +361,6 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    u8 port_id,
 				    u8 pf_id,
 				    u8 max_phys_tcs_per_port,
-				    bool is_first_pf,
 				    u32 num_pf_cids,
 				    u32 num_vf_cids,
 				    u16 start_pq,
@@ -473,10 +472,10 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
 static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
-				       u8 port_id,
 				       u8 pf_id,
 				       u32 num_pf_cids,
-				       u32 num_tids, u32 base_mem_addr_4kb)
+				       u32 num_tids,
+				       u32 base_mem_addr_4kb)
 {
 	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
 	u16 i, pq_id, pq_group;
@@ -684,10 +683,11 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 
 /******************** INTERFACE IMPLEMENTATION *********************/
 
-u32 ecore_qm_pf_mem_size(u8 pf_id,
-			 u32 num_pf_cids,
+u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
 			 u32 num_vf_cids,
-			 u32 num_tids, u16 num_pf_pqs, u16 num_vf_pqs)
+			 u32 num_tids,
+			 u16 num_pf_pqs,
+			 u16 num_vf_pqs)
 {
 	return QM_PQ_MEM_4KB(num_pf_cids) * num_pf_pqs +
 	    QM_PQ_MEM_4KB(num_vf_cids) * num_vf_pqs +
@@ -748,7 +748,6 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			u8 port_id,
 			u8 pf_id,
 			u8 max_phys_tcs_per_port,
-			bool is_first_pf,
 			u32 num_pf_cids,
 			u32 num_vf_cids,
 			u32 num_tids,
@@ -775,16 +774,14 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
-	ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids,
-				   num_tids, 0);
+	ecore_other_pq_map_rt_init(p_hwfn, pf_id, num_pf_cids, num_tids, 0);
 #endif
 
 	/* Map Tx PQs */
 	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id,
-				max_phys_tcs_per_port, is_first_pf, num_pf_cids,
-				num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
-				start_vport, other_mem_size_4kb, pq_params,
-				vport_params);
+				max_phys_tcs_per_port, num_pf_cids, num_vf_cids,
+				start_pq, num_pf_pqs, num_vf_pqs, start_vport,
+				other_mem_size_4kb, pq_params, vport_params);
 
 	/* Init PF WFQ */
 	if (pf_wfq)
@@ -1335,23 +1332,8 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* In MF should be called once per engine to set EtherType of OuterTag */
-void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt, u32 ethType)
-{
-	/* Update PRS register */
-	STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-
-	/* Update NIG register */
-	STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-
-	/* Update PBF register */
-	STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
-}
-
 /* In MF should be called once per port to set EtherType of OuterTag */
-void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt, u32 ethType)
+void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
 {
 	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
@@ -1733,9 +1715,7 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 /* Calculate and return CDU validation byte per connection type / region /
  * cid
  */
-static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
-					 u8 conn_type,
-					 u8 region, u32 cid)
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 {
 	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
 
@@ -1794,9 +1774,8 @@ static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
 }
 
 /* Calcualte and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
-				       void *p_ctx_mem,
-				       u16 ctx_size, u8 ctx_type, u32 cid)
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 
@@ -1807,14 +1786,14 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*x_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 3, cid);
-	*t_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 4, cid);
-	*u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
 }
 
 /* Calcualte and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-				    u16 ctx_size, u8 ctx_type, u32 tid)
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, u8 ctx_type,
+				    u32 tid)
 {
 	u8 *p_ctx, *region1_val_ptr;
 
@@ -1823,8 +1802,7 @@ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*region1_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type,
-								1, tid);
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
 }
 
 /* Memset session context to 0 while preserving validation bytes */
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 488dc00..a258bd1 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -18,7 +18,6 @@
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
- * @param pf_id -	physical function ID
  * @param num_pf_cids - number of connections used by this PF
  * @param num_vf_cids -	number of connections used by VFs of this PF
  * @param num_tids -	number of tasks used by this PF
@@ -27,12 +26,11 @@
  *
  * @return The required host memory size in 4KB units.
  */
-u32 ecore_qm_pf_mem_size(u8 pf_id,
-						 u32 num_pf_cids,
-						 u32 num_vf_cids,
-						 u32 num_tids,
-						 u16 num_pf_pqs,
-						 u16 num_vf_pqs);
+u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
+			 u32 num_vf_cids,
+			 u32 num_tids,
+			 u16 num_pf_pqs,
+			 u16 num_vf_pqs);
 
 /**
  * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
@@ -66,7 +64,6 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
  * @param port_id		- port ID
  * @param pf_id			- PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param is_first_pf		- 1 = first PF in engine, 0 = othwerwise
  * @param num_pf_cids		- number of connections used by this PF
  * @param num_vf_cids		- number of connections used by VFs of this PF
  * @param num_tids		- number of tasks used by this PF
@@ -88,23 +85,22 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
  * @return 0 on success, -1 on error.
  */
 int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt,
-				u8 port_id,
-				u8 pf_id,
-				u8 max_phys_tcs_per_port,
-				bool is_first_pf,
-				u32 num_pf_cids,
-				u32 num_vf_cids,
-				u32 num_tids,
-				u16 start_pq,
-				u16 num_pf_pqs,
-				u16 num_vf_pqs,
-				u8 start_vport,
-				u8 num_vports,
-				u16 pf_wfq,
-				u32 pf_rl,
-				struct init_qm_pq_params *pq_params,
-				struct init_qm_vport_params *vport_params);
+			struct ecore_ptt *p_ptt,
+			u8 port_id,
+			u8 pf_id,
+			u8 max_phys_tcs_per_port,
+			u32 num_pf_cids,
+			u32 num_vf_cids,
+			u32 num_tids,
+			u16 start_pq,
+			u16 num_pf_pqs,
+			u16 num_vf_pqs,
+			u8 start_vport,
+			u8 num_vports,
+			u16 pf_wfq,
+			u32 pf_rl,
+			struct init_qm_pq_params *pq_params,
+			struct init_qm_vport_params *vport_params);
 
 /**
  * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
@@ -261,28 +257,14 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 
 #ifndef UNUSED_HSI_FUNC
 /**
- * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
- *                                             ethType Regs to  input ethType
- *                                             should Be called once per engine
- *                                             if engine
- *  is in BD mode.
- *
- * @param p_ptt   - ptt window used for writing the registers.
- * @param ethType - etherType to configure
- */
-void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, u32 ethType);
-
-/**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
  *                                           input ethType; should be called
  *                                           once per port.
  *
- * @param p_ptt   - ptt window used for writing the registers.
+ * @param p_hwfn -	    HW device data
  * @param ethType - etherType to configure
  */
-void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, u32 ethType);
+void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType);
 #endif /* UNUSED_HSI_FUNC */
 
 /**
@@ -431,25 +413,19 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
  * @param ctx_type -	context type.
  * @param cid -		context cid.
  */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
-				       void *p_ctx_mem,
-				       u16 ctx_size,
-				       u8 ctx_type,
-				       u32 cid);
+void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+				       u8 ctx_type, u32 cid);
+
 /**
  * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
  * context.
  *
- * @param p_hwfn -		    HW device data
  * @param p_ctx_mem -	pointer to context memory.
  * @param ctx_size -	context size.
  * @param ctx_type -	context type.
  * @param tid -		    context tid.
  */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
-				    void *p_ctx_mem,
-				    u16 ctx_size,
-				    u8 ctx_type,
+void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, u8 ctx_type,
 				    u32 tid);
 /**
  * @brief ecore_memset_session_ctx - Memset session context to 0 while
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index c76cc07..1a2d2f4 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -176,8 +176,7 @@ static enum _ecore_status_t ecore_init_array_dmae(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t ecore_init_fill_dmae(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
-						 u32 addr, u32 fill,
-						 u32 fill_count)
+						 u32 addr, u32 fill_count)
 {
 	static u32 zero_buffer[DMAE_MAX_RW_SIZE];
 
@@ -309,7 +308,7 @@ static enum _ecore_status_t ecore_init_cmd_wr(struct ecore_hwfn *p_hwfn,
 	case INIT_SRC_ZEROS:
 		data = OSAL_LE32_TO_CPU(p_cmd->args.zeros_count);
 		if (b_must_dmae || (b_can_dmae && (data >= 64)))
-			rc = ecore_init_fill_dmae(p_hwfn, p_ptt, addr, 0, data);
+			rc = ecore_init_fill_dmae(p_hwfn, p_ptt, addr, data);
 		else
 			ecore_init_fill(p_hwfn, p_ptt, addr, 0, data);
 		break;
@@ -397,10 +396,13 @@ static void ecore_init_cmd_rd(struct ecore_hwfn *p_hwfn,
 		       OSAL_LE32_TO_CPU(cmd->op_data));
 }
 
-/* init_ops callbacks entry point */
+/* init_ops callbacks entry point.
+ * OSAL_UNUSED is temporarily used to avoid unused-parameter compilation warnings.
+ * Should be removed when the function is actually used.
+ */
 static void ecore_init_cmd_cb(struct ecore_hwfn *p_hwfn,
-			      struct ecore_ptt *p_ptt,
-			      struct init_callback_op *p_cmd)
+			      struct ecore_ptt OSAL_UNUSED * p_ptt,
+			      struct init_callback_op OSAL_UNUSED * p_cmd)
 {
 	DP_NOTICE(p_hwfn, true,
 		  "Currently init values have no need of callbacks\n");
@@ -444,8 +446,7 @@ static u32 ecore_init_cmd_mode(struct ecore_hwfn *p_hwfn,
 				 INIT_IF_MODE_OP_CMD_OFFSET);
 }
 
-static u32 ecore_init_cmd_phase(struct ecore_hwfn *p_hwfn,
-				struct init_if_phase_op *p_cmd,
+static u32 ecore_init_cmd_phase(struct init_if_phase_op *p_cmd,
 				u32 phase, u32 phase_id)
 {
 	u32 data = OSAL_LE32_TO_CPU(p_cmd->phase_data);
@@ -500,8 +501,8 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 						       modes);
 			break;
 		case INIT_OP_IF_PHASE:
-			cmd_num += ecore_init_cmd_phase(p_hwfn, &cmd->if_phase,
-							phase, phase_id);
+			cmd_num += ecore_init_cmd_phase(&cmd->if_phase, phase,
+							phase_id);
 			b_dmae = GET_FIELD(data, INIT_IF_PHASE_OP_DMAE_ENABLE);
 			break;
 		case INIT_OP_DELAY:
@@ -573,7 +574,11 @@ void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
 }
 
 enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
-					const u8 *data)
+#ifdef CONFIG_ECORE_BINARY_FW
+					const u8 *fw_data)
+#else
+					const u8 OSAL_UNUSED * fw_data)
+#endif
 {
 	struct ecore_fw_data *fw = p_dev->fw_data;
 
@@ -581,24 +586,24 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 	struct bin_buffer_hdr *buf_hdr;
 	u32 offset, len;
 
-	if (!data) {
+	if (!fw_data) {
 		DP_NOTICE(p_dev, true, "Invalid fw data\n");
 		return ECORE_INVAL;
 	}
 
-	buf_hdr = (struct bin_buffer_hdr *)(uintptr_t)data;
+	buf_hdr = (struct bin_buffer_hdr *)(uintptr_t)fw_data;
 
 	offset = buf_hdr[BIN_BUF_INIT_FW_VER_INFO].offset;
-	fw->fw_ver_info = (struct fw_ver_info *)((uintptr_t)(data + offset));
+	fw->fw_ver_info = (struct fw_ver_info *)((uintptr_t)(fw_data + offset));
 
 	offset = buf_hdr[BIN_BUF_INIT_CMD].offset;
-	fw->init_ops = (union init_op *)((uintptr_t)(data + offset));
+	fw->init_ops = (union init_op *)((uintptr_t)(fw_data + offset));
 
 	offset = buf_hdr[BIN_BUF_INIT_VAL].offset;
-	fw->arr_data = (u32 *)((uintptr_t)(data + offset));
+	fw->arr_data = (u32 *)((uintptr_t)(fw_data + offset));
 
 	offset = buf_hdr[BIN_BUF_INIT_MODE_TREE].offset;
-	fw->modes_tree_buf = (u8 *)((uintptr_t)(data + offset));
+	fw->modes_tree_buf = (u8 *)((uintptr_t)(fw_data + offset));
 	len = buf_hdr[BIN_BUF_INIT_CMD].length;
 	fw->init_ops_size = len / sizeof(struct init_raw_op);
 #else
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 3140fdd..3071b46 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -433,8 +433,7 @@ enum _ecore_status_t
 	p_ramrod->ctl_frame_ethtype_check_en = !!p_params->check_ethtype;
 
 	/* Software Function ID in hwfn (PFs are 0 - 15, VFs are 16 - 135) */
-	p_ramrod->sw_fid = ecore_concrete_to_sw_fid(p_hwfn->p_dev,
-						    p_params->concrete_fid);
+	p_ramrod->sw_fid = ecore_concrete_to_sw_fid(p_params->concrete_fid);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
@@ -633,8 +632,7 @@ enum _ecore_status_t
 }
 
 static void
-ecore_sp_vport_update_sge_tpa(struct ecore_hwfn *p_hwfn,
-			      struct vport_update_ramrod_data *p_ramrod,
+ecore_sp_vport_update_sge_tpa(struct vport_update_ramrod_data *p_ramrod,
 			      struct ecore_sge_tpa_params *p_params)
 {
 	struct eth_vport_tpa_param *p_tpa;
@@ -665,8 +663,7 @@ enum _ecore_status_t
 }
 
 static void
-ecore_sp_update_mcast_bin(struct ecore_hwfn *p_hwfn,
-			  struct vport_update_ramrod_data *p_ramrod,
+ecore_sp_update_mcast_bin(struct vport_update_ramrod_data *p_ramrod,
 			  struct ecore_sp_vport_update_params *p_params)
 {
 	int i;
@@ -775,11 +772,10 @@ enum _ecore_status_t
 	}
 
 	/* Update mcast bins for VFs, PF doesn't use this functionality */
-	ecore_sp_update_mcast_bin(p_hwfn, p_ramrod, p_params);
+	ecore_sp_update_mcast_bin(p_ramrod, p_params);
 
 	ecore_sp_update_accept_mode(p_hwfn, p_ramrod, p_params->accept_flags);
-	ecore_sp_vport_update_sge_tpa(p_hwfn, p_ramrod,
-				      p_params->sge_tpa_params);
+	ecore_sp_vport_update_sge_tpa(p_ramrod, p_params->sge_tpa_params);
 	if (p_params->mtu) {
 		p_ramrod->common.update_mtu_flg = 1;
 		p_ramrod->common.mtu = OSAL_CPU_TO_LE16(p_params->mtu);
@@ -1503,8 +1499,7 @@ enum _ecore_status_t
  *         Note: crc32_length MUST be aligned to 8
  * Return:
  ******************************************************************************/
-static u32 ecore_calc_crc32c(u8 *crc32_packet,
-			     u32 crc32_length, u32 crc32_seed, u8 complement)
+static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed)
 {
 	u32 byte = 0, bit = 0, crc32_result = crc32_seed;
 	u8 msb = 0, current_byte = 0;
@@ -1529,25 +1524,23 @@ static u32 ecore_calc_crc32c(u8 *crc32_packet,
 	return crc32_result;
 }
 
-static u32 ecore_crc32c_le(u32 seed, u8 *mac, u32 len)
+static u32 ecore_crc32c_le(u32 seed, u8 *mac)
 {
 	u32 packet_buf[2] = { 0 };
 
 	OSAL_MEMCPY((u8 *)(&packet_buf[0]), &mac[0], 6);
-	return ecore_calc_crc32c((u8 *)packet_buf, 8, seed, 0);
+	return ecore_calc_crc32c((u8 *)packet_buf, 8, seed);
 }
 
 u8 ecore_mcast_bin_from_mac(u8 *mac)
 {
-	u32 crc = ecore_crc32c_le(ETH_MULTICAST_BIN_FROM_MAC_SEED,
-				  mac, ETH_ALEN);
+	u32 crc = ecore_crc32c_le(ETH_MULTICAST_BIN_FROM_MAC_SEED, mac);
 
 	return crc & 0xff;
 }
 
 static enum _ecore_status_t
 ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
-			  u16 opaque_fid,
 			  struct ecore_filter_mcast *p_filter_cmd,
 			  enum spq_mode comp_mode,
 			  struct ecore_spq_comp_cb *p_comp_data)
@@ -1642,16 +1635,13 @@ enum _ecore_status_t
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-		u16 opaque_fid;
 
 		if (IS_VF(p_dev)) {
 			ecore_vf_pf_filter_mcast(p_hwfn, p_filter_cmd);
 			continue;
 		}
 
-		opaque_fid = p_hwfn->hw_info.opaque_fid;
 		rc = ecore_sp_eth_filter_mcast(p_hwfn,
-					       opaque_fid,
 					       p_filter_cmd,
 					       comp_mode, p_comp_data);
 		if (rc != ECORE_SUCCESS)
@@ -1741,8 +1731,7 @@ static void __ecore_get_vport_pstats(struct ecore_hwfn *p_hwfn,
 
 static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt,
-				     struct ecore_eth_stats *p_stats,
-				     u16 statistics_bin)
+				     struct ecore_eth_stats *p_stats)
 {
 	struct tstorm_per_port_stat tstats;
 	u32 tstats_addr, tstats_len;
@@ -1954,7 +1943,7 @@ void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn,
 {
 	__ecore_get_vport_mstats(p_hwfn, p_ptt, stats, statistics_bin);
 	__ecore_get_vport_ustats(p_hwfn, p_ptt, stats, statistics_bin);
-	__ecore_get_vport_tstats(p_hwfn, p_ptt, stats, statistics_bin);
+	__ecore_get_vport_tstats(p_hwfn, p_ptt, stats);
 	__ecore_get_vport_pstats(p_hwfn, p_ptt, stats, statistics_bin);
 
 #ifndef ASIC_ONLY
@@ -2091,7 +2080,6 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t
 ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
 				  struct ecore_spq_comp_cb *p_cb,
 				  dma_addr_t p_addr, u16 length,
 				  u16 qid, u8 vport_id,
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 02aa5e8..3618ae6 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -140,32 +140,4 @@ enum _ecore_status_t
 			   u16 pq_id);
 
 u8 ecore_mcast_bin_from_mac(u8 *mac);
-
-/**
- * @brief - ecore_configure_rfs_ntuple_filter
- *
- * This ramrod should be used to add or remove arfs hw filter
- *
- * @params p_hwfn
- * @params p_ptt
- * @params p_cb		Used for ECORE_SPQ_MODE_CB,where client would initialize
-			it with cookie and callback function address, if not
-			using this mode then client must pass NULL.
- * @params p_addr	p_addr is an actual packet header that needs to be
- *			filter. It has to mapped with IO to read prior to
- *			calling this, [contains 4 tuples- src ip, dest ip,
- *			src port, dest port].
- * @params length	length of p_addr header up to past the transport header.
- * @params qid		receive packet will be directed to this queue.
- * @params vport_id
- * @params b_is_add	flag to add or remove filter.
- *
- */
-enum _ecore_status_t
-ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  struct ecore_spq_comp_cb *p_cb,
-				  dma_addr_t p_addr, u16 length,
-				  u16 qid, u8 vport_id,
-				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index a6740d5..ed9837b 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -436,4 +436,30 @@ void ecore_get_vport_stats(struct ecore_dev *p_dev,
 void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct ecore_arfs_config_params *p_cfg_params);
+
+/**
+ * @brief - ecore_configure_rfs_ntuple_filter
+ *
+ * This ramrod should be used to add or remove an arfs hw filter
+ *
+ * @param p_hwfn
+ * @param p_cb		used for ECORE_SPQ_MODE_CB, where the client
+ *			initializes it with a cookie and a callback function
+ *			address; if not using this mode, the client must pass NULL.
+ * @param p_addr	the actual packet header to be filtered. It has to
+ *			be mapped for IO read prior to calling this
+ *			[contains the 4-tuple: src ip, dest ip, src port,
+ *			dest port].
+ * @param length	length of the p_addr header, up to past the transport header.
+ * @param qid		received packets will be directed to this queue.
+ * @param vport_id
+ * @param b_is_add	flag to add or remove the filter.
+ *
+ */
+enum _ecore_status_t
+ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
+				  struct ecore_spq_comp_cb *p_cb,
+				  dma_addr_t p_addr, u16 length,
+				  u16 qid, u8 vport_id,
+				  bool b_is_add);
 #endif
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index f1010ee..5aa3210 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -901,8 +901,7 @@ struct ecore_load_req_out_params {
 	return ECORE_SUCCESS;
 }
 
-static void ecore_get_mfw_drv_role(struct ecore_hwfn *p_hwfn,
-				   enum ecore_drv_role drv_role,
+static void ecore_get_mfw_drv_role(enum ecore_drv_role drv_role,
 				   u8 *p_mfw_drv_role)
 {
 	switch (drv_role) {
@@ -921,8 +920,7 @@ enum ecore_load_req_force {
 	ECORE_LOAD_REQ_FORCE_ALL,
 };
 
-static void ecore_get_mfw_force_cmd(struct ecore_hwfn *p_hwfn,
-				    enum ecore_load_req_force force_cmd,
+static void ecore_get_mfw_force_cmd(enum ecore_load_req_force force_cmd,
 				    u8 *p_mfw_force_cmd)
 {
 	switch (force_cmd) {
@@ -959,11 +957,10 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	in_params.drv_ver_0 = ECORE_VERSION;
 	in_params.drv_ver_1 = ecore_get_config_bitmap();
 	in_params.fw_ver = STORM_FW_VERSION;
-	ecore_get_mfw_drv_role(p_hwfn, p_params->drv_role, &mfw_drv_role);
+	ecore_get_mfw_drv_role(p_params->drv_role, &mfw_drv_role);
 	in_params.drv_role = mfw_drv_role;
 	in_params.timeout_val = p_params->timeout_val;
-	ecore_get_mfw_force_cmd(p_hwfn, ECORE_LOAD_REQ_FORCE_NONE,
-				&mfw_force_cmd);
+	ecore_get_mfw_force_cmd(ECORE_LOAD_REQ_FORCE_NONE, &mfw_force_cmd);
 	in_params.force_cmd = mfw_force_cmd;
 	in_params.avoid_eng_reset = p_params->avoid_eng_reset;
 
@@ -1000,8 +997,7 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 				out_params.exist_drv_ver_0,
 				out_params.exist_drv_ver_1);
 
-			ecore_get_mfw_force_cmd(p_hwfn,
-						ECORE_LOAD_REQ_FORCE_ALL,
+			ecore_get_mfw_force_cmd(ECORE_LOAD_REQ_FORCE_ALL,
 						&mfw_force_cmd);
 
 			in_params.force_cmd = mfw_force_cmd;
@@ -1614,8 +1610,7 @@ static u32 ecore_mcp_get_shmem_func(struct ecore_hwfn *p_hwfn,
 		      &param);
 }
 
-static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt)
+static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn)
 {
 	/* A single notification should be sent to upper driver in CMT mode */
 	if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
@@ -1924,7 +1919,7 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 			ecore_mcp_update_bw(p_hwfn, p_ptt);
 			break;
 		case MFW_DRV_MSG_FAILURE_DETECTED:
-			ecore_mcp_handle_fan_failure(p_hwfn, p_ptt);
+			ecore_mcp_handle_fan_failure(p_hwfn);
 			break;
 		case MFW_DRV_MSG_CRITICAL_ERROR_OCCURRED:
 			ecore_mcp_handle_critical_error(p_hwfn, p_ptt);
@@ -3492,12 +3487,10 @@ enum _ecore_status_t
 	return ECORE_SUCCESS;
 }
 
-void
-ecore_mcp_resc_lock_default_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_resc_lock_params *p_lock,
-				 struct ecore_resc_unlock_params *p_unlock,
-				 enum ecore_resc_lock resource,
-				 bool b_is_permanent)
+void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
+				      struct ecore_resc_unlock_params *p_unlock,
+				      enum ecore_resc_lock resource,
+				      bool b_is_permanent)
 {
 	if (p_lock != OSAL_NULL) {
 		OSAL_MEM_ZERO(p_lock, sizeof(*p_lock));
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index f69b425..9f3fd70 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -493,18 +493,15 @@ enum _ecore_status_t
 /**
  * @brief - default initialization for lock/unlock resource structs
  *
- * @param p_hwfn
  * @param p_lock - lock params struct to be initialized; Can be OSAL_NULL
  * @param p_unlock - unlock params struct to be initialized; Can be OSAL_NULL
  * @param resource - the requested resource
  * @param b_is_permanent - disable retries & aging when set
  */
-void
-ecore_mcp_resc_lock_default_init(struct ecore_hwfn *p_hwfn,
-				 struct ecore_resc_lock_params *p_lock,
-				 struct ecore_resc_unlock_params *p_unlock,
-				 enum ecore_resc_lock resource,
-				 bool b_is_permanent);
+void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
+				      struct ecore_resc_unlock_params *p_unlock,
+				      enum ecore_resc_lock resource,
+				      bool b_is_permanent);
 
 /**
  * @brief Learn of supported MFW features; To be done during early init
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index 0bf1be8..3a1de09 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1403,9 +1403,9 @@
 	return -1;
 }
 
-static enum _ecore_status_t
-ecore_mfw_update_tlvs(u8 tlv_group, struct ecore_hwfn *p_hwfn,
-		      struct ecore_ptt *p_ptt, u8 *p_mfw_buf, u32 size)
+static enum _ecore_status_t ecore_mfw_update_tlvs(struct ecore_hwfn *p_hwfn,
+						  u8 tlv_group, u8 *p_mfw_buf,
+						  u32 size)
 {
 	union ecore_mfw_tlv_data *p_tlv_data;
 	struct ecore_drv_tlv_hdr tlv;
@@ -1512,8 +1512,7 @@ enum _ecore_status_t
 	/* Update the TLV values in the local buffer */
 	for (id = ECORE_MFW_TLV_GENERIC; id < ECORE_MFW_TLV_MAX; id <<= 1) {
 		if (tlv_group & id) {
-			if (ecore_mfw_update_tlvs(id, p_hwfn, p_ptt, p_mfw_buf,
-						  size))
+			if (ecore_mfw_update_tlvs(p_hwfn, id, p_mfw_buf, size))
 				goto drv_done;
 		}
 	}
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 716799a..ee0f06c 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -36,9 +36,8 @@
 /***************************************************************************
  * Blocking Imp. (BLOCK/EBLOCK mode)
  ***************************************************************************/
-static void ecore_spq_blocking_cb(struct ecore_hwfn *p_hwfn,
-				  void *cookie,
-				  union event_ring_data *data,
+static void ecore_spq_blocking_cb(struct ecore_hwfn *p_hwfn, void *cookie,
+				  union event_ring_data OSAL_UNUSED * data,
 				  u8 fw_return_code)
 {
 	struct ecore_spq_comp_done *comp_done;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 792cf75..82ba198 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -57,8 +57,7 @@
 	"CHANNEL_TLV_MAX"
 };
 
-static u8 ecore_vf_calculate_legacy(struct ecore_hwfn *p_hwfn,
-				    struct ecore_vf_info *p_vf)
+static u8 ecore_vf_calculate_legacy(struct ecore_vf_info *p_vf)
 {
 	u8 legacy = 0;
 
@@ -210,9 +209,7 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 }
 
 static struct ecore_queue_cid *
-ecore_iov_get_vf_rx_queue_cid(struct ecore_hwfn *p_hwfn,
-			      struct ecore_vf_info *p_vf,
-			      struct ecore_vf_queue *p_queue)
+ecore_iov_get_vf_rx_queue_cid(struct ecore_vf_queue *p_queue)
 {
 	int i;
 
@@ -231,8 +228,7 @@ enum ecore_iov_validate_q_mode {
 	ECORE_IOV_VALIDATE_Q_DISABLE,
 };
 
-static bool ecore_iov_validate_queue_mode(struct ecore_hwfn *p_hwfn,
-					  struct ecore_vf_info *p_vf,
+static bool ecore_iov_validate_queue_mode(struct ecore_vf_info *p_vf,
 					  u16 qid,
 					  enum ecore_iov_validate_q_mode mode,
 					  bool b_is_tx)
@@ -274,8 +270,7 @@ static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 		return false;
 	}
 
-	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, rx_qid,
-					     mode, false);
+	return ecore_iov_validate_queue_mode(p_vf, rx_qid, mode, false);
 }
 
 static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
@@ -291,8 +286,7 @@ static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 		return false;
 	}
 
-	return ecore_iov_validate_queue_mode(p_hwfn, p_vf, tx_qid,
-					     mode, true);
+	return ecore_iov_validate_queue_mode(p_vf, tx_qid, mode, true);
 }
 
 static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
@@ -314,13 +308,12 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 }
 
 /* Is there at least 1 queue open? */
-static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
-					  struct ecore_vf_info *p_vf)
+static bool ecore_iov_validate_active_rxq(struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_rxqs; i++)
-		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+		if (ecore_iov_validate_queue_mode(p_vf, i,
 						  ECORE_IOV_VALIDATE_Q_ENABLE,
 						  false))
 			return true;
@@ -328,13 +321,12 @@ static bool ecore_iov_validate_active_rxq(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
-static bool ecore_iov_validate_active_txq(struct ecore_hwfn *p_hwfn,
-					  struct ecore_vf_info *p_vf)
+static bool ecore_iov_validate_active_txq(struct ecore_vf_info *p_vf)
 {
 	u8 i;
 
 	for (i = 0; i < p_vf->num_txqs; i++)
-		if (ecore_iov_validate_queue_mode(p_hwfn, p_vf, i,
+		if (ecore_iov_validate_queue_mode(p_vf, i,
 						  ECORE_IOV_VALIDATE_Q_ENABLE,
 						  true))
 			return true;
@@ -1302,8 +1294,7 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
 }
 
 /* place a given tlv on the tlv buffer, continuing current tlv list */
-void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
-		    u8 **offset, u16 type, u16 length)
+void *ecore_add_tlv(u8 **offset, u16 type, u16 length)
 {
 	struct channel_tlv *tl = (struct channel_tlv *)*offset;
 
@@ -1359,7 +1350,12 @@ void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list)
 static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt,
 				    struct ecore_vf_info *p_vf,
-				    u16 length, u8 status)
+#ifdef CONFIG_ECORE_SW_CHANNEL
+				    u16 length,
+#else
+				    u16 OSAL_UNUSED length,
+#endif
+				    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
 	struct ecore_dmae_params params;
@@ -1398,8 +1394,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 	       USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id), 1);
 }
 
-static u16 ecore_iov_vport_to_tlv(struct ecore_hwfn *p_hwfn,
-				  enum ecore_iov_vport_update_flag flag)
+static u16 ecore_iov_vport_to_tlv(enum ecore_iov_vport_update_flag flag)
 {
 	switch (flag) {
 	case ECORE_IOV_VP_UPDATE_ACTIVATE:
@@ -1437,15 +1432,15 @@ static u16 ecore_iov_prep_vp_update_resp_tlvs(struct ecore_hwfn *p_hwfn,
 	size = sizeof(struct pfvf_def_resp_tlv);
 	total_len = size;
 
-	ecore_add_tlv(p_hwfn, &p_mbx->offset, CHANNEL_TLV_VPORT_UPDATE, size);
+	ecore_add_tlv(&p_mbx->offset, CHANNEL_TLV_VPORT_UPDATE, size);
 
 	/* Prepare response for all extended tlvs if they are found by PF */
 	for (i = 0; i < ECORE_IOV_VP_UPDATE_MAX; i++) {
 		if (!(tlvs_mask & (1 << i)))
 			continue;
 
-		resp = ecore_add_tlv(p_hwfn, &p_mbx->offset,
-				     ecore_iov_vport_to_tlv(p_hwfn, i), size);
+		resp = ecore_add_tlv(&p_mbx->offset, ecore_iov_vport_to_tlv(i),
+				     size);
 
 		if (tlvs_accepted & (1 << i))
 			resp->hdr.status = status;
@@ -1455,12 +1450,13 @@ static u16 ecore_iov_prep_vp_update_resp_tlvs(struct ecore_hwfn *p_hwfn,
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[%d] - vport_update resp: TLV %d, status %02x\n",
 			   p_vf->relative_vf_id,
-			   ecore_iov_vport_to_tlv(p_hwfn, i), resp->hdr.status);
+			   ecore_iov_vport_to_tlv(i),
+			   resp->hdr.status);
 
 		total_len += size;
 	}
 
-	ecore_add_tlv(p_hwfn, &p_mbx->offset, CHANNEL_TLV_LIST_END,
+	ecore_add_tlv(&p_mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
 	return total_len;
@@ -1475,8 +1471,8 @@ static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn,
 
 	mbx->offset = (u8 *)mbx->reply_virt;
 
-	ecore_add_tlv(p_hwfn, &mbx->offset, type, length);
-	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+	ecore_add_tlv(&mbx->offset, type, length);
+	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
 	ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
@@ -1531,7 +1527,6 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 }
 
 static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt,
 					struct ecore_vf_info *p_vf,
 					struct vf_pf_resc_request *p_req,
 					struct pf_vf_resc *p_resp)
@@ -1609,8 +1604,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	return PFVF_STATUS_SUCCESS;
 }
 
-static void ecore_iov_vf_mbx_acquire_stats(struct ecore_hwfn *p_hwfn,
-					   struct pfvf_stats_info *p_stats)
+static void ecore_iov_vf_mbx_acquire_stats(struct pfvf_stats_info *p_stats)
 {
 	p_stats->mstats.address = PXP_VF_BAR0_START_MSDM_ZONE_B +
 				  OFFSETOF(struct mstorm_vf_zone,
@@ -1733,7 +1727,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_QUEUE_QIDS)
 		pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_QUEUE_QIDS;
 
-	ecore_iov_vf_mbx_acquire_stats(p_hwfn, &pfdev_info->stats_info);
+	ecore_iov_vf_mbx_acquire_stats(&pfdev_info->stats_info);
 
 	OSAL_MEMCPY(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr,
 		    ETH_ALEN);
@@ -1758,7 +1752,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	/* Fill resources available to VF; Make sure there are enough to
 	 * satisfy the VF's request.
 	 */
-	vfpf_status = ecore_iov_vf_mbx_acquire_resc(p_hwfn, p_ptt, vf,
+	vfpf_status = ecore_iov_vf_mbx_acquire_resc(p_hwfn, vf,
 						    &req->resc_request, resc);
 	if (vfpf_status != PFVF_STATUS_SUCCESS)
 		goto out;
@@ -1974,8 +1968,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
 			/* There can be at most 1 Rx queue on qzone. Find it */
-			p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, p_vf,
-							      p_queue);
+			p_cid = ecore_iov_get_vf_rx_queue_cid(p_queue);
 			if (p_cid == OSAL_NULL)
 				continue;
 
@@ -2114,8 +2107,8 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	vf->vport_instance--;
 	vf->spoof_chk = false;
 
-	if ((ecore_iov_validate_active_rxq(p_hwfn, vf)) ||
-	    (ecore_iov_validate_active_txq(p_hwfn, vf))) {
+	if ((ecore_iov_validate_active_rxq(vf)) ||
+	    (ecore_iov_validate_active_txq(vf))) {
 		vf->b_malicious = true;
 		DP_NOTICE(p_hwfn, false,
 			  "VF [%02x] - considered malicious;"
@@ -2162,9 +2155,8 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
 	else
 		length = sizeof(struct pfvf_def_resp_tlv);
 
-	p_tlv = ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_START_RXQ,
-			      length);
-	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+	p_tlv = ecore_add_tlv(&mbx->offset, CHANNEL_TLV_START_RXQ, length);
+	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
@@ -2245,7 +2237,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	if (p_queue->cids[qid_usage_idx].p_cid)
 		goto out;
 
-	vf_legacy = ecore_vf_calculate_legacy(p_hwfn, vf);
+	vf_legacy = ecore_vf_calculate_legacy(vf);
 
 	/* Acquire a new queue-cid */
 	OSAL_MEMSET(&params, 0, sizeof(params));
@@ -2440,11 +2432,11 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 	}
 
 send_resp:
-	p_resp = ecore_add_tlv(p_hwfn, &mbx->offset,
+	p_resp = ecore_add_tlv(&mbx->offset,
 			       CHANNEL_TLV_UPDATE_TUNN_PARAM, sizeof(*p_resp));
 
 	ecore_iov_pf_update_tun_response(p_resp, p_tun, tunn_feature_mask);
-	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
@@ -2476,9 +2468,8 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 	else
 		length = sizeof(struct pfvf_def_resp_tlv);
 
-	p_tlv = ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_START_TXQ,
-			      length);
-	ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+	p_tlv = ecore_add_tlv(&mbx->offset, CHANNEL_TLV_START_TXQ, length);
+	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response */
@@ -2521,7 +2512,7 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	if (p_queue->cids[qid_usage_idx].p_cid)
 		goto out;
 
-	vf_legacy = ecore_vf_calculate_legacy(p_hwfn, vf);
+	vf_legacy = ecore_vf_calculate_legacy(vf);
 
 	/* Acquire a new queue-cid */
 	params.queue_id = p_queue->fw_tx_qid;
@@ -2590,7 +2581,7 @@ static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
 	    p_queue->cids[qid_usage_idx].b_is_tx) {
 		struct ecore_queue_cid *p_cid;
 
-		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf, p_queue);
+		p_cid = ecore_iov_get_vf_rx_queue_cid(p_queue);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[%d] - Tried Closing Rx 0x%04x.%02x, but Rx is at %04x.%02x\n",
 			    vf->relative_vf_id, rxq_id, qid_usage_idx,
@@ -3012,8 +3003,7 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 			goto out;
 		}
 
-		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
-						      &vf->vf_queues[q_idx]);
+		p_cid = ecore_iov_get_vf_rx_queue_cid(&vf->vf_queues[q_idx]);
 		p_rss->rss_ind_table[i] = p_cid;
 	}
 
@@ -3026,7 +3016,6 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 
 static void
 ecore_iov_vp_update_sge_tpa_param(struct ecore_hwfn *p_hwfn,
-				  struct ecore_vf_info *vf,
 				  struct ecore_sp_vport_update_params *p_data,
 				  struct ecore_sge_tpa_params *p_sge_tpa,
 				  struct ecore_iov_vf_mbx *p_mbx,
@@ -3116,7 +3105,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
 	ecore_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
-	ecore_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
+	ecore_iov_vp_update_sge_tpa_param(p_hwfn, &params,
 					  &sge_tpa_params, mbx, &tlvs_mask);
 
 	tlvs_accepted = tlvs_mask;
@@ -3503,8 +3492,7 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
 
 	if (rx_coal) {
-		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
-						      &vf->vf_queues[qid]);
+		p_cid = ecore_iov_get_vf_rx_queue_cid(&vf->vf_queues[qid]);
 
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
@@ -3590,8 +3578,7 @@ enum _ecore_status_t
 		   vf->abs_vf_id, rx_coal, tx_coal, qid);
 
 	if (rx_coal) {
-		p_cid = ecore_iov_get_vf_rx_queue_cid(p_hwfn, vf,
-						      &vf->vf_queues[qid]);
+		p_cid = ecore_iov_get_vf_rx_queue_cid(&vf->vf_queues[qid]);
 
 		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
 		if (rc != ECORE_SUCCESS) {
@@ -3903,11 +3890,11 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
 	p_bulletin = p_vf->bulletin.p_virt;
 
 	if (p_params)
-		__ecore_vf_get_link_params(p_hwfn, p_params, p_bulletin);
+		__ecore_vf_get_link_params(p_params, p_bulletin);
 	if (p_link)
-		__ecore_vf_get_link_state(p_hwfn, p_link, p_bulletin);
+		__ecore_vf_get_link_state(p_link, p_bulletin);
 	if (p_caps)
-		__ecore_vf_get_link_caps(p_hwfn, p_caps, p_bulletin);
+		__ecore_vf_get_link_caps(p_caps, p_bulletin);
 }
 
 void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 8923730..effeb69 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -206,17 +206,13 @@ struct ecore_pf_iov {
 /**
  * @brief ecore_add_tlv - place a given tlv on the tlv buffer at next offset
  *
- * @param p_hwfn
- * @param p_iov
+ * @param offset
  * @param type
  * @param length
  *
  * @return pointer to the newly placed tlv
  */
-void *ecore_add_tlv(struct ecore_hwfn	*p_hwfn,
-		    u8			**offset,
-		    u16			type,
-		    u16			length);
+void *ecore_add_tlv(u8 **offset, u16 type, u16 length);
 
 /**
  * @brief list the types and lengths of the tlvs on the buffer
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 5002ada..c37341e 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -44,7 +44,7 @@ static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length)
 	OSAL_MEMSET(p_iov->pf2vf_reply, 0, sizeof(union pfvf_tlvs));
 
 	/* Init type and length */
-	p_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset, type, length);
+	p_tlv = ecore_add_tlv(&p_iov->offset, type, length);
 
 	/* Init first tlv header */
 	((struct vfpf_first_tlv *)p_tlv)->reply_address =
@@ -146,7 +146,7 @@ static void ecore_vf_pf_add_qid(struct ecore_hwfn *p_hwfn,
 	      PFVF_ACQUIRE_CAP_QUEUE_QIDS))
 		return;
 
-	p_qid_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+	p_qid_tlv = ecore_add_tlv(&p_iov->offset,
 				  CHANNEL_TLV_QID, sizeof(*p_qid_tlv));
 	p_qid_tlv->qid = p_cid->qid_usage_idx;
 }
@@ -222,7 +222,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	req->bulletin_size = p_iov->bulletin.size;
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -610,7 +610,7 @@ enum _ecore_status_t
 				     ECORE_MODE_IPGRE_TUNN, &p_req->ipgre_clss);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -679,7 +679,7 @@ enum _ecore_status_t
 	ecore_vf_pf_add_qid(p_hwfn, p_cid);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -736,7 +736,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 	ecore_vf_pf_add_qid(p_hwfn, p_cid);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -782,7 +782,7 @@ enum _ecore_status_t
 	ecore_vf_pf_add_qid(p_hwfn, p_cid);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -835,7 +835,7 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
 	ecore_vf_pf_add_qid(p_hwfn, p_cid);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -891,7 +891,7 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 	ecore_vf_pf_add_qid(p_hwfn, *pp_cid);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -940,7 +940,7 @@ enum _ecore_status_t
 	}
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -971,7 +971,7 @@ enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn)
 			 sizeof(struct vfpf_first_tlv));
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -1078,7 +1078,7 @@ enum _ecore_status_t
 		struct vfpf_vport_update_activate_tlv *p_act_tlv;
 
 		size = sizeof(struct vfpf_vport_update_activate_tlv);
-		p_act_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+		p_act_tlv = ecore_add_tlv(&p_iov->offset,
 					  CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
 					  size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
@@ -1098,7 +1098,7 @@ enum _ecore_status_t
 		struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
 
 		size = sizeof(struct vfpf_vport_update_vlan_strip_tlv);
-		p_vlan_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+		p_vlan_tlv = ecore_add_tlv(&p_iov->offset,
 					   CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
 					   size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
@@ -1111,7 +1111,7 @@ enum _ecore_status_t
 
 		size = sizeof(struct vfpf_vport_update_tx_switch_tlv);
 		tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
-		p_tx_switch_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+		p_tx_switch_tlv = ecore_add_tlv(&p_iov->offset,
 						tlv, size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
 
@@ -1122,7 +1122,7 @@ enum _ecore_status_t
 		struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
 
 		size = sizeof(struct vfpf_vport_update_mcast_bin_tlv);
-		p_mcast_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+		p_mcast_tlv = ecore_add_tlv(&p_iov->offset,
 					    CHANNEL_TLV_VPORT_UPDATE_MCAST,
 					    size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
@@ -1140,7 +1140,7 @@ enum _ecore_status_t
 
 		tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
 		size = sizeof(struct vfpf_vport_update_accept_param_tlv);
-		p_accept_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset, tlv, size);
+		p_accept_tlv = ecore_add_tlv(&p_iov->offset, tlv, size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
 
 		if (update_rx) {
@@ -1162,7 +1162,7 @@ enum _ecore_status_t
 		int i, table_size;
 
 		size = sizeof(struct vfpf_vport_update_rss_tlv);
-		p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+		p_rss_tlv = ecore_add_tlv(&p_iov->offset,
 					  CHANNEL_TLV_VPORT_UPDATE_RSS, size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
 
@@ -1200,8 +1200,7 @@ enum _ecore_status_t
 
 		size = sizeof(struct vfpf_vport_update_accept_any_vlan_tlv);
 		tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
-		p_any_vlan_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
-					       tlv, size);
+		p_any_vlan_tlv = ecore_add_tlv(&p_iov->offset, tlv, size);
 
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
 		p_any_vlan_tlv->accept_any_vlan = p_params->accept_any_vlan;
@@ -1215,7 +1214,7 @@ enum _ecore_status_t
 
 		sge_tpa_params = p_params->sge_tpa_params;
 		size = sizeof(struct vfpf_vport_update_sge_tpa_tlv);
-		p_sge_tpa_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
+		p_sge_tpa_tlv = ecore_add_tlv(&p_iov->offset,
 					      CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
 					      size);
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
@@ -1253,7 +1252,7 @@ enum _ecore_status_t
 	}
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -1285,7 +1284,7 @@ enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn)
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_CLOSE, sizeof(*req));
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -1319,7 +1318,7 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -1405,7 +1404,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
 	req->vlan = p_ucast->vlan;
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -1436,7 +1435,7 @@ enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
 			 sizeof(struct vfpf_first_tlv));
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset,
+	ecore_add_tlv(&p_iov->offset,
 		      CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
@@ -1477,7 +1476,7 @@ enum _ecore_status_t
 		   rx_coal, tx_coal, req->qid);
 
 	/* add list termination tlv */
-	ecore_add_tlv(p_hwfn, &p_iov->offset, CHANNEL_TLV_LIST_END,
+	ecore_add_tlv(&p_iov->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
 	resp = &p_iov->pf2vf_reply->default_resp;
@@ -1562,8 +1561,7 @@ enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_mcp_link_params *p_params,
+void __ecore_vf_get_link_params(struct ecore_mcp_link_params *p_params,
 				struct ecore_bulletin_content *p_bulletin)
 {
 	OSAL_MEMSET(p_params, 0, sizeof(*p_params));
@@ -1580,12 +1578,11 @@ void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
 void ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
 			      struct ecore_mcp_link_params *params)
 {
-	__ecore_vf_get_link_params(p_hwfn, params,
+	__ecore_vf_get_link_params(params,
 				   &p_hwfn->vf_iov_info->bulletin_shadow);
 }
 
-void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
-			       struct ecore_mcp_link_state *p_link,
+void __ecore_vf_get_link_state(struct ecore_mcp_link_state *p_link,
 			       struct ecore_bulletin_content *p_bulletin)
 {
 	OSAL_MEMSET(p_link, 0, sizeof(*p_link));
@@ -1607,12 +1604,11 @@ void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
 void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
 			     struct ecore_mcp_link_state *link)
 {
-	__ecore_vf_get_link_state(p_hwfn, link,
+	__ecore_vf_get_link_state(link,
 				  &p_hwfn->vf_iov_info->bulletin_shadow);
 }
 
-void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
-			      struct ecore_mcp_link_capabilities *p_link_caps,
+void __ecore_vf_get_link_caps(struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin)
 {
 	OSAL_MEMSET(p_link_caps, 0, sizeof(*p_link_caps));
@@ -1622,7 +1618,7 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			    struct ecore_mcp_link_capabilities *p_link_caps)
 {
-	__ecore_vf_get_link_caps(p_hwfn, p_link_caps,
+	__ecore_vf_get_link_caps(p_link_caps,
 				 &p_hwfn->vf_iov_info->bulletin_shadow);
 }
 
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index d9ee96b..0945522 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -273,34 +273,28 @@ void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
 /**
  * @brief - return the link params in a given bulletin board
  *
- * @param p_hwfn
  * @param p_params - pointer to a struct to fill with link params
  * @param p_bulletin
  */
-void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
-				struct ecore_mcp_link_params *p_params,
+void __ecore_vf_get_link_params(struct ecore_mcp_link_params *p_params,
 				struct ecore_bulletin_content *p_bulletin);
 
 /**
  * @brief - return the link state in a given bulletin board
  *
- * @param p_hwfn
  * @param p_link - pointer to a struct to fill with link state
  * @param p_bulletin
  */
-void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
-			       struct ecore_mcp_link_state *p_link,
+void __ecore_vf_get_link_state(struct ecore_mcp_link_state *p_link,
 			       struct ecore_bulletin_content *p_bulletin);
 
 /**
  * @brief - return the link capabilities in a given bulletin board
  *
- * @param p_hwfn
  * @param p_link - pointer to a struct to fill with link capabilities
  * @param p_bulletin
  */
-void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
-			      struct ecore_mcp_link_capabilities *p_link_caps,
+void __ecore_vf_get_link_caps(struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
 
 enum _ecore_status_t
diff --git a/drivers/net/qede/qede_fdir.c b/drivers/net/qede/qede_fdir.c
index 7bd5c5d..7db7521 100644
--- a/drivers/net/qede/qede_fdir.c
+++ b/drivers/net/qede/qede_fdir.c
@@ -171,7 +171,7 @@ void qede_fdir_dealloc_resc(struct rte_eth_dev *eth_dev)
 					  &qdev->fdir_info.arfs);
 	}
 	/* configure filter with ECORE_SPQ_MODE_EBLOCK */
-	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, p_hwfn->p_arfs_ptt, NULL,
+	rc = ecore_configure_rfs_ntuple_filter(p_hwfn, NULL,
 					       (dma_addr_t)mz->phys_addr,
 					       pkt_len,
 					       fdir_filter->action.rx_queue,
-- 
1.7.10.3
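
A side note on the OSAL_UNUSED annotations introduced above: the macro is
assumed to expand to the compiler's "unused" attribute (in DPDK's
bcm_osal.h presumably something like __rte_unused), which silences
-Wunused-parameter while keeping the parameter in the signature for API
symmetry. A minimal sketch of the idiom, with an illustrative definition:

  #include <stdio.h>

  #if defined(__GNUC__)
  #define OSAL_UNUSED __attribute__((unused))
  #else
  #define OSAL_UNUSED
  #endif

  /* 'spare' stays in the prototype but is deliberately untouched;
   * the attribute keeps -Wunused-parameter quiet without a (void) cast. */
  static int demo_cb(int used, int OSAL_UNUSED spare)
  {
  	return used * 2;
  }

  int main(void)
  {
  	printf("%d\n", demo_cb(21, 0));
  	return 0;
  }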

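The other recurring change in this patch is dropping the unused p_hwfn
argument from ecore_add_tlv(). As the ecore_sriov.c hunk shows, the helper
is a plain bump-pointer append over the PF/VF mailbox buffer, and every
message is closed with a CHANNEL_TLV_LIST_END entry. A minimal sketch,
assuming only the type/length header layout visible in the diff (all other
names here are hypothetical):

  #include <stdint.h>
  #include <stdio.h>

  typedef uint8_t  u8;
  typedef uint16_t u16;

  struct channel_tlv {	/* header layout implied by the diff */
  	u16 type;
  	u16 length;
  };

  /* Write a TLV header at *offset, advance the cursor past the whole
   * TLV, and return the TLV's start so the caller can fill the payload. */
  static void *add_tlv(u8 **offset, u16 type, u16 length)
  {
  	struct channel_tlv *tl = (struct channel_tlv *)*offset;

  	tl->type = type;
  	tl->length = length;
  	*offset += length;	/* the next TLV lands right after this one */
  	return tl;
  }

  int main(void)
  {
  	u8 mbox[64] = { 0 };
  	u8 *offset = mbox;

  	add_tlv(&offset, 1 /* hypothetical first-tlv type */, 8);
  	add_tlv(&offset, 2 /* hypothetical list-end type */,
  		sizeof(struct channel_tlv));
  	printf("chain occupies %td bytes\n", offset - mbox);
  	return 0;
  }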

* [PATCH 29/53] net/qede/base: fix macros to check chip revision/metal
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (27 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 28/53] net/qede/base: remove unused parameters Rasesh Mody
@ 2017-09-19  1:30 ` Rasesh Mody
  2017-09-20 11:00 ` [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Ferruh Yigit
  29 siblings, 0 replies; 31+ messages in thread
From: Rasesh Mody @ 2017-09-19  1:30 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Rasesh Mody, Dept-EngDPDKDev, stable

Fix the ECORE_IS_[AB]0() macros to check both the chip revision and the
chip metal. Realign defines in the struct ecore_dev.

Fixes: ec94dbc57362 ("qede: add base driver")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
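Notes (not part of the commit message): GET_FIELD() below is assumed to
follow the usual qed convention, (((value) >> name##_SHIFT) & name##_MASK),
so with the CHIP_*_SHIFT defines moved to 0 each field is now read from the
low bits of its own register. A standalone sketch of the fixed
classification, with hypothetical register values:

  #include <stdio.h>
  #include <stdint.h>

  /* assumed qed-style accessor; not part of this patch */
  #define GET_FIELD(value, name) \
  	(((value) >> name##_SHIFT) & name##_MASK)

  #define CHIP_REV_MASK		0xf
  #define CHIP_REV_SHIFT		0
  #define CHIP_METAL_MASK		0xff
  #define CHIP_METAL_SHIFT	0

  /* the fixed checks consult both the revision and the metal */
  #define IS_A0(rev, metal)	(!(rev) && !(metal))
  #define IS_B0(rev, metal)	((rev) == 1 && !(metal))

  int main(void)
  {
  	uint32_t rev_reg = 0x0, metal_reg = 0x6;  /* hypothetical reads */
  	uint8_t rev = (uint8_t)GET_FIELD(rev_reg, CHIP_REV);
  	uint8_t metal = (uint8_t)GET_FIELD(metal_reg, CHIP_METAL);

  	/* rev 0 / metal 6: a rev-only check would misreport A0;
  	 * the metal-aware check does not. */
  	printf("A0? %d  B0? %d\n", IS_A0(rev, metal), IS_B0(rev, metal));
  	return 0;
  }
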
 drivers/net/qede/base/ecore.h     |   78 ++++++++++++++++++-------------------
 drivers/net/qede/base/ecore_dev.c |   25 ++++++------
 drivers/net/qede/base/ecore_vf.c  |    2 +-
 3 files changed, 51 insertions(+), 54 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 73024da..95cc01d 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -680,45 +680,45 @@ struct ecore_dev {
 #define ECORE_DEV_ID_MASK_AH	0x8000
 
 	u16				chip_num;
-	#define CHIP_NUM_MASK			0xffff
-	#define CHIP_NUM_SHIFT			16
+#define CHIP_NUM_MASK			0xffff
+#define CHIP_NUM_SHIFT			0
 
-	u16				chip_rev;
-	#define CHIP_REV_MASK			0xf
-	#define CHIP_REV_SHIFT			12
+	u8				chip_rev;
+#define CHIP_REV_MASK			0xf
+#define CHIP_REV_SHIFT			0
 #ifndef ASIC_ONLY
-	#define CHIP_REV_IS_TEDIBEAR(_p_dev) ((_p_dev)->chip_rev == 0x5)
-	#define CHIP_REV_IS_EMUL_A0(_p_dev) ((_p_dev)->chip_rev == 0xe)
-	#define CHIP_REV_IS_EMUL_B0(_p_dev) ((_p_dev)->chip_rev == 0xc)
-	#define CHIP_REV_IS_EMUL(_p_dev) (CHIP_REV_IS_EMUL_A0(_p_dev) || \
-					  CHIP_REV_IS_EMUL_B0(_p_dev))
-	#define CHIP_REV_IS_FPGA_A0(_p_dev) ((_p_dev)->chip_rev == 0xf)
-	#define CHIP_REV_IS_FPGA_B0(_p_dev) ((_p_dev)->chip_rev == 0xd)
-	#define CHIP_REV_IS_FPGA(_p_dev) (CHIP_REV_IS_FPGA_A0(_p_dev) || \
-					  CHIP_REV_IS_FPGA_B0(_p_dev))
-	#define CHIP_REV_IS_SLOW(_p_dev) \
-		(CHIP_REV_IS_EMUL(_p_dev) || CHIP_REV_IS_FPGA(_p_dev))
-	#define CHIP_REV_IS_A0(_p_dev) \
-		(CHIP_REV_IS_EMUL_A0(_p_dev) || \
-		 CHIP_REV_IS_FPGA_A0(_p_dev) || \
-		 !(_p_dev)->chip_rev)
-	#define CHIP_REV_IS_B0(_p_dev) \
-		(CHIP_REV_IS_EMUL_B0(_p_dev) || \
-		 CHIP_REV_IS_FPGA_B0(_p_dev) || \
-		 (_p_dev)->chip_rev == 1)
-	#define CHIP_REV_IS_ASIC(_p_dev) !CHIP_REV_IS_SLOW(_p_dev)
+#define CHIP_REV_IS_TEDIBEAR(_p_dev)	((_p_dev)->chip_rev == 0x5)
+#define CHIP_REV_IS_EMUL_A0(_p_dev)	((_p_dev)->chip_rev == 0xe)
+#define CHIP_REV_IS_EMUL_B0(_p_dev)	((_p_dev)->chip_rev == 0xc)
+#define CHIP_REV_IS_EMUL(_p_dev) \
+	(CHIP_REV_IS_EMUL_A0(_p_dev) || CHIP_REV_IS_EMUL_B0(_p_dev))
+#define CHIP_REV_IS_FPGA_A0(_p_dev)	((_p_dev)->chip_rev == 0xf)
+#define CHIP_REV_IS_FPGA_B0(_p_dev)	((_p_dev)->chip_rev == 0xd)
+#define CHIP_REV_IS_FPGA(_p_dev) \
+	(CHIP_REV_IS_FPGA_A0(_p_dev) || CHIP_REV_IS_FPGA_B0(_p_dev))
+#define CHIP_REV_IS_SLOW(_p_dev) \
+	(CHIP_REV_IS_EMUL(_p_dev) || CHIP_REV_IS_FPGA(_p_dev))
+#define CHIP_REV_IS_A0(_p_dev) \
+	(CHIP_REV_IS_EMUL_A0(_p_dev) || CHIP_REV_IS_FPGA_A0(_p_dev) || \
+	 (!(_p_dev)->chip_rev && !(_p_dev)->chip_metal))
+#define CHIP_REV_IS_B0(_p_dev) \
+	(CHIP_REV_IS_EMUL_B0(_p_dev) || CHIP_REV_IS_FPGA_B0(_p_dev) || \
+	 ((_p_dev)->chip_rev == 1 && !(_p_dev)->chip_metal))
+#define CHIP_REV_IS_ASIC(_p_dev)	!CHIP_REV_IS_SLOW(_p_dev)
 #else
-	#define CHIP_REV_IS_A0(_p_dev)	(!(_p_dev)->chip_rev)
-	#define CHIP_REV_IS_B0(_p_dev)	((_p_dev)->chip_rev == 1)
+#define CHIP_REV_IS_A0(_p_dev) \
+	(!(_p_dev)->chip_rev && !(_p_dev)->chip_metal)
+#define CHIP_REV_IS_B0(_p_dev) \
+	((_p_dev)->chip_rev == 1 && !(_p_dev)->chip_metal)
 #endif
 
-	u16				chip_metal;
-	#define CHIP_METAL_MASK			0xff
-	#define CHIP_METAL_SHIFT		4
+	u8				chip_metal;
+#define CHIP_METAL_MASK			0xff
+#define CHIP_METAL_SHIFT		0
 
-	u16				chip_bond_id;
-	#define CHIP_BOND_ID_MASK		0xf
-	#define CHIP_BOND_ID_SHIFT		0
+	u8				chip_bond_id;
+#define CHIP_BOND_ID_MASK		0xff
+#define CHIP_BOND_ID_SHIFT		0
 
 	u8				num_engines;
 	u8				num_ports_in_engines;
@@ -726,12 +726,12 @@ struct ecore_dev {
 
 	u8				path_id;
 	enum ecore_mf_mode		mf_mode;
-	#define IS_MF_DEFAULT(_p_hwfn)	\
-			(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
-	#define IS_MF_SI(_p_hwfn)	\
-			(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
-	#define IS_MF_SD(_p_hwfn)	\
-			(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
+#define IS_MF_DEFAULT(_p_hwfn)	\
+	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
+#define IS_MF_SI(_p_hwfn)	\
+	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
+#define IS_MF_SD(_p_hwfn)	\
+	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
 
 	int				pcie_width;
 	int				pcie_speed;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c185323..e2698ea 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3857,12 +3857,10 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 		return ECORE_ABORTED;
 	}
 
-	p_dev->chip_num = (u16)ecore_rd(p_hwfn, p_ptt,
-						MISCS_REG_CHIP_NUM);
-	p_dev->chip_rev = (u16)ecore_rd(p_hwfn, p_ptt,
-						MISCS_REG_CHIP_REV);
-
-	MASK_FIELD(CHIP_REV, p_dev->chip_rev);
+	tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_CHIP_NUM);
+	p_dev->chip_num = (u16)GET_FIELD(tmp, CHIP_NUM);
+	tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_CHIP_REV);
+	p_dev->chip_rev = (u8)GET_FIELD(tmp, CHIP_REV);
 
 	/* Learn number of HW-functions */
 	tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_CMT_ENABLED_FOR_PAIR);
@@ -3885,20 +3883,19 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-	p_dev->chip_bond_id = ecore_rd(p_hwfn, p_ptt,
-				       MISCS_REG_CHIP_TEST_REG) >> 4;
-	MASK_FIELD(CHIP_BOND_ID, p_dev->chip_bond_id);
-	p_dev->chip_metal = (u16)ecore_rd(p_hwfn, p_ptt,
-					   MISCS_REG_CHIP_METAL);
-	MASK_FIELD(CHIP_METAL, p_dev->chip_metal);
+	tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_CHIP_TEST_REG);
+	p_dev->chip_bond_id = (u8)GET_FIELD(tmp, CHIP_BOND_ID);
+	tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_CHIP_METAL);
+	p_dev->chip_metal = (u8)GET_FIELD(tmp, CHIP_METAL);
+
 	DP_INFO(p_dev->hwfns,
-		"Chip details - %s %c%d, Num: %04x Rev: %04x Bond id: %04x Metal: %04x\n",
+		"Chip details - %s %c%d, Num: %04x Rev: %02x Bond id: %02x Metal: %02x\n",
 		ECORE_IS_BB(p_dev) ? "BB" : "AH",
 		'A' + p_dev->chip_rev, (int)p_dev->chip_metal,
 		p_dev->chip_num, p_dev->chip_rev, p_dev->chip_bond_id,
 		p_dev->chip_metal);
 
-	if (ECORE_IS_BB(p_dev) && CHIP_REV_IS_A0(p_dev)) {
+	if (ECORE_IS_BB_A0(p_dev)) {
 		DP_NOTICE(p_dev->hwfns, false,
 			  "The chip type/rev (BB A0) is not supported!\n");
 		return ECORE_ABORTED;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index c37341e..fb5d0a7 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -350,7 +350,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 
 	/* get HW info */
 	p_hwfn->p_dev->type = resp->pfdev_info.dev_type;
-	p_hwfn->p_dev->chip_rev = resp->pfdev_info.chip_rev;
+	p_hwfn->p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev;
 
 	DP_INFO(p_hwfn, "Chip details - %s%d\n",
 		ECORE_IS_BB(p_hwfn->p_dev) ? "BB" : "AH",
-- 
1.7.10.3


* Re: [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1
  2017-09-19  1:29 [PATCH 00/53] net/qede/base: update PMD to 2.6.0.1 Rasesh Mody
                   ` (28 preceding siblings ...)
  2017-09-19  1:30 ` [PATCH 29/53] net/qede/base: fix macros to check chip revision/metal Rasesh Mody
@ 2017-09-20 11:00 ` Ferruh Yigit
  29 siblings, 0 replies; 31+ messages in thread
From: Ferruh Yigit @ 2017-09-20 11:00 UTC (permalink / raw)
  To: Rasesh Mody, dev; +Cc: Dept-EngDPDKDev

On 9/19/2017 2:29 AM, Rasesh Mody wrote:
> Hi,
> 
> This patch set adds support for new firmware 8.30.12.0, includes
> enahncements, code cleanup and bug fixes. This patch set updates
> PMD version to 2.6.0.1.
> 
> Thanks!
> Rasesh
> 
> Rasesh Mody (53):
>   net/qede/base: add NVM config options
>   net/qede/base: update management FW supported features
>   net/qede/base: use crc32 OSAL macro
>   net/qede/base: allocate VF queues before PF
>   net/qede/base: convert device type to enum
>   net/qede/base: changes for VF queue zone
>   net/qede/base: interchangeably use SB between PF and VF
>   net/qede/base: add API to configure coalescing for VF queues
>   net/qede/base: restrict cache line size register padding
>   net/qede/base: fix to use a passed ptt handle
>   net/qede/base: add a sanity check
>   net/qede/base: add SmartAN support
>   net/qede/base: alter driver's force load behavior
>   net/qede/base: add mdump sub-commands
>   net/qede/base: add EEE support
>   net/qede/base: use passed ptt handler
>   net/qede/base: prevent re-assertions of parity errors
>   net/qede/base: avoid possible race condition
>   net/qede/base: revise management FW mbox access scheme
>   net/qede/base: remove helper functions/structures
>   net/qede/base: initialize resc lock/unlock params
>   net/qede/base: rename MFW get/set field defines
>   net/qede/base: allow clients to override VF MSI-X table size
>   net/qede/base: add API to send STAG config update to FW
>   net/qede/base: add support for doorbell overflow recovery
>   net/qede/base: block mbox command to unresponsive MFW
>   net/qede/base: prevent stop vport assert by malicious VF
>   net/qede/base: remove unused parameters
>   net/qede/base: fix macros to check chip revision/metal
>   net/qede/base: read per queue coalescing from HW
>   net/qede/base: refactor device's number of ports logic
>   net/qede/base: use proper units for rate limiting
>   net/qede/base: use available macro
>   net/qede/base: use function pointers for spq async callback
>   net/qede/base: fix API return types
>   net/qede/base: semantic changes
>   net/qede/base: handle the error condition properly
>   net/qede/base: add new macro for CMT mode
>   net/qede/base: change verbosity
>   net/qede/base: fix number of app table entries
>   net/qede/base: update firmware to 8.30.12.0
>   net/qede/base: add UFP support
>   net/qede/base: add support for mapped doorbell Bars for VFs
>   net/qede/base: add support for driver attribute repository
>   net/qede/base: move define to header file
>   net/qede/base: dcbx dscp related extensions
>   net/qede/base: add feature support for per-PF virtual link
>   net/qede/base: catch an init command write failure
>   net/qede/base: retain dcbx config till actually applied
>   net/qede/base: disable aRFS for NPAR and 100G
>   net/qede/base: add support for WoL writes
>   net/qede/base: remove unused input parameter
>   net/qede/base: update PMD version to 2.6.0.1

Series applied to dpdk-next-net/master, thanks.
