* [PATCH net-next 00/13] bnxt_en: Updates for net-next.
@ 2018-08-05 20:51 Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 01/13] bnxt_en: Update firmware interface version to 1.9.2.25 Michael Chan
                   ` (13 more replies)
  0 siblings, 14 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

This series includes the usual firmware spec update.  The driver has
added an external PHY loopback test and PHY setup retry logic that is
needed during hotplug.  In the SRIOV space, the driver has added a
new VF resource allocation mode that requires the VF driver to
reserve resources during IFUP.  IF state changes are now propagated
to the firmware so that it can release some resources during IFDOWN.
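
The new HWRM_FUNC_DRV_IF_CHANGE message (patch 1) carries this IF state
handshake.  As a rough illustration only (not the exact driver code), a
minimal sketch of sending the notification on IFUP/IFDOWN, assuming the
driver's existing bnxt_hwrm_cmd_hdr_init()/hwrm_send_message() helpers
and HWRM_CMD_TIMEOUT, might look like this:

/* Illustrative sketch only: notify firmware of an IF state change.
 * Error handling, capability checks and response parsing (e.g. the
 * RESC_CHANGE flag in the output) are omitted for brevity.
 */
static int example_hwrm_if_change(struct bnxt *bp, bool up)
{
	struct hwrm_func_drv_if_change_input req = {0};

	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_DRV_IF_CHANGE, -1, -1);
	if (up)
		req.flags = cpu_to_le32(FUNC_DRV_IF_CHANGE_REQ_FLAGS_UP);
	return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
}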

An ethtool method to get the firmware core dump and an hwmon sysfs
interface to read the temperature have been added.  DSCP to user
priority mapping support has been added to the driver's DCBNL
interface, and the CoS queue logic has been refined to make sure that
the special RDMA Congestion Notification Packet (CNP) hardware CoS
queue is not used for networking traffic.
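
On the DCBNL side, a DSCP app entry maps a 6-bit DSCP value to a user
priority through the standard struct dcb_app with the
IEEE_8021QAZ_APP_SEL_DSCP selector.  A rough sketch of the validation an
ieee_setapp handler performs (illustrative only, not the driver's exact
code; example_ieee_setapp is a made-up name) is:

/* Illustrative sketch only: accept a DSCP-to-priority mapping from
 * DCBNL.  A real handler would also program the mapping into the
 * firmware before recording it with dcb_ieee_setapp().
 */
static int example_ieee_setapp(struct net_device *dev, struct dcb_app *app)
{
	if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP) {
		if (app->protocol > 0x3f)	/* DSCP is 6 bits */
			return -EINVAL;
		if (app->priority >= IEEE_8021QAZ_MAX_TCS)
			return -EINVAL;
	}
	return dcb_ieee_setapp(dev, app);
}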

Michael Chan (11):
  bnxt_en: Update firmware interface version to 1.9.2.25.
  bnxt_en: Adjust timer based on ethtool stats-block-usecs settings.
  bnxt_en: Add external loopback test to ethtool selftest.
  bnxt_en: Add PHY retry logic.
  bnxt_en: Add new VF resource allocation strategy mode.
  bnxt_en: Update RSS setup and GRO-HW logic according to the latest
    spec.
  bnxt_en: Add BNXT_NEW_RM() macro.
  bnxt_en: Move firmware related flags to a new fw_cap field in struct
    bnxt.
  bnxt_en: Notify firmware about IF state changes.
  bnxt_en: Add DCBNL DSCP application protocol support.
  bnxt_en: Do not use the CNP CoS queue for networking traffic.

Vasundhara Volam (2):
  bnxt_en: Add support for ethtool get dump.
  bnxt_en: Add hwmon sysfs support to read temperature

 drivers/net/ethernet/broadcom/Kconfig              |    8 +
 drivers/net/ethernet/broadcom/bnxt/bnxt.c          |  216 +++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h          |   30 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h |   66 ++
 drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c      |   89 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h      |   10 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c  |    8 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c  |  378 +++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h  |   37 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h      | 1227 +++++++++++++++-----
 drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c    |   25 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c      |    4 +-
 12 files changed, 1716 insertions(+), 382 deletions(-)
 create mode 100644 drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h

-- 
2.5.1


* [PATCH net-next 01/13] bnxt_en: Update firmware interface version to 1.9.2.25.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 02/13] bnxt_en: Adjust timer based on ethtool stats-block-usecs settings Michael Chan
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

The new interface adds firmware core dump support, new extended port
statistics, and IF state change notification to the firmware.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |    4 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c |    8 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c |    6 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h     | 1227 +++++++++++++++------
 4 files changed, 924 insertions(+), 321 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 934aa11..3b5a55c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -12,11 +12,11 @@
 #define BNXT_H
 
 #define DRV_MODULE_NAME		"bnxt_en"
-#define DRV_MODULE_VERSION	"1.9.1"
+#define DRV_MODULE_VERSION	"1.9.2"
 
 #define DRV_VER_MAJ	1
 #define DRV_VER_MIN	9
-#define DRV_VER_UPD	1
+#define DRV_VER_UPD	2
 
 #include <linux/interrupt.h>
 #include <linux/rhashtable.h>
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
index 7bd96ab..f3b9fbc 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
@@ -29,7 +29,7 @@ static const struct bnxt_dl_nvm_param nvm_params[] = {
 static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg,
 			     int msg_len, union devlink_param_value *val)
 {
-	struct hwrm_nvm_variable_input *req = msg;
+	struct hwrm_nvm_get_variable_input *req = msg;
 	void *data_addr = NULL, *buf = NULL;
 	struct bnxt_dl_nvm_param nvm_param;
 	int bytesize, idx = 0, rc, i;
@@ -60,18 +60,18 @@ static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg,
 	if (!data_addr)
 		return -ENOMEM;
 
-	req->data_addr = cpu_to_le64(data_dma_addr);
+	req->dest_data_addr = cpu_to_le64(data_dma_addr);
 	req->data_len = cpu_to_le16(nvm_param.num_bits);
 	req->option_num = cpu_to_le16(nvm_param.offset);
 	req->index_0 = cpu_to_le16(idx);
 	if (idx)
 		req->dimensions = cpu_to_le16(1);
 
-	if (req->req_type == HWRM_NVM_SET_VARIABLE)
+	if (req->req_type == cpu_to_le16(HWRM_NVM_SET_VARIABLE))
 		memcpy(data_addr, buf, bytesize);
 
 	rc = hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT);
-	if (!rc && req->req_type == HWRM_NVM_GET_VARIABLE)
+	if (!rc && req->req_type == cpu_to_le16(HWRM_NVM_GET_VARIABLE))
 		memcpy(buf, data_addr, bytesize);
 
 	dma_free_coherent(&bp->pdev->dev, bytesize, data_addr, data_dma_addr);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 7270c8b..3d40e49 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -162,7 +162,7 @@ static const struct {
 	BNXT_RX_STATS_ENTRY(rx_128b_255b_frames),
 	BNXT_RX_STATS_ENTRY(rx_256b_511b_frames),
 	BNXT_RX_STATS_ENTRY(rx_512b_1023b_frames),
-	BNXT_RX_STATS_ENTRY(rx_1024b_1518_frames),
+	BNXT_RX_STATS_ENTRY(rx_1024b_1518b_frames),
 	BNXT_RX_STATS_ENTRY(rx_good_vlan_frames),
 	BNXT_RX_STATS_ENTRY(rx_1519b_2047b_frames),
 	BNXT_RX_STATS_ENTRY(rx_2048b_4095b_frames),
@@ -205,9 +205,9 @@ static const struct {
 	BNXT_TX_STATS_ENTRY(tx_128b_255b_frames),
 	BNXT_TX_STATS_ENTRY(tx_256b_511b_frames),
 	BNXT_TX_STATS_ENTRY(tx_512b_1023b_frames),
-	BNXT_TX_STATS_ENTRY(tx_1024b_1518_frames),
+	BNXT_TX_STATS_ENTRY(tx_1024b_1518b_frames),
 	BNXT_TX_STATS_ENTRY(tx_good_vlan_frames),
-	BNXT_TX_STATS_ENTRY(tx_1519b_2047_frames),
+	BNXT_TX_STATS_ENTRY(tx_1519b_2047b_frames),
 	BNXT_TX_STATS_ENTRY(tx_2048b_4095b_frames),
 	BNXT_TX_STATS_ENTRY(tx_4096b_9216b_frames),
 	BNXT_TX_STATS_ENTRY(tx_9217b_16383b_frames),
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
index c75d7fa..971ace5d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
@@ -96,6 +96,7 @@ struct hwrm_short_input {
 struct cmd_nums {
 	__le16	req_type;
 	#define HWRM_VER_GET                              0x0UL
+	#define HWRM_FUNC_DRV_IF_CHANGE                   0xdUL
 	#define HWRM_FUNC_BUF_UNRGTR                      0xeUL
 	#define HWRM_FUNC_VF_CFG                          0xfUL
 	#define HWRM_RESERVED1                            0x10UL
@@ -159,6 +160,7 @@ struct cmd_nums {
 	#define HWRM_RING_FREE                            0x51UL
 	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS        0x52UL
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS     0x53UL
+	#define HWRM_RING_AGGINT_QCAPS                    0x54UL
 	#define HWRM_RING_RESET                           0x5eUL
 	#define HWRM_RING_GRP_ALLOC                       0x60UL
 	#define HWRM_RING_GRP_FREE                        0x61UL
@@ -191,6 +193,8 @@ struct cmd_nums {
 	#define HWRM_PORT_QSTATS_EXT                      0xb4UL
 	#define HWRM_FW_RESET                             0xc0UL
 	#define HWRM_FW_QSTATUS                           0xc1UL
+	#define HWRM_FW_HEALTH_CHECK                      0xc2UL
+	#define HWRM_FW_SYNC                              0xc3UL
 	#define HWRM_FW_SET_TIME                          0xc8UL
 	#define HWRM_FW_GET_TIME                          0xc9UL
 	#define HWRM_FW_SET_STRUCTURED_DATA               0xcaUL
@@ -269,6 +273,11 @@ struct cmd_nums {
 	#define HWRM_ENGINE_ON_DIE_RQE_CREDITS            0x164UL
 	#define HWRM_FUNC_RESOURCE_QCAPS                  0x190UL
 	#define HWRM_FUNC_VF_RESOURCE_CFG                 0x191UL
+	#define HWRM_FUNC_BACKING_STORE_QCAPS             0x192UL
+	#define HWRM_FUNC_BACKING_STORE_CFG               0x193UL
+	#define HWRM_FUNC_BACKING_STORE_QCFG              0x194UL
+	#define HWRM_FUNC_VF_BW_CFG                       0x195UL
+	#define HWRM_FUNC_VF_BW_QCFG                      0x196UL
 	#define HWRM_SELFTEST_QLIST                       0x200UL
 	#define HWRM_SELFTEST_EXEC                        0x201UL
 	#define HWRM_SELFTEST_IRQ                         0x202UL
@@ -284,6 +293,8 @@ struct cmd_nums {
 	#define HWRM_DBG_COREDUMP_LIST                    0xff17UL
 	#define HWRM_DBG_COREDUMP_INITIATE                0xff18UL
 	#define HWRM_DBG_COREDUMP_RETRIEVE                0xff19UL
+	#define HWRM_DBG_FW_CLI                           0xff1aUL
+	#define HWRM_DBG_I2C_CMD                          0xff1bUL
 	#define HWRM_NVM_FACTORY_DEFAULTS                 0xffeeUL
 	#define HWRM_NVM_VALIDATE_OPTION                  0xffefUL
 	#define HWRM_NVM_FLUSH                            0xfff0UL
@@ -318,6 +329,7 @@ struct ret_codes {
 	#define HWRM_ERR_CODE_INVALID_ENABLES        0x6UL
 	#define HWRM_ERR_CODE_UNSUPPORTED_TLV        0x7UL
 	#define HWRM_ERR_CODE_NO_BUFFER              0x8UL
+	#define HWRM_ERR_CODE_UNSUPPORTED_OPTION_ERR 0x9UL
 	#define HWRM_ERR_CODE_HWRM_ERROR             0xfUL
 	#define HWRM_ERR_CODE_UNKNOWN_ERR            0xfffeUL
 	#define HWRM_ERR_CODE_CMD_NOT_SUPPORTED      0xffffUL
@@ -344,9 +356,9 @@ struct hwrm_err_output {
 #define HWRM_RESP_VALID_KEY 1
 #define HWRM_VERSION_MAJOR 1
 #define HWRM_VERSION_MINOR 9
-#define HWRM_VERSION_UPDATE 1
-#define HWRM_VERSION_RSVD 15
-#define HWRM_VERSION_STR "1.9.1.15"
+#define HWRM_VERSION_UPDATE 2
+#define HWRM_VERSION_RSVD 25
+#define HWRM_VERSION_STR "1.9.2.25"
 
 /* hwrm_ver_get_input (size:192b/24B) */
 struct hwrm_ver_get_input {
@@ -526,6 +538,7 @@ struct hwrm_async_event_cmpl {
 	#define ASYNC_EVENT_CMPL_EVENT_ID_PF_VF_COMM_STATUS_CHANGE   0x32UL
 	#define ASYNC_EVENT_CMPL_EVENT_ID_VF_CFG_CHANGE              0x33UL
 	#define ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE            0x34UL
+	#define ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE        0x35UL
 	#define ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR                 0xffUL
 	#define ASYNC_EVENT_CMPL_EVENT_ID_LAST                      ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR
 	__le32	event_data2;
@@ -564,6 +577,8 @@ struct hwrm_async_event_cmpl_link_status_change {
 	#define ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_SFT        1
 	#define ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_MASK    0xffff0UL
 	#define ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_SFT     4
+	#define ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_MASK      0xff00000UL
+	#define ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_SFT       20
 };
 
 /* hwrm_async_event_cmpl_port_conn_not_allowed (size:128b/16B) */
@@ -817,23 +832,26 @@ struct hwrm_func_qcaps_output {
 	__le16	fid;
 	__le16	port_id;
 	__le32	flags;
-	#define FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED            0x1UL
-	#define FUNC_QCAPS_RESP_FLAGS_GLOBAL_MSIX_AUTOMASKING        0x2UL
-	#define FUNC_QCAPS_RESP_FLAGS_PTP_SUPPORTED                  0x4UL
-	#define FUNC_QCAPS_RESP_FLAGS_ROCE_V1_SUPPORTED              0x8UL
-	#define FUNC_QCAPS_RESP_FLAGS_ROCE_V2_SUPPORTED              0x10UL
-	#define FUNC_QCAPS_RESP_FLAGS_WOL_MAGICPKT_SUPPORTED         0x20UL
-	#define FUNC_QCAPS_RESP_FLAGS_WOL_BMP_SUPPORTED              0x40UL
-	#define FUNC_QCAPS_RESP_FLAGS_TX_RING_RL_SUPPORTED           0x80UL
-	#define FUNC_QCAPS_RESP_FLAGS_TX_BW_CFG_SUPPORTED            0x100UL
-	#define FUNC_QCAPS_RESP_FLAGS_VF_TX_RING_RL_SUPPORTED        0x200UL
-	#define FUNC_QCAPS_RESP_FLAGS_VF_BW_CFG_SUPPORTED            0x400UL
-	#define FUNC_QCAPS_RESP_FLAGS_STD_TX_RING_MODE_SUPPORTED     0x800UL
-	#define FUNC_QCAPS_RESP_FLAGS_GENEVE_TUN_FLAGS_SUPPORTED     0x1000UL
-	#define FUNC_QCAPS_RESP_FLAGS_NVGRE_TUN_FLAGS_SUPPORTED      0x2000UL
-	#define FUNC_QCAPS_RESP_FLAGS_GRE_TUN_FLAGS_SUPPORTED        0x4000UL
-	#define FUNC_QCAPS_RESP_FLAGS_MPLS_TUN_FLAGS_SUPPORTED       0x8000UL
-	#define FUNC_QCAPS_RESP_FLAGS_PCIE_STATS_SUPPORTED           0x10000UL
+	#define FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED             0x1UL
+	#define FUNC_QCAPS_RESP_FLAGS_GLOBAL_MSIX_AUTOMASKING         0x2UL
+	#define FUNC_QCAPS_RESP_FLAGS_PTP_SUPPORTED                   0x4UL
+	#define FUNC_QCAPS_RESP_FLAGS_ROCE_V1_SUPPORTED               0x8UL
+	#define FUNC_QCAPS_RESP_FLAGS_ROCE_V2_SUPPORTED               0x10UL
+	#define FUNC_QCAPS_RESP_FLAGS_WOL_MAGICPKT_SUPPORTED          0x20UL
+	#define FUNC_QCAPS_RESP_FLAGS_WOL_BMP_SUPPORTED               0x40UL
+	#define FUNC_QCAPS_RESP_FLAGS_TX_RING_RL_SUPPORTED            0x80UL
+	#define FUNC_QCAPS_RESP_FLAGS_TX_BW_CFG_SUPPORTED             0x100UL
+	#define FUNC_QCAPS_RESP_FLAGS_VF_TX_RING_RL_SUPPORTED         0x200UL
+	#define FUNC_QCAPS_RESP_FLAGS_VF_BW_CFG_SUPPORTED             0x400UL
+	#define FUNC_QCAPS_RESP_FLAGS_STD_TX_RING_MODE_SUPPORTED      0x800UL
+	#define FUNC_QCAPS_RESP_FLAGS_GENEVE_TUN_FLAGS_SUPPORTED      0x1000UL
+	#define FUNC_QCAPS_RESP_FLAGS_NVGRE_TUN_FLAGS_SUPPORTED       0x2000UL
+	#define FUNC_QCAPS_RESP_FLAGS_GRE_TUN_FLAGS_SUPPORTED         0x4000UL
+	#define FUNC_QCAPS_RESP_FLAGS_MPLS_TUN_FLAGS_SUPPORTED        0x8000UL
+	#define FUNC_QCAPS_RESP_FLAGS_PCIE_STATS_SUPPORTED            0x10000UL
+	#define FUNC_QCAPS_RESP_FLAGS_ADOPTED_PF_SUPPORTED            0x20000UL
+	#define FUNC_QCAPS_RESP_FLAGS_ADMIN_PF_SUPPORTED              0x40000UL
+	#define FUNC_QCAPS_RESP_FLAGS_LINK_ADMIN_STATUS_SUPPORTED     0x80000UL
 	u8	mac_address[6];
 	__le16	max_rsscos_ctx;
 	__le16	max_cmpl_rings;
@@ -947,58 +965,26 @@ struct hwrm_func_qcfg_output {
 	#define FUNC_QCFG_RESP_EVB_MODE_VEPA   0x2UL
 	#define FUNC_QCFG_RESP_EVB_MODE_LAST  FUNC_QCFG_RESP_EVB_MODE_VEPA
 	u8	options;
-	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_MASK    0x3UL
-	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SFT     0
-	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SIZE_64   0x0UL
-	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SIZE_128  0x1UL
-	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_LAST     FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SIZE_128
-	#define FUNC_QCFG_RESP_OPTIONS_RSVD_MASK              0xfcUL
-	#define FUNC_QCFG_RESP_OPTIONS_RSVD_SFT               2
+	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_MASK         0x3UL
+	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SFT          0
+	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SIZE_64        0x0UL
+	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SIZE_128       0x1UL
+	#define FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_LAST          FUNC_QCFG_RESP_OPTIONS_CACHE_LINESIZE_SIZE_128
+	#define FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_MASK       0xcUL
+	#define FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_SFT        2
+	#define FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_FORCED_DOWN  (0x0UL << 2)
+	#define FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_FORCED_UP    (0x1UL << 2)
+	#define FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_AUTO         (0x2UL << 2)
+	#define FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_LAST        FUNC_QCFG_RESP_OPTIONS_LINK_ADMIN_STATE_AUTO
+	#define FUNC_QCFG_RESP_OPTIONS_RSVD_MASK                   0xf0UL
+	#define FUNC_QCFG_RESP_OPTIONS_RSVD_SFT                    4
 	__le16	alloc_vfs;
 	__le32	alloc_mcast_filters;
 	__le32	alloc_hw_ring_grps;
 	__le16	alloc_sp_tx_rings;
 	__le16	alloc_stat_ctx;
-	u8	unused_2[7];
-	u8	valid;
-};
-
-/* hwrm_func_vlan_cfg_input (size:384b/48B) */
-struct hwrm_func_vlan_cfg_input {
-	__le16	req_type;
-	__le16	cmpl_ring;
-	__le16	seq_id;
-	__le16	target_id;
-	__le64	resp_addr;
-	__le16	fid;
-	u8	unused_0[2];
-	__le32	enables;
-	#define FUNC_VLAN_CFG_REQ_ENABLES_STAG_VID      0x1UL
-	#define FUNC_VLAN_CFG_REQ_ENABLES_CTAG_VID      0x2UL
-	#define FUNC_VLAN_CFG_REQ_ENABLES_STAG_PCP      0x4UL
-	#define FUNC_VLAN_CFG_REQ_ENABLES_CTAG_PCP      0x8UL
-	#define FUNC_VLAN_CFG_REQ_ENABLES_STAG_TPID     0x10UL
-	#define FUNC_VLAN_CFG_REQ_ENABLES_CTAG_TPID     0x20UL
-	__le16	stag_vid;
-	u8	stag_pcp;
-	u8	unused_1;
-	__be16	stag_tpid;
-	__le16	ctag_vid;
-	u8	ctag_pcp;
-	u8	unused_2;
-	__be16	ctag_tpid;
-	__le32	rsvd1;
-	__le32	rsvd2;
-	u8	unused_3[4];
-};
-
-/* hwrm_func_vlan_cfg_output (size:128b/16B) */
-struct hwrm_func_vlan_cfg_output {
-	__le16	error_code;
-	__le16	req_type;
-	__le16	seq_id;
-	__le16	resp_len;
-	u8	unused_0[7];
+	__le16	alloc_msix;
+	u8	unused_2[5];
 	u8	valid;
 };
 
@@ -1010,7 +996,7 @@ struct hwrm_func_cfg_input {
 	__le16	target_id;
 	__le64	resp_addr;
 	__le16	fid;
-	u8	unused_0[2];
+	__le16	num_msix;
 	__le32	flags;
 	#define FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE     0x1UL
 	#define FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE      0x2UL
@@ -1050,6 +1036,8 @@ struct hwrm_func_cfg_input {
 	#define FUNC_CFG_REQ_ENABLES_NUM_MCAST_FILTERS       0x40000UL
 	#define FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS        0x80000UL
 	#define FUNC_CFG_REQ_ENABLES_CACHE_LINESIZE          0x100000UL
+	#define FUNC_CFG_REQ_ENABLES_NUM_MSIX                0x200000UL
+	#define FUNC_CFG_REQ_ENABLES_ADMIN_LINK_STATE        0x400000UL
 	__le16	mtu;
 	__le16	mru;
 	__le16	num_rsscos_ctxs;
@@ -1109,13 +1097,19 @@ struct hwrm_func_cfg_input {
 	#define FUNC_CFG_REQ_EVB_MODE_VEPA   0x2UL
 	#define FUNC_CFG_REQ_EVB_MODE_LAST  FUNC_CFG_REQ_EVB_MODE_VEPA
 	u8	options;
-	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_MASK    0x3UL
-	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SFT     0
-	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SIZE_64   0x0UL
-	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SIZE_128  0x1UL
-	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_LAST     FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SIZE_128
-	#define FUNC_CFG_REQ_OPTIONS_RSVD_MASK              0xfcUL
-	#define FUNC_CFG_REQ_OPTIONS_RSVD_SFT               2
+	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_MASK         0x3UL
+	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SFT          0
+	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SIZE_64        0x0UL
+	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SIZE_128       0x1UL
+	#define FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_LAST          FUNC_CFG_REQ_OPTIONS_CACHE_LINESIZE_SIZE_128
+	#define FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_MASK       0xcUL
+	#define FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_SFT        2
+	#define FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_FORCED_DOWN  (0x0UL << 2)
+	#define FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_FORCED_UP    (0x1UL << 2)
+	#define FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_AUTO         (0x2UL << 2)
+	#define FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_LAST        FUNC_CFG_REQ_OPTIONS_LINK_ADMIN_STATE_AUTO
+	#define FUNC_CFG_REQ_OPTIONS_RSVD_MASK                   0xf0UL
+	#define FUNC_CFG_REQ_OPTIONS_RSVD_SFT                    4
 	__le16	num_mcast_filters;
 };
 
@@ -1212,30 +1206,6 @@ struct hwrm_func_vf_resc_free_output {
 	u8	valid;
 };
 
-/* hwrm_func_vf_vnic_ids_query_input (size:256b/32B) */
-struct hwrm_func_vf_vnic_ids_query_input {
-	__le16	req_type;
-	__le16	cmpl_ring;
-	__le16	seq_id;
-	__le16	target_id;
-	__le64	resp_addr;
-	__le16	vf_id;
-	u8	unused_0[2];
-	__le32	max_vnic_id_cnt;
-	__le64	vnic_id_tbl_addr;
-};
-
-/* hwrm_func_vf_vnic_ids_query_output (size:128b/16B) */
-struct hwrm_func_vf_vnic_ids_query_output {
-	__le16	error_code;
-	__le16	req_type;
-	__le16	seq_id;
-	__le16	resp_len;
-	__le32	vnic_id_cnt;
-	u8	unused_0[3];
-	u8	valid;
-};
-
 /* hwrm_func_drv_rgtr_input (size:896b/112B) */
 struct hwrm_func_drv_rgtr_input {
 	__le16	req_type;
@@ -1286,7 +1256,9 @@ struct hwrm_func_drv_rgtr_output {
 	__le16	req_type;
 	__le16	seq_id;
 	__le16	resp_len;
-	u8	unused_0[7];
+	__le32	flags;
+	#define FUNC_DRV_RGTR_RESP_FLAGS_IF_CHANGE_SUPPORTED     0x1UL
+	u8	unused_0[3];
 	u8	valid;
 };
 
@@ -1372,7 +1344,7 @@ struct hwrm_func_drv_qver_input {
 	u8	unused_0[2];
 };
 
-/* hwrm_func_drv_qver_output (size:192b/24B) */
+/* hwrm_func_drv_qver_output (size:256b/32B) */
 struct hwrm_func_drv_qver_output {
 	__le16	error_code;
 	__le16	req_type;
@@ -1394,12 +1366,13 @@ struct hwrm_func_drv_qver_output {
 	u8	ver_maj_8b;
 	u8	ver_min_8b;
 	u8	ver_upd_8b;
-	u8	unused_0[2];
-	u8	valid;
+	u8	unused_0[3];
 	__le16	ver_maj;
 	__le16	ver_min;
 	__le16	ver_upd;
 	__le16	ver_patch;
+	u8	unused_1[7];
+	u8	valid;
 };
 
 /* hwrm_func_resource_qcaps_input (size:192b/24B) */
@@ -1493,6 +1466,410 @@ struct hwrm_func_vf_resource_cfg_output {
 	u8	valid;
 };
 
+/* hwrm_func_backing_store_qcaps_input (size:128b/16B) */
+struct hwrm_func_backing_store_qcaps_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+};
+
+/* hwrm_func_backing_store_qcaps_output (size:576b/72B) */
+struct hwrm_func_backing_store_qcaps_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	__le32	qp_max_entries;
+	__le16	qp_min_qp1_entries;
+	__le16	qp_max_l2_entries;
+	__le16	qp_entry_size;
+	__le16	srq_max_l2_entries;
+	__le32	srq_max_entries;
+	__le16	srq_entry_size;
+	__le16	cq_max_l2_entries;
+	__le32	cq_max_entries;
+	__le16	cq_entry_size;
+	__le16	vnic_max_vnic_entries;
+	__le16	vnic_max_ring_table_entries;
+	__le16	vnic_entry_size;
+	__le32	stat_max_entries;
+	__le16	stat_entry_size;
+	__le16	tqm_entry_size;
+	__le32	tqm_min_entries_per_ring;
+	__le32	tqm_max_entries_per_ring;
+	__le32	mrav_max_entries;
+	__le16	mrav_entry_size;
+	__le16	tim_entry_size;
+	__le32	tim_max_entries;
+	u8	unused_0[3];
+	u8	valid;
+};
+
+/* hwrm_func_backing_store_cfg_input (size:2048b/256B) */
+struct hwrm_func_backing_store_cfg_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+	__le32	flags;
+	#define FUNC_BACKING_STORE_CFG_REQ_FLAGS_PREBOOT_MODE     0x1UL
+	__le32	enables;
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP            0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_SRQ           0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_CQ            0x4UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_VNIC          0x8UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_STAT          0x10UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP        0x20UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING0     0x40UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING1     0x80UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING2     0x100UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING3     0x200UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING4     0x400UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING5     0x800UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING6     0x1000UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING7     0x2000UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV          0x4000UL
+	#define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM           0x8000UL
+	u8	qpc_pg_size_qpc_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_1G
+	u8	srq_pg_size_srq_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_SRQ_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_1G
+	u8	cq_pg_size_cq_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_CQ_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_CQ_PG_SIZE_PG_1G
+	u8	vnic_pg_size_vnic_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_VNIC_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_VNIC_PG_SIZE_PG_1G
+	u8	stat_pg_size_stat_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_STAT_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_STAT_PG_SIZE_PG_1G
+	u8	tqm_sp_pg_size_tqm_sp_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_SP_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_SP_PG_SIZE_PG_1G
+	u8	tqm_ring0_pg_size_tqm_ring0_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING0_PG_SIZE_PG_1G
+	u8	tqm_ring1_pg_size_tqm_ring1_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING1_PG_SIZE_PG_1G
+	u8	tqm_ring2_pg_size_tqm_ring2_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING2_PG_SIZE_PG_1G
+	u8	tqm_ring3_pg_size_tqm_ring3_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING3_PG_SIZE_PG_1G
+	u8	tqm_ring4_pg_size_tqm_ring4_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING4_PG_SIZE_PG_1G
+	u8	tqm_ring5_pg_size_tqm_ring5_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING5_PG_SIZE_PG_1G
+	u8	tqm_ring6_pg_size_tqm_ring6_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING6_PG_SIZE_PG_1G
+	u8	tqm_ring7_pg_size_tqm_ring7_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TQM_RING7_PG_SIZE_PG_1G
+	u8	mrav_pg_size_mrav_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_MRAV_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_MRAV_PG_SIZE_PG_1G
+	u8	tim_pg_size_tim_lvl;
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_MASK      0xfUL
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_SFT       0
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_LVL_0       0x0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_LVL_1       0x1UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_LVL_2       0x2UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_LAST       FUNC_BACKING_STORE_CFG_REQ_TIM_LVL_LVL_2
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_MASK  0xf0UL
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_SFT   4
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_4K   (0x0UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_8K   (0x1UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_64K  (0x2UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_2M   (0x3UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_8M   (0x4UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_1G   (0x5UL << 4)
+	#define FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_LAST   FUNC_BACKING_STORE_CFG_REQ_TIM_PG_SIZE_PG_1G
+	__le64	qpc_page_dir;
+	__le64	srq_page_dir;
+	__le64	cq_page_dir;
+	__le64	vnic_page_dir;
+	__le64	stat_page_dir;
+	__le64	tqm_sp_page_dir;
+	__le64	tqm_ring0_page_dir;
+	__le64	tqm_ring1_page_dir;
+	__le64	tqm_ring2_page_dir;
+	__le64	tqm_ring3_page_dir;
+	__le64	tqm_ring4_page_dir;
+	__le64	tqm_ring5_page_dir;
+	__le64	tqm_ring6_page_dir;
+	__le64	tqm_ring7_page_dir;
+	__le64	mrav_page_dir;
+	__le64	tim_page_dir;
+	__le32	qp_num_entries;
+	__le32	srq_num_entries;
+	__le32	cq_num_entries;
+	__le32	stat_num_entries;
+	__le32	tqm_sp_num_entries;
+	__le32	tqm_ring0_num_entries;
+	__le32	tqm_ring1_num_entries;
+	__le32	tqm_ring2_num_entries;
+	__le32	tqm_ring3_num_entries;
+	__le32	tqm_ring4_num_entries;
+	__le32	tqm_ring5_num_entries;
+	__le32	tqm_ring6_num_entries;
+	__le32	tqm_ring7_num_entries;
+	__le32	mrav_num_entries;
+	__le32	tim_num_entries;
+	__le16	qp_num_qp1_entries;
+	__le16	qp_num_l2_entries;
+	__le16	qp_entry_size;
+	__le16	srq_num_l2_entries;
+	__le16	srq_entry_size;
+	__le16	cq_num_l2_entries;
+	__le16	cq_entry_size;
+	__le16	vnic_num_vnic_entries;
+	__le16	vnic_num_ring_table_entries;
+	__le16	vnic_entry_size;
+	__le16	stat_entry_size;
+	__le16	tqm_entry_size;
+	__le16	mrav_entry_size;
+	__le16	tim_entry_size;
+};
+
+/* hwrm_func_backing_store_cfg_output (size:128b/16B) */
+struct hwrm_func_backing_store_cfg_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	u8	unused_0[7];
+	u8	valid;
+};
+
+/* hwrm_func_drv_if_change_input (size:192b/24B) */
+struct hwrm_func_drv_if_change_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+	__le32	flags;
+	#define FUNC_DRV_IF_CHANGE_REQ_FLAGS_UP     0x1UL
+	__le32	unused;
+};
+
+/* hwrm_func_drv_if_change_output (size:128b/16B) */
+struct hwrm_func_drv_if_change_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	__le32	flags;
+	#define FUNC_DRV_IF_CHANGE_RESP_FLAGS_RESC_CHANGE     0x1UL
+	u8	unused_0[3];
+	u8	valid;
+};
+
 /* hwrm_port_phy_cfg_input (size:448b/56B) */
 struct hwrm_port_phy_cfg_input {
 	__le16	req_type;
@@ -1592,10 +1969,11 @@ struct hwrm_port_phy_cfg_input {
 	#define PORT_PHY_CFG_REQ_WIRESPEED_ON  0x1UL
 	#define PORT_PHY_CFG_REQ_WIRESPEED_LAST PORT_PHY_CFG_REQ_WIRESPEED_ON
 	u8	lpbk;
-	#define PORT_PHY_CFG_REQ_LPBK_NONE   0x0UL
-	#define PORT_PHY_CFG_REQ_LPBK_LOCAL  0x1UL
-	#define PORT_PHY_CFG_REQ_LPBK_REMOTE 0x2UL
-	#define PORT_PHY_CFG_REQ_LPBK_LAST  PORT_PHY_CFG_REQ_LPBK_REMOTE
+	#define PORT_PHY_CFG_REQ_LPBK_NONE     0x0UL
+	#define PORT_PHY_CFG_REQ_LPBK_LOCAL    0x1UL
+	#define PORT_PHY_CFG_REQ_LPBK_REMOTE   0x2UL
+	#define PORT_PHY_CFG_REQ_LPBK_EXTERNAL 0x3UL
+	#define PORT_PHY_CFG_REQ_LPBK_LAST    PORT_PHY_CFG_REQ_LPBK_EXTERNAL
 	u8	force_pause;
 	#define PORT_PHY_CFG_REQ_FORCE_PAUSE_TX     0x1UL
 	#define PORT_PHY_CFG_REQ_FORCE_PAUSE_RX     0x2UL
@@ -1751,10 +2129,11 @@ struct hwrm_port_phy_qcfg_output {
 	#define PORT_PHY_QCFG_RESP_WIRESPEED_ON  0x1UL
 	#define PORT_PHY_QCFG_RESP_WIRESPEED_LAST PORT_PHY_QCFG_RESP_WIRESPEED_ON
 	u8	lpbk;
-	#define PORT_PHY_QCFG_RESP_LPBK_NONE   0x0UL
-	#define PORT_PHY_QCFG_RESP_LPBK_LOCAL  0x1UL
-	#define PORT_PHY_QCFG_RESP_LPBK_REMOTE 0x2UL
-	#define PORT_PHY_QCFG_RESP_LPBK_LAST  PORT_PHY_QCFG_RESP_LPBK_REMOTE
+	#define PORT_PHY_QCFG_RESP_LPBK_NONE     0x0UL
+	#define PORT_PHY_QCFG_RESP_LPBK_LOCAL    0x1UL
+	#define PORT_PHY_QCFG_RESP_LPBK_REMOTE   0x2UL
+	#define PORT_PHY_QCFG_RESP_LPBK_EXTERNAL 0x3UL
+	#define PORT_PHY_QCFG_RESP_LPBK_LAST    PORT_PHY_QCFG_RESP_LPBK_EXTERNAL
 	u8	force_pause;
 	#define PORT_PHY_QCFG_RESP_FORCE_PAUSE_TX     0x1UL
 	#define PORT_PHY_QCFG_RESP_FORCE_PAUSE_RX     0x2UL
@@ -2014,6 +2393,131 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	u8	valid;
 };
 
+/* tx_port_stats (size:3264b/408B) */
+struct tx_port_stats {
+	__le64	tx_64b_frames;
+	__le64	tx_65b_127b_frames;
+	__le64	tx_128b_255b_frames;
+	__le64	tx_256b_511b_frames;
+	__le64	tx_512b_1023b_frames;
+	__le64	tx_1024b_1518b_frames;
+	__le64	tx_good_vlan_frames;
+	__le64	tx_1519b_2047b_frames;
+	__le64	tx_2048b_4095b_frames;
+	__le64	tx_4096b_9216b_frames;
+	__le64	tx_9217b_16383b_frames;
+	__le64	tx_good_frames;
+	__le64	tx_total_frames;
+	__le64	tx_ucast_frames;
+	__le64	tx_mcast_frames;
+	__le64	tx_bcast_frames;
+	__le64	tx_pause_frames;
+	__le64	tx_pfc_frames;
+	__le64	tx_jabber_frames;
+	__le64	tx_fcs_err_frames;
+	__le64	tx_control_frames;
+	__le64	tx_oversz_frames;
+	__le64	tx_single_dfrl_frames;
+	__le64	tx_multi_dfrl_frames;
+	__le64	tx_single_coll_frames;
+	__le64	tx_multi_coll_frames;
+	__le64	tx_late_coll_frames;
+	__le64	tx_excessive_coll_frames;
+	__le64	tx_frag_frames;
+	__le64	tx_err;
+	__le64	tx_tagged_frames;
+	__le64	tx_dbl_tagged_frames;
+	__le64	tx_runt_frames;
+	__le64	tx_fifo_underruns;
+	__le64	tx_pfc_ena_frames_pri0;
+	__le64	tx_pfc_ena_frames_pri1;
+	__le64	tx_pfc_ena_frames_pri2;
+	__le64	tx_pfc_ena_frames_pri3;
+	__le64	tx_pfc_ena_frames_pri4;
+	__le64	tx_pfc_ena_frames_pri5;
+	__le64	tx_pfc_ena_frames_pri6;
+	__le64	tx_pfc_ena_frames_pri7;
+	__le64	tx_eee_lpi_events;
+	__le64	tx_eee_lpi_duration;
+	__le64	tx_llfc_logical_msgs;
+	__le64	tx_hcfc_msgs;
+	__le64	tx_total_collisions;
+	__le64	tx_bytes;
+	__le64	tx_xthol_frames;
+	__le64	tx_stat_discard;
+	__le64	tx_stat_error;
+};
+
+/* rx_port_stats (size:4224b/528B) */
+struct rx_port_stats {
+	__le64	rx_64b_frames;
+	__le64	rx_65b_127b_frames;
+	__le64	rx_128b_255b_frames;
+	__le64	rx_256b_511b_frames;
+	__le64	rx_512b_1023b_frames;
+	__le64	rx_1024b_1518b_frames;
+	__le64	rx_good_vlan_frames;
+	__le64	rx_1519b_2047b_frames;
+	__le64	rx_2048b_4095b_frames;
+	__le64	rx_4096b_9216b_frames;
+	__le64	rx_9217b_16383b_frames;
+	__le64	rx_total_frames;
+	__le64	rx_ucast_frames;
+	__le64	rx_mcast_frames;
+	__le64	rx_bcast_frames;
+	__le64	rx_fcs_err_frames;
+	__le64	rx_ctrl_frames;
+	__le64	rx_pause_frames;
+	__le64	rx_pfc_frames;
+	__le64	rx_unsupported_opcode_frames;
+	__le64	rx_unsupported_da_pausepfc_frames;
+	__le64	rx_wrong_sa_frames;
+	__le64	rx_align_err_frames;
+	__le64	rx_oor_len_frames;
+	__le64	rx_code_err_frames;
+	__le64	rx_false_carrier_frames;
+	__le64	rx_ovrsz_frames;
+	__le64	rx_jbr_frames;
+	__le64	rx_mtu_err_frames;
+	__le64	rx_match_crc_frames;
+	__le64	rx_promiscuous_frames;
+	__le64	rx_tagged_frames;
+	__le64	rx_double_tagged_frames;
+	__le64	rx_trunc_frames;
+	__le64	rx_good_frames;
+	__le64	rx_pfc_xon2xoff_frames_pri0;
+	__le64	rx_pfc_xon2xoff_frames_pri1;
+	__le64	rx_pfc_xon2xoff_frames_pri2;
+	__le64	rx_pfc_xon2xoff_frames_pri3;
+	__le64	rx_pfc_xon2xoff_frames_pri4;
+	__le64	rx_pfc_xon2xoff_frames_pri5;
+	__le64	rx_pfc_xon2xoff_frames_pri6;
+	__le64	rx_pfc_xon2xoff_frames_pri7;
+	__le64	rx_pfc_ena_frames_pri0;
+	__le64	rx_pfc_ena_frames_pri1;
+	__le64	rx_pfc_ena_frames_pri2;
+	__le64	rx_pfc_ena_frames_pri3;
+	__le64	rx_pfc_ena_frames_pri4;
+	__le64	rx_pfc_ena_frames_pri5;
+	__le64	rx_pfc_ena_frames_pri6;
+	__le64	rx_pfc_ena_frames_pri7;
+	__le64	rx_sch_crc_err_frames;
+	__le64	rx_undrsz_frames;
+	__le64	rx_frag_frames;
+	__le64	rx_eee_lpi_events;
+	__le64	rx_eee_lpi_duration;
+	__le64	rx_llfc_physical_msgs;
+	__le64	rx_llfc_logical_msgs;
+	__le64	rx_llfc_msgs_with_crc_err;
+	__le64	rx_hcfc_msgs;
+	__le64	rx_hcfc_msgs_with_crc_err;
+	__le64	rx_bytes;
+	__le64	rx_runt_bytes;
+	__le64	rx_runt_frames;
+	__le64	rx_stat_discard;
+	__le64	rx_stat_err;
+};
+
 /* hwrm_port_qstats_input (size:320b/40B) */
 struct hwrm_port_qstats_input {
 	__le16	req_type;
@@ -2039,6 +2543,83 @@ struct hwrm_port_qstats_output {
 	u8	valid;
 };
 
+/* tx_port_stats_ext (size:2048b/256B) */
+struct tx_port_stats_ext {
+	__le64	tx_bytes_cos0;
+	__le64	tx_bytes_cos1;
+	__le64	tx_bytes_cos2;
+	__le64	tx_bytes_cos3;
+	__le64	tx_bytes_cos4;
+	__le64	tx_bytes_cos5;
+	__le64	tx_bytes_cos6;
+	__le64	tx_bytes_cos7;
+	__le64	tx_packets_cos0;
+	__le64	tx_packets_cos1;
+	__le64	tx_packets_cos2;
+	__le64	tx_packets_cos3;
+	__le64	tx_packets_cos4;
+	__le64	tx_packets_cos5;
+	__le64	tx_packets_cos6;
+	__le64	tx_packets_cos7;
+	__le64	pfc_pri0_tx_duration_us;
+	__le64	pfc_pri0_tx_transitions;
+	__le64	pfc_pri1_tx_duration_us;
+	__le64	pfc_pri1_tx_transitions;
+	__le64	pfc_pri2_tx_duration_us;
+	__le64	pfc_pri2_tx_transitions;
+	__le64	pfc_pri3_tx_duration_us;
+	__le64	pfc_pri3_tx_transitions;
+	__le64	pfc_pri4_tx_duration_us;
+	__le64	pfc_pri4_tx_transitions;
+	__le64	pfc_pri5_tx_duration_us;
+	__le64	pfc_pri5_tx_transitions;
+	__le64	pfc_pri6_tx_duration_us;
+	__le64	pfc_pri6_tx_transitions;
+	__le64	pfc_pri7_tx_duration_us;
+	__le64	pfc_pri7_tx_transitions;
+};
+
+/* rx_port_stats_ext (size:2368b/296B) */
+struct rx_port_stats_ext {
+	__le64	link_down_events;
+	__le64	continuous_pause_events;
+	__le64	resume_pause_events;
+	__le64	continuous_roce_pause_events;
+	__le64	resume_roce_pause_events;
+	__le64	rx_bytes_cos0;
+	__le64	rx_bytes_cos1;
+	__le64	rx_bytes_cos2;
+	__le64	rx_bytes_cos3;
+	__le64	rx_bytes_cos4;
+	__le64	rx_bytes_cos5;
+	__le64	rx_bytes_cos6;
+	__le64	rx_bytes_cos7;
+	__le64	rx_packets_cos0;
+	__le64	rx_packets_cos1;
+	__le64	rx_packets_cos2;
+	__le64	rx_packets_cos3;
+	__le64	rx_packets_cos4;
+	__le64	rx_packets_cos5;
+	__le64	rx_packets_cos6;
+	__le64	rx_packets_cos7;
+	__le64	pfc_pri0_rx_duration_us;
+	__le64	pfc_pri0_rx_transitions;
+	__le64	pfc_pri1_rx_duration_us;
+	__le64	pfc_pri1_rx_transitions;
+	__le64	pfc_pri2_rx_duration_us;
+	__le64	pfc_pri2_rx_transitions;
+	__le64	pfc_pri3_rx_duration_us;
+	__le64	pfc_pri3_rx_transitions;
+	__le64	pfc_pri4_rx_duration_us;
+	__le64	pfc_pri4_rx_transitions;
+	__le64	pfc_pri5_rx_duration_us;
+	__le64	pfc_pri5_rx_transitions;
+	__le64	pfc_pri6_rx_duration_us;
+	__le64	pfc_pri6_rx_transitions;
+	__le64	pfc_pri7_rx_duration_us;
+	__le64	pfc_pri7_rx_transitions;
+};
+
 /* hwrm_port_qstats_ext_input (size:320b/40B) */
 struct hwrm_port_qstats_ext_input {
 	__le16	req_type;
@@ -2062,7 +2643,8 @@ struct hwrm_port_qstats_ext_output {
 	__le16	resp_len;
 	__le16	tx_stat_size;
 	__le16	rx_stat_size;
-	u8	unused_0[3];
+	__le16	total_active_cos_queues;
+	u8	unused_0;
 	u8	valid;
 };
 
@@ -2153,9 +2735,10 @@ struct hwrm_port_phy_qcaps_output {
 	__le16	seq_id;
 	__le16	resp_len;
 	u8	flags;
-	#define PORT_PHY_QCAPS_RESP_FLAGS_EEE_SUPPORTED     0x1UL
-	#define PORT_PHY_QCAPS_RESP_FLAGS_RSVD1_MASK        0xfeUL
-	#define PORT_PHY_QCAPS_RESP_FLAGS_RSVD1_SFT         1
+	#define PORT_PHY_QCAPS_RESP_FLAGS_EEE_SUPPORTED               0x1UL
+	#define PORT_PHY_QCAPS_RESP_FLAGS_EXTERNAL_LPBK_SUPPORTED     0x2UL
+	#define PORT_PHY_QCAPS_RESP_FLAGS_RSVD1_MASK                  0xfcUL
+	#define PORT_PHY_QCAPS_RESP_FLAGS_RSVD1_SFT                   2
 	u8	port_cnt;
 	#define PORT_PHY_QCAPS_RESP_PORT_CNT_UNKNOWN 0x0UL
 	#define PORT_PHY_QCAPS_RESP_PORT_CNT_1       0x1UL
@@ -2612,6 +3195,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id0;
 	u8	queue_id0_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2620,6 +3204,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id1;
 	u8	queue_id1_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID1_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID1_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID1_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID1_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID1_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2628,6 +3213,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id2;
 	u8	queue_id2_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID2_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID2_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID2_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID2_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID2_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2636,6 +3222,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id3;
 	u8	queue_id3_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID3_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID3_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID3_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID3_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID3_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2644,6 +3231,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id4;
 	u8	queue_id4_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID4_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID4_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID4_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID4_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID4_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2652,6 +3240,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id5;
 	u8	queue_id5_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID5_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID5_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID5_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID5_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID5_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2660,6 +3249,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id6;
 	u8	queue_id6_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID6_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID6_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID6_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID6_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID6_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -2668,6 +3258,7 @@ struct hwrm_queue_qportcfg_output {
 	u8	queue_id7;
 	u8	queue_id7_service_profile;
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID7_SERVICE_PROFILE_LOSSY          0x0UL
+	#define QUEUE_QPORTCFG_RESP_QUEUE_ID7_SERVICE_PROFILE_LOSSLESS       0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID7_SERVICE_PROFILE_LOSSLESS_ROCE  0x1UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID7_SERVICE_PROFILE_LOSSY_ROCE_CNP 0x2UL
 	#define QUEUE_QPORTCFG_RESP_QUEUE_ID7_SERVICE_PROFILE_LOSSLESS_NIC   0x3UL
@@ -3689,18 +4280,21 @@ struct hwrm_vnic_cfg_input {
 	#define VNIC_CFG_REQ_FLAGS_RSS_DFLT_CR_MODE                     0x20UL
 	#define VNIC_CFG_REQ_FLAGS_ROCE_MIRRORING_CAPABLE_VNIC_MODE     0x40UL
 	__le32	enables;
-	#define VNIC_CFG_REQ_ENABLES_DFLT_RING_GRP     0x1UL
-	#define VNIC_CFG_REQ_ENABLES_RSS_RULE          0x2UL
-	#define VNIC_CFG_REQ_ENABLES_COS_RULE          0x4UL
-	#define VNIC_CFG_REQ_ENABLES_LB_RULE           0x8UL
-	#define VNIC_CFG_REQ_ENABLES_MRU               0x10UL
+	#define VNIC_CFG_REQ_ENABLES_DFLT_RING_GRP            0x1UL
+	#define VNIC_CFG_REQ_ENABLES_RSS_RULE                 0x2UL
+	#define VNIC_CFG_REQ_ENABLES_COS_RULE                 0x4UL
+	#define VNIC_CFG_REQ_ENABLES_LB_RULE                  0x8UL
+	#define VNIC_CFG_REQ_ENABLES_MRU                      0x10UL
+	#define VNIC_CFG_REQ_ENABLES_DEFAULT_RX_RING_ID       0x20UL
+	#define VNIC_CFG_REQ_ENABLES_DEFAULT_CMPL_RING_ID     0x40UL
 	__le16	vnic_id;
 	__le16	dflt_ring_grp;
 	__le16	rss_rule;
 	__le16	cos_rule;
 	__le16	lb_rule;
 	__le16	mru;
-	u8	unused_0[4];
+	__le16	default_rx_ring_id;
+	__le16	default_cmpl_ring_id;
 };
 
 /* hwrm_vnic_cfg_output (size:128b/16B) */
@@ -3740,6 +4334,7 @@ struct hwrm_vnic_qcaps_output {
 	#define VNIC_QCAPS_RESP_FLAGS_ROCE_ONLY_VNIC_CAP                  0x10UL
 	#define VNIC_QCAPS_RESP_FLAGS_RSS_DFLT_CR_CAP                     0x20UL
 	#define VNIC_QCAPS_RESP_FLAGS_ROCE_MIRRORING_CAPABLE_VNIC_CAP     0x40UL
+	#define VNIC_QCAPS_RESP_FLAGS_OUTERMOST_RSS_CAP                   0x80UL
 	u8	unused_1[7];
 	u8	valid;
 };
@@ -3857,7 +4452,14 @@ struct hwrm_vnic_rss_cfg_input {
 	#define VNIC_RSS_CFG_REQ_HASH_TYPE_IPV6         0x8UL
 	#define VNIC_RSS_CFG_REQ_HASH_TYPE_TCP_IPV6     0x10UL
 	#define VNIC_RSS_CFG_REQ_HASH_TYPE_UDP_IPV6     0x20UL
-	u8	unused_0[4];
+	__le16	vnic_id;
+	u8	ring_table_pair_index;
+	u8	hash_mode_flags;
+	#define VNIC_RSS_CFG_REQ_HASH_MODE_FLAGS_DEFAULT         0x1UL
+	#define VNIC_RSS_CFG_REQ_HASH_MODE_FLAGS_INNERMOST_4     0x2UL
+	#define VNIC_RSS_CFG_REQ_HASH_MODE_FLAGS_INNERMOST_2     0x4UL
+	#define VNIC_RSS_CFG_REQ_HASH_MODE_FLAGS_OUTERMOST_4     0x8UL
+	#define VNIC_RSS_CFG_REQ_HASH_MODE_FLAGS_OUTERMOST_2     0x10UL
 	__le64	ring_grp_tbl_addr;
 	__le64	hash_key_tbl_addr;
 	__le16	rss_ctx_idx;
@@ -3950,7 +4552,7 @@ struct hwrm_vnic_rss_cos_lb_ctx_free_output {
 	u8	valid;
 };
 
-/* hwrm_ring_alloc_input (size:640b/80B) */
+/* hwrm_ring_alloc_input (size:704b/88B) */
 struct hwrm_ring_alloc_input {
 	__le16	req_type;
 	__le16	cmpl_ring;
@@ -3961,12 +4563,17 @@ struct hwrm_ring_alloc_input {
 	#define RING_ALLOC_REQ_ENABLES_RING_ARB_CFG          0x2UL
 	#define RING_ALLOC_REQ_ENABLES_STAT_CTX_ID_VALID     0x8UL
 	#define RING_ALLOC_REQ_ENABLES_MAX_BW_VALID          0x20UL
+	#define RING_ALLOC_REQ_ENABLES_RX_RING_ID_VALID      0x40UL
+	#define RING_ALLOC_REQ_ENABLES_NQ_RING_ID_VALID      0x80UL
+	#define RING_ALLOC_REQ_ENABLES_RX_BUF_SIZE_VALID     0x100UL
 	u8	ring_type;
 	#define RING_ALLOC_REQ_RING_TYPE_L2_CMPL   0x0UL
 	#define RING_ALLOC_REQ_RING_TYPE_TX        0x1UL
 	#define RING_ALLOC_REQ_RING_TYPE_RX        0x2UL
 	#define RING_ALLOC_REQ_RING_TYPE_ROCE_CMPL 0x3UL
-	#define RING_ALLOC_REQ_RING_TYPE_LAST     RING_ALLOC_REQ_RING_TYPE_ROCE_CMPL
+	#define RING_ALLOC_REQ_RING_TYPE_RX_AGG    0x4UL
+	#define RING_ALLOC_REQ_RING_TYPE_NQ        0x5UL
+	#define RING_ALLOC_REQ_RING_TYPE_LAST     RING_ALLOC_REQ_RING_TYPE_NQ
 	u8	unused_0[3];
 	__le64	page_tbl_addr;
 	__le32	fbo;
@@ -3977,8 +4584,9 @@ struct hwrm_ring_alloc_input {
 	__le16	logical_id;
 	__le16	cmpl_ring_id;
 	__le16	queue_id;
-	u8	unused_2[2];
-	__le32	reserved1;
+	__le16	rx_buf_size;
+	__le16	rx_ring_id;
+	__le16	nq_ring_id;
 	__le16	ring_arb_cfg;
 	#define RING_ALLOC_REQ_RING_ARB_CFG_ARB_POLICY_MASK      0xfUL
 	#define RING_ALLOC_REQ_RING_ARB_CFG_ARB_POLICY_SFT       0
@@ -4016,6 +4624,7 @@ struct hwrm_ring_alloc_input {
 	#define RING_ALLOC_REQ_INT_MODE_POLL   0x3UL
 	#define RING_ALLOC_REQ_INT_MODE_LAST  RING_ALLOC_REQ_INT_MODE_POLL
 	u8	unused_4[3];
+	__le64	cq_handle;
 };
 
 /* hwrm_ring_alloc_output (size:128b/16B) */
@@ -4042,7 +4651,9 @@ struct hwrm_ring_free_input {
 	#define RING_FREE_REQ_RING_TYPE_TX        0x1UL
 	#define RING_FREE_REQ_RING_TYPE_RX        0x2UL
 	#define RING_FREE_REQ_RING_TYPE_ROCE_CMPL 0x3UL
-	#define RING_FREE_REQ_RING_TYPE_LAST     RING_FREE_REQ_RING_TYPE_ROCE_CMPL
+	#define RING_FREE_REQ_RING_TYPE_RX_AGG    0x4UL
+	#define RING_FREE_REQ_RING_TYPE_NQ        0x5UL
+	#define RING_FREE_REQ_RING_TYPE_LAST     RING_FREE_REQ_RING_TYPE_NQ
 	u8	unused_0;
 	__le16	ring_id;
 	u8	unused_1[4];
@@ -4058,6 +4669,52 @@ struct hwrm_ring_free_output {
 	u8	valid;
 };
 
+/* hwrm_ring_aggint_qcaps_input (size:128b/16B) */
+struct hwrm_ring_aggint_qcaps_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+};
+
+/* hwrm_ring_aggint_qcaps_output (size:384b/48B) */
+struct hwrm_ring_aggint_qcaps_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	__le32	cmpl_params;
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_INT_LAT_TMR_MIN                  0x1UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_INT_LAT_TMR_MAX                  0x2UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_TIMER_RESET                      0x4UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_RING_IDLE                        0x8UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_NUM_CMPL_DMA_AGGR                0x10UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_NUM_CMPL_DMA_AGGR_DURING_INT     0x20UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_CMPL_AGGR_DMA_TMR                0x40UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_CMPL_AGGR_DMA_TMR_DURING_INT     0x80UL
+	#define RING_AGGINT_QCAPS_RESP_CMPL_PARAMS_NUM_CMPL_AGGR_INT                0x100UL
+	__le32	nq_params;
+	#define RING_AGGINT_QCAPS_RESP_NQ_PARAMS_INT_LAT_TMR_MIN     0x1UL
+	__le16	num_cmpl_dma_aggr_min;
+	__le16	num_cmpl_dma_aggr_max;
+	__le16	num_cmpl_dma_aggr_during_int_min;
+	__le16	num_cmpl_dma_aggr_during_int_max;
+	__le16	cmpl_aggr_dma_tmr_min;
+	__le16	cmpl_aggr_dma_tmr_max;
+	__le16	cmpl_aggr_dma_tmr_during_int_min;
+	__le16	cmpl_aggr_dma_tmr_during_int_max;
+	__le16	int_lat_tmr_min_min;
+	__le16	int_lat_tmr_min_max;
+	__le16	int_lat_tmr_max_min;
+	__le16	int_lat_tmr_max_max;
+	__le16	num_cmpl_aggr_int_min;
+	__le16	num_cmpl_aggr_int_max;
+	__le16	timer_units;
+	u8	unused_0[1];
+	u8	valid;
+};
+
 /* hwrm_ring_cmpl_ring_qaggint_params_input (size:192b/24B) */
 struct hwrm_ring_cmpl_ring_qaggint_params_input {
 	__le16	req_type;
@@ -4100,6 +4757,7 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	__le16	flags;
 	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_FLAGS_TIMER_RESET     0x1UL
 	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_FLAGS_RING_IDLE       0x2UL
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_FLAGS_IS_NQ           0x4UL
 	__le16	num_cmpl_dma_aggr;
 	__le16	num_cmpl_dma_aggr_during_int;
 	__le16	cmpl_aggr_dma_tmr;
@@ -4107,7 +4765,14 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	__le16	int_lat_tmr_min;
 	__le16	int_lat_tmr_max;
 	__le16	num_cmpl_aggr_int;
-	u8	unused_0[6];
+	__le16	enables;
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_ENABLES_NUM_CMPL_DMA_AGGR                0x1UL
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_ENABLES_NUM_CMPL_DMA_AGGR_DURING_INT     0x2UL
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_ENABLES_CMPL_AGGR_DMA_TMR                0x4UL
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_ENABLES_INT_LAT_TMR_MIN                  0x8UL
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_ENABLES_INT_LAT_TMR_MAX                  0x10UL
+	#define RING_CMPL_RING_CFG_AGGINT_PARAMS_REQ_ENABLES_NUM_CMPL_AGGR_INT                0x20UL
+	u8	unused_0[4];
 };
 
 /* hwrm_ring_cmpl_ring_cfg_aggint_params_output (size:128b/16B) */
@@ -4120,34 +4785,6 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_output {
 	u8	valid;
 };
 
-/* hwrm_ring_reset_input (size:192b/24B) */
-struct hwrm_ring_reset_input {
-	__le16	req_type;
-	__le16	cmpl_ring;
-	__le16	seq_id;
-	__le16	target_id;
-	__le64	resp_addr;
-	u8	ring_type;
-	#define RING_RESET_REQ_RING_TYPE_L2_CMPL   0x0UL
-	#define RING_RESET_REQ_RING_TYPE_TX        0x1UL
-	#define RING_RESET_REQ_RING_TYPE_RX        0x2UL
-	#define RING_RESET_REQ_RING_TYPE_ROCE_CMPL 0x3UL
-	#define RING_RESET_REQ_RING_TYPE_LAST     RING_RESET_REQ_RING_TYPE_ROCE_CMPL
-	u8	unused_0;
-	__le16	ring_id;
-	u8	unused_1[4];
-};
-
-/* hwrm_ring_reset_output (size:128b/16B) */
-struct hwrm_ring_reset_output {
-	__le16	error_code;
-	__le16	req_type;
-	__le16	seq_id;
-	__le16	resp_len;
-	u8	unused_0[7];
-	u8	valid;
-};
-
 /* hwrm_ring_grp_alloc_input (size:192b/24B) */
 struct hwrm_ring_grp_alloc_input {
 	__le16	req_type;
@@ -5032,7 +5669,8 @@ struct hwrm_tunnel_dst_port_query_input {
 	#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN    0x1UL
 	#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GENEVE   0x5UL
 	#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL
-	#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST    TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_V4
+	#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
+	#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST    TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1
 	u8	unused_0[7];
 };
 
@@ -5059,7 +5697,8 @@ struct hwrm_tunnel_dst_port_alloc_input {
 	#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN    0x1UL
 	#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE   0x5UL
 	#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL
-	#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST    TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4
+	#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
+	#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST    TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1
 	u8	unused_0;
 	__be16	tunnel_dst_port_val;
 	u8	unused_1[4];
@@ -5087,7 +5726,8 @@ struct hwrm_tunnel_dst_port_free_input {
 	#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN    0x1UL
 	#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE   0x5UL
 	#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL
-	#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST    TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_V4
+	#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
+	#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST    TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1
 	u8	unused_0;
 	__le16	tunnel_dst_port_id;
 	u8	unused_1[4];
@@ -5259,140 +5899,6 @@ struct hwrm_pcie_qstats_output {
 	u8	valid;
 };
 
-/* tx_port_stats (size:3264b/408B) */
-struct tx_port_stats {
-	__le64	tx_64b_frames;
-	__le64	tx_65b_127b_frames;
-	__le64	tx_128b_255b_frames;
-	__le64	tx_256b_511b_frames;
-	__le64	tx_512b_1023b_frames;
-	__le64	tx_1024b_1518_frames;
-	__le64	tx_good_vlan_frames;
-	__le64	tx_1519b_2047_frames;
-	__le64	tx_2048b_4095b_frames;
-	__le64	tx_4096b_9216b_frames;
-	__le64	tx_9217b_16383b_frames;
-	__le64	tx_good_frames;
-	__le64	tx_total_frames;
-	__le64	tx_ucast_frames;
-	__le64	tx_mcast_frames;
-	__le64	tx_bcast_frames;
-	__le64	tx_pause_frames;
-	__le64	tx_pfc_frames;
-	__le64	tx_jabber_frames;
-	__le64	tx_fcs_err_frames;
-	__le64	tx_control_frames;
-	__le64	tx_oversz_frames;
-	__le64	tx_single_dfrl_frames;
-	__le64	tx_multi_dfrl_frames;
-	__le64	tx_single_coll_frames;
-	__le64	tx_multi_coll_frames;
-	__le64	tx_late_coll_frames;
-	__le64	tx_excessive_coll_frames;
-	__le64	tx_frag_frames;
-	__le64	tx_err;
-	__le64	tx_tagged_frames;
-	__le64	tx_dbl_tagged_frames;
-	__le64	tx_runt_frames;
-	__le64	tx_fifo_underruns;
-	__le64	tx_pfc_ena_frames_pri0;
-	__le64	tx_pfc_ena_frames_pri1;
-	__le64	tx_pfc_ena_frames_pri2;
-	__le64	tx_pfc_ena_frames_pri3;
-	__le64	tx_pfc_ena_frames_pri4;
-	__le64	tx_pfc_ena_frames_pri5;
-	__le64	tx_pfc_ena_frames_pri6;
-	__le64	tx_pfc_ena_frames_pri7;
-	__le64	tx_eee_lpi_events;
-	__le64	tx_eee_lpi_duration;
-	__le64	tx_llfc_logical_msgs;
-	__le64	tx_hcfc_msgs;
-	__le64	tx_total_collisions;
-	__le64	tx_bytes;
-	__le64	tx_xthol_frames;
-	__le64	tx_stat_discard;
-	__le64	tx_stat_error;
-};
-
-/* rx_port_stats (size:4224b/528B) */
-struct rx_port_stats {
-	__le64	rx_64b_frames;
-	__le64	rx_65b_127b_frames;
-	__le64	rx_128b_255b_frames;
-	__le64	rx_256b_511b_frames;
-	__le64	rx_512b_1023b_frames;
-	__le64	rx_1024b_1518_frames;
-	__le64	rx_good_vlan_frames;
-	__le64	rx_1519b_2047b_frames;
-	__le64	rx_2048b_4095b_frames;
-	__le64	rx_4096b_9216b_frames;
-	__le64	rx_9217b_16383b_frames;
-	__le64	rx_total_frames;
-	__le64	rx_ucast_frames;
-	__le64	rx_mcast_frames;
-	__le64	rx_bcast_frames;
-	__le64	rx_fcs_err_frames;
-	__le64	rx_ctrl_frames;
-	__le64	rx_pause_frames;
-	__le64	rx_pfc_frames;
-	__le64	rx_unsupported_opcode_frames;
-	__le64	rx_unsupported_da_pausepfc_frames;
-	__le64	rx_wrong_sa_frames;
-	__le64	rx_align_err_frames;
-	__le64	rx_oor_len_frames;
-	__le64	rx_code_err_frames;
-	__le64	rx_false_carrier_frames;
-	__le64	rx_ovrsz_frames;
-	__le64	rx_jbr_frames;
-	__le64	rx_mtu_err_frames;
-	__le64	rx_match_crc_frames;
-	__le64	rx_promiscuous_frames;
-	__le64	rx_tagged_frames;
-	__le64	rx_double_tagged_frames;
-	__le64	rx_trunc_frames;
-	__le64	rx_good_frames;
-	__le64	rx_pfc_xon2xoff_frames_pri0;
-	__le64	rx_pfc_xon2xoff_frames_pri1;
-	__le64	rx_pfc_xon2xoff_frames_pri2;
-	__le64	rx_pfc_xon2xoff_frames_pri3;
-	__le64	rx_pfc_xon2xoff_frames_pri4;
-	__le64	rx_pfc_xon2xoff_frames_pri5;
-	__le64	rx_pfc_xon2xoff_frames_pri6;
-	__le64	rx_pfc_xon2xoff_frames_pri7;
-	__le64	rx_pfc_ena_frames_pri0;
-	__le64	rx_pfc_ena_frames_pri1;
-	__le64	rx_pfc_ena_frames_pri2;
-	__le64	rx_pfc_ena_frames_pri3;
-	__le64	rx_pfc_ena_frames_pri4;
-	__le64	rx_pfc_ena_frames_pri5;
-	__le64	rx_pfc_ena_frames_pri6;
-	__le64	rx_pfc_ena_frames_pri7;
-	__le64	rx_sch_crc_err_frames;
-	__le64	rx_undrsz_frames;
-	__le64	rx_frag_frames;
-	__le64	rx_eee_lpi_events;
-	__le64	rx_eee_lpi_duration;
-	__le64	rx_llfc_physical_msgs;
-	__le64	rx_llfc_logical_msgs;
-	__le64	rx_llfc_msgs_with_crc_err;
-	__le64	rx_hcfc_msgs;
-	__le64	rx_hcfc_msgs_with_crc_err;
-	__le64	rx_bytes;
-	__le64	rx_runt_bytes;
-	__le64	rx_runt_frames;
-	__le64	rx_stat_discard;
-	__le64	rx_stat_err;
-};
-
-/* rx_port_stats_ext (size:320b/40B) */
-struct rx_port_stats_ext {
-	__le64	link_down_events;
-	__le64	continuous_pause_events;
-	__le64	resume_pause_events;
-	__le64	continuous_roce_pause_events;
-	__le64	resume_roce_pause_events;
-};
-
 /* pcie_ctx_hw_stats (size:768b/96B) */
 struct pcie_ctx_hw_stats {
 	__le64	pcie_pl_signal_integrity;
@@ -5884,6 +6390,114 @@ struct hwrm_wol_reason_qcfg_output {
 	u8	valid;
 };
 
+/* coredump_segment_record (size:128b/16B) */
+struct coredump_segment_record {
+	__le16	component_id;
+	__le16	segment_id;
+	__le16	max_instances;
+	u8	version_hi;
+	u8	version_low;
+	u8	seg_flags;
+	u8	unused_0[7];
+};
+
+/* hwrm_dbg_coredump_list_input (size:256b/32B) */
+struct hwrm_dbg_coredump_list_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+	__le64	host_dest_addr;
+	__le32	host_buf_len;
+	__le16	seq_no;
+	u8	unused_0[2];
+};
+
+/* hwrm_dbg_coredump_list_output (size:128b/16B) */
+struct hwrm_dbg_coredump_list_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	u8	flags;
+	#define DBG_COREDUMP_LIST_RESP_FLAGS_MORE     0x1UL
+	u8	unused_0;
+	__le16	total_segments;
+	__le16	data_len;
+	u8	unused_1;
+	u8	valid;
+};
+
+/* hwrm_dbg_coredump_initiate_input (size:256b/32B) */
+struct hwrm_dbg_coredump_initiate_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+	__le16	component_id;
+	__le16	segment_id;
+	__le16	instance;
+	__le16	unused_0;
+	u8	seg_flags;
+	u8	unused_1[7];
+};
+
+/* hwrm_dbg_coredump_initiate_output (size:128b/16B) */
+struct hwrm_dbg_coredump_initiate_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	u8	unused_0[7];
+	u8	valid;
+};
+
+/* coredump_data_hdr (size:128b/16B) */
+struct coredump_data_hdr {
+	__le32	address;
+	__le32	flags_length;
+	__le32	instance;
+	__le32	next_offset;
+};
+
+/* hwrm_dbg_coredump_retrieve_input (size:448b/56B) */
+struct hwrm_dbg_coredump_retrieve_input {
+	__le16	req_type;
+	__le16	cmpl_ring;
+	__le16	seq_id;
+	__le16	target_id;
+	__le64	resp_addr;
+	__le64	host_dest_addr;
+	__le32	host_buf_len;
+	__le32	unused_0;
+	__le16	component_id;
+	__le16	segment_id;
+	__le16	instance;
+	__le16	unused_1;
+	u8	seg_flags;
+	u8	unused_2;
+	__le16	unused_3;
+	__le32	unused_4;
+	__le32	seq_no;
+	__le32	unused_5;
+};
+
+/* hwrm_dbg_coredump_retrieve_output (size:128b/16B) */
+struct hwrm_dbg_coredump_retrieve_output {
+	__le16	error_code;
+	__le16	req_type;
+	__le16	seq_id;
+	__le16	resp_len;
+	u8	flags;
+	#define DBG_COREDUMP_RETRIEVE_RESP_FLAGS_MORE     0x1UL
+	u8	unused_0;
+	__le16	data_len;
+	u8	unused_1[3];
+	u8	valid;
+};
+
 /* hwrm_nvm_read_input (size:320b/40B) */
 struct hwrm_nvm_read_input {
 	__le16	req_type;
@@ -6201,19 +6815,6 @@ struct hwrm_nvm_install_update_cmd_err {
 	u8	unused_0[7];
 };
 
-struct hwrm_nvm_variable_input {
-	__le16	req_type;
-	__le16	cmpl_ring;
-	__le16	seq_id;
-	__le16	target_id;
-	__le64	resp_addr;
-	__le64	data_addr;
-	__le16	data_len;
-	__le16	option_num;
-	__le16	dimensions;
-	__le16	index_0;
-};
-
 /* hwrm_nvm_get_variable_input (size:320b/40B) */
 struct hwrm_nvm_get_variable_input {
 	__le16	req_type;
@@ -6282,12 +6883,14 @@ struct hwrm_nvm_set_variable_input {
 	__le16	index_2;
 	__le16	index_3;
 	u8	flags;
-	#define NVM_SET_VARIABLE_REQ_FLAGS_FORCE_FLUSH           0x1UL
-	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_MASK     0xeUL
-	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_SFT      1
-	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_NONE       (0x0UL << 1)
-	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_HMAC_SHA1  (0x1UL << 1)
-	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_LAST      NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_HMAC_SHA1
+	#define NVM_SET_VARIABLE_REQ_FLAGS_FORCE_FLUSH                0x1UL
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_MASK          0xeUL
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_SFT           1
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_NONE            (0x0UL << 1)
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_HMAC_SHA1       (0x1UL << 1)
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_AES256          (0x2UL << 1)
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_HMAC_SHA1_AUTH  (0x3UL << 1)
+	#define NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_LAST           NVM_SET_VARIABLE_REQ_FLAGS_ENCRYPT_MODE_HMAC_SHA1_AUTH
 	u8	unused_0;
 };
 
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 02/13] bnxt_en: Adjust timer based on ethtool stats-block-usecs settings.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 01/13] bnxt_en: Update firmware interface version to 1.9.2.25 Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 03/13] bnxt_en: Add external loopback test to ethtool selftest Michael Chan
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

The driver gathers statistics using two mechanisms.  Some stats are
DMA'ed directly from hardware and others are polled from the driver's
timer.  Currently, we only adjust the DMA frequency based on the
ethtool stats-block-usecs setting.  This patch adjusts the driver's
timer frequency as well to make everything consistent.
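
As a usage sketch (eth0 is just a placeholder interface name), the
stats block refresh, and now the driver timer with it, can be slowed
to once per second with:

  ethtool -C eth0 stats-block-usecs 1000000

The timer interval is the same value converted from microseconds to
jiffies (ticks * HZ / 1000000), falling back to the default
BNXT_TIMER_INTERVAL when stats-block-usecs is 0.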

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 3d40e49..1f626af 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -112,6 +112,11 @@ static int bnxt_set_coalesce(struct net_device *dev,
 					      BNXT_MAX_STATS_COAL_TICKS);
 		stats_ticks = rounddown(stats_ticks, BNXT_MIN_STATS_COAL_TICKS);
 		bp->stats_coal_ticks = stats_ticks;
+		if (bp->stats_coal_ticks)
+			bp->current_interval =
+				bp->stats_coal_ticks * HZ / 1000000;
+		else
+			bp->current_interval = BNXT_TIMER_INTERVAL;
 		update_stats = true;
 	}
 
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 03/13] bnxt_en: Add external loopback test to ethtool selftest.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 01/13] bnxt_en: Update firmware interface version to 1.9.2.25 Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 02/13] bnxt_en: Adjust timer based on ethtool stats-block-usecs settings Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 04/13] bnxt_en: Add PHY retry logic Michael Chan
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

Add code to detect firmware support for external loopback and add the
extra selftest entry for external loopback.
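
As a usage sketch (eth0 is a placeholder name, and an external
loopback module or cable is assumed to be plugged into the port), the
new test is selected through the existing ethtool selftest interface:

  ethtool -t eth0 external_lb

Without the external_lb argument the flag is not passed down and only
the existing tests run, so current behavior is unchanged.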

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c         |  4 +++
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |  2 ++
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 32 ++++++++++++++++++-----
 3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index c612d74..d9fc905 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -6337,6 +6337,10 @@ static int bnxt_hwrm_phy_qcaps(struct bnxt *bp)
 		bp->lpi_tmr_hi = le32_to_cpu(resp->valid_tx_lpi_timer_high) &
 				 PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_HIGH_MASK;
 	}
+	if (resp->flags & PORT_PHY_QCAPS_RESP_FLAGS_EXTERNAL_LPBK_SUPPORTED) {
+		if (bp->test_info)
+			bp->test_info->flags |= BNXT_TEST_FL_EXT_LPBK;
+	}
 	if (resp->supported_speeds_auto_mode)
 		link_info->support_auto_speeds =
 			le16_to_cpu(resp->supported_speeds_auto_mode);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 3b5a55c..0d49fe0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -990,6 +990,8 @@ struct bnxt_led_info {
 
 struct bnxt_test_info {
 	u8 offline_mask;
+	u8 flags;
+#define BNXT_TEST_FL_EXT_LPBK	0x1
 	u16 timeout;
 	char string[BNXT_MAX_TEST][ETH_GSTRING_LEN];
 };
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 1f626af..9517633 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -2397,7 +2397,7 @@ static int bnxt_disable_an_for_lpbk(struct bnxt *bp,
 	return rc;
 }
 
-static int bnxt_hwrm_phy_loopback(struct bnxt *bp, bool enable)
+static int bnxt_hwrm_phy_loopback(struct bnxt *bp, bool enable, bool ext)
 {
 	struct hwrm_port_phy_cfg_input req = {0};
 
@@ -2405,7 +2405,10 @@ static int bnxt_hwrm_phy_loopback(struct bnxt *bp, bool enable)
 
 	if (enable) {
 		bnxt_disable_an_for_lpbk(bp, &req);
-		req.lpbk = PORT_PHY_CFG_REQ_LPBK_LOCAL;
+		if (ext)
+			req.lpbk = PORT_PHY_CFG_REQ_LPBK_EXTERNAL;
+		else
+			req.lpbk = PORT_PHY_CFG_REQ_LPBK_LOCAL;
 	} else {
 		req.lpbk = PORT_PHY_CFG_REQ_LPBK_NONE;
 	}
@@ -2538,15 +2541,17 @@ static int bnxt_run_fw_tests(struct bnxt *bp, u8 test_mask, u8 *test_results)
 	return rc;
 }
 
-#define BNXT_DRV_TESTS			3
+#define BNXT_DRV_TESTS			4
 #define BNXT_MACLPBK_TEST_IDX		(bp->num_tests - BNXT_DRV_TESTS)
 #define BNXT_PHYLPBK_TEST_IDX		(BNXT_MACLPBK_TEST_IDX + 1)
-#define BNXT_IRQ_TEST_IDX		(BNXT_MACLPBK_TEST_IDX + 2)
+#define BNXT_EXTLPBK_TEST_IDX		(BNXT_MACLPBK_TEST_IDX + 2)
+#define BNXT_IRQ_TEST_IDX		(BNXT_MACLPBK_TEST_IDX + 3)
 
 static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
 			   u64 *buf)
 {
 	struct bnxt *bp = netdev_priv(dev);
+	bool do_ext_lpbk = false;
 	bool offline = false;
 	u8 test_results = 0;
 	u8 test_mask = 0;
@@ -2560,6 +2565,10 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
 		return;
 	}
 
+	if ((etest->flags & ETH_TEST_FL_EXTERNAL_LB) &&
+	    (bp->test_info->flags & BNXT_TEST_FL_EXT_LPBK))
+		do_ext_lpbk = true;
+
 	if (etest->flags & ETH_TEST_FL_OFFLINE) {
 		if (bp->pf.active_vfs) {
 			etest->flags |= ETH_TEST_FL_FAILED;
@@ -2600,13 +2609,22 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
 			buf[BNXT_MACLPBK_TEST_IDX] = 0;
 
 		bnxt_hwrm_mac_loopback(bp, false);
-		bnxt_hwrm_phy_loopback(bp, true);
+		bnxt_hwrm_phy_loopback(bp, true, false);
 		msleep(1000);
 		if (bnxt_run_loopback(bp)) {
 			buf[BNXT_PHYLPBK_TEST_IDX] = 1;
 			etest->flags |= ETH_TEST_FL_FAILED;
 		}
-		bnxt_hwrm_phy_loopback(bp, false);
+		if (do_ext_lpbk) {
+			etest->flags |= ETH_TEST_FL_EXTERNAL_LB_DONE;
+			bnxt_hwrm_phy_loopback(bp, true, true);
+			msleep(1000);
+			if (bnxt_run_loopback(bp)) {
+				buf[BNXT_EXTLPBK_TEST_IDX] = 1;
+				etest->flags |= ETH_TEST_FL_FAILED;
+			}
+		}
+		bnxt_hwrm_phy_loopback(bp, false, false);
 		bnxt_half_close_nic(bp);
 		bnxt_open_nic(bp, false, true);
 	}
@@ -2707,6 +2725,8 @@ void bnxt_ethtool_init(struct bnxt *bp)
 			strcpy(str, "Mac loopback test (offline)");
 		} else if (i == BNXT_PHYLPBK_TEST_IDX) {
 			strcpy(str, "Phy loopback test (offline)");
+		} else if (i == BNXT_EXTLPBK_TEST_IDX) {
+			strcpy(str, "Ext loopback test (offline)");
 		} else if (i == BNXT_IRQ_TEST_IDX) {
 			strcpy(str, "Interrupt_test (offline)");
 		} else {
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 04/13] bnxt_en: Add PHY retry logic.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (2 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 03/13] bnxt_en: Add external loopback test to ethtool selftest Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 05/13] bnxt_en: Add new VF resource allocation strategy mode Michael Chan
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

During hotplug, the driver's open function can be called almost
immediately after a power-on reset.  The PHY may not be ready, and the
firmware may return failure when the driver tries to update the PHY
settings.  Add retry logic, driven from the driver's timer, to retry
the operation for up to 5 seconds.
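
Condensed, the flow added by this patch is roughly the following
(illustration only, not the verbatim driver code):

	/* open path: arm the retry window if the PHY update failed */
	if (bnxt_update_phy_setting(bp) && BNXT_SINGLE_PF(bp)) {
		bp->link_info.phy_retry = true;
		bp->link_info.phy_retry_expires = jiffies + 5 * HZ;
	}

	/* bnxt_timer(): keep re-queuing the update until it expires */
	if (bp->link_info.phy_retry) {
		if (time_after(jiffies, bp->link_info.phy_retry_expires)) {
			bp->link_info.phy_retry = false;	/* give up */
		} else {
			set_bit(BNXT_UPDATE_PHY_SP_EVENT, &bp->sp_event);
			bnxt_queue_sp_work(bp);	/* bnxt_sp_task() retries */
		}
	}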

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 31 ++++++++++++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  4 ++++
 2 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index d9fc905..fd936c5 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -6898,8 +6898,14 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 		mutex_lock(&bp->link_lock);
 		rc = bnxt_update_phy_setting(bp);
 		mutex_unlock(&bp->link_lock);
-		if (rc)
+		if (rc) {
 			netdev_warn(bp->dev, "failed to update phy settings\n");
+			if (BNXT_SINGLE_PF(bp)) {
+				bp->link_info.phy_retry = true;
+				bp->link_info.phy_retry_expires =
+					jiffies + 5 * HZ;
+			}
+		}
 	}
 
 	if (irq_re_init)
@@ -7583,6 +7589,16 @@ static void bnxt_timer(struct timer_list *t)
 		set_bit(BNXT_FLOW_STATS_SP_EVENT, &bp->sp_event);
 		bnxt_queue_sp_work(bp);
 	}
+
+	if (bp->link_info.phy_retry) {
+		if (time_after(jiffies, bp->link_info.phy_retry_expires)) {
+			bp->link_info.phy_retry = 0;
+			netdev_warn(bp->dev, "failed to update phy settings after maximum retries.\n");
+		} else {
+			set_bit(BNXT_UPDATE_PHY_SP_EVENT, &bp->sp_event);
+			bnxt_queue_sp_work(bp);
+		}
+	}
 bnxt_restart_timer:
 	mod_timer(&bp->timer, jiffies + bp->current_interval);
 }
@@ -7670,6 +7686,19 @@ static void bnxt_sp_task(struct work_struct *work)
 			netdev_err(bp->dev, "SP task can't update link (rc: %x)\n",
 				   rc);
 	}
+	if (test_and_clear_bit(BNXT_UPDATE_PHY_SP_EVENT, &bp->sp_event)) {
+		int rc;
+
+		mutex_lock(&bp->link_lock);
+		rc = bnxt_update_phy_setting(bp);
+		mutex_unlock(&bp->link_lock);
+		if (rc) {
+			netdev_warn(bp->dev, "update phy settings retry failed\n");
+		} else {
+			bp->link_info.phy_retry = false;
+			netdev_info(bp->dev, "update phy settings retry succeeded\n");
+		}
+	}
 	if (test_and_clear_bit(BNXT_HWRM_PORT_MODULE_SP_EVENT, &bp->sp_event)) {
 		mutex_lock(&bp->link_lock);
 		bnxt_get_port_module_status(bp);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 0d49fe0..47eec14 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -959,6 +959,9 @@ struct bnxt_link_info {
 	u16			advertising;	/* user adv setting */
 	bool			force_link_chng;
 
+	bool			phy_retry;
+	unsigned long		phy_retry_expires;
+
 	/* a copy of phy_qcfg output used to report link
 	 * info to VF
 	 */
@@ -1344,6 +1347,7 @@ struct bnxt {
 #define BNXT_GENEVE_DEL_PORT_SP_EVENT	13
 #define BNXT_LINK_SPEED_CHNG_SP_EVENT	14
 #define BNXT_FLOW_STATS_SP_EVENT	15
+#define BNXT_UPDATE_PHY_SP_EVENT	16
 
 	struct bnxt_hw_resc	hw_resc;
 	struct bnxt_pf_info	pf;
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 05/13] bnxt_en: Add new VF resource allocation strategy mode.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (3 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 04/13] bnxt_en: Add PHY retry logic Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 06/13] bnxt_en: Update RSS setup and GRO-HW logic according to the latest spec Michael Chan
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

The new mode is "minimal-static", to be used when resources are more
limited, for example to support a large number of VFs.  The PF driver
will provision guaranteed minimum resources of 0.  Each VF has no
guaranteed resources until it tries to reserve resources during device
open.
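
For context, the new strategy only comes into play once VFs are
instantiated, e.g. (the PCI address below is only a placeholder):

  echo 128 > /sys/bus/pci/devices/0000:af:00.0/sriov_numvfs

With minimal-static, the guaranteed minimums configured for each VF
are simply 0, and a VF only acquires rings, vnics, etc. when its own
driver reserves them at device open.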

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c       |  2 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h       |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 23 ++++++++++++++---------
 3 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index fd936c5..e0e3b4b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -5162,7 +5162,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp, bool all)
 
 		pf->vf_resv_strategy =
 			le16_to_cpu(resp->vf_reservation_strategy);
-		if (pf->vf_resv_strategy > BNXT_VF_RESV_STRATEGY_MINIMAL)
+		if (pf->vf_resv_strategy > BNXT_VF_RESV_STRATEGY_MINIMAL_STATIC)
 			pf->vf_resv_strategy = BNXT_VF_RESV_STRATEGY_MAXIMAL;
 	}
 hwrm_func_resc_qcaps_exit:
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 47eec14..b44a758 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -862,6 +862,7 @@ struct bnxt_pf_info {
 	u8	vf_resv_strategy;
 #define BNXT_VF_RESV_STRATEGY_MAXIMAL	0
 #define BNXT_VF_RESV_STRATEGY_MINIMAL	1
+#define BNXT_VF_RESV_STRATEGY_MINIMAL_STATIC	2
 	void			*hwrm_cmd_req_addr[4];
 	dma_addr_t		hwrm_cmd_req_dma_addr[4];
 	struct bnxt_vf_info	*vf;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index f560845..b896a52 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -447,7 +447,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
 	u16 vf_tx_rings, vf_rx_rings, vf_cp_rings;
 	u16 vf_stat_ctx, vf_vnics, vf_ring_grps;
 	struct bnxt_pf_info *pf = &bp->pf;
-	int i, rc = 0;
+	int i, rc = 0, min = 1;
 
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESOURCE_CFG, -1, -1);
 
@@ -464,14 +464,19 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
 
 	req.min_rsscos_ctx = cpu_to_le16(BNXT_VF_MIN_RSS_CTX);
 	req.max_rsscos_ctx = cpu_to_le16(BNXT_VF_MAX_RSS_CTX);
-	if (pf->vf_resv_strategy == BNXT_VF_RESV_STRATEGY_MINIMAL) {
-		req.min_cmpl_rings = cpu_to_le16(1);
-		req.min_tx_rings = cpu_to_le16(1);
-		req.min_rx_rings = cpu_to_le16(1);
-		req.min_l2_ctxs = cpu_to_le16(BNXT_VF_MIN_L2_CTX);
-		req.min_vnics = cpu_to_le16(1);
-		req.min_stat_ctx = cpu_to_le16(1);
-		req.min_hw_ring_grps = cpu_to_le16(1);
+	if (pf->vf_resv_strategy == BNXT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
+		min = 0;
+		req.min_rsscos_ctx = cpu_to_le16(min);
+	}
+	if (pf->vf_resv_strategy == BNXT_VF_RESV_STRATEGY_MINIMAL ||
+	    pf->vf_resv_strategy == BNXT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
+		req.min_cmpl_rings = cpu_to_le16(min);
+		req.min_tx_rings = cpu_to_le16(min);
+		req.min_rx_rings = cpu_to_le16(min);
+		req.min_l2_ctxs = cpu_to_le16(min);
+		req.min_vnics = cpu_to_le16(min);
+		req.min_stat_ctx = cpu_to_le16(min);
+		req.min_hw_ring_grps = cpu_to_le16(min);
 	} else {
 		vf_cp_rings /= num_vfs;
 		vf_tx_rings /= num_vfs;
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 06/13] bnxt_en: Update RSS setup and GRO-HW logic according to the latest spec.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (4 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 05/13] bnxt_en: Add new VF resource allocation strategy mode Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 07/13] bnxt_en: Add support for ethtool get dump Michael Chan
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

Set the default hash mode flag in HWRM_VNIC_RSS_CFG to signal to the
firmware that the driver is compliant with the latest spec.  With
that, the firmware can return expanded RSS profile IDs that the driver
checks to set up the proper gso_type for GRO-HW packets.  But instead
of checking for the new profile IDs, we check the IP_TYPE flag
in TPA_START, which is more straightforward than checking a list of
profile IDs.
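
In other words, the TPA start handling condenses to roughly this
(simplified from the change below):

	if (hash_type == 3 || TPA_START_IS_IPV6(tpa_start1))
		tpa_info->gso_type = SKB_GSO_TCPV6;
	else
		tpa_info->gso_type = SKB_GSO_TCPV4;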

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 3 ++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h | 4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index e0e3b4b..1714850 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1115,7 +1115,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 		tpa_info->hash_type = PKT_HASH_TYPE_L4;
 		tpa_info->gso_type = SKB_GSO_TCPV4;
 		/* RSS profiles 1 and 3 with extract code 0 for inner 4-tuple */
-		if (hash_type == 3)
+		if (hash_type == 3 || TPA_START_IS_IPV6(tpa_start1))
 			tpa_info->gso_type = SKB_GSO_TCPV6;
 		tpa_info->rss_hash =
 			le32_to_cpu(tpa_start->rx_tpa_start_cmp_rss_hash);
@@ -3981,6 +3981,7 @@ static int bnxt_hwrm_vnic_set_rss(struct bnxt *bp, u16 vnic_id, bool set_rss)
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VNIC_RSS_CFG, -1, -1);
 	if (set_rss) {
 		req.hash_type = cpu_to_le32(bp->rss_hash_cfg);
+		req.hash_mode_flags = VNIC_RSS_CFG_REQ_HASH_MODE_FLAGS_DEFAULT;
 		if (vnic->flags & BNXT_VNIC_RSS_FLAG) {
 			if (BNXT_CHIP_TYPE_NITRO_A0(bp))
 				max_rings = bp->rx_nr_rings - 1;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index b44a758..7ea022d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -326,6 +326,10 @@ struct rx_tpa_start_cmp_ext {
 	((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_cfa_code_v2) &	\
 	 RX_TPA_START_CMP_CFA_CODE) >> RX_TPA_START_CMPL_CFA_CODE_SHIFT)
 
+#define TPA_START_IS_IPV6(rx_tpa_start)				\
+	(!!((rx_tpa_start)->rx_tpa_start_cmp_flags2 &		\
+	    cpu_to_le32(RX_TPA_START_CMP_FLAGS2_IP_TYPE)))
+
 struct rx_tpa_end_cmp {
 	__le32 rx_tpa_end_cmp_len_flags_type;
 	#define RX_TPA_END_CMP_TYPE				(0x3f << 0)
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 07/13] bnxt_en: Add support for ethtool get dump.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (5 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 06/13] bnxt_en: Update RSS setup and GRO-HW logic according to the latest spec Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 08/13] bnxt_en: Add BNXT_NEW_RM() macro Michael Chan
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev, Vasundhara Volam

From: Vasundhara Volam <vasundhara-v.volam@broadcom.com>

Add support to collect a live firmware coredump via ethtool.
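
A minimal usage sketch with the standard ethtool dump interface
(eth0 is just a placeholder interface name):

  ethtool -w eth0                    # report dump flag, version, length
  ethtool -w eth0 data coredump.bin  # write the coredump to a file

The first form maps to the new .get_dump_flag hook, which sizes the
dump without copying any data; the second maps to .get_dump_data.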

Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h |  66 ++++
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c  | 333 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h  |  37 +++
 3 files changed, 436 insertions(+)
 create mode 100644 drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h
new file mode 100644
index 0000000..09c22f8
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.h
@@ -0,0 +1,66 @@
+/* Broadcom NetXtreme-C/E network driver.
+ *
+ * Copyright (c) 2018 Broadcom Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef BNXT_COREDUMP_H
+#define BNXT_COREDUMP_H
+
+struct bnxt_coredump_segment_hdr {
+	__u8 signature[4];
+	__le32 component_id;
+	__le32 segment_id;
+	__le32 flags;
+	__u8 low_version;
+	__u8 high_version;
+	__le16 function_id;
+	__le32 offset;
+	__le32 length;
+	__le32 status;
+	__le32 duration;
+	__le32 data_offset;
+	__le32 instance;
+	__le32 rsvd[5];
+};
+
+struct bnxt_coredump_record {
+	__u8 signature[4];
+	__le32 flags;
+	__u8 low_version;
+	__u8 high_version;
+	__u8 asic_state;
+	__u8 rsvd0[5];
+	char system_name[32];
+	__le16 year;
+	__le16 month;
+	__le16 day;
+	__le16 hour;
+	__le16 minute;
+	__le16 second;
+	__le16 utc_bias;
+	__le16 rsvd1;
+	char commandline[256];
+	__le32 total_segments;
+	__le32 os_ver_major;
+	__le32 os_ver_minor;
+	__le32 rsvd2;
+	char os_name[32];
+	__le16 end_year;
+	__le16 end_month;
+	__le16 end_day;
+	__le16 end_hour;
+	__le16 end_minute;
+	__le16 end_second;
+	__le16 end_utc_bias;
+	__le32 asic_id1;
+	__le32 asic_id2;
+	__le32 coredump_status;
+	__u8 ioctl_low_version;
+	__u8 ioctl_high_version;
+	__le16 rsvd3[313];
+};
+#endif
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 9517633..3fc7c74 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -16,12 +16,15 @@
 #include <linux/etherdevice.h>
 #include <linux/crc32.h>
 #include <linux/firmware.h>
+#include <linux/utsname.h>
+#include <linux/time.h>
 #include "bnxt_hsi.h"
 #include "bnxt.h"
 #include "bnxt_xdp.h"
 #include "bnxt_ethtool.h"
 #include "bnxt_nvm_defs.h"	/* NVRAM content constant and structure defs */
 #include "bnxt_fw_hdr.h"	/* Firmware hdr constant and structure defs */
+#include "bnxt_coredump.h"
 #define FLASH_NVRAM_TIMEOUT	((HWRM_CMD_TIMEOUT) * 100)
 #define FLASH_PACKAGE_TIMEOUT	((HWRM_CMD_TIMEOUT) * 200)
 #define INSTALL_PACKAGE_TIMEOUT	((HWRM_CMD_TIMEOUT) * 200)
@@ -2685,6 +2688,334 @@ static int bnxt_reset(struct net_device *dev, u32 *flags)
 	return rc;
 }
 
+static int bnxt_hwrm_dbg_dma_data(struct bnxt *bp, void *msg, int msg_len,
+				  struct bnxt_hwrm_dbg_dma_info *info)
+{
+	struct hwrm_dbg_cmn_output *cmn_resp = bp->hwrm_cmd_resp_addr;
+	struct hwrm_dbg_cmn_input *cmn_req = msg;
+	__le16 *seq_ptr = msg + info->seq_off;
+	u16 seq = 0, len, segs_off;
+	void *resp = cmn_resp;
+	dma_addr_t dma_handle;
+	int rc, off = 0;
+	void *dma_buf;
+
+	dma_buf = dma_alloc_coherent(&bp->pdev->dev, info->dma_len, &dma_handle,
+				     GFP_KERNEL);
+	if (!dma_buf)
+		return -ENOMEM;
+
+	segs_off = offsetof(struct hwrm_dbg_coredump_list_output,
+			    total_segments);
+	cmn_req->host_dest_addr = cpu_to_le64(dma_handle);
+	cmn_req->host_buf_len = cpu_to_le32(info->dma_len);
+	mutex_lock(&bp->hwrm_cmd_lock);
+	while (1) {
+		*seq_ptr = cpu_to_le16(seq);
+		rc = _hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT);
+		if (rc)
+			break;
+
+		len = le16_to_cpu(*((__le16 *)(resp + info->data_len_off)));
+		if (!seq &&
+		    cmn_req->req_type == cpu_to_le16(HWRM_DBG_COREDUMP_LIST)) {
+			info->segs = le16_to_cpu(*((__le16 *)(resp +
+							      segs_off)));
+			if (!info->segs) {
+				rc = -EIO;
+				break;
+			}
+
+			info->dest_buf_size = info->segs *
+					sizeof(struct coredump_segment_record);
+			info->dest_buf = kmalloc(info->dest_buf_size,
+						 GFP_KERNEL);
+			if (!info->dest_buf) {
+				rc = -ENOMEM;
+				break;
+			}
+		}
+
+		if (info->dest_buf)
+			memcpy(info->dest_buf + off, dma_buf, len);
+
+		if (cmn_req->req_type ==
+				cpu_to_le16(HWRM_DBG_COREDUMP_RETRIEVE))
+			info->dest_buf_size += len;
+
+		if (!(cmn_resp->flags & HWRM_DBG_CMN_FLAGS_MORE))
+			break;
+
+		seq++;
+		off += len;
+	}
+	mutex_unlock(&bp->hwrm_cmd_lock);
+	dma_free_coherent(&bp->pdev->dev, info->dma_len, dma_buf, dma_handle);
+	return rc;
+}
+
+static int bnxt_hwrm_dbg_coredump_list(struct bnxt *bp,
+				       struct bnxt_coredump *coredump)
+{
+	struct hwrm_dbg_coredump_list_input req = {0};
+	struct bnxt_hwrm_dbg_dma_info info = {NULL};
+	int rc;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_COREDUMP_LIST, -1, -1);
+
+	info.dma_len = COREDUMP_LIST_BUF_LEN;
+	info.seq_off = offsetof(struct hwrm_dbg_coredump_list_input, seq_no);
+	info.data_len_off = offsetof(struct hwrm_dbg_coredump_list_output,
+				     data_len);
+
+	rc = bnxt_hwrm_dbg_dma_data(bp, &req, sizeof(req), &info);
+	if (!rc) {
+		coredump->data = info.dest_buf;
+		coredump->data_size = info.dest_buf_size;
+		coredump->total_segs = info.segs;
+	}
+	return rc;
+}
+
+static int bnxt_hwrm_dbg_coredump_initiate(struct bnxt *bp, u16 component_id,
+					   u16 segment_id)
+{
+	struct hwrm_dbg_coredump_initiate_input req = {0};
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_COREDUMP_INITIATE, -1, -1);
+	req.component_id = cpu_to_le16(component_id);
+	req.segment_id = cpu_to_le16(segment_id);
+
+	return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+}
+
+static int bnxt_hwrm_dbg_coredump_retrieve(struct bnxt *bp, u16 component_id,
+					   u16 segment_id, u32 *seg_len,
+					   void *buf, u32 offset)
+{
+	struct hwrm_dbg_coredump_retrieve_input req = {0};
+	struct bnxt_hwrm_dbg_dma_info info = {NULL};
+	int rc;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_COREDUMP_RETRIEVE, -1, -1);
+	req.component_id = cpu_to_le16(component_id);
+	req.segment_id = cpu_to_le16(segment_id);
+
+	info.dma_len = COREDUMP_RETRIEVE_BUF_LEN;
+	info.seq_off = offsetof(struct hwrm_dbg_coredump_retrieve_input,
+				seq_no);
+	info.data_len_off = offsetof(struct hwrm_dbg_coredump_retrieve_output,
+				     data_len);
+	if (buf)
+		info.dest_buf = buf + offset;
+
+	rc = bnxt_hwrm_dbg_dma_data(bp, &req, sizeof(req), &info);
+	if (!rc)
+		*seg_len = info.dest_buf_size;
+
+	return rc;
+}
+
+static void
+bnxt_fill_coredump_seg_hdr(struct bnxt *bp,
+			   struct bnxt_coredump_segment_hdr *seg_hdr,
+			   struct coredump_segment_record *seg_rec, u32 seg_len,
+			   int status, u32 duration, u32 instance)
+{
+	memset(seg_hdr, 0, sizeof(*seg_hdr));
+	strcpy(seg_hdr->signature, "sEgM");
+	if (seg_rec) {
+		seg_hdr->component_id = (__force __le32)seg_rec->component_id;
+		seg_hdr->segment_id = (__force __le32)seg_rec->segment_id;
+		seg_hdr->low_version = seg_rec->version_low;
+		seg_hdr->high_version = seg_rec->version_hi;
+	} else {
+		/* For hwrm_ver_get response Component id = 2
+		 * and Segment id = 0
+		 */
+		seg_hdr->component_id = cpu_to_le32(2);
+		seg_hdr->segment_id = 0;
+	}
+	seg_hdr->function_id = cpu_to_le16(bp->pdev->devfn);
+	seg_hdr->length = cpu_to_le32(seg_len);
+	seg_hdr->status = cpu_to_le32(status);
+	seg_hdr->duration = cpu_to_le32(duration);
+	seg_hdr->data_offset = cpu_to_le32(sizeof(*seg_hdr));
+	seg_hdr->instance = cpu_to_le32(instance);
+}
+
+static void
+bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record,
+			  time64_t start, s16 start_utc, u16 total_segs,
+			  int status)
+{
+	time64_t end = ktime_get_real_seconds();
+	u32 os_ver_major = 0, os_ver_minor = 0;
+	struct tm tm;
+
+	time64_to_tm(start, 0, &tm);
+	memset(record, 0, sizeof(*record));
+	strcpy(record->signature, "cOrE");
+	record->flags = 0;
+	record->low_version = 0;
+	record->high_version = 1;
+	record->asic_state = 0;
+	strncpy(record->system_name, utsname()->nodename,
+		strlen(utsname()->nodename));
+	record->year = cpu_to_le16(tm.tm_year);
+	record->month = cpu_to_le16(tm.tm_mon);
+	record->day = cpu_to_le16(tm.tm_mday);
+	record->hour = cpu_to_le16(tm.tm_hour);
+	record->minute = cpu_to_le16(tm.tm_min);
+	record->second = cpu_to_le16(tm.tm_sec);
+	record->utc_bias = cpu_to_le16(start_utc);
+	strcpy(record->commandline, "ethtool -w");
+	record->total_segments = cpu_to_le32(total_segs);
+
+	sscanf(utsname()->release, "%u.%u", &os_ver_major, &os_ver_minor);
+	record->os_ver_major = cpu_to_le32(os_ver_major);
+	record->os_ver_minor = cpu_to_le32(os_ver_minor);
+
+	strcpy(record->os_name, utsname()->sysname);
+	time64_to_tm(end, 0, &tm);
+	record->end_year = cpu_to_le16(tm.tm_year + 1900);
+	record->end_month = cpu_to_le16(tm.tm_mon + 1);
+	record->end_day = cpu_to_le16(tm.tm_mday);
+	record->end_hour = cpu_to_le16(tm.tm_hour);
+	record->end_minute = cpu_to_le16(tm.tm_min);
+	record->end_second = cpu_to_le16(tm.tm_sec);
+	record->end_utc_bias = cpu_to_le16(sys_tz.tz_minuteswest * 60);
+	record->asic_id1 = cpu_to_le32(bp->chip_num << 16 |
+				       bp->ver_resp.chip_rev << 8 |
+				       bp->ver_resp.chip_metal);
+	record->asic_id2 = 0;
+	record->coredump_status = cpu_to_le32(status);
+	record->ioctl_low_version = 0;
+	record->ioctl_high_version = 0;
+}
+
+static int bnxt_get_coredump(struct bnxt *bp, void *buf, u32 *dump_len)
+{
+	u32 ver_get_resp_len = sizeof(struct hwrm_ver_get_output);
+	struct coredump_segment_record *seg_record = NULL;
+	u32 offset = 0, seg_hdr_len, seg_record_len;
+	struct bnxt_coredump_segment_hdr seg_hdr;
+	struct bnxt_coredump_record coredump_rec;
+	struct bnxt_coredump coredump = {NULL};
+	time64_t start_time;
+	u16 start_utc;
+	int rc = 0, i;
+
+	start_time = ktime_get_real_seconds();
+	start_utc = sys_tz.tz_minuteswest * 60;
+	seg_hdr_len = sizeof(seg_hdr);
+
+	/* First segment should be hwrm_ver_get response */
+	*dump_len = seg_hdr_len + ver_get_resp_len;
+	if (buf) {
+		bnxt_fill_coredump_seg_hdr(bp, &seg_hdr, NULL, ver_get_resp_len,
+					   0, 0, 0);
+		memcpy(buf + offset, &seg_hdr, seg_hdr_len);
+		offset += seg_hdr_len;
+		memcpy(buf + offset, &bp->ver_resp, ver_get_resp_len);
+		offset += ver_get_resp_len;
+	}
+
+	rc = bnxt_hwrm_dbg_coredump_list(bp, &coredump);
+	if (rc) {
+		netdev_err(bp->dev, "Failed to get coredump segment list\n");
+		goto err;
+	}
+
+	*dump_len += seg_hdr_len * coredump.total_segs;
+
+	seg_record = (struct coredump_segment_record *)coredump.data;
+	seg_record_len = sizeof(*seg_record);
+
+	for (i = 0; i < coredump.total_segs; i++) {
+		u16 comp_id = le16_to_cpu(seg_record->component_id);
+		u16 seg_id = le16_to_cpu(seg_record->segment_id);
+		u32 duration = 0, seg_len = 0;
+		unsigned long start, end;
+
+		start = jiffies;
+
+		rc = bnxt_hwrm_dbg_coredump_initiate(bp, comp_id, seg_id);
+		if (rc) {
+			netdev_err(bp->dev,
+				   "Failed to initiate coredump for seg = %d\n",
+				   seg_record->segment_id);
+			goto next_seg;
+		}
+
+		/* Write segment data into the buffer */
+		rc = bnxt_hwrm_dbg_coredump_retrieve(bp, comp_id, seg_id,
+						     &seg_len, buf,
+						     offset + seg_hdr_len);
+		if (rc)
+			netdev_err(bp->dev,
+				   "Failed to retrieve coredump for seg = %d\n",
+				   seg_record->segment_id);
+
+next_seg:
+		end = jiffies;
+		duration = jiffies_to_msecs(end - start);
+		bnxt_fill_coredump_seg_hdr(bp, &seg_hdr, seg_record, seg_len,
+					   rc, duration, 0);
+
+		if (buf) {
+			/* Write segment header into the buffer */
+			memcpy(buf + offset, &seg_hdr, seg_hdr_len);
+			offset += seg_hdr_len + seg_len;
+		}
+
+		*dump_len += seg_len;
+		seg_record =
+			(struct coredump_segment_record *)((u8 *)seg_record +
+							   seg_record_len);
+	}
+
+err:
+	if (buf) {
+		bnxt_fill_coredump_record(bp, &coredump_rec, start_time,
+					  start_utc, coredump.total_segs + 1,
+					  rc);
+		memcpy(buf + offset, &coredump_rec, sizeof(coredump_rec));
+	}
+	kfree(coredump.data);
+	*dump_len += sizeof(coredump_rec);
+
+	return rc;
+}
+
+static int bnxt_get_dump_flag(struct net_device *dev, struct ethtool_dump *dump)
+{
+	struct bnxt *bp = netdev_priv(dev);
+
+	if (bp->hwrm_spec_code < 0x10801)
+		return -EOPNOTSUPP;
+
+	dump->version = bp->ver_resp.hwrm_fw_maj_8b << 24 |
+			bp->ver_resp.hwrm_fw_min_8b << 16 |
+			bp->ver_resp.hwrm_fw_bld_8b << 8 |
+			bp->ver_resp.hwrm_fw_rsvd_8b;
+
+	return bnxt_get_coredump(bp, NULL, &dump->len);
+}
+
+static int bnxt_get_dump_data(struct net_device *dev, struct ethtool_dump *dump,
+			      void *buf)
+{
+	struct bnxt *bp = netdev_priv(dev);
+
+	if (bp->hwrm_spec_code < 0x10801)
+		return -EOPNOTSUPP;
+
+	memset(buf, 0, dump->len);
+
+	return bnxt_get_coredump(bp, buf, &dump->len);
+}
+
 void bnxt_ethtool_init(struct bnxt *bp)
 {
 	struct hwrm_selftest_qlist_output *resp = bp->hwrm_cmd_resp_addr;
@@ -2788,4 +3119,6 @@ const struct ethtool_ops bnxt_ethtool_ops = {
 	.set_phys_id		= bnxt_set_phys_id,
 	.self_test		= bnxt_self_test,
 	.reset			= bnxt_reset,
+	.get_dump_flag		= bnxt_get_dump_flag,
+	.get_dump_data		= bnxt_get_dump_data,
 };
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h
index 836ef68..b5b65b3 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h
@@ -22,6 +22,43 @@ struct bnxt_led_cfg {
 	u8 rsvd;
 };
 
+#define COREDUMP_LIST_BUF_LEN		2048
+#define COREDUMP_RETRIEVE_BUF_LEN	4096
+
+struct bnxt_coredump {
+	void		*data;
+	int		data_size;
+	u16		total_segs;
+};
+
+struct bnxt_hwrm_dbg_dma_info {
+	void *dest_buf;
+	int dest_buf_size;
+	u16 dma_len;
+	u16 seq_off;
+	u16 data_len_off;
+	u16 segs;
+};
+
+struct hwrm_dbg_cmn_input {
+	__le16 req_type;
+	__le16 cmpl_ring;
+	__le16 seq_id;
+	__le16 target_id;
+	__le64 resp_addr;
+	__le64 host_dest_addr;
+	__le32 host_buf_len;
+};
+
+struct hwrm_dbg_cmn_output {
+	__le16 error_code;
+	__le16 req_type;
+	__le16 seq_id;
+	__le16 resp_len;
+	u8 flags;
+	#define HWRM_DBG_CMN_FLAGS_MORE	1
+};
+
 #define BNXT_LED_DFLT_ENA				\
 	(PORT_LED_CFG_REQ_ENABLES_LED0_ID |		\
 	 PORT_LED_CFG_REQ_ENABLES_LED0_STATE |		\
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 08/13] bnxt_en: Add BNXT_NEW_RM() macro.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (6 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 07/13] bnxt_en: Add support for ethtool get dump Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 09/13] bnxt_en: Move firmware related flags to a new fw_cap field in struct bnxt Michael Chan
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev, Michael Chan

The BNXT_FLAG_NEW_RM flag is checked a lot in the code to determine if
the new resource manager is in effect.  Define a macro to perform
this check.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c         | 27 +++++++++++------------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c |  2 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c   |  2 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c     |  4 ++--
 5 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 1714850..5c9ee3c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -4579,7 +4579,7 @@ static int bnxt_hwrm_get_rings(struct bnxt *bp)
 	}
 
 	hw_resc->resv_tx_rings = le16_to_cpu(resp->alloc_tx_rings);
-	if (bp->flags & BNXT_FLAG_NEW_RM) {
+	if (BNXT_NEW_RM(bp)) {
 		u16 cp, stats;
 
 		hw_resc->resv_rx_rings = le16_to_cpu(resp->alloc_rx_rings);
@@ -4625,7 +4625,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req,
 	req->fid = cpu_to_le16(0xffff);
 	enables |= tx_rings ? FUNC_CFG_REQ_ENABLES_NUM_TX_RINGS : 0;
 	req->num_tx_rings = cpu_to_le16(tx_rings);
-	if (bp->flags & BNXT_FLAG_NEW_RM) {
+	if (BNXT_NEW_RM(bp)) {
 		enables |= rx_rings ? FUNC_CFG_REQ_ENABLES_NUM_RX_RINGS : 0;
 		enables |= cp_rings ? FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
 				      FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
@@ -4698,7 +4698,7 @@ bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
 	struct hwrm_func_vf_cfg_input req = {0};
 	int rc;
 
-	if (!(bp->flags & BNXT_FLAG_NEW_RM)) {
+	if (!BNXT_NEW_RM(bp)) {
 		bp->hw_resc.resv_tx_rings = tx_rings;
 		return 0;
 	}
@@ -4758,7 +4758,7 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp)
 		vnic = rx + 1;
 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
 		rx <<= 1;
-	if ((bp->flags & BNXT_FLAG_NEW_RM) &&
+	if (BNXT_NEW_RM(bp) &&
 	    (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp ||
 	     hw_resc->resv_hw_ring_grps != grp || hw_resc->resv_vnics != vnic))
 		return true;
@@ -4794,7 +4794,7 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
 		return rc;
 
 	tx = hw_resc->resv_tx_rings;
-	if (bp->flags & BNXT_FLAG_NEW_RM) {
+	if (BNXT_NEW_RM(bp)) {
 		rx = hw_resc->resv_rx_rings;
 		cp = hw_resc->resv_cp_rings;
 		grp = hw_resc->resv_hw_ring_grps;
@@ -4838,7 +4838,7 @@ static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
 	u32 flags;
 	int rc;
 
-	if (!(bp->flags & BNXT_FLAG_NEW_RM))
+	if (!BNXT_NEW_RM(bp))
 		return 0;
 
 	__bnxt_hwrm_reserve_vf_rings(bp, &req, tx_rings, rx_rings, ring_grps,
@@ -4867,7 +4867,7 @@ static int bnxt_hwrm_check_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
 	__bnxt_hwrm_reserve_pf_rings(bp, &req, tx_rings, rx_rings, ring_grps,
 				     cp_rings, vnics);
 	flags = FUNC_CFG_REQ_FLAGS_TX_ASSETS_TEST;
-	if (bp->flags & BNXT_FLAG_NEW_RM)
+	if (BNXT_NEW_RM(bp))
 		flags |= FUNC_CFG_REQ_FLAGS_RX_ASSETS_TEST |
 			 FUNC_CFG_REQ_FLAGS_CMPL_ASSETS_TEST |
 			 FUNC_CFG_REQ_FLAGS_RING_GRP_ASSETS_TEST |
@@ -5921,7 +5921,7 @@ int bnxt_get_avail_msix(struct bnxt *bp, int num)
 
 	max_idx = min_t(int, bp->total_irqs, max_cp);
 	avail_msix = max_idx - bp->cp_nr_rings;
-	if (!(bp->flags & BNXT_FLAG_NEW_RM) || avail_msix >= num)
+	if (!BNXT_NEW_RM(bp) || avail_msix >= num)
 		return avail_msix;
 
 	if (max_irq < total_req) {
@@ -5934,7 +5934,7 @@ int bnxt_get_avail_msix(struct bnxt *bp, int num)
 
 static int bnxt_get_num_msix(struct bnxt *bp)
 {
-	if (!(bp->flags & BNXT_FLAG_NEW_RM))
+	if (!BNXT_NEW_RM(bp))
 		return bnxt_get_max_func_irqs(bp);
 
 	return bnxt_cp_rings_in_use(bp);
@@ -6057,8 +6057,7 @@ int bnxt_reserve_rings(struct bnxt *bp)
 		netdev_err(bp->dev, "ring reservation failure rc: %d\n", rc);
 		return rc;
 	}
-	if ((bp->flags & BNXT_FLAG_NEW_RM) &&
-	    (bnxt_get_num_msix(bp) != bp->total_irqs)) {
+	if (BNXT_NEW_RM(bp) && (bnxt_get_num_msix(bp) != bp->total_irqs)) {
 		bnxt_ulp_irq_stop(bp);
 		bnxt_clear_int_mode(bp);
 		rc = bnxt_init_int_mode(bp);
@@ -7306,7 +7305,7 @@ static int bnxt_cfg_rx_mode(struct bnxt *bp)
 static bool bnxt_can_reserve_rings(struct bnxt *bp)
 {
 #ifdef CONFIG_BNXT_SRIOV
-	if ((bp->flags & BNXT_FLAG_NEW_RM) && BNXT_VF(bp)) {
+	if (BNXT_NEW_RM(bp) && BNXT_VF(bp)) {
 		struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
 
 		/* No minimum rings were provisioned by the PF.  Don't
@@ -7356,7 +7355,7 @@ static bool bnxt_rfs_capable(struct bnxt *bp)
 		return false;
 	}
 
-	if (!(bp->flags & BNXT_FLAG_NEW_RM))
+	if (!BNXT_NEW_RM(bp))
 		return true;
 
 	if (vnics == bp->hw_resc.resv_vnics)
@@ -7752,7 +7751,7 @@ int bnxt_check_rings(struct bnxt *bp, int tx, int rx, bool sh, int tcs,
 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
 		rx_rings <<= 1;
 	cp = sh ? max_t(int, tx_rings_needed, rx) : tx_rings_needed + rx;
-	if (bp->flags & BNXT_FLAG_NEW_RM)
+	if (BNXT_NEW_RM(bp))
 		cp += bnxt_get_ulp_msix_num(bp);
 	return bnxt_hwrm_check_rings(bp, tx_rings_needed, rx_rings, rx, cp,
 				     vnics);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 7ea022d..37dc896 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1195,6 +1195,7 @@ struct bnxt {
 #define BNXT_SINGLE_PF(bp)	(BNXT_PF(bp) && !BNXT_NPAR(bp) && !BNXT_MH(bp))
 #define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0)
 #define BNXT_RX_PAGE_MODE(bp)	((bp)->flags & BNXT_FLAG_RX_PAGE_MODE)
+#define BNXT_NEW_RM(bp)		((bp)->flags & BNXT_FLAG_NEW_RM)
 
 /* Chip class phase 4 and later */
 #define BNXT_CHIP_P4_PLUS(bp)			\
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 3fc7c74..b6dbc3f 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -471,7 +471,7 @@ static void bnxt_get_channels(struct net_device *dev,
 	int max_tx_sch_inputs;
 
 	/* Get the most up-to-date max_tx_sch_inputs. */
-	if (bp->flags & BNXT_FLAG_NEW_RM)
+	if (BNXT_NEW_RM(bp))
 		bnxt_hwrm_func_resc_qcaps(bp, false);
 	max_tx_sch_inputs = hw_resc->max_tx_sch_inputs;
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index b896a52..6d583bc 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -623,7 +623,7 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)
 
 static int bnxt_func_cfg(struct bnxt *bp, int num_vfs)
 {
-	if (bp->flags & BNXT_FLAG_NEW_RM)
+	if (BNXT_NEW_RM(bp))
 		return bnxt_hwrm_func_vf_resc_cfg(bp, num_vfs);
 	else
 		return bnxt_hwrm_func_cfg(bp, num_vfs);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
index 840f6e5..c37b284 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
@@ -141,7 +141,7 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
 	if (avail_msix > num_msix)
 		avail_msix = num_msix;
 
-	if (bp->flags & BNXT_FLAG_NEW_RM) {
+	if (BNXT_NEW_RM(bp)) {
 		idx = bp->cp_nr_rings;
 	} else {
 		max_idx = min_t(int, bp->total_irqs, max_cp_rings);
@@ -162,7 +162,7 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
 		return -EAGAIN;
 	}
 
-	if (bp->flags & BNXT_FLAG_NEW_RM) {
+	if (BNXT_NEW_RM(bp)) {
 		struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
 
 		avail_msix = hw_resc->resv_cp_rings - bp->cp_nr_rings;
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 09/13] bnxt_en: Move firmware related flags to a new fw_cap field in struct bnxt.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (7 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 08/13] bnxt_en: Add BNXT_NEW_RM() macro Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 10/13] bnxt_en: Notify firmware about IF state changes Michael Chan
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

The flags field is almost full.  Move the firmware capability flags
to a new fw_cap field to better organize these firmware-related flags.
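
As an illustration of the split, here is a minimal, self-contained
userspace sketch (not driver code): the fw_cap bit names and the
BNXT_NEW_RM() test mirror the patch, while struct bnxt_lite is a
made-up stand-in for struct bnxt.

#include <stdint.h>
#include <stdio.h>

/* Driver/configuration flags stay in "flags"; firmware-reported
 * capabilities move to a separate "fw_cap" word so each field has
 * room to grow independently.
 */
struct bnxt_lite {			/* hypothetical stand-in */
	uint32_t flags;
	uint32_t fw_cap;
};

#define BNXT_FW_CAP_SHORT_CMD	0x00000001
#define BNXT_FW_CAP_NEW_RM	0x00000008

#define BNXT_NEW_RM(bp)		((bp)->fw_cap & BNXT_FW_CAP_NEW_RM)

int main(void)
{
	struct bnxt_lite bp = { 0 };

	/* e.g. set after a successful resource-qcaps firmware call */
	bp.fw_cap |= BNXT_FW_CAP_NEW_RM;

	printf("new resource manager: %s\n",
	       BNXT_NEW_RM(&bp) ? "yes" : "no");
	return 0;
}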

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 12 ++++++------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     | 13 +++++++------
 drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c |  6 +++---
 3 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 5c9ee3c..1659940 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3445,7 +3445,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
 	cp_ring_id = le16_to_cpu(req->cmpl_ring);
 	intr_process = (cp_ring_id == INVALID_HW_RING_ID) ? 0 : 1;
 
-	if (bp->flags & BNXT_FLAG_SHORT_CMD) {
+	if (bp->fw_cap & BNXT_FW_CAP_SHORT_CMD) {
 		void *short_cmd_req = bp->hwrm_short_cmd_req_addr;
 
 		memcpy(short_cmd_req, req, msg_len);
@@ -5089,9 +5089,9 @@ static int bnxt_hwrm_func_qcfg(struct bnxt *bp)
 	flags = le16_to_cpu(resp->flags);
 	if (flags & (FUNC_QCFG_RESP_FLAGS_FW_DCBX_AGENT_ENABLED |
 		     FUNC_QCFG_RESP_FLAGS_FW_LLDP_AGENT_ENABLED)) {
-		bp->flags |= BNXT_FLAG_FW_LLDP_AGENT;
+		bp->fw_cap |= BNXT_FW_CAP_LLDP_AGENT;
 		if (flags & FUNC_QCFG_RESP_FLAGS_FW_DCBX_AGENT_ENABLED)
-			bp->flags |= BNXT_FLAG_FW_DCBX_AGENT;
+			bp->fw_cap |= BNXT_FW_CAP_DCBX_AGENT;
 	}
 	if (BNXT_PF(bp) && (flags & FUNC_QCFG_RESP_FLAGS_MULTI_HOST))
 		bp->flags |= BNXT_FLAG_MULTI_HOST;
@@ -5249,7 +5249,7 @@ static int bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	if (bp->hwrm_spec_code >= 0x10803) {
 		rc = bnxt_hwrm_func_resc_qcaps(bp, true);
 		if (!rc)
-			bp->flags |= BNXT_FLAG_NEW_RM;
+			bp->fw_cap |= BNXT_FW_CAP_NEW_RM;
 	}
 	return 0;
 }
@@ -5352,7 +5352,7 @@ static int bnxt_hwrm_ver_get(struct bnxt *bp)
 	dev_caps_cfg = le32_to_cpu(resp->dev_caps_cfg);
 	if ((dev_caps_cfg & VER_GET_RESP_DEV_CAPS_CFG_SHORT_CMD_SUPPORTED) &&
 	    (dev_caps_cfg & VER_GET_RESP_DEV_CAPS_CFG_SHORT_CMD_REQUIRED))
-		bp->flags |= BNXT_FLAG_SHORT_CMD;
+		bp->fw_cap |= BNXT_FW_CAP_SHORT_CMD;
 
 hwrm_ver_get_exit:
 	mutex_unlock(&bp->hwrm_cmd_lock);
@@ -8760,7 +8760,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (rc)
 		goto init_err_pci_clean;
 
-	if (bp->flags & BNXT_FLAG_SHORT_CMD) {
+	if (bp->fw_cap & BNXT_FW_CAP_SHORT_CMD) {
 		rc = bnxt_alloc_hwrm_short_cmd_req(bp);
 		if (rc)
 			goto init_err_pci_clean;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 37dc896..ded2aff 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1144,7 +1144,6 @@ struct bnxt {
 	atomic_t		intr_sem;
 
 	u32			flags;
-	#define BNXT_FLAG_DCB_ENABLED	0x1
 	#define BNXT_FLAG_VF		0x2
 	#define BNXT_FLAG_LRO		0x4
 #ifdef CONFIG_INET
@@ -1173,15 +1172,11 @@ struct bnxt {
 					 BNXT_FLAG_ROCEV2_CAP)
 	#define BNXT_FLAG_NO_AGG_RINGS	0x20000
 	#define BNXT_FLAG_RX_PAGE_MODE	0x40000
-	#define BNXT_FLAG_FW_LLDP_AGENT	0x80000
 	#define BNXT_FLAG_MULTI_HOST	0x100000
-	#define BNXT_FLAG_SHORT_CMD	0x200000
 	#define BNXT_FLAG_DOUBLE_DB	0x400000
-	#define BNXT_FLAG_FW_DCBX_AGENT	0x800000
 	#define BNXT_FLAG_CHIP_NITRO_A0	0x1000000
 	#define BNXT_FLAG_DIM		0x2000000
 	#define BNXT_FLAG_ROCE_MIRROR_CAP	0x4000000
-	#define BNXT_FLAG_NEW_RM	0x8000000
 	#define BNXT_FLAG_PORT_STATS_EXT	0x10000000
 
 	#define BNXT_FLAG_ALL_CONFIG_FEATS (BNXT_FLAG_TPA |		\
@@ -1195,7 +1190,6 @@ struct bnxt {
 #define BNXT_SINGLE_PF(bp)	(BNXT_PF(bp) && !BNXT_NPAR(bp) && !BNXT_MH(bp))
 #define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0)
 #define BNXT_RX_PAGE_MODE(bp)	((bp)->flags & BNXT_FLAG_RX_PAGE_MODE)
-#define BNXT_NEW_RM(bp)		((bp)->flags & BNXT_FLAG_NEW_RM)
 
 /* Chip class phase 4 and later */
 #define BNXT_CHIP_P4_PLUS(bp)			\
@@ -1291,6 +1285,13 @@ struct bnxt {
 
 	u32			msg_enable;
 
+	u32			fw_cap;
+	#define BNXT_FW_CAP_SHORT_CMD	0x00000001
+	#define BNXT_FW_CAP_LLDP_AGENT	0x00000002
+	#define BNXT_FW_CAP_DCBX_AGENT	0x00000004
+	#define BNXT_FW_CAP_NEW_RM	0x00000008
+
+#define BNXT_NEW_RM(bp)		((bp)->fw_cap & BNXT_FW_CAP_NEW_RM)
 	u32			hwrm_spec_code;
 	u16			hwrm_cmd_seq;
 	u32			hwrm_intr_seq_id;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
index d5bc72c..00dd26d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
@@ -610,7 +610,7 @@ static u8 bnxt_dcbnl_setdcbx(struct net_device *dev, u8 mode)
 		return 1;
 
 	if (mode & DCB_CAP_DCBX_HOST) {
-		if (BNXT_VF(bp) || (bp->flags & BNXT_FLAG_FW_LLDP_AGENT))
+		if (BNXT_VF(bp) || (bp->fw_cap & BNXT_FW_CAP_LLDP_AGENT))
 			return 1;
 
 		/* only support IEEE */
@@ -643,9 +643,9 @@ void bnxt_dcb_init(struct bnxt *bp)
 		return;
 
 	bp->dcbx_cap = DCB_CAP_DCBX_VER_IEEE;
-	if (BNXT_PF(bp) && !(bp->flags & BNXT_FLAG_FW_LLDP_AGENT))
+	if (BNXT_PF(bp) && !(bp->fw_cap & BNXT_FW_CAP_LLDP_AGENT))
 		bp->dcbx_cap |= DCB_CAP_DCBX_HOST;
-	else if (bp->flags & BNXT_FLAG_FW_DCBX_AGENT)
+	else if (bp->fw_cap & BNXT_FW_CAP_DCBX_AGENT)
 		bp->dcbx_cap |= DCB_CAP_DCBX_LLD_MANAGED;
 	bp->dev->dcbnl_ops = &dcbnl_ops;
 }
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 10/13] bnxt_en: Notify firmware about IF state changes.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (8 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 09/13] bnxt_en: Move firmware related flags to a new fw_cap field in struct bnxt Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 11/13] bnxt_en: Add hwmon sysfs support to read temperature Michael Chan
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

Use the latest firmware API to notify firmware about IF state changes.
Firmware has the option to clean up resources during IF down and to
require the driver to reserve resources again during IF up.
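
A rough, standalone sketch of the intended call flow (the function
names mirror the patch, but the bodies below are placeholder stubs,
not the real driver logic):

#include <stdbool.h>
#include <stdio.h>

/* Stub: the driver sends HWRM_FUNC_DRV_IF_CHANGE here and, on an
 * "up" transition with changed resources, re-queries capabilities
 * and clears the cached ring reservations.
 */
static int bnxt_hwrm_if_change(bool up)
{
	printf("notify firmware: IF %s\n", up ? "up" : "down");
	return 0;
}

static int __bnxt_open_nic(void)
{
	return 0;	/* stub: pretend the open succeeded */
}

static int bnxt_open(void)
{
	int rc;

	bnxt_hwrm_if_change(true);		/* coming up */
	rc = __bnxt_open_nic();
	if (rc)
		bnxt_hwrm_if_change(false);	/* roll back on failure */
	return rc;
}

int main(void)
{
	return bnxt_open();	/* bnxt_close() similarly ends with IF down */
}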

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 53 +++++++++++++++++++++++++++++--
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 2 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 1659940..56bd097 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3638,7 +3638,9 @@ int bnxt_hwrm_func_rgtr_async_events(struct bnxt *bp, unsigned long *bmap,
 
 static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp)
 {
+	struct hwrm_func_drv_rgtr_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_drv_rgtr_input req = {0};
+	int rc;
 
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_DRV_RGTR, -1, -1);
 
@@ -3676,7 +3678,15 @@ static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp)
 			cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_VF_REQ_FWD);
 	}
 
-	return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	mutex_lock(&bp->hwrm_cmd_lock);
+	rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (rc)
+		rc = -EIO;
+	else if (resp->flags &
+		 cpu_to_le32(FUNC_DRV_RGTR_RESP_FLAGS_IF_CHANGE_SUPPORTED))
+		bp->fw_cap |= BNXT_FW_CAP_IF_CHANGE;
+	mutex_unlock(&bp->hwrm_cmd_lock);
+	return rc;
 }
 
 static int bnxt_hwrm_func_drv_unrgtr(struct bnxt *bp)
@@ -6637,6 +6647,39 @@ static int bnxt_hwrm_shutdown_link(struct bnxt *bp)
 	return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
 }
 
+static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+{
+	struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
+	struct hwrm_func_drv_if_change_input req = {0};
+	bool resc_reinit = false;
+	int rc;
+
+	if (!(bp->fw_cap & BNXT_FW_CAP_IF_CHANGE))
+		return 0;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_DRV_IF_CHANGE, -1, -1);
+	if (up)
+		req.flags = cpu_to_le32(FUNC_DRV_IF_CHANGE_REQ_FLAGS_UP);
+	mutex_lock(&bp->hwrm_cmd_lock);
+	rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (!rc && (resp->flags &
+		    cpu_to_le32(FUNC_DRV_IF_CHANGE_RESP_FLAGS_RESC_CHANGE)))
+		resc_reinit = true;
+	mutex_unlock(&bp->hwrm_cmd_lock);
+
+	if (up && resc_reinit && BNXT_NEW_RM(bp)) {
+		struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
+
+		rc = bnxt_hwrm_func_resc_qcaps(bp, true);
+		hw_resc->resv_cp_rings = 0;
+		hw_resc->resv_tx_rings = 0;
+		hw_resc->resv_rx_rings = 0;
+		hw_resc->resv_hw_ring_grps = 0;
+		hw_resc->resv_vnics = 0;
+	}
+	return rc;
+}
+
 static int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 {
 	struct hwrm_port_led_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
@@ -6991,8 +7034,13 @@ void bnxt_half_close_nic(struct bnxt *bp)
 static int bnxt_open(struct net_device *dev)
 {
 	struct bnxt *bp = netdev_priv(dev);
+	int rc;
 
-	return __bnxt_open_nic(bp, true, true);
+	bnxt_hwrm_if_change(bp, true);
+	rc = __bnxt_open_nic(bp, true, true);
+	if (rc)
+		bnxt_hwrm_if_change(bp, false);
+	return rc;
 }
 
 static bool bnxt_drv_busy(struct bnxt *bp)
@@ -7056,6 +7104,7 @@ static int bnxt_close(struct net_device *dev)
 
 	bnxt_close_nic(bp, true, true);
 	bnxt_hwrm_shutdown_link(bp);
+	bnxt_hwrm_if_change(bp, false);
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index ded2aff..6c40b257 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1290,6 +1290,7 @@ struct bnxt {
 	#define BNXT_FW_CAP_LLDP_AGENT	0x00000002
 	#define BNXT_FW_CAP_DCBX_AGENT	0x00000004
 	#define BNXT_FW_CAP_NEW_RM	0x00000008
+	#define BNXT_FW_CAP_IF_CHANGE	0x00000010
 
 #define BNXT_NEW_RM(bp)		((bp)->fw_cap & BNXT_FW_CAP_NEW_RM)
 	u32			hwrm_spec_code;
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 11/13] bnxt_en: Add hwmon sysfs support to read temperature
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (9 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 10/13] bnxt_en: Notify firmware about IF state changes Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 12/13] bnxt_en: Add DCBNL DSCP application protocol support Michael Chan
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev, Vasundhara Volam

From: Vasundhara Volam <vasundhara-v.volam@broadcom.com>

Export the temperature sensor reading via the hwmon sysfs interface.
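
Once registered, the sensor appears as a standard hwmon attribute.
A small userspace sketch for reading it follows; the hwmon index
varies from system to system, so the hwmon0 path is only an example.

#include <stdio.h>

int main(void)
{
	/* temp1_input reports millidegrees Celsius, matching the
	 * resp->temp * 1000 conversion in bnxt_show_temp() below.
	 */
	const char *path = "/sys/class/hwmon/hwmon0/temp1_input";
	FILE *f = fopen(path, "r");
	long mdeg;

	if (!f || fscanf(f, "%ld", &mdeg) != 1) {
		perror(path);
		return 1;
	}
	fclose(f);
	printf("NIC temperature: %ld.%03ld C\n", mdeg / 1000, mdeg % 1000);
	return 0;
}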

Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/Kconfig     |  8 ++++
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 62 +++++++++++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 3 files changed, 71 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
index b7aa8ad..c1d3ee9b 100644
--- a/drivers/net/ethernet/broadcom/Kconfig
+++ b/drivers/net/ethernet/broadcom/Kconfig
@@ -230,4 +230,12 @@ config BNXT_DCB
 
 	  If unsure, say N.
 
+config BNXT_HWMON
+	bool "Broadcom NetXtreme-C/E HWMON support"
+	default y
+	depends on BNXT && HWMON && !(BNXT=y && HWMON=m)
+	---help---
+	  Say Y if you want to expose the thermal sensor data on NetXtreme-C/E
+	  devices, via the hwmon sysfs interface.
+
 endif # NET_VENDOR_BROADCOM
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 56bd097..dde904b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -51,6 +51,8 @@
 #include <linux/cpu_rmap.h>
 #include <linux/cpumask.h>
 #include <net/pkt_cls.h>
+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
 
 #include "bnxt_hsi.h"
 #include "bnxt.h"
@@ -6789,6 +6791,62 @@ static void bnxt_get_wol_settings(struct bnxt *bp)
 	} while (handle && handle != 0xffff);
 }
 
+#ifdef CONFIG_BNXT_HWMON
+static ssize_t bnxt_show_temp(struct device *dev,
+			      struct device_attribute *devattr, char *buf)
+{
+	struct hwrm_temp_monitor_query_input req = {0};
+	struct hwrm_temp_monitor_query_output *resp;
+	struct bnxt *bp = dev_get_drvdata(dev);
+	u32 temp = 0;
+
+	resp = bp->hwrm_cmd_resp_addr;
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
+	mutex_lock(&bp->hwrm_cmd_lock);
+	if (!_hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
+		temp = resp->temp * 1000; /* display millidegree */
+	mutex_unlock(&bp->hwrm_cmd_lock);
+
+	return sprintf(buf, "%u\n", temp);
+}
+static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
+
+static struct attribute *bnxt_attrs[] = {
+	&sensor_dev_attr_temp1_input.dev_attr.attr,
+	NULL
+};
+ATTRIBUTE_GROUPS(bnxt);
+
+static void bnxt_hwmon_close(struct bnxt *bp)
+{
+	if (bp->hwmon_dev) {
+		hwmon_device_unregister(bp->hwmon_dev);
+		bp->hwmon_dev = NULL;
+	}
+}
+
+static void bnxt_hwmon_open(struct bnxt *bp)
+{
+	struct pci_dev *pdev = bp->pdev;
+
+	bp->hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
+							  DRV_MODULE_NAME, bp,
+							  bnxt_groups);
+	if (IS_ERR(bp->hwmon_dev)) {
+		bp->hwmon_dev = NULL;
+		dev_warn(&pdev->dev, "Cannot register hwmon device\n");
+	}
+}
+#else
+static void bnxt_hwmon_close(struct bnxt *bp)
+{
+}
+
+static void bnxt_hwmon_open(struct bnxt *bp)
+{
+}
+#endif
+
 static bool bnxt_eee_config_ok(struct bnxt *bp)
 {
 	struct ethtool_eee *eee = &bp->eee;
@@ -7040,6 +7098,9 @@ static int bnxt_open(struct net_device *dev)
 	rc = __bnxt_open_nic(bp, true, true);
 	if (rc)
 		bnxt_hwrm_if_change(bp, false);
+
+	bnxt_hwmon_open(bp);
+
 	return rc;
 }
 
@@ -7102,6 +7163,7 @@ static int bnxt_close(struct net_device *dev)
 {
 	struct bnxt *bp = netdev_priv(dev);
 
+	bnxt_hwmon_close(bp);
 	bnxt_close_nic(bp, true, true);
 	bnxt_hwrm_shutdown_link(bp);
 	bnxt_hwrm_if_change(bp, false);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 6c40b257..006726c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1411,6 +1411,7 @@ struct bnxt {
 	struct bnxt_tc_info	*tc_info;
 	struct dentry		*debugfs_pdev;
 	struct dentry		*debugfs_dim;
+	struct device		*hwmon_dev;
 };
 
 #define BNXT_RX_STATS_OFFSET(counter)			\
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 12/13] bnxt_en: Add DCBNL DSCP application protocol support.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (10 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 11/13] bnxt_en: Add hwmon sysfs support to read temperature Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-05 20:51 ` [PATCH net-next 13/13] bnxt_en: Do not use the CNP CoS queue for networking traffic Michael Chan
  2018-08-06  0:37 ` [PATCH net-next 00/13] bnxt_en: Updates for net-next David Miller
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

Expand the .ieee_setapp() and .ieee_delapp() DCBNL methods to support
DSCP.  This allows DSCP values to be mapped to user priorities instead
of using VLAN priorities.  Each DSCP mapping is added or deleted one
entry at a time using the firmware API.  The firmware call can only be
made from a PF.
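
For reference, a DSCP-to-priority mapping is expressed through the
standard struct dcb_app with the DSCP selector.  The sketch below
only builds such an entry (DSCP 46 and priority 5 are arbitrary
example values); installing it goes through the DCB netlink
interface, for example via lldpad, which is not shown.  It assumes
a uapi header new enough to define IEEE_8021QAZ_APP_SEL_DSCP.

#include <linux/dcbnl.h>
#include <stdio.h>

int main(void)
{
	/* Map DSCP 46 (EF) to user priority 5.  With this patch the
	 * driver hands one such entry at a time to firmware via
	 * HWRM_QUEUE_DSCP2PRI_CFG.
	 */
	struct dcb_app app = {
		.selector = IEEE_8021QAZ_APP_SEL_DSCP,
		.priority = 5,
		.protocol = 46,
	};

	printf("sel=%u dscp=%u prio=%u\n",
	       app.selector, app.protocol, app.priority);
	return 0;
}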

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c | 83 ++++++++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h |  6 ++
 3 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 006726c..fefa011 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1281,6 +1281,7 @@ struct bnxt {
 	struct ieee_ets		*ieee_ets;
 	u8			dcbx_cap;
 	u8			default_pri;
+	u8			max_dscp_value;
 #endif /* CONFIG_BNXT_DCB */
 
 	u32			msg_enable;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
index 00dd26d..ddc98c3 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
@@ -385,6 +385,61 @@ static int bnxt_hwrm_set_dcbx_app(struct bnxt *bp, struct dcb_app *app,
 	return rc;
 }
 
+static int bnxt_hwrm_queue_dscp_qcaps(struct bnxt *bp)
+{
+	struct hwrm_queue_dscp_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+	struct hwrm_queue_dscp_qcaps_input req = {0};
+	int rc;
+
+	if (bp->hwrm_spec_code < 0x10800 || BNXT_VF(bp))
+		return 0;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_DSCP_QCAPS, -1, -1);
+	mutex_lock(&bp->hwrm_cmd_lock);
+	rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (!rc) {
+		bp->max_dscp_value = (1 << resp->num_dscp_bits) - 1;
+		if (bp->max_dscp_value < 0x3f)
+			bp->max_dscp_value = 0;
+	}
+
+	mutex_unlock(&bp->hwrm_cmd_lock);
+	return rc;
+}
+
+static int bnxt_hwrm_queue_dscp2pri_cfg(struct bnxt *bp, struct dcb_app *app,
+					bool add)
+{
+	struct hwrm_queue_dscp2pri_cfg_input req = {0};
+	struct bnxt_dscp2pri_entry *dscp2pri;
+	dma_addr_t mapping;
+	int rc;
+
+	if (bp->hwrm_spec_code < 0x10800)
+		return 0;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_DSCP2PRI_CFG, -1, -1);
+	dscp2pri = dma_alloc_coherent(&bp->pdev->dev, sizeof(*dscp2pri),
+				      &mapping, GFP_KERNEL);
+	if (!dscp2pri)
+		return -ENOMEM;
+
+	req.src_data_addr = cpu_to_le64(mapping);
+	dscp2pri->dscp = app->protocol;
+	if (add)
+		dscp2pri->mask = 0x3f;
+	else
+		dscp2pri->mask = 0;
+	dscp2pri->pri = app->priority;
+	req.entry_cnt = cpu_to_le16(1);
+	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (rc)
+		rc = -EIO;
+	dma_free_coherent(&bp->pdev->dev, sizeof(*dscp2pri), dscp2pri,
+			  mapping);
+	return rc;
+}
+
 static int bnxt_ets_validate(struct bnxt *bp, struct ieee_ets *ets, u8 *tc)
 {
 	int total_ets_bw = 0;
@@ -551,15 +606,30 @@ static int bnxt_dcbnl_ieee_setpfc(struct net_device *dev, struct ieee_pfc *pfc)
 	return rc;
 }
 
+static int bnxt_dcbnl_ieee_dscp_app_prep(struct bnxt *bp, struct dcb_app *app)
+{
+	if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP) {
+		if (!bp->max_dscp_value)
+			return -ENOTSUPP;
+		if (app->protocol > bp->max_dscp_value)
+			return -EINVAL;
+	}
+	return 0;
+}
+
 static int bnxt_dcbnl_ieee_setapp(struct net_device *dev, struct dcb_app *app)
 {
 	struct bnxt *bp = netdev_priv(dev);
-	int rc = -EINVAL;
+	int rc;
 
 	if (!(bp->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) ||
 	    !(bp->dcbx_cap & DCB_CAP_DCBX_HOST))
 		return -EINVAL;
 
+	rc = bnxt_dcbnl_ieee_dscp_app_prep(bp, app);
+	if (rc)
+		return rc;
+
 	rc = dcb_ieee_setapp(dev, app);
 	if (rc)
 		return rc;
@@ -570,6 +640,9 @@ static int bnxt_dcbnl_ieee_setapp(struct net_device *dev, struct dcb_app *app)
 	     app->protocol == ROCE_V2_UDP_DPORT))
 		rc = bnxt_hwrm_set_dcbx_app(bp, app, true);
 
+	if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
+		rc = bnxt_hwrm_queue_dscp2pri_cfg(bp, app, true);
+
 	return rc;
 }
 
@@ -582,6 +655,10 @@ static int bnxt_dcbnl_ieee_delapp(struct net_device *dev, struct dcb_app *app)
 	    !(bp->dcbx_cap & DCB_CAP_DCBX_HOST))
 		return -EINVAL;
 
+	rc = bnxt_dcbnl_ieee_dscp_app_prep(bp, app);
+	if (rc)
+		return rc;
+
 	rc = dcb_ieee_delapp(dev, app);
 	if (rc)
 		return rc;
@@ -591,6 +668,9 @@ static int bnxt_dcbnl_ieee_delapp(struct net_device *dev, struct dcb_app *app)
 	     app->protocol == ROCE_V2_UDP_DPORT))
 		rc = bnxt_hwrm_set_dcbx_app(bp, app, false);
 
+	if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
+		rc = bnxt_hwrm_queue_dscp2pri_cfg(bp, app, false);
+
 	return rc;
 }
 
@@ -642,6 +722,7 @@ void bnxt_dcb_init(struct bnxt *bp)
 	if (bp->hwrm_spec_code < 0x10501)
 		return;
 
+	bnxt_hwrm_queue_dscp_qcaps(bp);
 	bp->dcbx_cap = DCB_CAP_DCBX_VER_IEEE;
 	if (BNXT_PF(bp) && !(bp->fw_cap & BNXT_FW_CAP_LLDP_AGENT))
 		bp->dcbx_cap |= DCB_CAP_DCBX_HOST;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h
index 69efde7..c0e16c0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h
@@ -33,6 +33,12 @@ struct bnxt_cos2bw_cfg {
 	u8			unused;
 };
 
+struct bnxt_dscp2pri_entry {
+	u8	dscp;
+	u8	mask;
+	u8	pri;
+};
+
 #define BNXT_LLQ(q_profile)	\
 	((q_profile) ==		\
 	 QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSLESS_ROCE)
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 13/13] bnxt_en: Do not use the CNP CoS queue for networking traffic.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (11 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 12/13] bnxt_en: Add DCBNL DSCP application protocol support Michael Chan
@ 2018-08-05 20:51 ` Michael Chan
  2018-08-06  0:37 ` [PATCH net-next 00/13] bnxt_en: Updates for net-next David Miller
  13 siblings, 0 replies; 15+ messages in thread
From: Michael Chan @ 2018-08-05 20:51 UTC (permalink / raw)
  To: davem; +Cc: netdev

The CNP CoS queue is reserved for internal RDMA Congestion Notification
Packets (CNP) and should not be used for a TC.  Modify the CoS queue
discovery code to skip over the CNP CoS queue and to reduce
bp->max_tc accordingly.  However, if RDMA is disabled in NVRAM, the
CNP CoS queue can be used for a TC.
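
The discovery change amounts to a compaction loop that copies each
queue entry and only advances the destination index when the entry
is kept.  A condensed standalone sketch of that idiom is below; the
profile values are made up and PROFILE_CNP is a placeholder, not the
real HSI constant.

#include <stdio.h>

#define PROFILE_CNP	3	/* placeholder, not the HSI value */

int main(void)
{
	unsigned char profiles[] = { 0, 1, PROFILE_CNP, 2 };
	unsigned char kept[4];
	int i, j = 0;

	for (i = 0; i < 4; i++) {
		kept[j] = profiles[i];
		if (profiles[i] != PROFILE_CNP)	/* skip the CNP queue */
			j++;
	}
	for (i = 0; i < j; i++)
		printf("queue %d: profile %u\n", i, kept[i]);
	printf("usable CoS queues: %d of 4\n", j);
	return 0;
}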

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 22 ++++++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h |  4 ++++
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index dde904b..d7f51ab 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -5281,7 +5281,8 @@ static int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	int rc = 0;
 	struct hwrm_queue_qportcfg_input req = {0};
 	struct hwrm_queue_qportcfg_output *resp = bp->hwrm_cmd_resp_addr;
-	u8 i, *qptr;
+	u8 i, j, *qptr;
+	bool no_rdma;
 
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_QPORTCFG, -1, -1);
 
@@ -5299,19 +5300,24 @@ static int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	if (bp->max_tc > BNXT_MAX_QUEUE)
 		bp->max_tc = BNXT_MAX_QUEUE;
 
+	no_rdma = !(bp->flags & BNXT_FLAG_ROCE_CAP);
+	qptr = &resp->queue_id0;
+	for (i = 0, j = 0; i < bp->max_tc; i++) {
+		bp->q_info[j].queue_id = *qptr++;
+		bp->q_info[j].queue_profile = *qptr++;
+		bp->tc_to_qidx[j] = j;
+		if (!BNXT_CNPQ(bp->q_info[j].queue_profile) ||
+		    (no_rdma && BNXT_PF(bp)))
+			j++;
+	}
+	bp->max_tc = max_t(u8, j, 1);
+
 	if (resp->queue_cfg_info & QUEUE_QPORTCFG_RESP_QUEUE_CFG_INFO_ASYM_CFG)
 		bp->max_tc = 1;
 
 	if (bp->max_lltc > bp->max_tc)
 		bp->max_lltc = bp->max_tc;
 
-	qptr = &resp->queue_id0;
-	for (i = 0; i < bp->max_tc; i++) {
-		bp->q_info[i].queue_id = *qptr++;
-		bp->q_info[i].queue_profile = *qptr++;
-		bp->tc_to_qidx[i] = i;
-	}
-
 qportcfg_exit:
 	mutex_unlock(&bp->hwrm_cmd_lock);
 	return rc;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h
index c0e16c0..6eed231 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.h
@@ -43,6 +43,10 @@ struct bnxt_dscp2pri_entry {
 	((q_profile) ==		\
 	 QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSLESS_ROCE)
 
+#define BNXT_CNPQ(q_profile)	\
+	((q_profile) ==		\
+	 QUEUE_QPORTCFG_RESP_QUEUE_ID0_SERVICE_PROFILE_LOSSY_ROCE_CNP)
+
 #define HWRM_STRUCT_DATA_SUBTYPE_HOST_OPERATIONAL	0x0300
 
 void bnxt_dcb_init(struct bnxt *bp);
-- 
2.5.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH net-next 00/13] bnxt_en: Updates for net-next.
  2018-08-05 20:51 [PATCH net-next 00/13] bnxt_en: Updates for net-next Michael Chan
                   ` (12 preceding siblings ...)
  2018-08-05 20:51 ` [PATCH net-next 13/13] bnxt_en: Do not use the CNP CoS queue for networking traffic Michael Chan
@ 2018-08-06  0:37 ` David Miller
  13 siblings, 0 replies; 15+ messages in thread
From: David Miller @ 2018-08-06  0:37 UTC (permalink / raw)
  To: michael.chan; +Cc: netdev

From: Michael Chan <michael.chan@broadcom.com>
Date: Sun,  5 Aug 2018 16:51:45 -0400

> This series includes the usual firmware spec update.  The driver has
> added external phy loopback test and phy setup retry logic that is
> needed during hotplug.  In the SRIOV space, the driver has added a
> new VF resource allocation mode that requires the VF driver to
> reserve resources during IFUP.  IF state changes are now propagated
> to firmware so that firmware can release some resources during IFDOWN.
> 
> ethtool method to get firmware core dump and hwmon temperature reading
> have been added.  DSCP to user priority support has been added to
> the driver's DCBNL interface, and the CoS queue logic has been refined
> to make sure that the special RDMA Congestion Notification hardware CoS
> queue will not be used for networking traffic.

Series applied, thanks Michael.

^ permalink raw reply	[flat|nested] 15+ messages in thread
