* [PATCH v4 00/24] bnxt patchset
@ 2017-09-28 21:43 Ajit Khaparde
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patch set includes some bug fixes and also adds
support for new dev_ops such as rx_queue_count,
rx/tx_descriptor_status, get/set_eeprom
and rx_queue_intr_enable/disable.
It also adds support for the flow_filter function to provide
Flow API functionality.
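
For reference, once these dev_ops are wired up, an application
reaches them through the standard ethdev calls. A minimal sketch,
assuming port 0/queue 0 (the ids and the descriptor offset are
illustrative):

	#include <rte_ethdev.h>

	/* number of descriptors currently in use on Rx queue 0 */
	int used = rte_eth_rx_queue_count(0, 0);

	/* state of the descriptor 16 entries past the next one to read */
	int status = rte_eth_rx_descriptor_status(0, 0, 16);
	if (status == RTE_ETH_RX_DESC_DONE) {
		/* a completed packet is sitting in that slot */
	}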

Please apply.

Ajit Khaparde (22):
  net/bnxt: fix HWRM_*() macros and locking
  net/bnxt: use 64-bits of address for vlan_table
  net/bnxt: fix an issue with group id calculation
  net/bnxt: fix calculation of number of pools
  net/bnxt: handle multi queue mode properly
  net/bnxt: fix rx handling and buffer allocation logic
  net/bnxt: fix an issue with broadcast traffic
  net/bnxt: fix usage of ETH_VMDQ_* flags
  net/bnxt: set checksum offload flags correctly
  net/bnxt: update status of Rx IP/L4 CKSUM
  net/bnxt: add support for xstats get by id
  net/bnxt: fix config rss update
  net/bnxt: set the hash_key_size
  net/bnxt: add support for rx_queue_count
  net/bnxt: add support for rx_descriptor_status
  net/bnxt: add support for tx_descriptor_status
  net/bnxt: add new HWRM structs to support flow filtering
  net/bnxt: add support for flow filter ops
  doc: update release notes
  net/bnxt: fix per queue stats display in xstats
  net/bnxt: prevent interrupt handler from accessing freed memory
  net/bnxt: add dev_supported_ptypes_get dev_op

Somnath Kotur (2):
  net/bnxt: add support for get/set EEPROM
  net/bnxt: add support for rx_queue_intr_enable/disable APIs

 doc/guides/nics/features/bnxt.ini      |    2 +
 doc/guides/rel_notes/release_17_11.rst |   11 +
 drivers/net/bnxt/bnxt.h                |   11 +-
 drivers/net/bnxt/bnxt_cpr.c            |    2 +
 drivers/net/bnxt/bnxt_cpr.h            |    6 +-
 drivers/net/bnxt/bnxt_ethdev.c         |  571 +++++++++++++++-
 drivers/net/bnxt/bnxt_filter.c         |  871 ++++++++++++++++++++++++-
 drivers/net/bnxt/bnxt_filter.h         |   76 +++
 drivers/net/bnxt/bnxt_hwrm.c           |  963 +++++++++++++++++++++------
 drivers/net/bnxt/bnxt_hwrm.h           |   25 +-
 drivers/net/bnxt/bnxt_irq.c            |    3 +
 drivers/net/bnxt/bnxt_irq.h            |    3 +
 drivers/net/bnxt/bnxt_nvm_defs.h       |   75 +++
 drivers/net/bnxt/bnxt_rxq.c            |  241 ++++---
 drivers/net/bnxt/bnxt_rxq.h            |    4 +
 drivers/net/bnxt/bnxt_rxr.c            |   98 ++-
 drivers/net/bnxt/bnxt_rxr.h            |   16 +
 drivers/net/bnxt/bnxt_stats.c          |   57 +-
 drivers/net/bnxt/bnxt_stats.h          |    5 +
 drivers/net/bnxt/bnxt_txr.c            |   35 +-
 drivers/net/bnxt/bnxt_txr.h            |   21 +
 drivers/net/bnxt/bnxt_vnic.c           |    1 +
 drivers/net/bnxt/bnxt_vnic.h           |    1 +
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 1122 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/rte_pmd_bnxt.c        |   15 +-
 25 files changed, 3884 insertions(+), 351 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_nvm_defs.h

-- 
2.13.5 (Apple Git-94)

* [PATCH v4 01/24] net/bnxt: fix HWRM_*() macros and locking
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Obtain the spinlock in HWRM_PREP().
Eliminate two unnecessary arguments in HWRM_PREP().
Unlock the spinlock before returning in HWRM_CHECK_RESULT().
Add a new HWRM_UNLOCK() macro.
Update usage of the three macros.
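
With this change every HWRM helper follows the same pattern.
A condensed sketch of the code below, where FOO and the struct
names stand in for a real command:

	struct hwrm_foo_input req = {.req_type = 0 };
	struct hwrm_foo_output *resp = bp->hwrm_cmd_resp_addr;

	HWRM_PREP(req, FOO);		/* grabs bp->hwrm_lock */
	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
	HWRM_CHECK_RESULT();		/* on error: unlock, return rc */
	/* read any needed fields out of resp while still locked */
	HWRM_UNLOCK();			/* releases bp->hwrm_lock */

	return rc;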

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: incorporate review comments
---
 drivers/net/bnxt/bnxt_hwrm.c | 459 +++++++++++++++++++++++++++----------------
 1 file changed, 291 insertions(+), 168 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e710e6367..fe82f0936 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -54,7 +54,7 @@
 
 #include <rte_io.h>
 
-#define HWRM_CMD_TIMEOUT		2000
+#define HWRM_CMD_TIMEOUT		10000
 
 struct bnxt_plcmodes_cfg {
 	uint32_t	flags;
@@ -95,7 +95,7 @@ static int page_roundup(size_t size)
  * command was failed by the ChiMP.
  */
 
-static int bnxt_hwrm_send_message_locked(struct bnxt *bp, void *msg,
+static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 					uint32_t msg_len)
 {
 	unsigned int i;
@@ -171,52 +171,58 @@ static int bnxt_hwrm_send_message_locked(struct bnxt *bp, void *msg,
 	return -1;
 }
 
-static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg, uint32_t msg_len)
-{
-	int rc;
-
-	rte_spinlock_lock(&bp->hwrm_lock);
-	rc = bnxt_hwrm_send_message_locked(bp, msg, msg_len);
-	rte_spinlock_unlock(&bp->hwrm_lock);
-	return rc;
-}
-
-#define HWRM_PREP(req, type, cr, resp) \
+/*
+ * HWRM_PREP() should be used to prepare *ALL* HWRM commands.  It grabs the
+ * spinlock, and does initial processing.
+ *
+ * HWRM_CHECK_RESULT() returns errors on failure and may not be used.  It
+ * releases the spinlock only if it returns.  If the regular int return codes
+ * are not used by the function, HWRM_CHECK_RESULT() should not be used
+ * directly, rather it should be copied and modified to suit the function.
+ *
+ * HWRM_UNLOCK() must be called after all response processing is completed.
+ */
+#define HWRM_PREP(req, type) do { \
+	rte_spinlock_lock(&bp->hwrm_lock); \
 	memset(bp->hwrm_cmd_resp_addr, 0, bp->max_resp_len); \
 	req.req_type = rte_cpu_to_le_16(HWRM_##type); \
-	req.cmpl_ring = rte_cpu_to_le_16(cr); \
+	req.cmpl_ring = rte_cpu_to_le_16(-1); \
 	req.seq_id = rte_cpu_to_le_16(bp->hwrm_cmd_seq++); \
 	req.target_id = rte_cpu_to_le_16(0xffff); \
-	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr)
-
-#define HWRM_CHECK_RESULT \
-	{ \
-		if (rc) { \
-			RTE_LOG(ERR, PMD, "%s failed rc:%d\n", \
-				__func__, rc); \
-			return rc; \
+	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
+} while (0)
+
+#define HWRM_CHECK_RESULT() do {\
+	if (rc) { \
+		RTE_LOG(ERR, PMD, "%s failed rc:%d\n", \
+			__func__, rc); \
+		rte_spinlock_unlock(&bp->hwrm_lock); \
+		return rc; \
+	} \
+	if (resp->error_code) { \
+		rc = rte_le_to_cpu_16(resp->error_code); \
+		if (resp->resp_len >= 16) { \
+			struct hwrm_err_output *tmp_hwrm_err_op = \
+						(void *)resp; \
+			RTE_LOG(ERR, PMD, \
+				"%s error %d:%d:%08x:%04x\n", \
+				__func__, \
+				rc, tmp_hwrm_err_op->cmd_err, \
+				rte_le_to_cpu_32(\
+					tmp_hwrm_err_op->opaque_0), \
+				rte_le_to_cpu_16(\
+					tmp_hwrm_err_op->opaque_1)); \
 		} \
-		if (resp->error_code) { \
-			rc = rte_le_to_cpu_16(resp->error_code); \
-			if (resp->resp_len >= 16) { \
-				struct hwrm_err_output *tmp_hwrm_err_op = \
-							(void *)resp; \
-				RTE_LOG(ERR, PMD, \
-					"%s error %d:%d:%08x:%04x\n", \
-					__func__, \
-					rc, tmp_hwrm_err_op->cmd_err, \
-					rte_le_to_cpu_32(\
-						tmp_hwrm_err_op->opaque_0), \
-					rte_le_to_cpu_16(\
-						tmp_hwrm_err_op->opaque_1)); \
-			} \
-			else { \
-				RTE_LOG(ERR, PMD, \
-					"%s error %d\n", __func__, rc); \
-			} \
-			return rc; \
+		else { \
+			RTE_LOG(ERR, PMD, \
+				"%s error %d\n", __func__, rc); \
 		} \
-	}
+		rte_spinlock_unlock(&bp->hwrm_lock); \
+		return rc; \
+	} \
+} while (0)
+
+#define HWRM_UNLOCK()		rte_spinlock_unlock(&bp->hwrm_lock)
 
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
@@ -224,13 +230,14 @@ int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_cfa_l2_set_rx_mask_input req = {.req_type = 0 };
 	struct hwrm_cfa_l2_set_rx_mask_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, -1, resp);
+	HWRM_PREP(req, CFA_L2_SET_RX_MASK);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.mask = 0;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -245,7 +252,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 	struct hwrm_cfa_l2_set_rx_mask_output *resp = bp->hwrm_cmd_resp_addr;
 	uint32_t mask = 0;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, -1, resp);
+	HWRM_PREP(req, CFA_L2_SET_RX_MASK);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
 	/* FIXME add multicast flag, when multicast adding options is supported
@@ -278,7 +285,8 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -307,7 +315,7 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 				return 0;
 		}
 	}
-	HWRM_PREP(req, CFA_VLAN_ANTISPOOF_CFG, -1, resp);
+	HWRM_PREP(req, CFA_VLAN_ANTISPOOF_CFG);
 	req.fid = rte_cpu_to_le_16(fid);
 
 	req.vlan_tag_mask_tbl_addr =
@@ -316,7 +324,8 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -331,13 +340,14 @@ int bnxt_hwrm_clear_filter(struct bnxt *bp,
 	if (filter->fw_l2_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_L2_FILTER_FREE, -1, resp);
+	HWRM_PREP(req, CFA_L2_FILTER_FREE);
 
 	req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	filter->fw_l2_filter_id = -1;
 
@@ -356,7 +366,7 @@ int bnxt_hwrm_set_filter(struct bnxt *bp,
 	if (filter->fw_l2_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_L2_FILTER_ALLOC, -1, resp);
+	HWRM_PREP(req, CFA_L2_FILTER_ALLOC);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -387,9 +397,10 @@ int bnxt_hwrm_set_filter(struct bnxt *bp,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	filter->fw_l2_filter_id = rte_le_to_cpu_64(resp->l2_filter_id);
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -402,13 +413,13 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	uint16_t new_max_vfs;
 	int i;
 
-	HWRM_PREP(req, FUNC_QCAPS, -1, resp);
+	HWRM_PREP(req, FUNC_QCAPS);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	bp->max_ring_grps = rte_le_to_cpu_32(resp->max_hw_ring_grps);
 	if (BNXT_PF(bp)) {
@@ -469,6 +480,7 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx);
 	if (BNXT_PF(bp))
 		bp->pf.total_vnics = rte_le_to_cpu_16(resp->max_vnics);
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -479,13 +491,14 @@ int bnxt_hwrm_func_reset(struct bnxt *bp)
 	struct hwrm_func_reset_input req = {.req_type = 0 };
 	struct hwrm_func_reset_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_RESET, -1, resp);
+	HWRM_PREP(req, FUNC_RESET);
 
 	req.enables = rte_cpu_to_le_32(0);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -499,7 +512,7 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 	if (bp->flags & BNXT_FLAG_REGISTERED)
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_RGTR, -1, resp);
+	HWRM_PREP(req, FUNC_DRV_RGTR);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER |
 			HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD);
 	req.ver_maj = RTE_VER_YEAR;
@@ -519,7 +532,8 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	bp->flags |= BNXT_FLAG_REGISTERED;
 
@@ -538,19 +552,15 @@ int bnxt_hwrm_ver_get(struct bnxt *bp)
 	uint32_t dev_caps_cfg;
 
 	bp->max_req_len = HWRM_MAX_REQ_LEN;
-	HWRM_PREP(req, VER_GET, -1, resp);
+	HWRM_PREP(req, VER_GET);
 
 	req.hwrm_intf_maj = HWRM_VERSION_MAJOR;
 	req.hwrm_intf_min = HWRM_VERSION_MINOR;
 	req.hwrm_intf_upd = HWRM_VERSION_UPDATE;
 
-	/*
-	 * Hold the lock since we may be adjusting the response pointers.
-	 */
-	rte_spinlock_lock(&bp->hwrm_lock);
-	rc = bnxt_hwrm_send_message_locked(bp, &req, sizeof(req));
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	RTE_LOG(INFO, PMD, "%d.%d.%d:%d.%d.%d\n",
 		resp->hwrm_intf_maj, resp->hwrm_intf_min,
@@ -651,7 +661,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp)
 	}
 
 error:
-	rte_spinlock_unlock(&bp->hwrm_lock);
+	HWRM_UNLOCK();
 	return rc;
 }
 
@@ -664,12 +674,13 @@ int bnxt_hwrm_func_driver_unregister(struct bnxt *bp, uint32_t flags)
 	if (!(bp->flags & BNXT_FLAG_REGISTERED))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_UNRGTR, -1, resp);
+	HWRM_PREP(req, FUNC_DRV_UNRGTR);
 	req.flags = flags;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	bp->flags &= ~BNXT_FLAG_REGISTERED;
 
@@ -685,7 +696,7 @@ static int bnxt_hwrm_port_phy_cfg(struct bnxt *bp, struct bnxt_link_info *conf)
 	uint32_t link_speed_mask =
 		HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEED_MASK;
 
-	HWRM_PREP(req, PORT_PHY_CFG, -1, resp);
+	HWRM_PREP(req, PORT_PHY_CFG);
 
 	if (conf->link_up) {
 		req.flags = rte_cpu_to_le_32(conf->phy_flags);
@@ -729,7 +740,8 @@ static int bnxt_hwrm_port_phy_cfg(struct bnxt *bp, struct bnxt_link_info *conf)
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -741,11 +753,11 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	struct hwrm_port_phy_qcfg_input req = {0};
 	struct hwrm_port_phy_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, PORT_PHY_QCFG, -1, resp);
+	HWRM_PREP(req, PORT_PHY_QCFG);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	link_info->phy_link_status = resp->link;
 	link_info->link_up =
@@ -765,6 +777,8 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	link_info->phy_ver[1] = resp->phy_min;
 	link_info->phy_ver[2] = resp->phy_bld;
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -774,11 +788,11 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	struct hwrm_queue_qportcfg_input req = {.req_type = 0 };
 	struct hwrm_queue_qportcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, QUEUE_QPORTCFG, -1, resp);
+	HWRM_PREP(req, QUEUE_QPORTCFG);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 #define GET_QUEUE_INFO(x) \
 	bp->cos_queue[x].id = resp->queue_id##x; \
@@ -793,6 +807,8 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	GET_QUEUE_INFO(6);
 	GET_QUEUE_INFO(7);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -806,7 +822,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	struct hwrm_ring_alloc_input req = {.req_type = 0 };
 	struct hwrm_ring_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_ALLOC, -1, resp);
+	HWRM_PREP(req, RING_ALLOC);
 
 	req.page_tbl_addr = rte_cpu_to_le_64(ring->bd_dma);
 	req.fbo = rte_cpu_to_le_32(0);
@@ -837,6 +853,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	default:
 		RTE_LOG(ERR, PMD, "hwrm alloc invalid ring type %d\n",
 			ring_type);
+		HWRM_UNLOCK();
 		return -1;
 	}
 	req.enables = rte_cpu_to_le_32(enables);
@@ -850,22 +867,27 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 		case HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL:
 			RTE_LOG(ERR, PMD,
 				"hwrm_ring_alloc cp failed. rc:%d\n", rc);
+			HWRM_UNLOCK();
 			return rc;
 		case HWRM_RING_FREE_INPUT_RING_TYPE_RX:
 			RTE_LOG(ERR, PMD,
 				"hwrm_ring_alloc rx failed. rc:%d\n", rc);
+			HWRM_UNLOCK();
 			return rc;
 		case HWRM_RING_FREE_INPUT_RING_TYPE_TX:
 			RTE_LOG(ERR, PMD,
 				"hwrm_ring_alloc tx failed. rc:%d\n", rc);
+			HWRM_UNLOCK();
 			return rc;
 		default:
 			RTE_LOG(ERR, PMD, "Invalid ring. rc:%d\n", rc);
+			HWRM_UNLOCK();
 			return rc;
 		}
 	}
 
 	ring->fw_ring_id = rte_le_to_cpu_16(resp->ring_id);
+	HWRM_UNLOCK();
 	return rc;
 }
 
@@ -876,7 +898,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 	struct hwrm_ring_free_input req = {.req_type = 0 };
 	struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_FREE, -1, resp);
+	HWRM_PREP(req, RING_FREE);
 
 	req.ring_type = ring_type;
 	req.ring_id = rte_cpu_to_le_16(ring->fw_ring_id);
@@ -886,6 +908,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 	if (rc || resp->error_code) {
 		if (rc == 0 && resp->error_code)
 			rc = rte_le_to_cpu_16(resp->error_code);
+		HWRM_UNLOCK();
 
 		switch (ring_type) {
 		case HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL:
@@ -905,6 +928,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 			return rc;
 		}
 	}
+	HWRM_UNLOCK();
 	return 0;
 }
 
@@ -914,7 +938,7 @@ int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_alloc_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_ALLOC, -1, resp);
+	HWRM_PREP(req, RING_GRP_ALLOC);
 
 	req.cr = rte_cpu_to_le_16(bp->grp_info[idx].cp_fw_ring_id);
 	req.rr = rte_cpu_to_le_16(bp->grp_info[idx].rx_fw_ring_id);
@@ -923,11 +947,13 @@ int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx)
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	bp->grp_info[idx].fw_grp_id =
 	    rte_le_to_cpu_16(resp->ring_group_id);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -937,13 +963,14 @@ int bnxt_hwrm_ring_grp_free(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_free_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_FREE, -1, resp);
+	HWRM_PREP(req, RING_GRP_FREE);
 
 	req.ring_group_id = rte_cpu_to_le_16(bp->grp_info[idx].fw_grp_id);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	bp->grp_info[idx].fw_grp_id = INVALID_HW_RING_ID;
 	return rc;
@@ -958,13 +985,14 @@ int bnxt_hwrm_stat_clear(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 	if (cpr->hw_stats_ctx_id == (uint32_t)HWRM_NA_SIGNATURE)
 		return rc;
 
-	HWRM_PREP(req, STAT_CTX_CLR_STATS, -1, resp);
+	HWRM_PREP(req, STAT_CTX_CLR_STATS);
 
 	req.stat_ctx_id = rte_cpu_to_le_16(cpr->hw_stats_ctx_id);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -976,7 +1004,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_alloc_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_ALLOC, -1, resp);
+	HWRM_PREP(req, STAT_CTX_ALLOC);
 
 	req.update_period_ms = rte_cpu_to_le_32(0);
 
@@ -985,10 +1013,12 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	cpr->hw_stats_ctx_id = rte_le_to_cpu_16(resp->stat_ctx_id);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -999,13 +1029,14 @@ int bnxt_hwrm_stat_ctx_free(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_free_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_FREE, -1, resp);
+	HWRM_PREP(req, STAT_CTX_FREE);
 
 	req.stat_ctx_id = rte_cpu_to_le_16(cpr->hw_stats_ctx_id);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -1027,15 +1058,16 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	vnic->lb_rule = (uint16_t)HWRM_NA_SIGNATURE;
 	vnic->mru = bp->eth_dev->data->mtu + ETHER_HDR_LEN +
 				ETHER_CRC_LEN + VLAN_TAG_SIZE;
-	HWRM_PREP(req, VNIC_ALLOC, -1, resp);
+	HWRM_PREP(req, VNIC_ALLOC);
 
 	if (vnic->func_default)
 		req.flags = HWRM_VNIC_ALLOC_INPUT_FLAGS_DEFAULT;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	vnic->fw_vnic_id = rte_le_to_cpu_16(resp->vnic_id);
+	HWRM_UNLOCK();
 	RTE_LOG(DEBUG, PMD, "VNIC ID %x\n", vnic->fw_vnic_id);
 	return rc;
 }
@@ -1048,13 +1080,13 @@ static int bnxt_hwrm_vnic_plcmodes_qcfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_qcfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_PLCMODES_QCFG, -1, resp);
+	HWRM_PREP(req, VNIC_PLCMODES_QCFG);
 
 	req.vnic_id = rte_cpu_to_le_32(vnic->fw_vnic_id);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	pmode->flags = rte_le_to_cpu_32(resp->flags);
 	/* dflt_vnic bit doesn't exist in the _cfg command */
@@ -1063,6 +1095,8 @@ static int bnxt_hwrm_vnic_plcmodes_qcfg(struct bnxt *bp,
 	pmode->hds_offset = rte_le_to_cpu_16(resp->hds_offset);
 	pmode->hds_threshold = rte_le_to_cpu_16(resp->hds_threshold);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -1074,7 +1108,7 @@ static int bnxt_hwrm_vnic_plcmodes_cfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, -1, resp);
+	HWRM_PREP(req, VNIC_PLCMODES_CFG);
 
 	req.vnic_id = rte_cpu_to_le_32(vnic->fw_vnic_id);
 	req.flags = rte_cpu_to_le_32(pmode->flags);
@@ -1089,7 +1123,8 @@ static int bnxt_hwrm_vnic_plcmodes_cfg(struct bnxt *bp,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -1111,7 +1146,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	if (rc)
 		return rc;
 
-	HWRM_PREP(req, VNIC_CFG, -1, resp);
+	HWRM_PREP(req, VNIC_CFG);
 
 	/* Only RSS support for now TBD: COS & LB */
 	req.enables =
@@ -1151,7 +1186,8 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	rc = bnxt_hwrm_vnic_plcmodes_cfg(bp, vnic, &pmodes);
 
@@ -1169,7 +1205,7 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		RTE_LOG(DEBUG, PMD, "VNIC QCFG ID %d\n", vnic->fw_vnic_id);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_QCFG, -1, resp);
+	HWRM_PREP(req, VNIC_QCFG);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_VNIC_QCFG_INPUT_ENABLES_VF_ID_VALID);
@@ -1178,7 +1214,7 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	vnic->dflt_ring_grp = rte_le_to_cpu_16(resp->dflt_ring_grp);
 	vnic->rss_rule = rte_le_to_cpu_16(resp->rss_rule);
@@ -1198,6 +1234,8 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 	vnic->rss_dflt_cr = rte_le_to_cpu_32(resp->flags) &
 			HWRM_VNIC_QCFG_OUTPUT_FLAGS_RSS_DFLT_CR_MODE;
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -1208,13 +1246,14 @@ int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_vnic_rss_cos_lb_ctx_alloc_output *resp =
 						bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_ALLOC, -1, resp);
+	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_ALLOC);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	vnic->rss_rule = rte_le_to_cpu_16(resp->rss_cos_lb_ctx_id);
+	HWRM_UNLOCK();
 	RTE_LOG(DEBUG, PMD, "VNIC RSS Rule %x\n", vnic->rss_rule);
 
 	return rc;
@@ -1231,13 +1270,14 @@ int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		RTE_LOG(DEBUG, PMD, "VNIC RSS Rule %x\n", vnic->rss_rule);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE, -1, resp);
+	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE);
 
 	req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(vnic->rss_rule);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	vnic->rss_rule = INVALID_HW_RING_ID;
 
@@ -1255,13 +1295,14 @@ int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_FREE, -1, resp);
+	HWRM_PREP(req, VNIC_FREE);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	vnic->fw_vnic_id = INVALID_HW_RING_ID;
 	return rc;
@@ -1274,7 +1315,7 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 	struct hwrm_vnic_rss_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_RSS_CFG, -1, resp);
+	HWRM_PREP(req, VNIC_RSS_CFG);
 
 	req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
 
@@ -1286,7 +1327,8 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -1299,7 +1341,7 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint16_t size;
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, -1, resp);
+	HWRM_PREP(req, VNIC_PLCMODES_CFG);
 
 	req.flags = rte_cpu_to_le_32(
 			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT);
@@ -1315,7 +1357,8 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -1327,7 +1370,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 	struct hwrm_vnic_tpa_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_tpa_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_TPA_CFG, -1, resp);
+	HWRM_PREP(req, VNIC_TPA_CFG);
 
 	if (enable) {
 		req.enables = rte_cpu_to_le_32(
@@ -1350,7 +1393,8 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -1367,10 +1411,11 @@ int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, const uint8_t *mac_addr)
 	memcpy(req.dflt_mac_addr, mac_addr, sizeof(req.dflt_mac_addr));
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	bp->pf.vf_info[vf].random_mac = false;
 
@@ -1384,17 +1429,19 @@ int bnxt_hwrm_func_qstats_tx_drop(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, -1, resp);
+	HWRM_PREP(req, FUNC_QSTATS);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	if (dropped)
 		*dropped = rte_le_to_cpu_64(resp->tx_drop_pkts);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -1405,13 +1452,13 @@ int bnxt_hwrm_func_qstats(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, -1, resp);
+	HWRM_PREP(req, FUNC_QSTATS);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	stats->ipackets = rte_le_to_cpu_64(resp->rx_ucast_pkts);
 	stats->ipackets += rte_le_to_cpu_64(resp->rx_mcast_pkts);
@@ -1432,6 +1479,8 @@ int bnxt_hwrm_func_qstats(struct bnxt *bp, uint16_t fid,
 
 	stats->imissed = rte_le_to_cpu_64(resp->rx_drop_pkts);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -1441,13 +1490,14 @@ int bnxt_hwrm_func_clr_stats(struct bnxt *bp, uint16_t fid)
 	struct hwrm_func_clr_stats_input req = {.req_type = 0};
 	struct hwrm_func_clr_stats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_CLR_STATS, -1, resp);
+	HWRM_PREP(req, FUNC_CLR_STATS);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2038,12 +2088,12 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp)
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_QCFG, -1, resp);
+	HWRM_PREP(req, FUNC_QCFG);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	/* Hard Coded.. 0xfff VLAN ID mask */
 	bp->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff;
@@ -2059,6 +2109,8 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp)
 		break;
 	}
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2118,10 +2170,12 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp, int tx_rings)
 	req.num_hw_ring_grps = rte_cpu_to_le_16(bp->max_ring_grps);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2187,7 +2241,7 @@ static void reserve_resources_from_vf(struct bnxt *bp,
 	int rc;
 
 	/* Get the actual allocated values now */
-	HWRM_PREP(req, FUNC_QCAPS, -1, resp);
+	HWRM_PREP(req, FUNC_QCAPS);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
@@ -2212,6 +2266,8 @@ static void reserve_resources_from_vf(struct bnxt *bp,
 	 */
 	//bp->max_vnics -= rte_le_to_cpu_16(esp->max_vnics);
 	bp->max_ring_grps -= rte_le_to_cpu_16(resp->max_hw_ring_grps);
+
+	HWRM_UNLOCK();
 }
 
 int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
@@ -2221,7 +2277,7 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
 	int rc;
 
 	/* Check for zero MAC address */
-	HWRM_PREP(req, FUNC_QCFG, -1, resp);
+	HWRM_PREP(req, FUNC_QCFG);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 	if (rc) {
@@ -2232,7 +2288,11 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
 		RTE_LOG(ERR, PMD, "hwrm_func_qcfg error %d\n", rc);
 		return -1;
 	}
-	return rte_le_to_cpu_16(resp->vlan);
+	rc = rte_le_to_cpu_16(resp->vlan);
+
+	HWRM_UNLOCK();
+
+	return rc;
 }
 
 static int update_pf_resource_max(struct bnxt *bp)
@@ -2242,15 +2302,17 @@ static int update_pf_resource_max(struct bnxt *bp)
 	int rc;
 
 	/* And copy the allocated numbers into the pf struct */
-	HWRM_PREP(req, FUNC_QCFG, -1, resp);
+	HWRM_PREP(req, FUNC_QCFG);
 	req.fid = rte_cpu_to_le_16(0xffff);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	/* Only TX ring value reflects actual allocation? TODO */
 	bp->max_tx_rings = rte_le_to_cpu_16(resp->alloc_tx_rings);
 	bp->pf.evb_mode = resp->evb_mode;
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2342,7 +2404,7 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
 	for (i = 0; i < num_vfs; i++) {
 		add_random_mac_if_needed(bp, &req, i);
 
-		HWRM_PREP(req, FUNC_CFG, -1, resp);
+		HWRM_PREP(req, FUNC_CFG);
 		req.flags = rte_cpu_to_le_32(bp->pf.vf_info[i].func_cfg_flags);
 		req.fid = rte_cpu_to_le_16(bp->pf.vf_info[i].fid);
 		rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
@@ -2357,9 +2419,12 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
 			RTE_LOG(ERR, PMD,
 				"Not all VFs available. (%d, %d)\n",
 				rc, resp->error_code);
+			HWRM_UNLOCK();
 			break;
 		}
 
+		HWRM_UNLOCK();
+
 		reserve_resources_from_vf(bp, &req, i);
 		bp->pf.active_vfs++;
 		bnxt_hwrm_func_clr_stats(bp, bp->pf.vf_info[i].fid);
@@ -2392,14 +2457,15 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_EVB_MODE);
 	req.evb_mode = bp->pf.evb_mode;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2411,11 +2477,11 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_ALLOC, -1, resp);
+	HWRM_PREP(req, TUNNEL_DST_PORT_ALLOC);
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_val = port;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	switch (tunnel_type) {
 	case HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN:
@@ -2429,6 +2495,9 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 	default:
 		break;
 	}
+
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2439,11 +2508,14 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_free_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_FREE, -1, resp);
+	HWRM_PREP(req, TUNNEL_DST_PORT_FREE);
+
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_id = rte_cpu_to_be_16(port);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2455,11 +2527,14 @@ int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
+
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.flags = rte_cpu_to_le_32(flags);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2482,7 +2557,7 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
 	struct hwrm_func_buf_rgtr_input req = {.req_type = 0 };
 	struct hwrm_func_buf_rgtr_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_BUF_RGTR, -1, resp);
+	HWRM_PREP(req, FUNC_BUF_RGTR);
 
 	req.req_buf_num_pages = rte_cpu_to_le_16(1);
 	req.req_buf_page_size = rte_cpu_to_le_16(
@@ -2498,7 +2573,8 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2509,11 +2585,12 @@ int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp)
 	struct hwrm_func_buf_unrgtr_input req = {.req_type = 0 };
 	struct hwrm_func_buf_unrgtr_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_BUF_UNRGTR, -1, resp);
+	HWRM_PREP(req, FUNC_BUF_UNRGTR);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2524,7 +2601,8 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
+
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags);
 	req.enables = rte_cpu_to_le_32(
@@ -2532,7 +2610,9 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
 	req.async_event_cr = rte_cpu_to_le_16(
 			bp->def_cp_ring->cp_ring_struct->fw_ring_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2543,13 +2623,16 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_vf_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_VF_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_VF_CFG);
+
 	req.enables = rte_cpu_to_le_32(
 			HWRM_FUNC_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
 	req.async_event_cr = rte_cpu_to_le_16(
 			bp->def_cp_ring->cp_ring_struct->fw_ring_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2562,7 +2645,7 @@ int bnxt_hwrm_set_default_vlan(struct bnxt *bp, int vf, uint8_t is_vf)
 	uint32_t func_cfg_flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
 
 	if (is_vf) {
 		dflt_vlan = bp->pf.vf_info[vf].dflt_vlan;
@@ -2580,7 +2663,9 @@ int bnxt_hwrm_set_default_vlan(struct bnxt *bp, int vf, uint8_t is_vf)
 	req.dflt_vlan = rte_cpu_to_le_16(dflt_vlan);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2592,13 +2677,16 @@ int bnxt_hwrm_func_bw_cfg(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
+
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(enables);
 	req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags);
 	req.max_bw = rte_cpu_to_le_32(max_bw);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2609,14 +2697,17 @@ int bnxt_hwrm_set_vf_vlan(struct bnxt *bp, int vf)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
+
 	req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_DFLT_VLAN);
 	req.dflt_vlan = rte_cpu_to_le_16(bp->pf.vf_info[vf].dflt_vlan);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2631,14 +2722,15 @@ int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, REJECT_FWD_RESP, -1, resp);
+	HWRM_PREP(req, REJECT_FWD_RESP);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2650,13 +2742,17 @@ int bnxt_hwrm_func_qcfg_vf_default_mac(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_QCFG, -1, resp);
+	HWRM_PREP(req, FUNC_QCFG);
+
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	memcpy(mac->addr_bytes, resp->mac_address, ETHER_ADDR_LEN);
+
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2670,14 +2766,15 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, EXEC_FWD_RESP, -1, resp);
+	HWRM_PREP(req, EXEC_FWD_RESP);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2689,13 +2786,13 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
 	struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_QUERY, -1, resp);
+	HWRM_PREP(req, STAT_CTX_QUERY);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cid);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT;
+	HWRM_CHECK_RESULT();
 
 	stats->q_ipackets[idx] = rte_le_to_cpu_64(resp->rx_ucast_pkts);
 	stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_mcast_pkts);
@@ -2715,6 +2812,8 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	stats->q_errors[idx] += rte_le_to_cpu_64(resp->tx_err_pkts);
 	stats->q_errors[idx] += rte_le_to_cpu_64(resp->rx_drop_pkts);
 
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2728,12 +2827,16 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp)
 	if (!(bp->flags & BNXT_FLAG_PORT_STATS))
 		return 0;
 
-	HWRM_PREP(req, PORT_QSTATS, -1, resp);
+	HWRM_PREP(req, PORT_QSTATS);
+
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	req.tx_stat_host_addr = rte_cpu_to_le_64(bp->hw_tx_port_stats_map);
 	req.rx_stat_host_addr = rte_cpu_to_le_64(bp->hw_rx_port_stats_map);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2747,10 +2850,14 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	if (!(bp->flags & BNXT_FLAG_PORT_STATS))
 		return 0;
 
-	HWRM_PREP(req, PORT_CLR_STATS, -1, resp);
+	HWRM_PREP(req, PORT_CLR_STATS);
+
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2763,10 +2870,11 @@ int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 	if (BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_LED_QCAPS, -1, resp);
+	HWRM_PREP(req, PORT_LED_QCAPS);
 	req.port_id = bp->pf.port_id;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
 
 	if (resp->num_leds > 0 && resp->num_leds < BNXT_MAX_LED) {
 		unsigned int i;
@@ -2786,6 +2894,9 @@ int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 			}
 		}
 	}
+
+	HWRM_UNLOCK();
+
 	return rc;
 }
 
@@ -2801,7 +2912,8 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	if (!bp->num_leds || BNXT_VF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, PORT_LED_CFG, -1, resp);
+	HWRM_PREP(req, PORT_LED_CFG);
+
 	if (led_on) {
 		led_state = HWRM_PORT_LED_CFG_INPUT_LED0_STATE_BLINKALT;
 		duration = rte_cpu_to_le_16(500);
@@ -2819,7 +2931,9 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	}
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
@@ -2857,28 +2971,34 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
 	int rc;
 
 	/* First query all VNIC ids */
-	HWRM_PREP(req, FUNC_VF_VNIC_IDS_QUERY, -1, resp_vf_vnic_ids);
+	HWRM_PREP(req, FUNC_VF_VNIC_IDS_QUERY);
 
 	req.vf_id = rte_cpu_to_le_16(bp->pf.first_vf_id + vf);
 	req.max_vnic_id_cnt = rte_cpu_to_le_32(bp->pf.total_vnics);
 	req.vnic_id_tbl_addr = rte_cpu_to_le_64(rte_mem_virt2phy(vnic_ids));
 
 	if (req.vnic_id_tbl_addr == 0) {
+		HWRM_UNLOCK();
 		RTE_LOG(ERR, PMD,
 		"unable to map VNIC ID table address to physical memory\n");
 		return -ENOMEM;
 	}
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 	if (rc) {
+		HWRM_UNLOCK();
 		RTE_LOG(ERR, PMD, "hwrm_func_vf_vnic_query failed rc:%d\n", rc);
 		return -1;
 	} else if (resp->error_code) {
 		rc = rte_le_to_cpu_16(resp->error_code);
+		HWRM_UNLOCK();
 		RTE_LOG(ERR, PMD, "hwrm_func_vf_vnic_query error %d\n", rc);
 		return -1;
 	}
+	rc = rte_le_to_cpu_32(resp->vnic_id_cnt);
 
-	return rte_le_to_cpu_32(resp->vnic_id_cnt);
+	HWRM_UNLOCK();
+
+	return rc;
 }
 
 /*
@@ -2943,7 +3063,8 @@ int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, -1, resp);
+	HWRM_PREP(req, FUNC_CFG);
+
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(
 			HWRM_FUNC_CFG_INPUT_ENABLES_VLAN_ANTISPOOF_MODE);
@@ -2951,7 +3072,9 @@ int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 		HWRM_FUNC_CFG_INPUT_VLAN_ANTISPOOF_MODE_VALIDATE_VLAN :
 		HWRM_FUNC_CFG_INPUT_VLAN_ANTISPOOF_MODE_NOCHECK;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
-	HWRM_CHECK_RESULT;
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
 
 	return rc;
 }
-- 
2.13.5 (Apple Git-94)

* [PATCH v4 02/24] net/bnxt: use 64-bits of address for vlan_table
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

We are wrongly using just 16 bits of the address returned by
rte_mem_virt2phy while filling in the vlan table address, instead
of all 64 bits, so the upper 48 bits of the DMA address were
silently dropped.
Most likely a copy-paste error.

Fixes: 36735a932ca7 ("net/bnxt: support set VF QOS and MAC anti spoof")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index fe82f0936..ceb4ab29b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -276,7 +276,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 	if (vlan_table) {
 		if (!(mask & HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLAN_NONVLAN))
 			mask |= HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLANONLY;
-		req.vlan_tag_tbl_addr = rte_cpu_to_le_16(
+		req.vlan_tag_tbl_addr = rte_cpu_to_le_64(
 			 rte_mem_virt2phy(vlan_table));
 		req.num_vlan_tags = rte_cpu_to_le_32((uint32_t)vlan_count);
 	}
-- 
2.13.5 (Apple Git-94)

* [PATCH v4 03/24] net/bnxt: fix an issue with group id calculation
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

start_grp_id is incremented wrongly: it was set to end_grp_id + 1,
which skips one ring group per pool (with 4 queues per group, the
second pool would start at group 5 instead of group 4). Set it to
end_grp_id instead.

Fixes: daef48efe5e5 ("net/bnxt: support set MTU")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 0793820b1..ef5e47d4f 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -168,7 +168,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			 */
 			STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
 
-			start_grp_id = end_grp_id + 1;
+			start_grp_id = end_grp_id;
 			end_grp_id += nb_q_per_grp;
 		}
 		goto out;
-- 
2.13.5 (Apple Git-94)

* [PATCH v4 04/24] net/bnxt: fix calculation of number of pools
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

The calculation of the number of pools is wrong: the value
computed from the device limits is immediately overwritten with
ETH_64_POOLS, so e.g. a device limited to 16 VNICs would still be
set up for 64 pools.
Accordingly, fix the size of the ff_pool array.
Fix the log message as well.

Fixes: 4cfe399f6550 ("net/bnxt: support to set VF rxmode")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h     | 2 +-
 drivers/net/bnxt/bnxt_rxq.c | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 405d94deb..357950509 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -217,7 +217,7 @@ struct bnxt {
 	STAILQ_HEAD(, bnxt_filter_info)	free_filter_list;
 
 	/* VNIC pointer for flow filter (VMDq) pools */
-#define MAX_FF_POOLS	ETH_64_POOLS
+#define MAX_FF_POOLS	256
 	STAILQ_HEAD(, bnxt_vnic_info)	ff_pool[MAX_FF_POOLS];
 
 	struct bnxt_irq         *irq_tbl;
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index ef5e47d4f..441e543eb 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -125,8 +125,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			    RTE_MIN(bp->max_l2_ctx,
 			     RTE_MIN(bp->max_rsscos_ctx, ETH_64_POOLS)));
 			RTE_LOG(ERR, PMD,
-				"VMDq pool not set, defaulted to 64\n");
-			pools = ETH_64_POOLS;
+				"VMDq pool not set, defaulted to %d\n", pools);
 		}
 		nb_q_per_grp = bp->rx_cp_nr_rings / pools;
 		start_grp_id = 0;
-- 
2.13.5 (Apple Git-94)

* [PATCH v4 05/24] net/bnxt: handle multi queue mode properly
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

We are currently not handling the multi-queue RX/RSS modes
correctly. If RSS is not requested, create one VNIC per Rx queue.
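
For example, an application selects these modes in rte_eth_conf
before calling rte_eth_dev_configure(). A minimal sketch; the
chosen hash types are illustrative:

	struct rte_eth_conf conf = { 0 };

	conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_hf =
		ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP;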

Fixes: 6133f207970c ("net/bnxt: add Rx queue create/destroy")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c |  11 ++-
 drivers/net/bnxt/bnxt_rxq.c    | 197 +++++++++++++++++++++--------------------
 2 files changed, 111 insertions(+), 97 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c9d11228b..6e5cb8854 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -360,6 +360,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	uint16_t max_vnics, i, j, vpool, vrxq;
+	unsigned int max_rx_rings;
 
 	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
@@ -370,8 +371,12 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	/* PF/VF specifics */
 	if (BNXT_PF(bp))
 		dev_info->max_vfs = bp->pdev->max_vfs;
-	dev_info->max_rx_queues = bp->max_rx_rings;
-	dev_info->max_tx_queues = bp->max_tx_rings;
+	max_rx_rings = RTE_MIN(bp->max_vnics, RTE_MIN(bp->max_l2_ctx,
+						RTE_MIN(bp->max_rsscos_ctx,
+						bp->max_stat_ctx)));
+	/* For the sake of symmetry, max_rx_queues = max_tx_queues */
+	dev_info->max_rx_queues = max_rx_rings;
+	dev_info->max_tx_queues = max_rx_rings;
 	dev_info->reta_size = bp->max_rsscos_ctx;
 	max_vnics = bp->max_vnics;
 
@@ -827,7 +832,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 */
 	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
-			return -EINVAL;
+			RTE_LOG(ERR, PMD, "Hash type NONE\n");
 	} else {
 		if (rss_conf->rss_hf & BNXT_ETH_RSS_SUPPORT)
 			return -EINVAL;
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 441e543eb..8459fcc09 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -60,11 +60,13 @@ void bnxt_free_rxq_stats(struct bnxt_rx_queue *rxq)
 int bnxt_mq_rx_configure(struct bnxt *bp)
 {
 	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
-	unsigned int i, j, nb_q_per_grp, ring_idx;
-	int start_grp_id, end_grp_id, rc = 0;
+	unsigned int i, j, nb_q_per_grp = 1, ring_idx = 0;
+	int start_grp_id, end_grp_id = 1, rc = 0;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt_filter_info *filter;
+	enum rte_eth_nb_pools pools = bp->rx_cp_nr_rings, max_pools = 0;
 	struct bnxt_rx_queue *rxq;
+	bool rss_dflt_cr = false;
 
 	bp->nr_vnics = 0;
 
@@ -98,116 +100,123 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	}
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
-		enum rte_eth_nb_pools pools;
+		const struct rte_eth_vmdq_rx_conf *conf =
+		    &dev_conf->rx_adv_conf.vmdq_rx_conf;
+
 
 		switch (dev_conf->rxmode.mq_mode) {
 		case ETH_MQ_RX_VMDQ_RSS:
 		case ETH_MQ_RX_VMDQ_ONLY:
-			{
-				const struct rte_eth_vmdq_rx_conf *conf =
-				    &dev_conf->rx_adv_conf.vmdq_rx_conf;
-
-				/* ETH_8/64_POOLs */
-				pools = conf->nb_queue_pools;
-				break;
-			}
+			/* ETH_8/64_POOLs */
+			pools = conf->nb_queue_pools;
+			/* For each pool, allocate MACVLAN CFA rule & VNIC */
+			max_pools = RTE_MIN(bp->max_vnics,
+					    RTE_MIN(bp->max_l2_ctx,
+					    RTE_MIN(bp->max_rsscos_ctx,
+						    ETH_64_POOLS)));
+			if (pools > max_pools)
+				pools = max_pools;
+			break;
+		case ETH_MQ_RX_RSS:
+			pools = 1;
+			break;
 		default:
 			RTE_LOG(ERR, PMD, "Unsupported mq_mod %d\n",
 				dev_conf->rxmode.mq_mode);
 			rc = -EINVAL;
 			goto err_out;
 		}
-		/* For each pool, allocate MACVLAN CFA rule & VNIC */
-		if (!pools) {
-			pools = RTE_MIN(bp->max_vnics,
-			    RTE_MIN(bp->max_l2_ctx,
-			     RTE_MIN(bp->max_rsscos_ctx, ETH_64_POOLS)));
-			RTE_LOG(ERR, PMD,
-				"VMDq pool not set, defaulted to %d\n", pools);
+	}
+	/*
+	 * If MQ RX w/o RSS no need for per VNIC filter.
+	 */
+	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) ||
+	    (bp->rx_cp_nr_rings &&
+	     !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS)))
+		rss_dflt_cr = true;
+
+	nb_q_per_grp = bp->rx_cp_nr_rings / pools;
+	start_grp_id = 0;
+	end_grp_id = nb_q_per_grp;
+
+	for (i = 0; i < pools; i++) {
+		vnic = bnxt_alloc_vnic(bp);
+		if (!vnic) {
+			RTE_LOG(ERR, PMD, "VNIC alloc failed\n");
+			rc = -ENOMEM;
+			goto err_out;
 		}
-		nb_q_per_grp = bp->rx_cp_nr_rings / pools;
-		start_grp_id = 0;
-		end_grp_id = nb_q_per_grp;
-
-		ring_idx = 0;
-		for (i = 0; i < pools; i++) {
-			vnic = bnxt_alloc_vnic(bp);
-			if (!vnic) {
-				RTE_LOG(ERR, PMD,
-					"VNIC alloc failed\n");
-				rc = -ENOMEM;
-				goto err_out;
-			}
-			vnic->flags |= BNXT_VNIC_INFO_BCAST;
-			STAILQ_INSERT_TAIL(&bp->ff_pool[i], vnic, next);
-			bp->nr_vnics++;
-
-			for (j = 0; j < nb_q_per_grp; j++, ring_idx++) {
-				rxq = bp->eth_dev->data->rx_queues[ring_idx];
-				rxq->vnic = vnic;
-			}
-			if (i == 0)
-				vnic->func_default = true;
-			vnic->ff_pool_idx = i;
-			vnic->start_grp_id = start_grp_id;
-			vnic->end_grp_id = end_grp_id;
-
-			filter = bnxt_alloc_filter(bp);
-			if (!filter) {
-				RTE_LOG(ERR, PMD,
-					"L2 filter alloc failed\n");
-				rc = -ENOMEM;
-				goto err_out;
-			}
-			/*
-			 * TODO: Configure & associate CFA rule for
-			 * each VNIC for each VMDq with MACVLAN, MACVLAN+TC
-			 */
-			STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
+		vnic->flags |= BNXT_VNIC_INFO_BCAST;
+		STAILQ_INSERT_TAIL(&bp->ff_pool[i], vnic, next);
+		bp->nr_vnics++;
 
-			start_grp_id = end_grp_id;
-			end_grp_id += nb_q_per_grp;
+		for (j = 0, ring_idx = 0; j < nb_q_per_grp; j++, ring_idx++) {
+			rxq = bp->eth_dev->data->rx_queues[ring_idx];
+			rxq->vnic = vnic;
 		}
-		goto out;
-	}
+		if (i == 0)
+			vnic->func_default = true;
+		vnic->ff_pool_idx = i;
+		vnic->start_grp_id = start_grp_id;
+		vnic->end_grp_id = end_grp_id;
+
+		if (rss_dflt_cr && i) {
+			vnic->rss_dflt_cr = true;
+			goto skip_filter_allocation;
+		}
+		filter = bnxt_alloc_filter(bp);
+		if (!filter) {
+			RTE_LOG(ERR, PMD, "L2 filter alloc failed\n");
+			rc = -ENOMEM;
+			goto err_out;
+		}
+		/*
+		 * TODO: Configure & associate CFA rule for
+		 * each VNIC for each VMDq with MACVLAN, MACVLAN+TC
+		 */
+		STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
 
-	/* Non-VMDq mode - RSS, DCB, RSS+DCB */
-	/* Init default VNIC for RSS or DCB only */
-	vnic = bnxt_alloc_vnic(bp);
-	if (!vnic) {
-		RTE_LOG(ERR, PMD, "VNIC alloc failed\n");
-		rc = -ENOMEM;
-		goto err_out;
-	}
-	vnic->flags |= BNXT_VNIC_INFO_BCAST;
-	/* Partition the rx queues for the single pool */
-	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
-		rxq = bp->eth_dev->data->rx_queues[i];
-		rxq->vnic = vnic;
+skip_filter_allocation:
+		start_grp_id = end_grp_id;
+		end_grp_id += nb_q_per_grp;
 	}
-	STAILQ_INSERT_TAIL(&bp->ff_pool[0], vnic, next);
-	bp->nr_vnics++;
-
-	vnic->func_default = true;
-	vnic->ff_pool_idx = 0;
-	vnic->start_grp_id = 0;
-	vnic->end_grp_id = bp->rx_cp_nr_rings;
-	filter = bnxt_alloc_filter(bp);
-	if (!filter) {
-		RTE_LOG(ERR, PMD, "L2 filter alloc failed\n");
-		rc = -ENOMEM;
-		goto err_out;
-	}
-	STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
-
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		vnic->hash_type =
-			HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4 |
-			HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 
 out:
+	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
+		uint16_t hash_type = 0;
+
+		if (rss->rss_hf & ETH_RSS_IPV4)
+			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
+		if (rss->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
+		if (rss->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
+		if (rss->rss_hf & ETH_RSS_IPV6)
+			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
+		if (rss->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
+		if (rss->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
+
+		for (i = 0; i < bp->nr_vnics; i++) {
+			STAILQ_FOREACH(vnic, &bp->ff_pool[i], next) {
+			vnic->hash_type |= hash_type;
+
+			/*
+			 * Use the supplied key if the key length is
+			 * acceptable and the rss_key is not NULL
+			 */
+			if (rss->rss_key &&
+			    rss->rss_key_len <= HW_HASH_KEY_SIZE)
+				memcpy(vnic->rss_hash_key,
+				       rss->rss_key, rss->rss_key_len);
+			}
+		}
+	}
+
 	return rc;
 
 err_out:
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 06/24] net/bnxt: fix rx handling and buffer allocation logic
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (4 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 05/24] net/bnxt: handle multi queue mode properly Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 07/24] net/bnxt: fix an issue with broadcast traffic Ajit Khaparde
                   ` (18 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Even when Rx buffer allocation fails, we are wrongly updating
the producer index. This patch fixes that.
Also, in case of a buffer allocation failure, reattempt buffer
allocation before the Rx handler exits.

Fixes: 2eb53b134a ("net/bnxt: add initial Rx code")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
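For reference, the core idea of the fix in isolation: never advance the
producer index past a slot whose buffer failed to allocate, and backfill
empty slots before returning. A minimal stand-alone sketch with a
hypothetical ring type and helper, not the driver's exact code:

    #include <errno.h>
    #include <stdlib.h>

    struct ring { void *buf[256]; unsigned int prod; };

    /* Backfill empty slots starting at idx; stop at the first
     * allocation failure so prod never points past an unpopulated
     * descriptor. */
    static int ring_refill(struct ring *r, unsigned int idx)
    {
        while (r->buf[idx] == NULL) {
            r->buf[idx] = malloc(2048); /* stand-in for mbuf alloc */
            if (r->buf[idx] == NULL)
                return -ENOMEM;
            r->prod = idx;              /* advance only on success */
            idx = (idx + 1) % 256;
        }
        return 0;
    }
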
 drivers/net/bnxt/bnxt_rxr.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index bee67d33c..bf9f78a55 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -391,7 +391,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	rte_prefetch0(mbuf);
 
 	if (mbuf == NULL)
-		return -ENOMEM;
+		return -EBUSY;
 
 	mbuf->nb_segs = 1;
 	mbuf->next = NULL;
@@ -448,13 +448,14 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	if (bnxt_alloc_rx_data(rxq, rxr, prod)) {
 		RTE_LOG(ERR, PMD, "mbuf alloc failed with prod=0x%x\n", prod);
 		rc = -ENOMEM;
+		goto rx;
 	}
 	rxr->rx_prod = prod;
 	/*
 	 * All MBUFs are allocated with the same size under DPDK,
 	 * no optimization for rx_copy_thresh
 	 */
-
+rx:
 	*rx_pkt = mbuf;
 
 next_rx:
@@ -476,6 +477,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct rx_pkt_cmpl *rxcmp;
 	uint16_t prod = rxr->rx_prod;
 	uint16_t ag_prod = rxr->ag_prod;
+	int rc = 0;
 
 	/* Handle RX burst request */
 	while (1) {
@@ -491,7 +493,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		/* TODO: Avoid magic numbers... */
 		if ((CMP_TYPE(rxcmp) & 0x30) == 0x10) {
 			rc = bnxt_rx_pkt(&rx_pkts[nb_rx_pkts], rxq, &raw_cons);
-			if (likely(!rc))
+			if (likely(!rc) || rc == -ENOMEM)
 				nb_rx_pkts++;
 			if (rc == -EBUSY)	/* partial completion */
 				break;
@@ -514,6 +516,30 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
 	/* Ring the AGG ring DB */
 	B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
+
+	/* Attempt to alloc Rx buf in case of a previous allocation failure. */
+	if (rc == -ENOMEM) {
+		int i;
+
+		for (i = prod; i <= nb_rx_pkts;
+			i = RING_NEXT(rxr->rx_ring_struct, i)) {
+			struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
+
+			/* Buffer already allocated for this index. */
+			if (rx_buf->mbuf != NULL)
+				continue;
+
+			/* This slot is empty. Alloc buffer for Rx */
+			if (!bnxt_alloc_rx_data(rxq, rxr, i)) {
+				rxr->rx_prod = i;
+				B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
+			} else {
+				RTE_LOG(ERR, PMD, "Alloc mbuf failed\n");
+				break;
+			}
+		}
+	}
+
 	return nb_rx_pkts;
 }
 
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 07/24] net/bnxt: fix an issue with broadcast traffic
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (5 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 06/24] net/bnxt: fix rx handling and buffer allocation logic Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 08/24] net/bnxt: fix usage of ETH_VMDQ_* flags Ajit Khaparde
                   ` (17 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

In bnxt_hwrm_cfa_l2_set_rx_mask, we are ignoring the mask computed by
the caller and unconditionally OR-ing in
HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_BCAST, thereby wrongly enabling
broadcast.

Fixes: 244bc98b0da7 ("net/bnxt: set L2 Rx mask")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
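The bug pattern, reduced to a sketch: a callee OR-ing a policy bit into
a caller-supplied bitmask, overriding the caller's choice (hypothetical
names, not the driver's code):

    #define MASK_BCAST 0x1u

    /* Before: broadcast forced on regardless of the caller's mask. */
    static unsigned int compose_mask_buggy(unsigned int caller_mask)
    {
        return MASK_BCAST | caller_mask;
    }

    /* After: the caller's mask is passed through untouched. */
    static unsigned int compose_mask_fixed(unsigned int caller_mask)
    {
        return caller_mask;
    }
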
 drivers/net/bnxt/bnxt_hwrm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ceb4ab29b..ade96278b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -280,8 +280,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 			 rte_mem_virt2phy(vlan_table));
 		req.num_vlan_tags = rte_cpu_to_le_32((uint32_t)vlan_count);
 	}
-	req.mask = rte_cpu_to_le_32(HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_BCAST |
-				    mask);
+	req.mask = rte_cpu_to_le_32(mask);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 08/24] net/bnxt: fix usage of ETH_VMDQ_* flags
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (6 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 07/24] net/bnxt: fix an issue with broadcast traffic Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 09/24] net/bnxt: set checksum offload flags correctly Ajit Khaparde
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Stephen Hurd

Map ETH_VMDQ_ACCEPT_HASH_UC to the promiscuous bit.
Also, set ALLMULTI and MCAST when MCAST is set to ensure multicast traffic
is received regardless of the VF driver list.

Fixes: 4cfe399f6550 ("net/bnxt: support to set VF rxmode")

Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
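For reference, a usage sketch of the API being changed (assuming the
17.11-era signature whose first arguments are visible in the diff below;
error handling omitted):

    #include <rte_ethdev.h>
    #include <rte_pmd_bnxt.h>

    /* Accept broadcast and multicast on VF 0 of port 0. With this
     * patch, ETH_VMDQ_ACCEPT_MULTICAST also sets the MCAST bit and
     * ETH_VMDQ_ACCEPT_HASH_UC maps to promiscuous mode. */
    rte_pmd_bnxt_set_vf_rxmode(0 /* port */, 0 /* vf */,
                               ETH_VMDQ_ACCEPT_BROADCAST |
                               ETH_VMDQ_ACCEPT_MULTICAST,
                               1 /* on */);
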
 drivers/net/bnxt/rte_pmd_bnxt.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index c343d9033..0bf5db5ec 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -409,20 +409,19 @@ int rte_pmd_bnxt_set_vf_rxmode(uint8_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & (ETH_VMDQ_ACCEPT_UNTAG | ETH_VMDQ_ACCEPT_HASH_MC)) {
+	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
 		RTE_LOG(ERR, PMD, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC && !on) {
-		RTE_LOG(ERR, PMD, "Currently cannot disable UC Rx\n");
-		return -ENOTSUP;
-	}
+	/* Is this really the correct mapping?  VFd seems to think it is. */
+	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+		flag |= BNXT_VNIC_INFO_PROMISC;
 
 	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
 	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
-		flag |= BNXT_VNIC_INFO_ALLMULTI;
+		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
 		bp->pf.vf_info[vf].l2_rx_mask |= flag;
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 09/24] net/bnxt: set checksum offload flags correctly
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (7 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 08/24] net/bnxt: fix usage of ETH_VMDQ_* flags Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 10/24] net/bnxt: update status of Rx IP/L4 CKSUM Ajit Khaparde
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

We are not correctly setting HW checksum offload for all the
offload flag combinations. This patch fixes that.

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
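These Tx paths are selected purely by the mbuf's ol_flags. A sketch of
an application requesting the full outer+inner offload combination
(standard mbuf fields of this era; the header lengths shown assume plain
Ethernet/IPv4 with no options):

    #include <rte_mbuf.h>

    static void request_tunnel_csum(struct rte_mbuf *m)
    {
        /* Matches the new PKT_TX_OIP_IIP_TCP_UDP_CKSUM case below. */
        m->ol_flags |= PKT_TX_OUTER_IP_CKSUM | PKT_TX_OUTER_IPV4 |
                       PKT_TX_IP_CKSUM | PKT_TX_IPV4 |
                       PKT_TX_TCP_CKSUM;
        m->outer_l2_len = 14;   /* Ethernet */
        m->outer_l3_len = 20;   /* IPv4, no options */
        m->l2_len = 14;
        m->l3_len = 20;
    }
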
 drivers/net/bnxt/bnxt_txr.c | 32 +++++++++++++++++++++++++-------
 drivers/net/bnxt/bnxt_txr.h | 21 +++++++++++++++++++++
 2 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 6870b16d1..60cc17405 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -161,7 +161,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
 				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN_PKT))
+				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM))
 		long_bd = true;
 
 	tx_buf = &txr->tx_buf_ring[txr->tx_prod];
@@ -211,21 +211,39 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
 			/* TSO */
-			txbd1->lflags = TX_BD_LONG_LFLAGS_LSO;
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_LSO;
 			txbd1->hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
 					tx_pkt->l4_len + tx_pkt->outer_l2_len +
 					tx_pkt->outer_l3_len;
 			txbd1->mss = tx_pkt->tso_segsz;
 
-		} else if (tx_pkt->ol_flags & (PKT_TX_TCP_CKSUM |
-					PKT_TX_UDP_CKSUM)) {
+		} else if (tx_pkt->ol_flags & PKT_TX_OIP_IIP_TCP_UDP_CKSUM) {
+			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if (tx_pkt->ol_flags & PKT_TX_IIP_TCP_UDP_CKSUM) {
+			/* (Inner) IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if (tx_pkt->ol_flags & PKT_TX_OIP_TCP_UDP_CKSUM) {
+			/* Outer IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if (tx_pkt->ol_flags & PKT_TX_OIP_IIP_CKSUM) {
+			/* Outer IP, Inner IP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_IP_CHKSUM;
+			txbd1->mss = 0;
+		} else if (tx_pkt->ol_flags & PKT_TX_TCP_UDP_CKSUM) {
 			/* TCP/UDP CSO */
-			txbd1->lflags = TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
-
 		} else if (tx_pkt->ol_flags & PKT_TX_IP_CKSUM) {
 			/* IP CSO */
-			txbd1->lflags = TX_BD_LONG_LFLAGS_IP_CHKSUM;
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
+			txbd1->mss = 0;
+		} else if (tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+			/* IP CSO */
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
 			txbd1->mss = 0;
 		}
 	} else {
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 5b0971141..3f3eb312b 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -69,4 +69,25 @@ int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id);
 uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts);
 
+#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
+					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
+					PKT_TX_IP_CKSUM)
+#define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
+					PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
+					 PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
+
+
+#define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
+					TX_BD_LONG_LFLAGS_T_IP_CHKSUM | \
+					TX_BD_LONG_LFLAGS_IP_CHKSUM)
+#define TX_BD_FLG_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
+					TX_BD_LONG_LFLAGS_IP_CHKSUM)
+#define TX_BD_FLG_TIP_IP_CHKSUM		(TX_BD_LONG_LFLAGS_T_IP_CHKSUM | \
+					TX_BD_LONG_LFLAGS_IP_CHKSUM)
+#define TX_BD_FLG_TIP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
+					TX_BD_LONG_LFLAGS_T_IP_CHKSUM)
+
 #endif
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 10/24] net/bnxt: update status of Rx IP/L4 CKSUM
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (8 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 09/24] net/bnxt: set checksum offload flags correctly Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 11/24] net/bnxt: add support for xstats get by id Ajit Khaparde
                   ` (14 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Update ol_flags with the appropriate status of the IP/L4 checksum in
the Rx path.

Fixes: 2eb53b134a ("net/bnxt: add initial Rx code")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
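Consumers see the result as standard mbuf flags. A sketch of checking
them after rte_eth_rx_burst() (flag names per the mbuf API of this era):

    #include <rte_mbuf.h>

    static int csum_ok(const struct rte_mbuf *m)
    {
        /* _GOOD means the NIC verified the checksum; _NONE means it
         * did not (or could not) check it. */
        return (m->ol_flags & PKT_RX_IP_CKSUM_GOOD) &&
               (m->ol_flags & PKT_RX_L4_CKSUM_GOOD);
    }
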
 drivers/net/bnxt/bnxt_rxr.c | 11 +++++++++++
 drivers/net/bnxt/bnxt_rxr.h | 16 ++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index bf9f78a55..28105b06b 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -418,6 +418,17 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		mbuf->ol_flags |= PKT_RX_VLAN_PKT;
 	}
 
+	if (likely(RX_CMP_IP_CS_OK(rxcmp1)))
+		mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+	else
+		mbuf->ol_flags |= PKT_RX_IP_CKSUM_NONE;
+
+	if (likely(RX_CMP_L4_CS_OK(rxcmp1)))
+		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+	else
+		mbuf->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+
+
 #ifdef BNXT_DEBUG
 	if (rxcmp1->errors_v2 & RX_CMP_L2_ERRORS) {
 		/* Re-install the mbuf back to the rx ring */
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index f8d6dc80a..cb0cef303 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -52,6 +52,22 @@
 #define BNXT_TPA_OUTER_L3_OFF(hdr_info)	\
 	((hdr_info) & 0x1ff)
 
+#define RX_CMP_L4_CS_BITS	rte_cpu_to_le_32(RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
+
+#define RX_CMP_L4_CS_ERR_BITS	rte_cpu_to_le_32(RX_PKT_CMPL_ERRORS_L4_CS_ERROR)
+
+#define RX_CMP_L4_CS_OK(rxcmp1)						\
+	    (((rxcmp1)->flags2 & RX_CMP_L4_CS_BITS) &&		\
+	     !((rxcmp1)->errors_v2 & RX_CMP_L4_CS_ERR_BITS))
+
+#define RX_CMP_IP_CS_ERR_BITS	rte_cpu_to_le_32(RX_PKT_CMPL_ERRORS_IP_CS_ERROR)
+
+#define RX_CMP_IP_CS_BITS	rte_cpu_to_le_32(RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
+
+#define RX_CMP_IP_CS_OK(rxcmp1)						\
+		(((rxcmp1)->flags2 & RX_CMP_IP_CS_BITS) &&	\
+		!((rxcmp1)->errors_v2 & RX_CMP_IP_CS_ERR_BITS))
+
 enum pkt_hash_types {
 	PKT_HASH_TYPE_NONE,	/* Undefined type */
 	PKT_HASH_TYPE_L2,	/* Input: src_MAC, dest_MAC */
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 11/24] net/bnxt: add support for xstats get by id
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (9 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 10/24] net/bnxt: update status of Rx IP/L4 CKSUM Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 12/24] net/bnxt: fix config rss update Ajit Khaparde
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patch adds support for xstats_get_by_id/xstats_get_names_by_id.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: incorporate review comments.
v2->v3: fix an issue observed while testing.
---
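These dev_ops back the generic ethdev calls. A usage sketch (assuming
the 17.11-era rte_ethdev API; sizing and error checks omitted):

    #include <rte_ethdev.h>

    static void read_two_xstats(int port_id)
    {
        uint64_t ids[2] = { 0, 1 };
        uint64_t values[2];
        struct rte_eth_xstat_name names[2];

        /* Resolve the names of two stats by id, then read just those
         * counters instead of fetching the whole xstats table. */
        rte_eth_xstats_get_names_by_id(port_id, names, 2, ids);
        rte_eth_xstats_get_by_id(port_id, ids, values, 2);
    }
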
 drivers/net/bnxt/bnxt_ethdev.c | 11 +++++++++
 drivers/net/bnxt/bnxt_stats.c  | 51 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_stats.h  |  5 +++++
 3 files changed, 67 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 6e5cb8854..05e601b66 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1569,6 +1569,8 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.txq_info_get = bnxt_txq_info_get_op,
 	.dev_led_on = bnxt_dev_led_on_op,
 	.dev_led_off = bnxt_dev_led_off_op,
+	.xstats_get_by_id = bnxt_dev_xstats_get_by_id_op,
+	.xstats_get_names_by_id = bnxt_dev_xstats_get_names_by_id_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
@@ -1648,6 +1650,9 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	rte_atomic64_init(&bp->rx_mbuf_alloc_fail);
 	bp->dev_stopped = 1;
 
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto skip_init;
+
 	if (bnxt_vf_pciid(pci_dev->id.device_id))
 		bp->flags |= BNXT_FLAG_VF;
 
@@ -1657,7 +1662,10 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 			"Board initialization failed rc: %x\n", rc);
 		goto error;
 	}
+skip_init:
 	eth_dev->dev_ops = &bnxt_dev_ops;
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
 	eth_dev->rx_pkt_burst = &bnxt_recv_pkts;
 	eth_dev->tx_pkt_burst = &bnxt_xmit_pkts;
 
@@ -1882,6 +1890,9 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) {
 	struct bnxt *bp = eth_dev->data->dev_private;
 	int rc;
 
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
 	bnxt_disable_int(bp);
 	bnxt_free_int(bp);
 	bnxt_free_mem(bp);
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index d7d0e35c1..225f98830 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -358,3 +358,54 @@ void bnxt_dev_xstats_reset_op(struct rte_eth_dev *eth_dev)
 	if (!(bp->flags & BNXT_FLAG_PORT_STATS))
 		RTE_LOG(ERR, PMD, "Operation not supported\n");
 }
+
+int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids,
+		uint64_t *values, unsigned int limit)
+{
+	/* Account for the Tx drop pkts aka the Anti spoof counter */
+	const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) +
+				RTE_DIM(bnxt_tx_stats_strings) + 1;
+	struct rte_eth_xstat xstats[stat_cnt];
+	uint64_t values_copy[stat_cnt];
+	uint16_t i;
+
+	if (!ids)
+		return bnxt_dev_xstats_get_op(dev, xstats, stat_cnt);
+
+	bnxt_dev_xstats_get_by_id_op(dev, NULL, values_copy, stat_cnt);
+	for (i = 0; i < limit; i++) {
+		if (ids[i] >= stat_cnt) {
+			RTE_LOG(ERR, PMD, "id value isn't valid");
+			return -1;
+		}
+		values[i] = values_copy[ids[i]];
+	}
+	return stat_cnt;
+}
+
+int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				const uint64_t *ids, unsigned int limit)
+{
+	/* Account for the Tx drop pkts aka the Anti spoof counter */
+	const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) +
+				RTE_DIM(bnxt_tx_stats_strings) + 1;
+	struct rte_eth_xstat_name xstats_names_copy[stat_cnt];
+	uint16_t i;
+
+	if (!ids)
+		return bnxt_dev_xstats_get_names_op(dev, xstats_names,
+						    stat_cnt);
+	bnxt_dev_xstats_get_names_by_id_op(dev, xstats_names_copy, NULL,
+			stat_cnt);
+
+	for (i = 0; i < limit; i++) {
+		if (ids[i] >= stat_cnt) {
+			RTE_LOG(ERR, PMD, "id value isn't valid");
+			return -1;
+		}
+		strcpy(xstats_names[i].name,
+				xstats_names_copy[ids[i]].name);
+	}
+	return stat_cnt;
+}
diff --git a/drivers/net/bnxt/bnxt_stats.h b/drivers/net/bnxt/bnxt_stats.h
index b6d133ef2..daeb3d9b9 100644
--- a/drivers/net/bnxt/bnxt_stats.h
+++ b/drivers/net/bnxt/bnxt_stats.h
@@ -46,6 +46,11 @@ int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev,
 int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
 			   struct rte_eth_xstat *xstats, unsigned int n);
 void bnxt_dev_xstats_reset_op(struct rte_eth_dev *eth_dev);
+int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids,
+				uint64_t *values, unsigned int limit);
+int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				const uint64_t *ids, unsigned int limit);
 
 struct bnxt_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 12/24] net/bnxt: fix config rss update
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (10 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 11/24] net/bnxt: add support for xstats get by id Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 13/24] net/bnxt: set the hash_key_size Ajit Khaparde
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

We are not configuring the RSS settings updated by rss_hash_update().
Fix it.

Fixes: cc0aa1edc10 ("net/bnxt: add RSS hash configuration")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
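For context, the runtime entry point being fixed is
rte_eth_dev_rss_hash_update(). A usage sketch (a NULL key keeps the
currently programmed hash key):

    #include <rte_ethdev.h>

    static void retune_rss(int port_id)
    {
        struct rte_eth_rss_conf rss_conf = {
            .rss_key = NULL,    /* keep the programmed key */
            .rss_hf = ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
        };

        /* With this fix the PMD caches the new setting in
         * bp->rss_conf (BNXT_FLAG_UPDATE_HASH) and applies it in
         * bnxt_mq_rx_configure(). */
        rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    }
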
 drivers/net/bnxt/bnxt.h        | 2 ++
 drivers/net/bnxt/bnxt_ethdev.c | 4 ++++
 drivers/net/bnxt/bnxt_rxq.c    | 5 +++++
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 357950509..65f716b96 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -176,6 +176,7 @@ struct bnxt {
 	void				*bar0;
 
 	struct rte_eth_dev		*eth_dev;
+	struct rte_eth_rss_conf		rss_conf;
 	struct rte_pci_device		*pdev;
 
 	uint32_t		flags;
@@ -184,6 +185,7 @@ struct bnxt {
 #define BNXT_FLAG_PORT_STATS	(1 << 2)
 #define BNXT_FLAG_JUMBO		(1 << 3)
 #define BNXT_FLAG_SHORT_CMD	(1 << 4)
+#define BNXT_FLAG_UPDATE_HASH	(1 << 5)
 #define BNXT_PF(bp)		(!((bp)->flags & BNXT_FLAG_VF))
 #define BNXT_VF(bp)		((bp)->flags & BNXT_FLAG_VF)
 #define BNXT_NPAR_ENABLED(bp)	((bp)->port_partition_type)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 05e601b66..f8dfc1c65 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -837,6 +837,10 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 		if (rss_conf->rss_hf & BNXT_ETH_RSS_SUPPORT)
 			return -EINVAL;
 	}
+
+	bp->flags |= BNXT_FLAG_UPDATE_HASH;
+	memcpy(&bp->rss_conf, rss_conf, sizeof(*rss_conf));
+
 	if (rss_conf->rss_hf & ETH_RSS_IPV4)
 		hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 8459fcc09..690a59987 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -188,6 +188,11 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 		uint16_t hash_type = 0;
 
+		if (bp->flags & BNXT_FLAG_UPDATE_HASH) {
+			rss = &bp->rss_conf;
+			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
+		}
+
 		if (rss->rss_hf & ETH_RSS_IPV4)
 			hash_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		if (rss->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 13/24] net/bnxt: set the hash_key_size
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (11 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 12/24] net/bnxt: fix config rss update Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 14/24] net/bnxt: add support for rx_queue_count Ajit Khaparde
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

We were not setting the dev_info.hash_key_size. Setting it now.

Fixes: 0a6d2a720078 ("net/bnxt: get device infos")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
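Applications read this value through rte_eth_dev_info_get(); a sketch:

    #include <rte_ethdev.h>

    static void read_rss_key(int port_id)
    {
        struct rte_eth_dev_info info;
        uint8_t key[64];
        struct rte_eth_rss_conf conf = { .rss_key = key };

        rte_eth_dev_info_get(port_id, &info);
        /* hash_key_size is 40 bytes for bnxt after this patch, so the
         * buffer for rss_hash_conf_get() can be sized correctly. */
        if (info.hash_key_size <= sizeof(key)) {
            conf.rss_key_len = info.hash_key_size;
            rte_eth_dev_rss_hash_conf_get(port_id, &conf);
        }
    }
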
 drivers/net/bnxt/bnxt_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f8dfc1c65..5b17eef53 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -378,6 +378,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_queues = max_rx_rings;
 	dev_info->max_tx_queues = max_rx_rings;
 	dev_info->reta_size = bp->max_rsscos_ctx;
+	dev_info->hash_key_size = 40;
 	max_vnics = bp->max_vnics;
 
 	/* Fast path specifics */
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 14/24] net/bnxt: add support for rx_queue_count
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (12 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 13/24] net/bnxt: set the hash_key_size Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 15/24] net/bnxt: add support for rx_descriptor_status Ajit Khaparde
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Add support for the rx_queue_count dev_op.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: incorporate review comments.
v2->v3: fix checkpatch warning.
---
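The new op backs rte_eth_rx_queue_count(). A usage sketch (in this era
the call returns the number of used descriptors, or a negative errno):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void report_rx_backlog(int port_id)
    {
        /* A cheap backlog/load indicator for Rx queue 0 that does
         * not touch the packets themselves. */
        int used = rte_eth_rx_queue_count(port_id, 0);

        if (used >= 0)
            printf("rxq0 backlog: %d descriptors\n", used);
    }
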
 drivers/net/bnxt/bnxt_cpr.h    |  6 +++++-
 drivers/net/bnxt/bnxt_ethdev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_rxr.c    | 12 +++++++++--
 3 files changed, 61 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index a6e87858a..e8f048a3b 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -41,6 +41,9 @@
 	(!!(((struct cmpl_base *)(cmp))->info3_v & CMPL_BASE_V) ==	\
 	 !((raw_cons) & ((ring)->ring_size)))
 
+#define CMPL_VALID(cmp, v)						\
+	(!!(((struct cmpl_base *)(cmp))->info3_v & CMPL_BASE_V) == !(v))
+
 #define CMP_TYPE(cmp)						\
 	(((struct cmpl_base *)cmp)->type & CMPL_BASE_TYPE_MASK)
 
@@ -48,6 +51,7 @@
 #define NEXT_RAW_CMP(idx)	ADV_RAW_CMP(idx, 1)
 #define RING_CMP(ring, idx)	((idx) & (ring)->ring_mask)
 #define NEXT_CMP(idx)		RING_CMP(ADV_RAW_CMP(idx, 1))
+#define FLIP_VALID(cons, mask, val)	((cons) >= (mask) ? !(val) : (val))
 
 #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
 #define DB_CP_FLAGS		(DB_KEY_CP | DB_IDX_VALID | DB_IRQ_DIS)
@@ -90,7 +94,7 @@ struct bnxt_cp_ring_info {
 
 	struct bnxt_ring	*cp_ring_struct;
 	uint16_t		cp_cons;
-	bool			v;
+	bool			valid;
 };
 
 #define RX_CMP_L2_ERRORS						\
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 5b17eef53..5c68797c1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1527,6 +1527,51 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 	return bnxt_hwrm_port_led_cfg(bp, false);
 }
 
+static uint32_t
+bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	uint32_t desc = 0, raw_cons = 0, cons;
+	struct bnxt_cp_ring_info *cpr;
+	struct bnxt_rx_queue *rxq;
+	struct rx_pkt_cmpl *rxcmp;
+	uint16_t cmp_type;
+	uint8_t cmp = 1;
+	bool valid;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	cpr = rxq->cp_ring;
+	valid = cpr->valid;
+
+	while (raw_cons < rxq->nb_rx_desc) {
+		cons = RING_CMP(cpr->cp_ring_struct, raw_cons);
+		rxcmp = (struct rx_pkt_cmpl *)&cpr->cp_desc_ring[cons];
+
+		if (!CMPL_VALID(rxcmp, valid))
+			goto nothing_to_do;
+		valid = FLIP_VALID(cons, cpr->cp_ring_struct->ring_mask, valid);
+		cmp_type = CMP_TYPE(rxcmp);
+		if (cmp_type == RX_PKT_CMPL_TYPE_RX_L2_TPA_END) {
+			cmp = (rte_le_to_cpu_32(
+					((struct rx_tpa_end_cmpl *)
+					 (rxcmp))->agg_bufs_v1) &
+			       RX_TPA_END_CMPL_AGG_BUFS_MASK) >>
+				RX_TPA_END_CMPL_AGG_BUFS_SFT;
+			desc++;
+		} else if (cmp_type == 0x11) {
+			desc++;
+			cmp = (rxcmp->agg_bufs_v1 &
+				   RX_PKT_CMPL_AGG_BUFS_MASK) >>
+				RX_PKT_CMPL_AGG_BUFS_SFT;
+		} else {
+			cmp = 1;
+		}
+nothing_to_do:
+		raw_cons += cmp ? cmp : 2;
+	}
+
+	return desc;
+}
+
 /*
  * Initialization
  */
@@ -1576,6 +1621,7 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.dev_led_off = bnxt_dev_led_off_op,
 	.xstats_get_by_id = bnxt_dev_xstats_get_by_id_op,
 	.xstats_get_names_by_id = bnxt_dev_xstats_get_names_by_id_op,
+	.rx_queue_count = bnxt_rx_queue_count_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 28105b06b..3216a6d3b 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -219,6 +219,9 @@ static int bnxt_agg_bufs_valid(struct bnxt_cp_ring_info *cpr,
 	raw_cp_cons = ADV_RAW_CMP(raw_cp_cons, agg_bufs);
 	last_cp_cons = RING_CMP(cpr->cp_ring_struct, raw_cp_cons);
 	agg_cmpl = (struct rx_pkt_cmpl *)&cpr->cp_desc_ring[last_cp_cons];
+	cpr->valid = FLIP_VALID(raw_cp_cons,
+				cpr->cp_ring_struct->ring_mask,
+				cpr->valid);
 	return CMP_VALID(agg_cmpl, raw_cp_cons, cpr->cp_ring_struct);
 }
 
@@ -360,6 +363,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	if (!CMP_VALID(rxcmp1, tmp_raw_cons, cpr->cp_ring_struct))
 		return -EBUSY;
 
+	cpr->valid = FLIP_VALID(cp_cons,
+				cpr->cp_ring_struct->ring_mask,
+				cpr->valid);
+
 	cmp_type = CMP_TYPE(rxcmp);
 	if (cmp_type == RX_PKT_CMPL_TYPE_RX_L2_TPA_START) {
 		bnxt_tpa_start(rxq, (struct rx_tpa_start_cmpl *)rxcmp,
@@ -492,14 +499,15 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	/* Handle RX burst request */
 	while (1) {
-		int rc;
-
 		cons = RING_CMP(cpr->cp_ring_struct, raw_cons);
 		rte_prefetch0(&cpr->cp_desc_ring[cons]);
 		rxcmp = (struct rx_pkt_cmpl *)&cpr->cp_desc_ring[cons];
 
 		if (!CMP_VALID(rxcmp, raw_cons, cpr->cp_ring_struct))
 			break;
+		cpr->valid = FLIP_VALID(cons,
+					cpr->cp_ring_struct->ring_mask,
+					cpr->valid);
 
 		/* TODO: Avoid magic numbers... */
 		if ((CMP_TYPE(rxcmp) & 0x30) == 0x10) {
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 15/24] net/bnxt: add support for rx_descriptor_status
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (13 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 14/24] net/bnxt: add support for rx_queue_count Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 16/24] net/bnxt: add support for tx_descriptor_status Ajit Khaparde
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Add support for the rx_descriptor_status dev_op.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: incorporate review comments.
---
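This op backs rte_eth_rx_descriptor_status(). A sketch of the three
states an application can observe:

    #include <rte_ethdev.h>

    static void classify_rx_slot(int port_id, uint16_t offset)
    {
        switch (rte_eth_rx_descriptor_status(port_id, 0, offset)) {
        case RTE_ETH_RX_DESC_AVAIL:     /* free, awaiting a packet */
            break;
        case RTE_ETH_RX_DESC_DONE:      /* holds a packet not yet read */
            break;
        case RTE_ETH_RX_DESC_UNAVAIL:   /* no buffer posted at this slot */
            break;
        default:                        /* negative errno */
            break;
        }
    }
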
 drivers/net/bnxt/bnxt_ethdev.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 5c68797c1..12888e6a9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1572,6 +1572,44 @@ bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
+static int
+bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset)
+{
+	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
+	struct bnxt_rx_ring_info *rxr;
+	struct bnxt_cp_ring_info *cpr;
+	struct bnxt_sw_rx_bd *rx_buf;
+	struct rx_pkt_cmpl *rxcmp;
+	uint32_t cons, cp_cons;
+
+	if (!rxq)
+		return -EINVAL;
+
+	cpr = rxq->cp_ring;
+	rxr = rxq->rx_ring;
+
+	if (offset >= rxq->nb_rx_desc)
+		return -EINVAL;
+
+	cons = RING_CMP(cpr->cp_ring_struct, offset);
+	cp_cons = cpr->cp_raw_cons;
+	rxcmp = (struct rx_pkt_cmpl *)&cpr->cp_desc_ring[cons];
+
+	if (cons > cp_cons) {
+		if (CMPL_VALID(rxcmp, cpr->valid))
+			return RTE_ETH_RX_DESC_DONE;
+	} else {
+		if (CMPL_VALID(rxcmp, !cpr->valid))
+			return RTE_ETH_RX_DESC_DONE;
+	}
+	rx_buf = &rxr->rx_buf_ring[cons];
+	if (rx_buf->mbuf == NULL)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
 /*
  * Initialization
  */
@@ -1622,6 +1660,7 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.xstats_get_by_id = bnxt_dev_xstats_get_by_id_op,
 	.xstats_get_names_by_id = bnxt_dev_xstats_get_names_by_id_op,
 	.rx_queue_count = bnxt_rx_queue_count_op,
+	.rx_descriptor_status = bnxt_rx_descriptor_status_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 16/24] net/bnxt: add support for tx_descriptor_status
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (14 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 15/24] net/bnxt: add support for rx_descriptor_status Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 17/24] net/bnxt: add new HWRM structs to support flow filtering Ajit Khaparde
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Add support for the tx_descriptor_status dev_op.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: incorporate review comments.
---
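Likewise, this op backs rte_eth_tx_descriptor_status(). A sketch:

    #include <rte_ethdev.h>

    static void classify_tx_slot(int port_id, uint16_t offset)
    {
        switch (rte_eth_tx_descriptor_status(port_id, 0, offset)) {
        case RTE_ETH_TX_DESC_FULL:      /* owned by HW, send pending */
            break;
        case RTE_ETH_TX_DESC_DONE:      /* transmitted, slot reusable */
            break;
        case RTE_ETH_TX_DESC_UNAVAIL:   /* no packet queued at this slot */
            break;
        default:                        /* negative errno */
            break;
        }
    }
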
 drivers/net/bnxt/bnxt_ethdev.c | 38 ++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_txr.c    |  3 +++
 2 files changed, 41 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 12888e6a9..97ddca069 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1610,6 +1610,43 @@ bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset)
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
+static int
+bnxt_tx_descriptor_status_op(void *tx_queue, uint16_t offset)
+{
+	struct bnxt_tx_queue *txq = (struct bnxt_tx_queue *)tx_queue;
+	struct bnxt_tx_ring_info *txr;
+	struct bnxt_cp_ring_info *cpr;
+	struct bnxt_sw_tx_bd *tx_buf;
+	struct tx_pkt_cmpl *txcmp;
+	uint32_t cons, cp_cons;
+
+	if (!txq)
+		return -EINVAL;
+
+	cpr = txq->cp_ring;
+	txr = txq->tx_ring;
+
+	if (offset >= txq->nb_tx_desc)
+		return -EINVAL;
+
+	cons = RING_CMP(cpr->cp_ring_struct, offset);
+	txcmp = (struct tx_pkt_cmpl *)&cpr->cp_desc_ring[cons];
+	cp_cons = cpr->cp_raw_cons;
+
+	if (cons > cp_cons) {
+		if (CMPL_VALID(txcmp, cpr->valid))
+			return RTE_ETH_TX_DESC_UNAVAIL;
+	} else {
+		if (CMPL_VALID(txcmp, !cpr->valid))
+			return RTE_ETH_TX_DESC_UNAVAIL;
+	}
+	tx_buf = &txr->tx_buf_ring[cons];
+	if (tx_buf->mbuf == NULL)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 /*
  * Initialization
  */
@@ -1661,6 +1698,7 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.xstats_get_names_by_id = bnxt_dev_xstats_get_names_by_id_op,
 	.rx_queue_count = bnxt_rx_queue_count_op,
 	.rx_descriptor_status = bnxt_rx_descriptor_status_op,
+	.tx_descriptor_status = bnxt_tx_descriptor_status_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 60cc17405..8ca4bbd80 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -313,6 +313,9 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 
 			if (!CMP_VALID(txcmp, raw_cons, cpr->cp_ring_struct))
 				break;
+			cpr->valid = FLIP_VALID(cons,
+						cpr->cp_ring_struct->ring_mask,
+						cpr->valid);
 
 			if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2)
 				nb_tx_pkts++;
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 17/24] net/bnxt: add new HWRM structs to support flow filtering
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (15 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 16/24] net/bnxt: add support for tx_descriptor_status Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 18/24] net/bnxt: add support for flow filter ops Ajit Khaparde
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

HWRM structs added:
hwrm_cfa_ntuple_filter_alloc, hwrm_cfa_ntuple_filter_free,
hwrm_cfa_ntuple_filter_cfg, hwrm_cfa_em_flow_alloc,
hwrm_cfa_em_flow_free, hwrm_cfa_em_flow_cfg

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
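These structures are consumed by the driver's HWRM request path. A
sketch of populating an ntuple-alloc request from the definitions below
(the transport boilerplate of sequence id, response address and
bnxt_hwrm_send_message() is omitted; vnic_id is a placeholder):

    static void fill_ntuple_req(struct hwrm_cfa_ntuple_filter_alloc_input *req,
                                uint16_t vnic_id)
    {
        /* Steer IPv4 packets with destination port 4789 to a VNIC;
         * only fields flagged in 'enables' participate in the match. */
        req->enables = rte_cpu_to_le_32(
            HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_ETHERTYPE |
            HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_IPADDR_TYPE |
            HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT |
            HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_ID);
        req->ethertype = rte_cpu_to_le_16(0x0800);      /* IPv4 */
        req->ip_addr_type =
            HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4;
        req->dst_port = rte_cpu_to_le_16(4789);
        req->dst_id = rte_cpu_to_le_16(vnic_id);
    }
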
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 984 +++++++++++++++++++++++++++++++++
 1 file changed, 984 insertions(+)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index cb8660af5..a5f871b8d 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -129,6 +129,9 @@
 #define HWRM_CFA_NTUPLE_FILTER_ALLOC	(UINT32_C(0x99))
 #define HWRM_CFA_NTUPLE_FILTER_FREE	(UINT32_C(0x9a))
 #define HWRM_CFA_NTUPLE_FILTER_CFG	(UINT32_C(0x9b))
+#define HWRM_CFA_EM_FLOW_ALLOC		(UINT32_C(0x9c))
+#define HWRM_CFA_EM_FLOW_FREE		(UINT32_C(0x9d))
+#define HWRM_CFA_EM_FLOW_CFG		(UINT32_C(0x9e))
 #define HWRM_TUNNEL_DST_PORT_QUERY	(UINT32_C(0xa0))
 #define HWRM_TUNNEL_DST_PORT_ALLOC	(UINT32_C(0xa1))
 #define HWRM_TUNNEL_DST_PORT_FREE	(UINT32_C(0xa2))
@@ -9471,6 +9474,987 @@ struct hwrm_cfa_l2_set_rx_mask_output {
 	 */
 } __attribute__((packed));
 
+/* hwrm_cfa_ntuple_filter_alloc */
+/*
+ * Description: This is a ntuple filter that uses fields from L4/L3 header and
+ * optionally fields from L2. The ntuple filters apply to receive traffic only.
+ * All L2/L3/L4 header fields are specified in network byte order. These filters
+ * can be used for Receive Flow Steering (RFS). # For ethertype value, only
+ * 0x0800 (IPv4) and 0x86dd (IPv6) shall be supported for ntuple filters. # If a
+ * field specified in this command is not enabled as a valid field, then that
+ * field shall not be used in matching packet header fields against this filter.
+ */
+/* Input	(128 bytes) */
+struct hwrm_cfa_ntuple_filter_alloc_input {
+	uint16_t req_type;
+	/*
+	 * This value indicates what type of request this is. The format
+	 * for the rest of the command is determined by this field.
+	 */
+	uint16_t cmpl_ring;
+	/*
+	 * This value indicates what completion ring the request
+	 * will be optionally completed on. If the value is -1, then no
+	 * CR completion will be generated. Any other value must be a
+	 * valid CR ring_id value for this function.
+	 */
+	uint16_t seq_id;
+	/* This value indicates the command sequence number. */
+	uint16_t target_id;
+	/*
+	 * Target ID of this command. 0x0 - 0xFFF8 - Used for function
+	 * ids 0xFFF8 - 0xFFFE - Reserved for internal processors 0xFFFF
+	 * - HWRM
+	 */
+	uint64_t resp_addr;
+	/*
+	 * This is the host address where the response will be written
+	 * when the request is complete. This area must be 16B aligned
+	 * and must be cleared to zero before the request is made.
+	 */
+	uint32_t flags;
+	/*
+	 * Setting of this flag indicates the applicability to the
+	 * loopback path.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_LOOPBACK	\
+		UINT32_C(0x1)
+	/*
+	 * Setting of this flag indicates drop action. If this flag is
+	 * not set, then it should be considered accept action.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP	UINT32_C(0x2)
+	/*
+	 * Setting of this flag indicates that a meter is expected to be
+	 * attached to this flow. This hint can be used when choosing
+	 * the action record format required for the flow.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER UINT32_C(0x4)
+	uint32_t enables;
+	/* This bit must be '1' for the l2_filter_id field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID   \
+		UINT32_C(0x1)
+	/* This bit must be '1' for the ethertype field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_ETHERTYPE	 \
+		UINT32_C(0x2)
+	/* This bit must be '1' for the tunnel_type field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_TUNNEL_TYPE	\
+		UINT32_C(0x4)
+	/* This bit must be '1' for the src_macaddr field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_MACADDR	\
+		UINT32_C(0x8)
+	/* This bit must be '1' for the ipaddr_type field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_IPADDR_TYPE	\
+		UINT32_C(0x10)
+	/* This bit must be '1' for the src_ipaddr field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_IPADDR	\
+		UINT32_C(0x20)
+	/*
+	 * This bit must be '1' for the src_ipaddr_mask field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_IPADDR_MASK \
+		UINT32_C(0x40)
+	/* This bit must be '1' for the dst_ipaddr field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_IPADDR	\
+		UINT32_C(0x80)
+	/*
+	 * This bit must be '1' for the dst_ipaddr_mask field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_IPADDR_MASK \
+		UINT32_C(0x100)
+	/* This bit must be '1' for the ip_protocol field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_IP_PROTOCOL	\
+		UINT32_C(0x200)
+	/* This bit must be '1' for the src_port field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_PORT	\
+		UINT32_C(0x400)
+	/*
+	 * This bit must be '1' for the src_port_mask field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_PORT_MASK  \
+		UINT32_C(0x800)
+	/* This bit must be '1' for the dst_port field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT	\
+		UINT32_C(0x1000)
+	/*
+	 * This bit must be '1' for the dst_port_mask field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT_MASK  \
+		UINT32_C(0x2000)
+	/* This bit must be '1' for the pri_hint field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_PRI_HINT	\
+		UINT32_C(0x4000)
+	/*
+	 * This bit must be '1' for the ntuple_filter_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_NTUPLE_FILTER_ID \
+		UINT32_C(0x8000)
+	/* This bit must be '1' for the dst_id field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_ID	\
+		UINT32_C(0x10000)
+	/*
+	 * This bit must be '1' for the mirror_vnic_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID \
+		UINT32_C(0x20000)
+	/* This bit must be '1' for the dst_macaddr field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_MACADDR	\
+		UINT32_C(0x40000)
+	uint64_t l2_filter_id;
+	/*
+	 * This value identifies a set of CFA data structures used for
+	 * an L2 context.
+	 */
+	uint8_t src_macaddr[6];
+	/*
+	 * This value indicates the source MAC address in the Ethernet
+	 * header.
+	 */
+	uint16_t ethertype;
+	/* This value indicates the ethertype in the Ethernet header. */
+	uint8_t ip_addr_type;
+	/*
+	 * This value indicates the type of IP address. 4 - IPv4 6 -
+	 * IPv6 All others are invalid.
+	 */
+	/* invalid */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_UNKNOWN \
+		UINT32_C(0x0)
+	/* IPv4 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 \
+		UINT32_C(0x4)
+	/* IPv6 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 \
+		UINT32_C(0x6)
+	uint8_t ip_protocol;
+	/*
+	 * The value of protocol field in IP header. Applies to UDP and
+	 * TCP traffic. 6 - UDP 17 - TCP
+	 */
+	/* invalid */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_PROTOCOL_UNKNOWN \
+		UINT32_C(0x0)
+	/* UDP */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_PROTOCOL_UDP \
+		UINT32_C(0x6)
+	/* TCP */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_PROTOCOL_TCP \
+		UINT32_C(0x11)
+	uint16_t dst_id;
+	/*
+	 * If set, this value shall represent the Logical VNIC ID of the
+	 * destination VNIC for the RX path and network port id of the
+	 * destination port for the TX path.
+	 */
+	uint16_t mirror_vnic_id;
+	/* Logical VNIC ID of the VNIC where traffic is mirrored. */
+	uint8_t tunnel_type;
+	/*
+	 * This value indicates the tunnel type for this filter. If this
+	 * field is not specified, then the filter shall apply to both
+	 * non-tunneled and tunneled packets. If this field conflicts
+	 * with the tunnel_type specified in the l2_filter_id, then the
+	 * HWRM shall return an error for this command.
+	 */
+	/* Non-tunnel */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_NONTUNNEL \
+		UINT32_C(0x0)
+	/* Virtual eXtensible Local Area Network	(VXLAN) */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_VXLAN \
+		UINT32_C(0x1)
+	/*
+	 * Network Virtualization Generic Routing
+	 * Encapsulation	(NVGRE)
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_NVGRE \
+		UINT32_C(0x2)
+	/*
+	 * Generic Routing Encapsulation	(GRE) inside
+	 * Ethernet payload
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_L2GRE \
+		UINT32_C(0x3)
+	/* IP in IP */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_IPIP \
+		UINT32_C(0x4)
+	/* Generic Network Virtualization Encapsulation	(Geneve) */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_GENEVE \
+		UINT32_C(0x5)
+	/* Multi-Protocol Label Switching	(MPLS) */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_MPLS \
+		UINT32_C(0x6)
+	/* Stateless Transport Tunnel	(STT) */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_STT UINT32_C(0x7)
+	/*
+	 * Generic Routing Encapsulation	(GRE) inside IP
+	 * datagram payload
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_IPGRE \
+		UINT32_C(0x8)
+	/* Any tunneled traffic */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_ANYTUNNEL \
+		UINT32_C(0xff)
+	uint8_t pri_hint;
+	/*
+	 * This hint is provided to help in placing the filter in the
+	 * filter table.
+	 */
+	/* No preference */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_PRI_HINT_NO_PREFER \
+		UINT32_C(0x0)
+	/* Above the given filter */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_PRI_HINT_ABOVE UINT32_C(0x1)
+	/* Below the given filter */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_PRI_HINT_BELOW UINT32_C(0x2)
+	/* As high as possible */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_PRI_HINT_HIGHEST \
+		UINT32_C(0x3)
+	/* As low as possible */
+	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_PRI_HINT_LOWEST UINT32_C(0x4)
+	uint32_t src_ipaddr[4];
+	/*
+	 * The value of source IP address to be used in filtering. For
+	 * IPv4, first four bytes represent the IP address.
+	 */
+	uint32_t src_ipaddr_mask[4];
+	/*
+	 * The value of source IP address mask to be used in filtering.
+	 * For IPv4, first four bytes represent the IP address mask.
+	 */
+	uint32_t dst_ipaddr[4];
+	/*
+	 * The value of destination IP address to be used in filtering.
+	 * For IPv4, first four bytes represent the IP address.
+	 */
+	uint32_t dst_ipaddr_mask[4];
+	/*
+	 * The value of destination IP address mask to be used in
+	 * filtering. For IPv4, first four bytes represent the IP
+	 * address mask.
+	 */
+	uint16_t src_port;
+	/*
+	 * The value of source port to be used in filtering. Applies to
+	 * UDP and TCP traffic.
+	 */
+	uint16_t src_port_mask;
+	/*
+	 * The value of source port mask to be used in filtering.
+	 * Applies to UDP and TCP traffic.
+	 */
+	uint16_t dst_port;
+	/*
+	 * The value of destination port to be used in filtering.
+	 * Applies to UDP and TCP traffic.
+	 */
+	uint16_t dst_port_mask;
+	/*
+	 * The value of destination port mask to be used in filtering.
+	 * Applies to UDP and TCP traffic.
+	 */
+	uint64_t ntuple_filter_id_hint;
+	/* This is the ID of the filter that goes along with the pri_hint. */
+} __attribute__((packed));
+
+/* Output	(24 bytes) */
+struct hwrm_cfa_ntuple_filter_alloc_output {
+	uint16_t error_code;
+	/*
+	 * Pass/Fail or error type Note: receiver to verify the in
+	 * parameters, and fail the call with an error when appropriate
+	 */
+	uint16_t req_type;
+	/* This field returns the type of original request. */
+	uint16_t seq_id;
+	/* This field provides original sequence number of the command. */
+	uint16_t resp_len;
+	/*
+	 * This field is the length of the response in bytes. The last
+	 * byte of the response is a valid flag that will read as '1'
+	 * when the command has been completely written to memory.
+	 */
+	uint64_t ntuple_filter_id;
+	/* This value is an opaque id into CFA data structures. */
+	uint32_t flow_id;
+	/*
+	 * This is the ID of the flow associated with this filter. This
+	 * value shall be used to match and associate the flow
+	 * identifier returned in completion records. A value of
+	 * 0xFFFFFFFF shall indicate no flow id.
+	 */
+	uint8_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t valid;
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been completely
+	 * written. When writing a command completion or response to an
+	 * internal processor, the order of writes has to be such that
+	 * this field is written last.
+	 */
+} __attribute__((packed));
+
+/* hwrm_cfa_ntuple_filter_free */
+/* Description: Free an ntuple filter */
+/* Input	(24 bytes) */
+struct hwrm_cfa_ntuple_filter_free_input {
+	uint16_t req_type;
+	/*
+	 * This value indicates what type of request this is. The format
+	 * for the rest of the command is determined by this field.
+	 */
+	uint16_t cmpl_ring;
+	/*
+	 * This value indicates what completion ring the request
+	 * will be optionally completed on. If the value is -1, then no
+	 * CR completion will be generated. Any other value must be a
+	 * valid CR ring_id value for this function.
+	 */
+	uint16_t seq_id;
+	/* This value indicates the command sequence number. */
+	uint16_t target_id;
+	/*
+	 * Target ID of this command. 0x0 - 0xFFF8 - Used for function
+	 * ids 0xFFF8 - 0xFFFE - Reserved for internal processors 0xFFFF
+	 * - HWRM
+	 */
+	uint64_t resp_addr;
+	/*
+	 * This is the host address where the response will be written
+	 * when the request is complete. This area must be 16B aligned
+	 * and must be cleared to zero before the request is made.
+	 */
+	uint64_t ntuple_filter_id;
+	/* This value is an opaque id into CFA data structures. */
+} __attribute__((packed));
+
+/* Output	(16 bytes) */
+struct hwrm_cfa_ntuple_filter_free_output {
+	uint16_t error_code;
+	/*
+	 * Pass/Fail or error type Note: receiver to verify the in
+	 * parameters, and fail the call with an error when appropriate
+	 */
+	uint16_t req_type;
+	/* This field returns the type of original request. */
+	uint16_t seq_id;
+	/* This field provides original sequence number of the command. */
+	uint16_t resp_len;
+	/*
+	 * This field is the length of the response in bytes. The last
+	 * byte of the response is a valid flag that will read as '1'
+	 * when the command has been completely written to memory.
+	 */
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been completely
+	 * written. When writing a command completion or response to an
+	 * internal processor, the order of writes has to be such that
+	 * this field is written last.
+	 */
+} __attribute__((packed));
+
+/* hwrm_cfa_ntuple_filter_cfg */
+/*
+ * Description: Configure an ntuple filter with a new destination VNIC and/or
+ * meter.
+ */
+/* Input	(48 bytes) */
+struct hwrm_cfa_ntuple_filter_cfg_input {
+	uint16_t req_type;
+	/*
+	 * This value indicates what type of request this is. The format
+	 * for the rest of the command is determined by this field.
+	 */
+	uint16_t cmpl_ring;
+	/*
+	 * This value indicates what completion ring the request
+	 * will be optionally completed on. If the value is -1, then no
+	 * CR completion will be generated. Any other value must be a
+	 * valid CR ring_id value for this function.
+	 */
+	uint16_t seq_id;
+	/* This value indicates the command sequence number. */
+	uint16_t target_id;
+	/*
+	 * Target ID of this command. 0x0 - 0xFFF8 - Used for function
+	 * ids 0xFFF8 - 0xFFFE - Reserved for internal processors 0xFFFF
+	 * - HWRM
+	 */
+	uint64_t resp_addr;
+	/*
+	 * This is the host address where the response will be written
+	 * when the request is complete. This area must be 16B aligned
+	 * and must be cleared to zero before the request is made.
+	 */
+	uint32_t enables;
+	/* This bit must be '1' for the new_dst_id field to be configured. */
+	#define HWRM_CFA_NTUPLE_FILTER_CFG_INPUT_ENABLES_NEW_DST_ID	\
+		UINT32_C(0x1)
+	/*
+	 * This bit must be '1' for the new_mirror_vnic_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_CFG_INPUT_ENABLES_NEW_MIRROR_VNIC_ID \
+		UINT32_C(0x2)
+	/*
+	 * This bit must be '1' for the new_meter_instance_id field to
+	 * be configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_CFG_INPUT_ENABLES_NEW_METER_INSTANCE_ID \
+		UINT32_C(0x4)
+	uint32_t unused_0;
+	uint64_t ntuple_filter_id;
+	/* This value is an opaque id into CFA data structures. */
+	uint32_t new_dst_id;
+	/*
+	 * If set, this value shall represent the new Logical VNIC ID of
+	 * the destination VNIC for the RX path and new network port id
+	 * of the destination port for the TX path.
+	 */
+	uint32_t new_mirror_vnic_id;
+	/* New Logical VNIC ID of the VNIC where traffic is mirrored. */
+	uint16_t new_meter_instance_id;
+	/*
+	 * New meter to attach to the flow. Specifying the invalid
+	 * instance ID is used to remove any existing meter from the
+	 * flow.
+	 */
+	/*
+	 * A value of 0xfff is considered invalid and
+	 * implies the instance is not configured.
+	 */
+	#define HWRM_CFA_NTUPLE_FILTER_CFG_INPUT_NEW_METER_INSTANCE_ID_INVALID \
+		UINT32_C(0xffff)
+	uint16_t unused_1[3];
+} __attribute__((packed));
+
+/* Output	(16 bytes) */
+struct hwrm_cfa_ntuple_filter_cfg_output {
+	uint16_t error_code;
+	/*
+	 * Pass/Fail or error type Note: receiver to verify the in
+	 * parameters, and fail the call with an error when appropriate
+	 */
+	uint16_t req_type;
+	/* This field returns the type of original request. */
+	uint16_t seq_id;
+	/* This field provides original sequence number of the command. */
+	uint16_t resp_len;
+	/*
+	 * This field is the length of the response in bytes. The last
+	 * byte of the response is a valid flag that will read as '1'
+	 * when the command has been completely written to memory.
+	 */
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been completely
+	 * written. When writing a command completion or response to an
+	 * internal processor, the order of writes has to be such that
+	 * this field is written last.
+	 */
+} __attribute__((packed));
+
+/* hwrm_cfa_em_flow_alloc */
+/*
+ * Description: This is a generic Exact Match	(EM) flow that uses fields from
+ * L4/L3/L2 headers. The EM flows apply to transmit and receive traffic. All
+ * L2/L3/L4 header fields are specified in network byte order. For each EM flow,
+ * there is an associated set of actions specified. For tunneled packets, all
+ * L2/L3/L4 fields specified are fields of inner headers unless otherwise
+ * specified. # If a field specified in this command is not enabled as a valid
+ * field, then that field shall not be used in matching packet header fields
+ * against this EM flow entry.
+ */
+/* Input	(112 bytes) */
+struct hwrm_cfa_em_flow_alloc_input {
+	uint16_t req_type;
+	/*
+	 * This value indicates what type of request this is. The format
+	 * for the rest of the command is determined by this field.
+	 */
+	uint16_t cmpl_ring;
+	/*
+	 * This value indicates what completion ring the request
+	 * will be optionally completed on. If the value is -1, then no
+	 * CR completion will be generated. Any other value must be a
+	 * valid CR ring_id value for this function.
+	 */
+	uint16_t seq_id;
+	/* This value indicates the command sequence number. */
+	uint16_t target_id;
+	/*
+	 * Target ID of this command. 0x0 - 0xFFF8 - Used for function
+	 * ids 0xFFF8 - 0xFFFE - Reserved for internal processors 0xFFFF
+	 * - HWRM
+	 */
+	uint64_t resp_addr;
+	/*
+	 * This is the host address where the response will be written
+	 * when the request is complete. This area must be 16B aligned
+	 * and must be cleared to zero before the request is made.
+	 */
+	uint32_t flags;
+	/*
+	 * Enumeration denoting the RX, TX type of the resource. This
+	 * enumeration is used for resources that are similar for both
+	 * TX and RX paths of the chip.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH	UINT32_C(0x1)
+	/* tx path */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_TX	\
+		(UINT32_C(0x0) << 0)
+	/* rx path */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX	\
+		(UINT32_C(0x1) << 0)
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_LAST \
+		HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX
+	/*
+	 * Setting of this flag indicates enabling of a byte counter for
+	 * a given flow.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_BYTE_CTR	UINT32_C(0x2)
+	/*
+	 * Setting of this flag indicates enabling of a packet counter
+	 * for a given flow.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PKT_CTR	UINT32_C(0x4)
+	/*
+	 * Setting of this flag indicates de-capsulation action for the
+	 * given flow.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DECAP	UINT32_C(0x8)
+	/*
+	 * Setting of this flag indicates encapsulation action for the
+	 * given flow.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_ENCAP	UINT32_C(0x10)
+	/*
+	 * Setting of this flag indicates drop action. If this flag is
+	 * not set, then it should be considered accept action.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP	UINT32_C(0x20)
+	/*
+	 * Setting of this flag indicates that a meter is expected to be
+	 * attached to this flow. This hint can be used when choosing
+	 * the action record format required for the flow.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_METER	UINT32_C(0x40)
+	uint32_t enables;
+	/* This bit must be '1' for the l2_filter_id field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID UINT32_C(0x1)
+	/* This bit must be '1' for the tunnel_type field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_TUNNEL_TYPE UINT32_C(0x2)
+	/* This bit must be '1' for the tunnel_id field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_TUNNEL_ID UINT32_C(0x4)
+	/* This bit must be '1' for the src_macaddr field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_MACADDR UINT32_C(0x8)
+	/* This bit must be '1' for the dst_macaddr field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_MACADDR UINT32_C(0x10)
+	/* This bit must be '1' for the ovlan_vid field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_OVLAN_VID UINT32_C(0x20)
+	/* This bit must be '1' for the ivlan_vid field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IVLAN_VID UINT32_C(0x40)
+	/* This bit must be '1' for the ethertype field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_ETHERTYPE UINT32_C(0x80)
+	/* This bit must be '1' for the src_ipaddr field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_IPADDR	UINT32_C(0x100)
+	/* This bit must be '1' for the dst_ipaddr field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_IPADDR	UINT32_C(0x200)
+	/* This bit must be '1' for the ipaddr_type field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IPADDR_TYPE UINT32_C(0x400)
+	/* This bit must be '1' for the ip_protocol field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IP_PROTOCOL UINT32_C(0x800)
+	/* This bit must be '1' for the src_port field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_PORT UINT32_C(0x1000)
+	/* This bit must be '1' for the dst_port field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_PORT UINT32_C(0x2000)
+	/* This bit must be '1' for the dst_id field to be configured. */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_ID	UINT32_C(0x4000)
+	/*
+	 * This bit must be '1' for the mirror_vnic_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID	\
+		UINT32_C(0x8000)
+	/*
+	 * This bit must be '1' for the encap_record_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_ENCAP_RECORD_ID	 \
+		UINT32_C(0x10000)
+	/*
+	 * This bit must be '1' for the meter_instance_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_METER_INSTANCE_ID	\
+		UINT32_C(0x20000)
+	uint64_t l2_filter_id;
+	/*
+	 * This value identifies a set of CFA data structures used for
+	 * an L2 context.
+	 */
+	uint8_t tunnel_type;
+	/* Tunnel Type. */
+	/* Non-tunnel */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_NONTUNNEL \
+		UINT32_C(0x0)
+	/* Virtual eXtensible Local Area Network	(VXLAN) */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_VXLAN	UINT32_C(0x1)
+	/*
+	 * Network Virtualization Generic Routing
+	 * Encapsulation	(NVGRE)
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_NVGRE	UINT32_C(0x2)
+	/*
+	 * Generic Routing Encapsulation	(GRE) inside
+	 * Ethernet payload
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_L2GRE	UINT32_C(0x3)
+	/* IP in IP */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_IPIP	UINT32_C(0x4)
+	/* Generic Network Virtualization Encapsulation	(Geneve) */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_GENEVE	UINT32_C(0x5)
+	/* Multi-Protocol Label Switching	(MPLS) */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_MPLS	UINT32_C(0x6)
+	/* Stateless Transport Tunnel	(STT) */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_STT	UINT32_C(0x7)
+	/*
+	 * Generic Routing Encapsulation	(GRE) inside IP
+	 * datagram payload
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_IPGRE	UINT32_C(0x8)
+	/* Any tunneled traffic */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_TUNNEL_TYPE_ANYTUNNEL \
+		UINT32_C(0xff)
+	uint8_t unused_0;
+	uint16_t unused_1;
+	uint32_t tunnel_id;
+	/*
+	 * Tunnel identifier. Virtual Network Identifier	(VNI). Only
+	 * valid with tunnel_types VXLAN, NVGRE, and Geneve. Only lower
+	 * 24-bits of VNI field are used in setting up the filter.
+	 */
+	uint8_t src_macaddr[6];
+	/*
+	 * This value indicates the source MAC address in the Ethernet
+	 * header.
+	 */
+	uint16_t meter_instance_id;
+	/* The meter instance to attach to the flow. */
+	/*
+	 * A value of 0xffff is considered invalid and
+	 * implies the instance is not configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_METER_INSTANCE_ID_INVALID   \
+		UINT32_C(0xffff)
+	uint8_t dst_macaddr[6];
+	/*
+	 * This value indicates the destination MAC address in the
+	 * Ethernet header.
+	 */
+	uint16_t ovlan_vid;
+	/*
+	 * This value indicates the VLAN ID of the outer VLAN tag in the
+	 * Ethernet header.
+	 */
+	uint16_t ivlan_vid;
+	/*
+	 * This value indicates the VLAN ID of the inner VLAN tag in the
+	 * Ethernet header.
+	 */
+	uint16_t ethertype;
+	/* This value indicates the ethertype in the Ethernet header. */
+	uint8_t ip_addr_type;
+	/*
+	 * This value indicates the type of IP address. 4 - IPv4 6 -
+	 * IPv6 All others are invalid.
+	 */
+	/* invalid */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_UNKNOWN UINT32_C(0x0)
+	/* IPv4 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4	UINT32_C(0x4)
+	/* IPv6 */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6	UINT32_C(0x6)
+	uint8_t ip_protocol;
+	/*
+	 * The value of the protocol field in the IP header. Applies to
+	 * UDP and TCP traffic. 6 - UDP 17 - TCP
+	 */
+	/* invalid */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_PROTOCOL_UNKNOWN UINT32_C(0x0)
+	/* UDP */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_PROTOCOL_UDP	UINT32_C(0x6)
+	/* TCP */
+	#define HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_PROTOCOL_TCP	UINT32_C(0x11)
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint32_t src_ipaddr[4];
+	/*
+	 * The value of source IP address to be used in filtering. For
+	 * IPv4, first four bytes represent the IP address.
+	 */
+	uint32_t dst_ipaddr[4];
+	/*
+	 * big_endian = True The value of destination IP address to be
+	 * used in filtering. For IPv4, first four bytes represent the
+	 * IP address.
+	 */
+	uint16_t src_port;
+	/*
+	 * The value of source port to be used in filtering. Applies to
+	 * UDP and TCP traffic.
+	 */
+	uint16_t dst_port;
+	/*
+	 * The value of destination port to be used in filtering.
+	 * Applies to UDP and TCP traffic.
+	 */
+	uint16_t dst_id;
+	/*
+	 * If set, this value shall represent the Logical VNIC ID of the
+	 * destination VNIC for the RX path and network port id of the
+	 * destination port for the TX path.
+	 */
+	uint16_t mirror_vnic_id;
+	/* Logical VNIC ID of the VNIC where traffic is mirrored. */
+	uint32_t encap_record_id;
+	/* Logical ID of the encapsulation record. */
+	uint32_t unused_4;
+} __attribute__((packed));
+
+/* Output	(24 bytes) */
+struct hwrm_cfa_em_flow_alloc_output {
+	uint16_t error_code;
+	/*
+	 * Pass/Fail or error type Note: receiver to verify the in
+	 * parameters, and fail the call with an error when appropriate
+	 */
+	uint16_t req_type;
+	/* This field returns the type of original request. */
+	uint16_t seq_id;
+	/* This field provides original sequence number of the command. */
+	uint16_t resp_len;
+	/*
+	 * This field is the length of the response in bytes. The last
+	 * byte of the response is a valid flag that will read as '1'
+	 * when the command has been completely written to memory.
+	 */
+	uint64_t em_filter_id;
+	/* This value is an opaque id into CFA data structures. */
+	uint32_t flow_id;
+	/*
+	 * This is the ID of the flow associated with this filter. This
+	 * value shall be used to match and associate the flow
+	 * identifier returned in completion records. A value of
+	 * 0xFFFFFFFF shall indicate no flow id.
+	 */
+	uint8_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t valid;
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been completely
+	 * written. When writing a command completion or response to an
+	 * internal processor, the order of writes has to be such that
+	 * this field is written last.
+	 */
+} __attribute__((packed));
+
+/* hwrm_cfa_em_flow_free */
+/* Description: Free an EM flow table entry */
+/* Input	(24 bytes) */
+struct hwrm_cfa_em_flow_free_input {
+	uint16_t req_type;
+	/*
+	 * This value indicates what type of request this is. The format
+	 * for the rest of the command is determined by this field.
+	 */
+	uint16_t cmpl_ring;
+	/*
+	 * This value indicates what completion ring the request
+	 * will be optionally completed on. If the value is -1, then no
+	 * CR completion will be generated. Any other value must be a
+	 * valid CR ring_id value for this function.
+	 */
+	uint16_t seq_id;
+	/* This value indicates the command sequence number. */
+	uint16_t target_id;
+	/*
+	 * Target ID of this command. 0x0 - 0xFFF8 - Used for function
+	 * ids 0xFFF8 - 0xFFFE - Reserved for internal processors 0xFFFF
+	 * - HWRM
+	 */
+	uint64_t resp_addr;
+	/*
+	 * This is the host address where the response will be written
+	 * when the request is complete. This area must be 16B aligned
+	 * and must be cleared to zero before the request is made.
+	 */
+	uint64_t em_filter_id;
+	/* This value is an opaque id into CFA data structures. */
+} __attribute__((packed));
+
+/* Output	(16 bytes) */
+struct hwrm_cfa_em_flow_free_output {
+	uint16_t error_code;
+	/*
+	 * Pass/Fail or error type Note: receiver to verify the in
+	 * parameters, and fail the call with an error when appropriate
+	 */
+	uint16_t req_type;
+	/* This field returns the type of original request. */
+	uint16_t seq_id;
+	/* This field provides original sequence number of the command. */
+	uint16_t resp_len;
+	/*
+	 * This field is the length of the response in bytes. The last
+	 * byte of the response is a valid flag that will read as '1'
+	 * when the command has been completely written to memory.
+	 */
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been completely
+	 * written. When writing a command completion or response to an
+	 * internal processor, the order of writes has to be such that
+	 * this field is written last.
+	 */
+} __attribute__((packed));
+
+/* hwrm_cfa_em_flow_cfg */
+/*
+ * Description: Configure an EM flow with a new destination VNIC and/or meter.
+ */
+/* Input	(48 bytes) */
+struct hwrm_cfa_em_flow_cfg_input {
+	uint16_t req_type;
+	/*
+	 * This value indicates what type of request this is. The format
+	 * for the rest of the command is determined by this field.
+	 */
+	uint16_t cmpl_ring;
+	/*
+	 * This value indicates what completion ring the request
+	 * will be optionally completed on. If the value is -1, then no
+	 * CR completion will be generated. Any other value must be a
+	 * valid CR ring_id value for this function.
+	 */
+	uint16_t seq_id;
+	/* This value indicates the command sequence number. */
+	uint16_t target_id;
+	/*
+	 * Target ID of this command. 0x0 - 0xFFF8 - Used for function
+	 * ids 0xFFF8 - 0xFFFE - Reserved for internal processors 0xFFFF
+	 * - HWRM
+	 */
+	uint64_t resp_addr;
+	/*
+	 * This is the host address where the response will be written
+	 * when the request is complete. This area must be 16B aligned
+	 * and must be cleared to zero before the request is made.
+	 */
+	uint32_t enables;
+	/* This bit must be '1' for the new_dst_id field to be configured. */
+	#define HWRM_CFA_EM_FLOW_CFG_INPUT_ENABLES_NEW_DST_ID	UINT32_C(0x1)
+	/*
+	 * This bit must be '1' for the new_mirror_vnic_id field to be
+	 * configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_CFG_INPUT_ENABLES_NEW_MIRROR_VNIC_ID	\
+		UINT32_C(0x2)
+	/*
+	 * This bit must be '1' for the new_meter_instance_id field to
+	 * be configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_CFG_INPUT_ENABLES_NEW_METER_INSTANCE_ID  \
+		UINT32_C(0x4)
+	uint32_t unused_0;
+	uint64_t em_filter_id;
+	/* This value is an opaque id into CFA data structures. */
+	uint32_t new_dst_id;
+	/*
+	 * If set, this value shall represent the new Logical VNIC ID of
+	 * the destination VNIC for the RX path and network port id of
+	 * the destination port for the TX path.
+	 */
+	uint32_t new_mirror_vnic_id;
+	/* New Logical VNIC ID of the VNIC where traffic is mirrored. */
+	uint16_t new_meter_instance_id;
+	/*
+	 * New meter to attach to the flow. Specifying the invalid
+	 * instance ID removes any existing meter from the flow.
+	 */
+	/*
+	 * A value of 0xffff is considered invalid and
+	 * implies the instance is not configured.
+	 */
+	#define HWRM_CFA_EM_FLOW_CFG_INPUT_NEW_METER_INSTANCE_ID_INVALID \
+		UINT32_C(0xffff)
+	uint16_t unused_1[3];
+} __attribute__((packed));
+
+/* Output	(16 bytes) */
+struct hwrm_cfa_em_flow_cfg_output {
+	uint16_t error_code;
+	/*
+	 * Pass/Fail or error type Note: receiver to verify the in
+	 * parameters, and fail the call with an error when appropriate
+	 */
+	uint16_t req_type;
+	/* This field returns the type of original request. */
+	uint16_t seq_id;
+	/* This field provides original sequence number of the command. */
+	uint16_t resp_len;
+	/*
+	 * This field is the length of the response in bytes. The last
+	 * byte of the response is a valid flag that will read as '1'
+	 * when the command has been completely written to memory.
+	 */
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been completely
+	 * written. When writing a command completion or response to an
+	 * internal processor, the order of writes has to be such that
+	 * this field is written last.
+	 */
+} __attribute__((packed));
+
 /* hwrm_cfa_vlan_antispoof_cfg */
 /* Description: Configures vlan anti-spoof filters for VF. */
 /* Input (32 bytes) */
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 18/24] net/bnxt: add support for flow filter ops
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (16 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 17/24] net/bnxt: add new HWRM structs to support flow filtering Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 19/24] doc: update release notes Ajit Khaparde
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patch adds support for the flow validate/create/destroy/flush
and ethertype add/del ops.
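
For example, once the PMD exposes bnxt_flow_ops through
RTE_ETH_FILTER_GENERIC, an application can program a flow with the
generic rte_flow API. A minimal sketch (error handling trimmed; the
helper name, port id, queue index and the IPv4 ethertype match are
illustrative assumptions, not part of this patch):

    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_flow.h>

    static int
    redirect_ipv4_to_queue(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item_eth eth_spec = {
                    .type = rte_cpu_to_be_16(ETHER_TYPE_IPv4),
            };
            struct rte_flow_item_eth eth_mask = {
                    .type = rte_cpu_to_be_16(0xffff), /* exact match only */
            };
            struct rte_flow_action_queue queue = { .index = queue_id };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH,
                      .spec = &eth_spec, .mask = &eth_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_error err;

            /* Both calls land in bnxt_flow_validate()/bnxt_flow_create(). */
            if (rte_flow_validate(port_id, &attr, pattern, actions, &err))
                    return -rte_errno;
            return rte_flow_create(port_id, &attr, pattern, actions, &err) ?
                    0 : -rte_errno;
    }

Note the driver requires both spec and mask on each pattern item and
rejects flows while RSS is enabled on the device.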

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: incorporate review comments.
v2->v3: fix 32-bit builds.
v3->v4: fix a clang error.
---
 drivers/net/bnxt/bnxt.h         |   7 +
 drivers/net/bnxt/bnxt_ethdev.c  | 202 +++++++++-
 drivers/net/bnxt/bnxt_filter.c  | 871 +++++++++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/bnxt_filter.h  |  76 ++++
 drivers/net/bnxt/bnxt_hwrm.c    | 302 ++++++++++++--
 drivers/net/bnxt/bnxt_hwrm.h    |  12 +-
 drivers/net/bnxt/bnxt_vnic.c    |   1 +
 drivers/net/bnxt/bnxt_vnic.h    |   1 +
 drivers/net/bnxt/rte_pmd_bnxt.c |   4 +-
 9 files changed, 1434 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 65f716b96..e7b1007c1 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -171,6 +171,12 @@ struct bnxt_cos_queue_info {
 	uint8_t	profile;
 };
 
+struct rte_flow {
+	STAILQ_ENTRY(rte_flow) next;
+	struct bnxt_filter_info *filter;
+	struct bnxt_vnic_info	*vnic;
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 struct bnxt {
 	void				*bar0;
@@ -271,4 +277,5 @@ int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg);
 #define RX_PROD_AGG_BD_TYPE_RX_PROD_AGG		0x6
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
+extern const struct rte_flow_ops bnxt_flow_ops;
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 97ddca069..fdba0ac69 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -616,7 +616,7 @@ static void bnxt_mac_addr_remove_op(struct rte_eth_dev *eth_dev,
 				if (filter->mac_index == index) {
 					STAILQ_REMOVE(&vnic->filter, filter,
 						      bnxt_filter_info, next);
-					bnxt_hwrm_clear_filter(bp, filter);
+					bnxt_hwrm_clear_l2_filter(bp, filter);
 					filter->mac_index = INVALID_MAC_INDEX;
 					memset(&filter->l2_addr, 0,
 					       ETHER_ADDR_LEN);
@@ -663,7 +663,7 @@ static int bnxt_mac_addr_add_op(struct rte_eth_dev *eth_dev,
 	STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
 	filter->mac_index = index;
 	memcpy(filter->l2_addr, mac_addr, ETHER_ADDR_LEN);
-	return bnxt_hwrm_set_filter(bp, vnic->fw_vnic_id, filter);
+	return bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, filter);
 }
 
 int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
@@ -1157,7 +1157,7 @@ static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 					/* Must delete the filter */
 					STAILQ_REMOVE(&vnic->filter, filter,
 						      bnxt_filter_info, next);
-					bnxt_hwrm_clear_filter(bp, filter);
+					bnxt_hwrm_clear_l2_filter(bp, filter);
 					STAILQ_INSERT_TAIL(
 							&bp->free_filter_list,
 							filter, next);
@@ -1183,7 +1183,7 @@ static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 					memcpy(new_filter->l2_addr,
 					       filter->l2_addr, ETHER_ADDR_LEN);
 					/* MAC only filter */
-					rc = bnxt_hwrm_set_filter(bp,
+					rc = bnxt_hwrm_set_l2_filter(bp,
 							vnic->fw_vnic_id,
 							new_filter);
 					if (rc)
@@ -1235,7 +1235,7 @@ static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 					/* Must delete the MAC filter */
 					STAILQ_REMOVE(&vnic->filter, filter,
 						      bnxt_filter_info, next);
-					bnxt_hwrm_clear_filter(bp, filter);
+					bnxt_hwrm_clear_l2_filter(bp, filter);
 					filter->l2_ovlan = 0;
 					STAILQ_INSERT_TAIL(
 							&bp->free_filter_list,
@@ -1258,8 +1258,9 @@ static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 				new_filter->l2_ovlan = vlan_id;
 				new_filter->l2_ovlan_mask = 0xF000;
 				new_filter->enables |= en;
-				rc = bnxt_hwrm_set_filter(bp, vnic->fw_vnic_id,
-							  new_filter);
+				rc = bnxt_hwrm_set_l2_filter(bp,
+							     vnic->fw_vnic_id,
+							     new_filter);
 				if (rc)
 					goto exit;
 				RTE_LOG(INFO, PMD,
@@ -1338,7 +1339,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev, struct ether_addr *addr)
 		/* Default Filter is at Index 0 */
 		if (filter->mac_index != 0)
 			continue;
-		rc = bnxt_hwrm_clear_filter(bp, filter);
+		rc = bnxt_hwrm_clear_l2_filter(bp, filter);
 		if (rc)
 			break;
 		memcpy(filter->l2_addr, bp->mac_addr, ETHER_ADDR_LEN);
@@ -1347,7 +1348,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev, struct ether_addr *addr)
 		filter->enables |=
 			HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
 			HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR_MASK;
-		rc = bnxt_hwrm_set_filter(bp, vnic->fw_vnic_id, filter);
+		rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, filter);
 		if (rc)
 			break;
 		filter->mac_index = 0;
@@ -1647,6 +1648,188 @@ bnxt_tx_descriptor_status_op(void *tx_queue, uint16_t offset)
 	return RTE_ETH_TX_DESC_FULL;
 }
 
+static struct bnxt_filter_info *
+bnxt_match_and_validate_ether_filter(struct bnxt *bp,
+				struct rte_eth_ethertype_filter *efilter,
+				struct bnxt_vnic_info *vnic0,
+				struct bnxt_vnic_info *vnic,
+				int *ret)
+{
+	struct bnxt_filter_info *mfilter = NULL;
+	int match = 0;
+	*ret = 0;
+
+	if (efilter->ether_type != ETHER_TYPE_IPv4 &&
+		efilter->ether_type != ETHER_TYPE_IPv6) {
+		RTE_LOG(ERR, PMD, "unsupported ether_type(0x%04x) in"
+			" ethertype filter.", efilter->ether_type);
+		*ret = -EINVAL;
+	}
+	if (efilter->queue >= bp->rx_nr_rings) {
+		RTE_LOG(ERR, PMD, "Invalid queue %d\n", efilter->queue);
+		*ret = -EINVAL;
+	}
+
+	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+	vnic = STAILQ_FIRST(&bp->ff_pool[efilter->queue]);
+	if (vnic == NULL) {
+		RTE_LOG(ERR, PMD, "Invalid queue %d\n", efilter->queue);
+		*ret = -EINVAL;
+	}
+
+	if (efilter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+		STAILQ_FOREACH(mfilter, &vnic0->filter, next) {
+			if ((!memcmp(efilter->mac_addr.addr_bytes,
+				     mfilter->l2_addr, ETHER_ADDR_LEN) &&
+			     (mfilter->flags ==
+			      HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP) &&
+			     (mfilter->ethertype == efilter->ether_type))) {
+				match = 1;
+				break;
+			}
+		}
+	} else {
+		STAILQ_FOREACH(mfilter, &vnic->filter, next)
+			if ((!memcmp(efilter->mac_addr.addr_bytes,
+				     mfilter->l2_addr, ETHER_ADDR_LEN) &&
+			     (mfilter->ethertype == efilter->ether_type) &&
+			     (mfilter->flags ==
+			      HWRM_CFA_L2_FILTER_CFG_INPUT_FLAGS_PATH_RX))) {
+				match = 1;
+				break;
+			}
+	}
+
+	if (match)
+		*ret = -EEXIST;
+
+	return mfilter;
+}
+
+static int
+bnxt_ethertype_filter(struct rte_eth_dev *dev,
+			enum rte_filter_op filter_op,
+			void *arg)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct rte_eth_ethertype_filter *efilter =
+			(struct rte_eth_ethertype_filter *)arg;
+	struct bnxt_filter_info *bfilter, *filter1;
+	struct bnxt_vnic_info *vnic, *vnic0;
+	int ret;
+
+	if (filter_op == RTE_ETH_FILTER_NOP)
+		return 0;
+
+	if (arg == NULL) {
+		RTE_LOG(ERR, PMD, "arg shouldn't be NULL for operation %u.",
+			    filter_op);
+		return -EINVAL;
+	}
+
+	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+	vnic = STAILQ_FIRST(&bp->ff_pool[efilter->queue]);
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_ADD:
+		bnxt_match_and_validate_ether_filter(bp, efilter,
+							vnic0, vnic, &ret);
+		if (ret < 0)
+			return ret;
+
+		bfilter = bnxt_get_unused_filter(bp);
+		if (bfilter == NULL) {
+			RTE_LOG(ERR, PMD,
+				"Not enough resources for a new filter.\n");
+			return -ENOMEM;
+		}
+		bfilter->filter_type = HWRM_CFA_NTUPLE_FILTER;
+		memcpy(bfilter->l2_addr, efilter->mac_addr.addr_bytes,
+		       ETHER_ADDR_LEN);
+		memcpy(bfilter->dst_macaddr, efilter->mac_addr.addr_bytes,
+		       ETHER_ADDR_LEN);
+		bfilter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR;
+		bfilter->ethertype = efilter->ether_type;
+		bfilter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE;
+
+		filter1 = bnxt_get_l2_filter(bp, bfilter, vnic0);
+		if (filter1 == NULL) {
+			ret = -1;
+			goto cleanup;
+		}
+		bfilter->enables |=
+			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		bfilter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+
+		bfilter->dst_id = vnic->fw_vnic_id;
+
+		if (efilter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+			bfilter->flags =
+				HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP;
+		}
+
+		ret = bnxt_hwrm_set_ntuple_filter(bp, bfilter->dst_id, bfilter);
+		if (ret)
+			goto cleanup;
+		STAILQ_INSERT_TAIL(&vnic->filter, bfilter, next);
+		break;
+	case RTE_ETH_FILTER_DELETE:
+		filter1 = bnxt_match_and_validate_ether_filter(bp, efilter,
+							vnic0, vnic, &ret);
+		if (ret == -EEXIST) {
+			ret = bnxt_hwrm_clear_ntuple_filter(bp, filter1);
+
+			STAILQ_REMOVE(&vnic->filter, filter1, bnxt_filter_info,
+				      next);
+			bnxt_free_filter(bp, filter1);
+		} else if (ret == 0) {
+			RTE_LOG(ERR, PMD, "No matching filter found\n");
+		}
+		break;
+	default:
+		RTE_LOG(ERR, PMD, "unsupported operation %u.", filter_op);
+		ret = -EINVAL;
+		goto error;
+	}
+	return ret;
+cleanup:
+	bnxt_free_filter(bp, bfilter);
+error:
+	return ret;
+}
+
+static int
+bnxt_filter_ctrl_op(struct rte_eth_dev *dev __rte_unused,
+		    enum rte_filter_type filter_type,
+		    enum rte_filter_op filter_op, void *arg)
+{
+	int ret = 0;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_NTUPLE:
+	case RTE_ETH_FILTER_FDIR:
+	case RTE_ETH_FILTER_TUNNEL:
+		/* FALLTHROUGH */
+		RTE_LOG(ERR, PMD,
+			"filter type: %d: To be implemented\n", filter_type);
+		break;
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ret = bnxt_ethertype_filter(dev, filter_op, arg);
+		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &bnxt_flow_ops;
+		break;
+	default:
+		RTE_LOG(ERR, PMD,
+			"Filter type (%d) not supported", filter_type);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
 /*
  * Initialization
  */
@@ -1699,6 +1882,7 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.rx_queue_count = bnxt_rx_queue_count_op,
 	.rx_descriptor_status = bnxt_rx_descriptor_status_op,
 	.tx_descriptor_status = bnxt_tx_descriptor_status_op,
+	.filter_ctrl = bnxt_filter_ctrl_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index e9aac2714..3eaa7e45a 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -35,6 +35,9 @@
 
 #include <rte_log.h>
 #include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
 
 #include "bnxt.h"
 #include "bnxt_filter.h"
@@ -94,6 +97,8 @@ void bnxt_init_filters(struct bnxt *bp)
 	for (i = 0; i < max_filters; i++) {
 		filter = &bp->filter_info[i];
 		filter->fw_l2_filter_id = -1;
+		filter->fw_em_filter_id = -1;
+		filter->fw_ntuple_filter_id = -1;
 		STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next);
 	}
 }
@@ -121,7 +126,7 @@ void bnxt_free_all_filters(struct bnxt *bp)
 
 	for (i = 0; i < bp->pf.max_vfs; i++) {
 		STAILQ_FOREACH(filter, &bp->pf.vf_info[i].filter, next) {
-			bnxt_hwrm_clear_filter(bp, filter);
+			bnxt_hwrm_clear_l2_filter(bp, filter);
 		}
 	}
 }
@@ -142,7 +147,7 @@ void bnxt_free_filter_mem(struct bnxt *bp)
 		if (filter->fw_l2_filter_id != ((uint64_t)-1)) {
 			RTE_LOG(ERR, PMD, "HWRM filter is not freed??\n");
 			/* Call HWRM to try to free filter again */
-			rc = bnxt_hwrm_clear_filter(bp, filter);
+			rc = bnxt_hwrm_clear_l2_filter(bp, filter);
 			if (rc)
 				RTE_LOG(ERR, PMD,
 				       "HWRM filter cannot be freed rc = %d\n",
@@ -174,3 +179,865 @@ int bnxt_alloc_filter_mem(struct bnxt *bp)
 	bp->filter_info = filter_mem;
 	return 0;
 }
+
+struct bnxt_filter_info *bnxt_get_unused_filter(struct bnxt *bp)
+{
+	struct bnxt_filter_info *filter;
+
+	/* Find the 1st unused filter from the free_filter_list pool */
+	filter = STAILQ_FIRST(&bp->free_filter_list);
+	if (!filter) {
+		RTE_LOG(ERR, PMD, "No more free filter resources\n");
+		return NULL;
+	}
+	STAILQ_REMOVE_HEAD(&bp->free_filter_list, next);
+
+	return filter;
+}
+
+void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
+{
+	STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next);
+}
+
+static int
+bnxt_flow_args_validate(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_flow_error *error)
+{
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static const struct rte_flow_item *
+nxt_non_void_pattern(const struct rte_flow_item *cur)
+{
+	while (1) {
+		if (cur->type != RTE_FLOW_ITEM_TYPE_VOID)
+			return cur;
+		cur++;
+	}
+}
+
+static const struct rte_flow_action *
+nxt_non_void_action(const struct rte_flow_action *cur)
+{
+	while (1) {
+		if (cur->type != RTE_FLOW_ACTION_TYPE_VOID)
+			return cur;
+		cur++;
+	}
+}
+
+static inline int check_zero_bytes(const uint8_t *bytes, int len)
+{
+	int i;
+	for (i = 0; i < len; i++)
+		if (bytes[i] != 0x00)
+			return 0;
+	return 1;
+}
+
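+/*
+ * Decide which HWRM filter type the pattern needs: matching on VLAN
+ * forces an exact-match (EM) flow, while L3/L4 matching requires an
+ * ntuple filter; the two cannot be combined.
+ */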
+static int
+bnxt_filter_type_check(const struct rte_flow_item pattern[],
+		       struct rte_flow_error *error __rte_unused)
+{
+	const struct rte_flow_item *item = nxt_non_void_pattern(pattern);
+	int use_ntuple = 1;
+
+	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			use_ntuple = 1;
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			use_ntuple = 0;
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+		case RTE_FLOW_ITEM_TYPE_TCP:
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			/* FALLTHROUGH */
+			/* need ntuple match, reset exact match */
+			if (!use_ntuple) {
+				RTE_LOG(ERR, PMD,
+					"VLAN flow cannot use NTUPLE filter\n");
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Cannot use VLAN with NTUPLE");
+				return -rte_errno;
+			}
+			use_ntuple |= 1;
+			break;
+		default:
+			RTE_LOG(ERR, PMD, "Unknown Flow type");
+			use_ntuple |= 1;
+		}
+		item++;
+	}
+	return use_ntuple;
+}
+
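+/*
+ * Walk the pattern items and fill in the bnxt_filter_info. The "en"
+ * bitmap accumulates the HWRM "enables" flags (ntuple or EM variants,
+ * depending on bnxt_filter_type_check()) for each field that was
+ * actually specified.
+ */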
+static int
+bnxt_validate_and_parse_flow_type(const struct rte_flow_item pattern[],
+				  struct rte_flow_error *error,
+				  struct bnxt_filter_info *filter)
+{
+	const struct rte_flow_item *item = nxt_non_void_pattern(pattern);
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
+	uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
+	uint32_t tenant_id_be = 0;
+	bool vni_masked = 0;
+	bool tni_masked = 0;
+	int use_ntuple;
+	uint32_t en = 0;
+
+	use_ntuple = bnxt_filter_type_check(pattern, error);
+	RTE_LOG(ERR, PMD, "Use NTUPLE %d\n", use_ntuple);
+	if (use_ntuple < 0)
+		return use_ntuple;
+
+	filter->filter_type = use_ntuple ?
+		HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER;
+
+	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		if (item->last) {
+			/* last or range is NOT supported as match criteria */
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "No support for range");
+			return -rte_errno;
+		}
+		if (!item->spec || !item->mask) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "spec/mask is NULL");
+			return -rte_errno;
+		}
+		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Source MAC address mask cannot be partially set.
+			 * Should be All 0's or all 1's.
+			 * Destination MAC address mask must not be partially
+			 * set. Should be all 1's or all 0's.
+			 */
+			if ((!is_zero_ether_addr(&eth_mask->src) &&
+			     !is_broadcast_ether_addr(&eth_mask->src)) ||
+			    (!is_zero_ether_addr(&eth_mask->dst) &&
+			     !is_broadcast_ether_addr(&eth_mask->dst))) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "MAC_addr mask not valid");
+				return -rte_errno;
+			}
+
+			/* Only exact ethertype matches are supported. */
+			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "ethertype mask not valid");
+				return -rte_errno;
+			}
+
+			if (is_broadcast_ether_addr(&eth_mask->dst)) {
+				rte_memcpy(filter->dst_macaddr,
+					   &eth_spec->dst, 6);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
+					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
+			}
+			if (is_broadcast_ether_addr(&eth_mask->src)) {
+				rte_memcpy(filter->src_macaddr,
+					   &eth_spec->src, 6);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
+					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
+			}
+			if (eth_spec->type) {
+				filter->ethertype =
+					rte_be_to_cpu_16(eth_spec->type);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE :
+					EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE;
+			}
+
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			vlan_spec =
+				(const struct rte_flow_item_vlan *)item->spec;
+			vlan_mask =
+				(const struct rte_flow_item_vlan *)item->mask;
+			if (vlan_mask->tci & 0xFFFF && !vlan_mask->tpid) {
+				/* Only the VLAN ID can be matched. */
+				filter->l2_ovlan =
+					rte_be_to_cpu_16(vlan_spec->tci &
+							 0xFFF);
+				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
+			} else {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "VLAN mask is invalid");
+				return -rte_errno;
+			}
+
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			/* If mask is not involved, we could use EM filters. */
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			ipv4_mask =
+				(const struct rte_flow_item_ipv4 *)item->mask;
+			/* Only IP DST and SRC fields are maskable. */
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.type_of_service ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.packet_id ||
+			    ipv4_mask->hdr.fragment_offset ||
+			    ipv4_mask->hdr.time_to_live ||
+			    ipv4_mask->hdr.next_proto_id ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid IPv4 mask.");
+				return -rte_errno;
+			}
+			filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr;
+			filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr;
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
+					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
+			if (ipv4_mask->hdr.src_addr) {
+				filter->src_ipaddr_mask[0] =
+					ipv4_mask->hdr.src_addr;
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
+			}
+			if (ipv4_mask->hdr.dst_addr) {
+				filter->dst_ipaddr_mask[0] =
+					ipv4_mask->hdr.dst_addr;
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
+			}
+			filter->ip_addr_type = use_ntuple ?
+			 HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 :
+			 HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4;
+			if (ipv4_spec->hdr.next_proto_id) {
+				filter->ip_protocol =
+					ipv4_spec->hdr.next_proto_id;
+				if (use_ntuple)
+					en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO;
+				else
+					en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			ipv6_spec =
+				(const struct rte_flow_item_ipv6 *)item->spec;
+			ipv6_mask =
+				(const struct rte_flow_item_ipv6 *)item->mask;
+
+			/* Only IP DST and SRC fields are maskable. */
+			if (ipv6_mask->hdr.vtc_flow ||
+			    ipv6_mask->hdr.payload_len ||
+			    ipv6_mask->hdr.proto ||
+			    ipv6_mask->hdr.hop_limits) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid IPv6 mask.");
+				return -rte_errno;
+			}
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
+					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
+			rte_memcpy(filter->src_ipaddr,
+				   ipv6_spec->hdr.src_addr, 16);
+			rte_memcpy(filter->dst_ipaddr,
+				   ipv6_spec->hdr.dst_addr, 16);
+			if (!check_zero_bytes(ipv6_mask->hdr.src_addr, 16)) {
+				rte_memcpy(filter->src_ipaddr_mask,
+					   ipv6_mask->hdr.src_addr, 16);
+				en |= !use_ntuple ? 0 :
+				    NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
+			}
+			if (!check_zero_bytes(ipv6_mask->hdr.dst_addr, 16)) {
+				rte_memcpy(filter->dst_ipaddr_mask,
+					   ipv6_mask->hdr.dst_addr, 16);
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
+			}
+			filter->ip_addr_type = use_ntuple ?
+				NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 :
+				EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6;
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+
+			/* Check TCP mask. Only DST & SRC ports are maskable */
+			if (tcp_mask->hdr.sent_seq ||
+			    tcp_mask->hdr.recv_ack ||
+			    tcp_mask->hdr.data_off ||
+			    tcp_mask->hdr.tcp_flags ||
+			    tcp_mask->hdr.rx_win ||
+			    tcp_mask->hdr.cksum ||
+			    tcp_mask->hdr.tcp_urp) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid TCP mask");
+				return -rte_errno;
+			}
+			filter->src_port = tcp_spec->hdr.src_port;
+			filter->dst_port = tcp_spec->hdr.dst_port;
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
+					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
+			if (tcp_mask->hdr.dst_port) {
+				filter->dst_port_mask = tcp_mask->hdr.dst_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
+			}
+			if (tcp_mask->hdr.src_port) {
+				filter->src_port_mask = tcp_mask->hdr.src_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			udp_mask = (const struct rte_flow_item_udp *)item->mask;
+
+			if (udp_mask->hdr.dgram_len ||
+			    udp_mask->hdr.dgram_cksum) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid UDP mask");
+				return -rte_errno;
+			}
+
+			filter->src_port = udp_spec->hdr.src_port;
+			filter->dst_port = udp_spec->hdr.dst_port;
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
+					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
+
+			if (udp_mask->hdr.dst_port) {
+				filter->dst_port_mask = udp_mask->hdr.dst_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
+			}
+			if (udp_mask->hdr.src_port) {
+				filter->src_port_mask = udp_mask->hdr.src_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			vxlan_spec =
+				(const struct rte_flow_item_vxlan *)item->spec;
+			vxlan_mask =
+				(const struct rte_flow_item_vxlan *)item->mask;
+			/* Check if VXLAN item is used to describe protocol.
+			 * If yes, both spec and mask should be NULL.
+			 * If no, both spec and mask shouldn't be NULL.
+			 */
+			if ((!vxlan_spec && vxlan_mask) ||
+			    (vxlan_spec && !vxlan_mask)) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid VXLAN item");
+				return -rte_errno;
+			}
+
+			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
+			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
+			    (vxlan_spec->flags != 0x8)) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid VXLAN item");
+				return -rte_errno;
+			}
+
+			/* Check if VNI is masked. */
+			if (vxlan_spec && vxlan_mask) {
+				vni_masked =
+					!!memcmp(vxlan_mask->vni, vni_mask,
+						 RTE_DIM(vni_mask));
+				if (vni_masked) {
+					rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid VNI mask");
+					return -rte_errno;
+				}
+
+				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
+					   vxlan_spec->vni, 3);
+				filter->vni =
+					rte_be_to_cpu_32(tenant_id_be);
+				filter->tunnel_type =
+				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			nvgre_mask =
+				(const struct rte_flow_item_nvgre *)item->mask;
+			/* Check if NVGRE item is used to describe protocol.
+			 * If yes, both spec and mask should be NULL.
+			 * If no, both spec and mask shouldn't be NULL.
+			 */
+			if ((!nvgre_spec && nvgre_mask) ||
+			    (nvgre_spec && !nvgre_mask)) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid NVGRE item");
+				return -rte_errno;
+			}
+
+			if ((nvgre_spec->c_k_s_rsvd0_ver != 0x2000) ||
+			    (nvgre_spec->protocol != 0x6558)) {
+				rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "Invalid NVGRE item");
+				return -rte_errno;
+			}
+
+			if (nvgre_spec && nvgre_mask) {
+				tni_masked =
+					!!memcmp(nvgre_mask->tni, tni_mask,
+						 RTE_DIM(tni_mask));
+				if (tni_masked) {
+					rte_flow_error_set(error, EINVAL,
+						       RTE_FLOW_ERROR_TYPE_ITEM,
+						       item,
+						       "Invalid TNI mask");
+					return -rte_errno;
+				}
+				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
+					   nvgre_spec->tni, 3);
+				filter->vni =
+					rte_be_to_cpu_32(tenant_id_be);
+				filter->tunnel_type =
+				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE;
+			}
+			break;
+		default:
+			break;
+		}
+		item++;
+	}
+	filter->enables = en;
+
+	return 0;
+}
+
+/* Parse attributes */
+static int
+bnxt_flow_parse_attr(const struct rte_flow_attr *attr,
+		     struct rte_flow_error *error)
+{
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr, "No support for egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr, "No support for priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				   attr, "No support for group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
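+/*
+ * Every EM/ntuple flow hangs off an L2 filter. Reuse the port's default
+ * L2 filter when the flow's destination MAC matches it; otherwise
+ * allocate a new L2 filter for that MAC and program it into the VNIC.
+ */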
+struct bnxt_filter_info *
+bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf,
+		   struct bnxt_vnic_info *vnic)
+{
+	struct bnxt_filter_info *filter1, *f0;
+	struct bnxt_vnic_info *vnic0;
+	int rc;
+
+	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+	f0 = STAILQ_FIRST(&vnic0->filter);
+
+	/* This flow has the same DST MAC as the port/L2 filter. */
+	if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0)
+		return f0;
+
+	/* This flow needs a DST MAC other than the port/L2 filter's. */
+	RTE_LOG(DEBUG, PMD, "Create L2 filter for DST MAC\n");
+	filter1 = bnxt_get_unused_filter(bp);
+	if (filter1 == NULL)
+		return NULL;
+	filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX;
+	filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
+			L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK;
+	memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN);
+	memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN);
+	rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id,
+				     filter1);
+	if (rc) {
+		bnxt_free_filter(bp, filter1);
+		return NULL;
+	}
+	STAILQ_INSERT_TAIL(&vnic->filter, filter1, next);
+	return filter1;
+}
+
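+/*
+ * Validate the attributes and the single action: QUEUE redirects to the
+ * VNIC backing the queue, while DROP and COUNT are encoded as flags on
+ * the filter itself. Anything beyond one action (plus END) is rejected.
+ */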
+static int
+bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     const struct rte_flow_attr *attr,
+			     struct rte_flow_error *error,
+			     struct bnxt_filter_info *filter)
+{
+	const struct rte_flow_action *act = nxt_non_void_action(actions);
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	const struct rte_flow_action_queue *act_q;
+	struct bnxt_vnic_info *vnic, *vnic0;
+	struct bnxt_filter_info *filter1;
+	int rc;
+
+	if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+		RTE_LOG(ERR, PMD, "Cannot create flow on RSS queues\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Cannot create flow on RSS queues");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	rc = bnxt_validate_and_parse_flow_type(pattern, error, filter);
+	if (rc != 0)
+		goto ret;
+
+	rc = bnxt_flow_parse_attr(attr, error);
+	if (rc != 0)
+		goto ret;
+	/* Only the ingress attribute is supported right now. */
+	filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX;
+
+	switch (act->type) {
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+		/* Allow this flow. Redirect to a VNIC. */
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		if (act_q->index >= bp->rx_nr_rings) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, act,
+					   "Invalid queue ID.");
+			rc = -rte_errno;
+			goto ret;
+		}
+		RTE_LOG(ERR, PMD, "Queue index %d\n", act_q->index);
+
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]);
+		if (vnic == NULL) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, act,
+					   "No matching VNIC for queue ID.");
+			rc = -rte_errno;
+			goto ret;
+		}
+		filter->dst_id = vnic->fw_vnic_id;
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		RTE_LOG(DEBUG, PMD, "VNIC found\n");
+		break;
+	case RTE_FLOW_ACTION_TYPE_DROP:
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		if (filter->filter_type == HWRM_CFA_EM_FILTER)
+			filter->flags =
+				HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP;
+		else
+			filter->flags =
+				HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER;
+		break;
+	default:
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, act,
+				   "Invalid action.");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	act = nxt_non_void_action(++act);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act, "Invalid action.");
+		rc = -rte_errno;
+		goto ret;
+	}
+ret:
+	return rc;
+}
+
+static int
+bnxt_flow_validate(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter;
+	int ret = 0;
+
+	ret = bnxt_flow_args_validate(attr, pattern, actions, error);
+	if (ret != 0)
+		return ret;
+
+	filter = bnxt_get_unused_filter(bp);
+	if (filter == NULL) {
+		RTE_LOG(ERR, PMD, "Not enough resources for a new flow.\n");
+		return -ENOMEM;
+	}
+
+	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
+					   error, filter);
+	/* No need to hold on to this filter if we are just validating flow */
+	bnxt_free_filter(bp, filter);
+
+	return ret;
+}
+
+static struct rte_flow *
+bnxt_flow_create(struct rte_eth_dev *dev,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter;
+	struct bnxt_vnic_info *vnic;
+	struct rte_flow *flow;
+	unsigned int i;
+	int ret = 0;
+
+	flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to allocate memory");
+		return flow;
+	}
+
+	ret = bnxt_flow_args_validate(attr, pattern, actions, error);
+	if (ret != 0) {
+		RTE_LOG(ERR, PMD, "Not a validate flow.\n");
+		goto free_flow;
+	}
+
+	filter = bnxt_get_unused_filter(bp);
+	if (filter == NULL) {
+		RTE_LOG(ERR, PMD, "Not enough resources for a new flow.\n");
+		ret = -ENOMEM;
+		goto free_flow;
+	}
+
+	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
+					   error, filter);
+	if (ret != 0)
+		goto free_filter;
+
+	if (filter->filter_type == HWRM_CFA_EM_FILTER) {
+		filter->enables |=
+			HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter);
+	}
+	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
+		filter->enables |=
+			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter);
+	}
+
+	for (i = 0; i < bp->nr_vnics; i++) {
+		vnic = &bp->vnic_info[i];
+		if (filter->dst_id == vnic->fw_vnic_id)
+			break;
+	}
+
+	if (!ret) {
+		flow->filter = filter;
+		flow->vnic = vnic;
+		RTE_LOG(ERR, PMD, "Successfully created flow.\n");
+		STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next);
+		return flow;
+	}
+free_filter:
+	bnxt_free_filter(bp, filter);
+free_flow:
+	RTE_LOG(ERR, PMD, "Failed to create flow.\n");
+	rte_flow_error_set(error, -ret,
+			   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to create flow.");
+	rte_free(flow);
+	flow = NULL;
+	return flow;
+}
+
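+/* Clear the HW filter behind the flow, then unlink and free the flow. */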
+static int
+bnxt_flow_destroy(struct rte_eth_dev *dev,
+		  struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter = flow->filter;
+	struct bnxt_vnic_info *vnic = flow->vnic;
+	int ret = 0;
+
+	if (filter->filter_type == HWRM_CFA_EM_FILTER)
+		ret = bnxt_hwrm_clear_em_filter(bp, filter);
+	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+		ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+
+	if (!ret) {
+		STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next);
+		rte_free(flow);
+	} else {
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+	}
+
+	return ret;
+}
+
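+/*
+ * Clear every flow on every VNIC from the HW and free the bookkeeping;
+ * bail out on the first flow the firmware fails to clear.
+ */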
+static int
+bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_vnic_info *vnic;
+	struct rte_flow *flow;
+	unsigned int i;
+	int ret = 0;
+
+	for (i = 0; i < bp->nr_vnics; i++) {
+		vnic = &bp->vnic_info[i];
+		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
+			struct bnxt_filter_info *filter = flow->filter;
+
+			if (filter->filter_type == HWRM_CFA_EM_FILTER)
+				ret = bnxt_hwrm_clear_em_filter(bp, filter);
+			if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+				ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+
+			if (ret) {
+				rte_flow_error_set(error, -ret,
+						   RTE_FLOW_ERROR_TYPE_HANDLE,
+						   NULL,
+						   "Failed to flush flow in HW.");
+				return -rte_errno;
+			}
+
+			STAILQ_REMOVE(&vnic->flow_list, flow,
+				      rte_flow, next);
+			rte_free(flow);
+		}
+	}
+
+	return ret;
+}
+
+const struct rte_flow_ops bnxt_flow_ops = {
+	.validate = bnxt_flow_validate,
+	.create = bnxt_flow_create,
+	.destroy = bnxt_flow_destroy,
+	.flush = bnxt_flow_flush,
+};
diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h
index 613b2eeac..d6c1ce6df 100644
--- a/drivers/net/bnxt/bnxt_filter.h
+++ b/drivers/net/bnxt/bnxt_filter.h
@@ -40,8 +40,15 @@ struct bnxt;
 struct bnxt_filter_info {
 	STAILQ_ENTRY(bnxt_filter_info)	next;
 	uint64_t		fw_l2_filter_id;
+	uint64_t		fw_em_filter_id;
+	uint64_t		fw_ntuple_filter_id;
 #define INVALID_MAC_INDEX	((uint16_t)-1)
 	uint16_t		mac_index;
+#define HWRM_CFA_L2_FILTER	0
+#define HWRM_CFA_EM_FILTER	1
+#define HWRM_CFA_NTUPLE_FILTER	2
+	uint8_t                 filter_type;    /* L2, EM or ntuple filter */
+	uint32_t                dst_id;
 
 	/* Filter Characteristics */
 	uint32_t		flags;
@@ -65,6 +72,19 @@ struct bnxt_filter_info {
 	uint64_t		l2_filter_id_hint;
 	uint32_t		src_id;
 	uint8_t			src_type;
+	uint8_t                 src_macaddr[6];
+	uint8_t                 dst_macaddr[6];
+	uint32_t                dst_ipaddr[4];
+	uint32_t                dst_ipaddr_mask[4];
+	uint32_t                src_ipaddr[4];
+	uint32_t                src_ipaddr_mask[4];
+	uint16_t                dst_port;
+	uint16_t                dst_port_mask;
+	uint16_t                src_port;
+	uint16_t                src_port_mask;
+	uint16_t                ip_protocol;
+	uint16_t                ip_addr_type;
+	uint16_t                ethertype;
 };
 
 struct bnxt_filter_info *bnxt_alloc_filter(struct bnxt *bp);
@@ -73,5 +93,61 @@ void bnxt_init_filters(struct bnxt *bp);
 void bnxt_free_all_filters(struct bnxt *bp);
 void bnxt_free_filter_mem(struct bnxt *bp);
 int bnxt_alloc_filter_mem(struct bnxt *bp);
+struct bnxt_filter_info *bnxt_get_unused_filter(struct bnxt *bp);
+void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter);
+struct bnxt_filter_info *bnxt_get_l2_filter(struct bnxt *bp,
+		struct bnxt_filter_info *nf, struct bnxt_vnic_info *vnic);
 
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR	\
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_MACADDR
+#define EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_MACADDR
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR	\
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_MACADDR
+#define EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_MACADDR
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE   \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_ETHERTYPE
+#define EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE       \
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_ETHERTYPE
+#define EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID       \
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_OVLAN_VID
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR  \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_IPADDR
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK     \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_IPADDR_MASK
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR  \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_IPADDR
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK     \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_IPADDR_MASK
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT    \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_PORT
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK       \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_PORT_MASK
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT    \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT
+#define NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK       \
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT_MASK
+#define NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO	\
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_IP_PROTOCOL
+#define EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_IPADDR
+#define EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_IPADDR
+#define EM_FLOW_ALLOC_INPUT_EN_SRC_PORT	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_PORT
+#define EM_FLOW_ALLOC_INPUT_EN_DST_PORT	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_PORT
+#define EM_FLOW_ALLOC_INPUT_EN_IP_PROTO	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IP_PROTOCOL
+#define EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6	\
+	HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6
+#define NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6	\
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV6
+#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN	\
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_VXLAN
+#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE	\
+	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_TUNNEL_TYPE_NVGRE
+#define L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK	\
+	HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR_MASK
 #endif
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ade96278b..204a0dcd6 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -329,7 +329,7 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 	return rc;
 }
 
-int bnxt_hwrm_clear_filter(struct bnxt *bp,
+int bnxt_hwrm_clear_l2_filter(struct bnxt *bp,
 			   struct bnxt_filter_info *filter)
 {
 	int rc = 0;
@@ -353,7 +353,7 @@ int bnxt_hwrm_clear_filter(struct bnxt *bp,
 	return 0;
 }
 
-int bnxt_hwrm_set_filter(struct bnxt *bp,
+int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 			 uint16_t dst_id,
 			 struct bnxt_filter_info *filter)
 {
@@ -363,7 +363,7 @@ int bnxt_hwrm_set_filter(struct bnxt *bp,
 	uint32_t enables = 0;
 
 	if (filter->fw_l2_filter_id != UINT64_MAX)
-		bnxt_hwrm_clear_filter(bp, filter);
+		bnxt_hwrm_clear_l2_filter(bp, filter);
 
 	HWRM_PREP(req, CFA_L2_FILTER_ALLOC);
 
@@ -1017,6 +1017,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	cpr->hw_stats_ctx_id = rte_le_to_cpu_16(resp->stat_ctx_id);
 
 	HWRM_UNLOCK();
+	bp->grp_info[idx].fw_stats_ctx = cpr->hw_stats_ctx_id;
 
 	return rc;
 }
@@ -1133,7 +1134,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	int rc = 0;
 	struct hwrm_vnic_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_cfg_output *resp = bp->hwrm_cmd_resp_addr;
-	uint32_t ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE;
+	uint32_t ctx_enable_flag = 0;
 	struct bnxt_plcmodes_cfg pmodes;
 
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
@@ -1149,14 +1150,15 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 	/* Only RSS support for now TBD: COS & LB */
 	req.enables =
-	    rte_cpu_to_le_32(HWRM_VNIC_CFG_INPUT_ENABLES_DFLT_RING_GRP |
-			     HWRM_VNIC_CFG_INPUT_ENABLES_MRU);
+	    rte_cpu_to_le_32(HWRM_VNIC_CFG_INPUT_ENABLES_DFLT_RING_GRP);
 	if (vnic->lb_rule != 0xffff)
-		ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_LB_RULE;
+		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_LB_RULE;
 	if (vnic->cos_rule != 0xffff)
-		ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_COS_RULE;
-	if (vnic->rss_rule != 0xffff)
-		ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE;
+		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_COS_RULE;
+	if (vnic->rss_rule != 0xffff) {
+		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_MRU;
+		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE;
+	}
 	req.enables |= rte_cpu_to_le_32(ctx_enable_flag);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.dflt_ring_grp = rte_cpu_to_le_16(vnic->dflt_ring_grp);
@@ -1591,12 +1593,8 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
 
 	for (idx = 0; idx < bp->rx_cp_nr_rings; idx++) {
 
-		if (bp->grp_info[idx].fw_grp_id == INVALID_HW_RING_ID) {
-			RTE_LOG(ERR, PMD,
-				"Attempt to free invalid ring group %d\n",
-				idx);
+		if (bp->grp_info[idx].fw_grp_id == INVALID_HW_RING_ID)
 			continue;
-		}
 
 		rc = bnxt_hwrm_ring_grp_free(bp, idx);
 
@@ -1749,9 +1747,39 @@ int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	int rc = 0;
 
 	STAILQ_FOREACH(filter, &vnic->filter, next) {
-		rc = bnxt_hwrm_clear_filter(bp, filter);
-		if (rc)
-			break;
+		if (filter->filter_type == HWRM_CFA_EM_FILTER)
+			rc = bnxt_hwrm_clear_em_filter(bp, filter);
+		else if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+			rc = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+		else
+			rc = bnxt_hwrm_clear_l2_filter(bp, filter);
+		/* Intentionally continue on failure so the remaining
+		 * filters are still cleared. */
+	}
+	return rc;
+}
+
+static int
+bnxt_clear_hwrm_vnic_flows(struct bnxt *bp, struct bnxt_vnic_info *vnic)
+{
+	struct bnxt_filter_info *filter;
+	struct rte_flow *flow;
+	int rc = 0;
+
+	/* Entries are freed inside the loop, so pop from the head
+	 * instead of using STAILQ_FOREACH to avoid a use-after-free.
+	 */
+	while ((flow = STAILQ_FIRST(&vnic->flow_list)) != NULL) {
+		filter = flow->filter;
+		RTE_LOG(DEBUG, PMD, "filter type %d\n", filter->filter_type);
+		if (filter->filter_type == HWRM_CFA_EM_FILTER)
+			rc = bnxt_hwrm_clear_em_filter(bp, filter);
+		else if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+			rc = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+		else
+			rc = bnxt_hwrm_clear_l2_filter(bp, filter);
+
+		STAILQ_REMOVE_HEAD(&vnic->flow_list, next);
+		rte_free(flow);
+		/* Intentionally continue on failure so the remaining
+		 * flows are still removed. */
 	}
 	return rc;
 }
@@ -1762,7 +1790,15 @@ int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	int rc = 0;
 
 	STAILQ_FOREACH(filter, &vnic->filter, next) {
-		rc = bnxt_hwrm_set_filter(bp, vnic->fw_vnic_id, filter);
+		if (filter->filter_type == HWRM_CFA_EM_FILTER)
+			rc = bnxt_hwrm_set_em_filter(bp, filter->dst_id,
+						     filter);
+		else if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+			rc = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id,
+							 filter);
+		else
+			rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id,
+						     filter);
 		if (rc)
 			break;
 	}
@@ -1783,20 +1819,20 @@ void bnxt_free_tunnel_ports(struct bnxt *bp)
 
 void bnxt_free_all_hwrm_resources(struct bnxt *bp)
 {
-	struct bnxt_vnic_info *vnic;
-	unsigned int i;
+	int i;
 
 	if (bp->vnic_info == NULL)
 		return;
 
-	vnic = &bp->vnic_info[0];
-	if (BNXT_PF(bp))
-		bnxt_hwrm_cfa_l2_clear_rx_mask(bp, vnic);
-
-	/* VNIC resources */
-	for (i = 0; i < bp->nr_vnics; i++) {
+	/*
+	 * Cleanup VNICs in reverse order, to make sure the L2 filter
+	 * from vnic0 is last to be cleaned up.
+	 */
+	for (i = bp->nr_vnics - 1; i >= 0; i--) {
 		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
 
+		bnxt_clear_hwrm_vnic_flows(bp, vnic);
+
 		bnxt_clear_hwrm_vnic_filters(bp, vnic);
 
 		bnxt_hwrm_vnic_ctx_free(bp, vnic);
@@ -3126,3 +3162,215 @@ int bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(struct bnxt *bp, int vf)
 	rte_free(vnic_ids);
 	return -1;
 }
+
+int bnxt_hwrm_set_em_filter(struct bnxt *bp,
+			 uint16_t dst_id,
+			 struct bnxt_filter_info *filter)
+{
+	int rc = 0;
+	struct hwrm_cfa_em_flow_alloc_input req = {.req_type = 0 };
+	struct hwrm_cfa_em_flow_alloc_output *resp = bp->hwrm_cmd_resp_addr;
+	uint32_t enables = 0;
+
+	if (filter->fw_em_filter_id != UINT64_MAX)
+		bnxt_hwrm_clear_em_filter(bp, filter);
+
+	HWRM_PREP(req, CFA_EM_FLOW_ALLOC);
+
+	req.flags = rte_cpu_to_le_32(filter->flags);
+
+	enables = filter->enables |
+	      HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_ID;
+	req.dst_id = rte_cpu_to_le_16(dst_id);
+
+	if (filter->ip_addr_type) {
+		req.ip_addr_type = filter->ip_addr_type;
+		enables |= HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IPADDR_TYPE;
+	}
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID)
+		req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_MACADDR)
+		memcpy(req.src_macaddr, filter->src_macaddr,
+		       ETHER_ADDR_LEN);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_MACADDR)
+		memcpy(req.dst_macaddr, filter->dst_macaddr,
+		       ETHER_ADDR_LEN);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_OVLAN_VID)
+		req.ovlan_vid = filter->l2_ovlan;
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IVLAN_VID)
+		req.ivlan_vid = filter->l2_ivlan;
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_ETHERTYPE)
+		req.ethertype = rte_cpu_to_be_16(filter->ethertype);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_IP_PROTOCOL)
+		req.ip_protocol = filter->ip_protocol;
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_IPADDR)
+		req.src_ipaddr[0] = rte_cpu_to_be_32(filter->src_ipaddr[0]);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_IPADDR)
+		req.dst_ipaddr[0] = rte_cpu_to_be_32(filter->dst_ipaddr[0]);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_SRC_PORT)
+		req.src_port = rte_cpu_to_be_16(filter->src_port);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_DST_PORT)
+		req.dst_port = rte_cpu_to_be_16(filter->dst_port);
+	if (enables &
+	    HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID)
+		req.mirror_vnic_id = filter->mirror_vnic_id;
+
+	req.enables = rte_cpu_to_le_32(enables);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+
+	filter->fw_em_filter_id = rte_le_to_cpu_64(resp->em_filter_id);
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+int bnxt_hwrm_clear_em_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
+{
+	int rc = 0;
+	struct hwrm_cfa_em_flow_free_input req = {.req_type = 0 };
+	struct hwrm_cfa_em_flow_free_output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (filter->fw_em_filter_id == UINT64_MAX)
+		return 0;
+
+	RTE_LOG(DEBUG, PMD, "Clear EM filter\n");
+	HWRM_PREP(req, CFA_EM_FLOW_FREE);
+
+	req.em_filter_id = rte_cpu_to_le_64(filter->fw_em_filter_id);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
+	filter->fw_em_filter_id = -1;
+	filter->fw_l2_filter_id = -1;
+
+	return 0;
+}
+
+int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
+			 uint16_t dst_id,
+			 struct bnxt_filter_info *filter)
+{
+	int rc = 0;
+	struct hwrm_cfa_ntuple_filter_alloc_input req = {.req_type = 0 };
+	struct hwrm_cfa_ntuple_filter_alloc_output *resp =
+						bp->hwrm_cmd_resp_addr;
+	uint32_t enables = 0;
+
+	if (filter->fw_ntuple_filter_id != UINT64_MAX)
+		bnxt_hwrm_clear_ntuple_filter(bp, filter);
+
+	HWRM_PREP(req, CFA_NTUPLE_FILTER_ALLOC);
+
+	req.flags = rte_cpu_to_le_32(filter->flags);
+
+	enables = filter->enables |
+	      HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_ID;
+	req.dst_id = rte_cpu_to_le_16(dst_id);
+
+
+	if (filter->ip_addr_type) {
+		req.ip_addr_type = filter->ip_addr_type;
+		enables |=
+			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_IPADDR_TYPE;
+	}
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID)
+		req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_MACADDR)
+		memcpy(req.src_macaddr, filter->src_macaddr,
+		       ETHER_ADDR_LEN);
+	/*
+	 * The DST_MACADDR enable is intentionally not handled here for
+	 * ntuple filters.
+	 */
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_ETHERTYPE)
+		req.ethertype = rte_cpu_to_be_16(filter->ethertype);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_IP_PROTOCOL)
+		req.ip_protocol = filter->ip_protocol;
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_IPADDR)
+		req.src_ipaddr[0] = rte_cpu_to_le_32(filter->src_ipaddr[0]);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_IPADDR_MASK)
+		req.src_ipaddr_mask[0] =
+			rte_cpu_to_le_32(filter->src_ipaddr_mask[0]);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_IPADDR)
+		req.dst_ipaddr[0] = rte_cpu_to_le_32(filter->dst_ipaddr[0]);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_IPADDR_MASK)
+		req.dst_ipaddr_mask[0] =
+			rte_cpu_to_be_32(filter->dst_ipaddr_mask[0]);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_PORT)
+		req.src_port = rte_cpu_to_le_16(filter->src_port);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_PORT_MASK)
+		req.src_port_mask = rte_cpu_to_le_16(filter->src_port_mask);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT)
+		req.dst_port = rte_cpu_to_le_16(filter->dst_port);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT_MASK)
+		req.dst_port_mask = rte_cpu_to_le_16(filter->dst_port_mask);
+	if (enables &
+	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID)
+		req.mirror_vnic_id = filter->mirror_vnic_id;
+
+	req.enables = rte_cpu_to_le_32(enables);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+
+	filter->fw_ntuple_filter_id = rte_le_to_cpu_64(resp->ntuple_filter_id);
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
+				struct bnxt_filter_info *filter)
+{
+	int rc = 0;
+	struct hwrm_cfa_ntuple_filter_free_input req = {.req_type = 0 };
+	struct hwrm_cfa_ntuple_filter_free_output *resp =
+						bp->hwrm_cmd_resp_addr;
+
+	if (filter->fw_ntuple_filter_id == UINT64_MAX)
+		return 0;
+
+	HWRM_PREP(req, CFA_NTUPLE_FILTER_FREE);
+
+	req.ntuple_filter_id = rte_cpu_to_le_64(filter->fw_ntuple_filter_id);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
+	filter->fw_ntuple_filter_id = -1;
+	filter->fw_l2_filter_id = -1;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 51cd0dd42..bd9017f17 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -51,9 +51,9 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 			uint16_t vlan_count,
 			struct bnxt_vlan_antispoof_table_entry *vlan_table);
-int bnxt_hwrm_clear_filter(struct bnxt *bp,
+int bnxt_hwrm_clear_l2_filter(struct bnxt *bp,
 			   struct bnxt_filter_info *filter);
-int bnxt_hwrm_set_filter(struct bnxt *bp,
+int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 			 uint16_t dst_id,
 			 struct bnxt_filter_info *filter);
 int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
@@ -156,4 +156,12 @@ int bnxt_hwrm_func_vf_vnic_query_and_config(struct bnxt *bp, uint16_t vf,
 int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 					      bool on);
 int bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(struct bnxt *bp, int vf);
+int bnxt_hwrm_set_em_filter(struct bnxt *bp, uint16_t dst_id,
+			struct bnxt_filter_info *filter);
+int bnxt_hwrm_clear_em_filter(struct bnxt *bp, struct bnxt_filter_info *filter);
+
+int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp, uint16_t dst_id,
+			 struct bnxt_filter_info *filter);
+int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
+				struct bnxt_filter_info *filter);
 #endif
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index db9fb0796..6f7c05bdf 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -83,6 +83,7 @@ void bnxt_init_vnics(struct bnxt *bp)
 
 		prandom_bytes(vnic->rss_hash_key, HW_HASH_KEY_SIZE);
 		STAILQ_INIT(&vnic->filter);
+		STAILQ_INIT(&vnic->flow_list);
 		STAILQ_INSERT_TAIL(&bp->free_vnic_list, vnic, next);
 	}
 	for (i = 0; i < MAX_FF_POOLS; i++)
diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index 993f22127..544390453 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -80,6 +80,7 @@ struct bnxt_vnic_info {
 	bool		rss_dflt_cr;
 
 	STAILQ_HEAD(, bnxt_filter_info)	filter;
+	STAILQ_HEAD(, rte_flow)	flow_list;
 };
 
 struct bnxt;
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index 0bf5db5ec..82b9baca6 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -730,7 +730,7 @@ int rte_pmd_bnxt_mac_addr_add(uint8_t port, struct ether_addr *addr,
 		    (HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
 		     HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR_MASK) &&
 		    memcmp(addr, filter->l2_addr, ETHER_ADDR_LEN) == 0) {
-			bnxt_hwrm_clear_filter(bp, filter);
+			bnxt_hwrm_clear_l2_filter(bp, filter);
 			break;
 		}
 	}
@@ -748,7 +748,7 @@ int rte_pmd_bnxt_mac_addr_add(uint8_t port, struct ether_addr *addr,
 	/* Do not add a filter for the default MAC */
 	if (bnxt_hwrm_func_qcfg_vf_default_mac(bp, vf_id, &dflt_mac) ||
 	    memcmp(filter->l2_addr, dflt_mac.addr_bytes, ETHER_ADDR_LEN))
-		rc = bnxt_hwrm_set_filter(bp, vnic.fw_vnic_id, filter);
+		rc = bnxt_hwrm_set_l2_filter(bp, vnic.fw_vnic_id, filter);
 
 exit:
 	return rc;
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 19/24] doc: update release notes
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (17 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 18/24] net/bnxt: add support for flow filter ops Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 20/24] net/bnxt: fix per queue stats display in xstats Ajit Khaparde
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Update the release notes, briefly describing the updates to the bnxt PMD.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: Update New Features section instead of Resolved Issues section.
---
 doc/guides/rel_notes/release_17_11.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index bb0ba80c6..b864d03ff 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -52,6 +52,17 @@ New Features
   NFP 4000 devices are also now supported along with previous 6000 devices.
 
 
+Drivers
+~~~~~~~
+
+* **Updated bnxt PMD.**
+
+  Major enhancements include:
+
+   * Support for Flow API
+   * Support for Tx and Rx descriptor status functions
+
+
 Resolved Issues
 ---------------
 
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 20/24] net/bnxt: fix per queue stats display in xstats
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (18 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 19/24] doc: update release notes Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 21/24] net/bnxt: prevent interrupt handler from accessing freed memory Ajit Khaparde
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

While gathering per-queue stats, both the Rx and the Tx pass wrote the
full set of Rx and Tx counters for a given queue index, so the second
pass overwrote values filled in by the first. This caused some of the
counters in xstats to be incorrect. Pass the ring direction to the stat
context query so that only the relevant counters are updated.

Fixes: 577d3dced0dc ("net/bnxt: refactor the query stats")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c  | 38 ++++++++++++++++++++------------------
 drivers/net/bnxt/bnxt_hwrm.h  |  2 +-
 drivers/net/bnxt/bnxt_stats.c |  6 ++++--
 3 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 204a0dcd6..7f146d606 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2815,7 +2815,7 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 }
 
 int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
-			 struct rte_eth_stats *stats)
+			 struct rte_eth_stats *stats, uint8_t rx)
 {
 	int rc = 0;
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
@@ -2829,23 +2829,25 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 
 	HWRM_CHECK_RESULT();
 
-	stats->q_ipackets[idx] = rte_le_to_cpu_64(resp->rx_ucast_pkts);
-	stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_mcast_pkts);
-	stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_bcast_pkts);
-	stats->q_ibytes[idx] = rte_le_to_cpu_64(resp->rx_ucast_bytes);
-	stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_mcast_bytes);
-	stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_bcast_bytes);
-
-	stats->q_opackets[idx] = rte_le_to_cpu_64(resp->tx_ucast_pkts);
-	stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_mcast_pkts);
-	stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_bcast_pkts);
-	stats->q_obytes[idx] = rte_le_to_cpu_64(resp->tx_ucast_bytes);
-	stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_mcast_bytes);
-	stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_bcast_bytes);
-
-	stats->q_errors[idx] = rte_le_to_cpu_64(resp->rx_err_pkts);
-	stats->q_errors[idx] += rte_le_to_cpu_64(resp->tx_err_pkts);
-	stats->q_errors[idx] += rte_le_to_cpu_64(resp->rx_drop_pkts);
+	if (rx) {
+		stats->q_ipackets[idx] = rte_le_to_cpu_64(resp->rx_ucast_pkts);
+		stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_mcast_pkts);
+		stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_bcast_pkts);
+		stats->q_ibytes[idx] = rte_le_to_cpu_64(resp->rx_ucast_bytes);
+		stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_mcast_bytes);
+		stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_bcast_bytes);
+		stats->q_errors[idx] = rte_le_to_cpu_64(resp->rx_err_pkts);
+		stats->q_errors[idx] += rte_le_to_cpu_64(resp->rx_drop_pkts);
+	} else {
+		stats->q_opackets[idx] = rte_le_to_cpu_64(resp->tx_ucast_pkts);
+		stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_mcast_pkts);
+		stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_bcast_pkts);
+		stats->q_obytes[idx] = rte_le_to_cpu_64(resp->tx_ucast_bytes);
+		stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_mcast_bytes);
+		stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_bcast_bytes);
+		stats->q_errors[idx] += rte_le_to_cpu_64(resp->tx_err_pkts);
+	}
 
 	HWRM_UNLOCK();
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index bd9017f17..b41cc5af3 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -92,7 +92,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp,
 int bnxt_hwrm_stat_ctx_free(struct bnxt *bp,
 			    struct bnxt_cp_ring_info *cpr, unsigned int idx);
 int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
-			 struct rte_eth_stats *stats);
+			 struct rte_eth_stats *stats, uint8_t rx);
 
 int bnxt_hwrm_ver_get(struct bnxt *bp);
 
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index 225f98830..87feac6c1 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -240,14 +240,16 @@ void bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 
-		bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i, bnxt_stats);
+		bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
+				     bnxt_stats, 1);
 	}
 
 	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
 		struct bnxt_tx_queue *txq = bp->tx_queues[i];
 		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
 
-		bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i, bnxt_stats);
+		bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
+				     bnxt_stats, 0);
 	}
 	bnxt_hwrm_func_qstats(bp, 0xffff, bnxt_stats);
 	bnxt_stats->rx_nombuf = rte_atomic64_read(&bp->rx_mbuf_alloc_fail);
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 21/24] net/bnxt: prevent interrupt handler from accessing freed memory
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (19 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 20/24] net/bnxt: fix per queue stats display in xstats Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 22/24] net/bnxt: add dev_supported_ptypes_get dev_op Ajit Khaparde
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

In some cases the interrupt handler accesses the default completion
ring (cpr) after it has already been freed, causing segfaults. Clear
the stale pointers when the ring is freed and check for them in the
handler to avoid such accesses.

Fixes: 7bc8e9a227cc ("net/bnxt: support async link notification")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.c | 2 ++
 drivers/net/bnxt/bnxt_irq.c | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 68979bc43..26b2755e1 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -183,8 +183,10 @@ void bnxt_free_def_cp_ring(struct bnxt *bp)
 		return;
 
 	bnxt_free_ring(cpr->cp_ring_struct);
 	rte_free(cpr->cp_ring_struct);
+	cpr->cp_ring_struct = NULL;
 	rte_free(cpr);
+	bp->def_cp_ring = NULL;
 }
 
 /* For the default completion ring only */
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 47cda7e52..79a119623 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -55,6 +55,9 @@ static void bnxt_int_handler(void *param)
 	struct cmpl_base *cmp;
 
 	while (1) {
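+		/* The default completion ring may already have been
+		 * freed while this handler was pending; bail out instead
+		 * of touching freed memory.
+		 */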
+		if (!cpr || !cpr->cp_ring_struct)
+			return;
+
 		cons = RING_CMP(cpr->cp_ring_struct, raw_cons);
 		cmp = &cpr->cp_desc_ring[cons];
 
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 22/24] net/bnxt: add dev_supported_ptypes_get dev_op
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (20 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 21/24] net/bnxt: prevent interrupt handler from accessing freed memory Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 23/24] net/bnxt: add support for get/set EEPROM Ajit Khaparde
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patch adds support for the dev_supported_ptypes_get dev_op, which
reports the packet types the PMD can recognize on the Rx path.
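
For context, an application would typically retrieve this list through
the generic ethdev API rather than calling the dev_op directly. A
minimal sketch (illustrative only; assumes port_id refers to a valid,
configured bnxt port):

	uint32_t ptypes[16];
	int i, num;

	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
					       ptypes, 16);
	for (i = 0; i < num; i++)
		if (ptypes[i] == RTE_PTYPE_L4_TCP)
			printf("port %u can classify TCP\n", port_id);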

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 25 ++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_rxr.c    | 43 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index fdba0ac69..23eec6ab0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1830,6 +1830,30 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev __rte_unused,
 	return ret;
 }
 
+static const uint32_t *
+bnxt_dev_supported_ptypes_get_op(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER_VLAN,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	if (dev->rx_pkt_burst == bnxt_recv_pkts)
+		return ptypes;
+	return NULL;
+}
+
+
 /*
  * Initialization
  */
@@ -1883,6 +1907,7 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.rx_descriptor_status = bnxt_rx_descriptor_status_op,
 	.tx_descriptor_status = bnxt_tx_descriptor_status_op,
 	.filter_ctrl = bnxt_filter_ctrl_op,
+	.dev_supported_ptypes_get = bnxt_dev_supported_ptypes_get_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 3216a6d3b..153ca93ed 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -335,6 +335,48 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 	return mbuf;
 }
 
+static uint32_t
+bnxt_parse_pkt_type(struct rx_pkt_cmpl *rxcmp, struct rx_pkt_cmpl_hi *rxcmp1)
+{
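+	/*
+	 * Translate the Rx completion flags into an RTE_PTYPE_* value.
+	 * t_ipcs indicates the inner (tunnel) IP checksum was computed,
+	 * i.e. the packet is tunneled, so it selects between the outer
+	 * and inner ptype bits below.
+	 */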
+	uint32_t pkt_type = 0;
+	uint32_t t_ipcs = 0, ip = 0, ip6 = 0;
+	uint32_t tcp = 0, udp = 0, icmp = 0;
+	uint32_t vlan = 0;
+
+	vlan = !!(rxcmp1->flags2 &
+		rte_cpu_to_le_32(RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN));
+	t_ipcs = !!(rxcmp1->flags2 &
+		rte_cpu_to_le_32(RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC));
+	ip6 = !!(rxcmp1->flags2 &
+		 rte_cpu_to_le_32(RX_PKT_CMPL_FLAGS2_IP_TYPE));
+	icmp = !!(rxcmp->flags_type &
+		  rte_cpu_to_le_16(RX_PKT_CMPL_FLAGS_ITYPE_ICMP));
+	tcp = !!(rxcmp->flags_type &
+		 rte_cpu_to_le_16(RX_PKT_CMPL_FLAGS_ITYPE_TCP));
+	udp = !!(rxcmp->flags_type &
+		 rte_cpu_to_le_16(RX_PKT_CMPL_FLAGS_ITYPE_UDP));
+	ip = !!(rxcmp->flags_type &
+		rte_cpu_to_le_16(RX_PKT_CMPL_FLAGS_ITYPE_IP));
+
+	pkt_type |= ((ip || tcp || udp || icmp) && !t_ipcs && !ip6) ?
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN : 0;
+	pkt_type |= ((ip || tcp || udp || icmp) && !t_ipcs && ip6) ?
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN : 0;
+	pkt_type |= (!t_ipcs &&  icmp) ? RTE_PTYPE_L4_ICMP : 0;
+	pkt_type |= (!t_ipcs &&  udp) ? RTE_PTYPE_L4_UDP : 0;
+	pkt_type |= (!t_ipcs &&  tcp) ? RTE_PTYPE_L4_TCP : 0;
+	pkt_type |= ((ip || tcp || udp || icmp) && t_ipcs && !ip6) ?
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN : 0;
+	pkt_type |= ((ip || tcp || udp || icmp) && t_ipcs && ip6) ?
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN : 0;
+	pkt_type |= (t_ipcs &&  icmp) ? RTE_PTYPE_INNER_L4_ICMP : 0;
+	pkt_type |= (t_ipcs &&  udp) ? RTE_PTYPE_INNER_L4_UDP : 0;
+	pkt_type |= (t_ipcs &&  tcp) ? RTE_PTYPE_INNER_L4_TCP : 0;
+	pkt_type |= vlan ? RTE_PTYPE_L2_ETHER_VLAN : 0;
+
+	return pkt_type;
+}
+
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
@@ -435,6 +477,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	else
 		mbuf->ol_flags |= PKT_RX_L4_CKSUM_NONE;
 
+	mbuf->packet_type = bnxt_parse_pkt_type(rxcmp, rxcmp1);
 
 #ifdef BNXT_DEBUG
 	if (rxcmp1->errors_v2 & RX_CMP_L2_ERRORS) {
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 23/24] net/bnxt: add support for get/set EEPROM
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (21 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 22/24] net/bnxt: add dev_supported_ptypes_get dev_op Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-09-28 21:43 ` [PATCH v4 24/24] net/bnxt: add support for rx_queue_intr_enable/disable APIs Ajit Khaparde
  2017-10-02 21:28 ` [PATCH v4 00/24] bnxt patchset Ferruh Yigit
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur

From: Somnath Kotur <somnath.kotur@broadcom.com>

Add support for the get_eeprom, set_eeprom and get_eeprom_length
dev_ops. Define the HWRM NVM structures required to get/set the
EEPROM length/data in the hsi_struct_def_dpdk.h header, along with
the implementation.
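
For context, these dev_ops back the generic ethdev EEPROM API. A
minimal sketch of dumping the NVM directory (illustrative only;
assumes port_id is a valid bnxt port and omits error handling):

	struct rte_dev_eeprom_info info = { 0 };
	int len = rte_eth_dev_get_eeprom_length(port_id);

	info.data = malloc(len);
	info.length = len;
	info.offset = 0;	/* special value: fetch the directory */
	rte_eth_dev_get_eeprom(port_id, &info);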

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/nics/features/bnxt.ini      |   1 +
 drivers/net/bnxt/bnxt_ethdev.c         | 140 ++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c           | 161 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h           |  11 +++
 drivers/net/bnxt/bnxt_nvm_defs.h       |  75 +++++++++++++++
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 138 ++++++++++++++++++++++++++++
 6 files changed, 526 insertions(+)
 create mode 100644 drivers/net/bnxt/bnxt_nvm_defs.h

diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini
index 119132e16..0dcf07cc3 100644
--- a/doc/guides/nics/features/bnxt.ini
+++ b/doc/guides/nics/features/bnxt.ini
@@ -21,6 +21,7 @@ Basic stats          = Y
 Extended stats       = Y
 FW version           = Y
 LED                  = Y
+EEPROM dump          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
 x86-64               = Y
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 23eec6ab0..81f66cadd 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -53,6 +53,7 @@
 #include "bnxt_txr.h"
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
+#include "bnxt_nvm_defs.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -1854,6 +1855,142 @@ bnxt_dev_supported_ptypes_get_op(struct rte_eth_dev *dev)
 }
 
 
+
+static int
+bnxt_get_eeprom_length_op(struct rte_eth_dev *dev)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	int rc;
+	uint32_t dir_entries;
+	uint32_t entry_length;
+
+	RTE_LOG(INFO, PMD, "%s(): %04x:%02x:%02x:%02x\n",
+		__func__, bp->pdev->addr.domain, bp->pdev->addr.bus,
+		bp->pdev->addr.devid, bp->pdev->addr.function);
+
+	rc = bnxt_hwrm_nvm_get_dir_info(bp, &dir_entries, &entry_length);
+	if (rc != 0)
+		return rc;
+
+	return dir_entries * entry_length;
+}
+
+static int
+bnxt_get_eeprom_op(struct rte_eth_dev *dev,
+		struct rte_dev_eeprom_info *in_eeprom)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	uint32_t index;
+	uint32_t offset;
+
+	RTE_LOG(INFO, PMD, "%s(): %04x:%02x:%02x:%02x in_eeprom->offset = %d "
+		"len = %d\n", __func__, bp->pdev->addr.domain,
+		bp->pdev->addr.bus, bp->pdev->addr.devid,
+		bp->pdev->addr.function, in_eeprom->offset, in_eeprom->length);
+
+	if (in_eeprom->offset == 0) /* special offset value to get directory */
+		return bnxt_get_nvram_directory(bp, in_eeprom->length,
+						in_eeprom->data);
+
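+	/* Otherwise the upper byte of the offset encodes the directory
+	 * index (1-based) and the low 24 bits the byte offset within
+	 * that item.
+	 */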
+	index = in_eeprom->offset >> 24;
+	offset = in_eeprom->offset & 0xffffff;
+
+	if (index != 0)
+		return bnxt_hwrm_get_nvram_item(bp, index - 1, offset,
+					   in_eeprom->length, in_eeprom->data);
+
+	return 0;
+}
+
+static bool bnxt_dir_type_is_ape_bin_format(uint16_t dir_type)
+{
+	switch (dir_type) {
+	case BNX_DIR_TYPE_CHIMP_PATCH:
+	case BNX_DIR_TYPE_BOOTCODE:
+	case BNX_DIR_TYPE_BOOTCODE_2:
+	case BNX_DIR_TYPE_APE_FW:
+	case BNX_DIR_TYPE_APE_PATCH:
+	case BNX_DIR_TYPE_KONG_FW:
+	case BNX_DIR_TYPE_KONG_PATCH:
+	case BNX_DIR_TYPE_BONO_FW:
+	case BNX_DIR_TYPE_BONO_PATCH:
+		return true;
+	}
+
+	return false;
+}
+
+static bool bnxt_dir_type_is_other_exec_format(uint16_t dir_type)
+{
+	switch (dir_type) {
+	case BNX_DIR_TYPE_AVS:
+	case BNX_DIR_TYPE_EXP_ROM_MBA:
+	case BNX_DIR_TYPE_PCIE:
+	case BNX_DIR_TYPE_TSCF_UCODE:
+	case BNX_DIR_TYPE_EXT_PHY:
+	case BNX_DIR_TYPE_CCM:
+	case BNX_DIR_TYPE_ISCSI_BOOT:
+	case BNX_DIR_TYPE_ISCSI_BOOT_IPV6:
+	case BNX_DIR_TYPE_ISCSI_BOOT_IPV4N6:
+		return true;
+	}
+
+	return false;
+}
+
+static bool bnxt_dir_type_is_executable(uint16_t dir_type)
+{
+	return bnxt_dir_type_is_ape_bin_format(dir_type) ||
+		bnxt_dir_type_is_other_exec_format(dir_type);
+}
+
+static int
+bnxt_set_eeprom_op(struct rte_eth_dev *dev,
+		struct rte_dev_eeprom_info *in_eeprom)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	uint8_t index, dir_op;
+	uint16_t type, ext, ordinal, attr;
+
+	RTE_LOG(INFO, PMD, "%s(): %04x:%02x:%02x:%02x in_eeprom->offset = %d "
+		"len = %d\n", __func__, bp->pdev->addr.domain,
+		bp->pdev->addr.bus, bp->pdev->addr.devid,
+		bp->pdev->addr.function, in_eeprom->offset, in_eeprom->length);
+
+	if (!BNXT_PF(bp)) {
+		RTE_LOG(ERR, PMD, "NVM write not supported from a VF\n");
+		return -EINVAL;
+	}
+
+	type = in_eeprom->magic >> 16;
+
+	if (type == 0xffff) { /* special value for directory operations */
+		index = in_eeprom->magic & 0xff;
+		dir_op = in_eeprom->magic >> 8;
+		if (index == 0)
+			return -EINVAL;
+		switch (dir_op) {
+		case 0x0e: /* erase */
+			if (in_eeprom->offset != ~in_eeprom->magic)
+				return -EINVAL;
+			return bnxt_hwrm_erase_nvram_directory(bp, index - 1);
+		default:
+			return -EINVAL;
+		}
+	}
+
+	/* Create or re-write an NVM item: */
+	if (bnxt_dir_type_is_executable(type))
+		return -EOPNOTSUPP;
+	ext = in_eeprom->magic & 0xffff;
+	ordinal = in_eeprom->offset >> 16;
+	attr = in_eeprom->offset & 0xffff;
+
+	return bnxt_hwrm_flash_nvram(bp, type, ordinal, ext, attr,
+				     in_eeprom->data, in_eeprom->length);
+}
+
 /*
  * Initialization
  */
@@ -1908,6 +2045,9 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.tx_descriptor_status = bnxt_tx_descriptor_status_op,
 	.filter_ctrl = bnxt_filter_ctrl_op,
 	.dev_supported_ptypes_get = bnxt_dev_supported_ptypes_get_op,
+	.get_eeprom_length    = bnxt_get_eeprom_length_op,
+	.get_eeprom           = bnxt_get_eeprom_op,
+	.set_eeprom           = bnxt_set_eeprom_op,
 };
 
 static bool bnxt_vf_pciid(uint16_t id)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 7f146d606..d379850bb 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2975,6 +2975,167 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	return rc;
 }
 
+int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
+			       uint32_t *length)
+{
+	int rc;
+	struct hwrm_nvm_get_dir_info_input req = {0};
+	struct hwrm_nvm_get_dir_info_output *resp = bp->hwrm_cmd_resp_addr;
+
+	HWRM_PREP(req, NVM_GET_DIR_INFO);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
+	if (!rc) {
+		*entries = rte_le_to_cpu_32(resp->entries);
+		*length = rte_le_to_cpu_32(resp->entry_length);
+	}
+	return rc;
+}
+
+int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data)
+{
+	int rc;
+	uint32_t dir_entries;
+	uint32_t entry_length;
+	uint8_t *buf;
+	size_t buflen;
+	phys_addr_t dma_handle;
+	struct hwrm_nvm_get_dir_entries_input req = {0};
+	struct hwrm_nvm_get_dir_entries_output *resp = bp->hwrm_cmd_resp_addr;
+
+	rc = bnxt_hwrm_nvm_get_dir_info(bp, &dir_entries, &entry_length);
+	if (rc != 0)
+		return rc;
+
+	*data++ = dir_entries;
+	*data++ = entry_length;
+	len -= 2;
+	memset(data, 0xff, len);
+
+	buflen = dir_entries * entry_length;
+	buf = rte_malloc("nvm_dir", buflen, 0);
+	rte_mem_lock_page(buf);
+	if (buf == NULL)
+		return -ENOMEM;
+	dma_handle = rte_mem_virt2phy(buf);
+	if (dma_handle == 0) {
+		RTE_LOG(ERR, PMD,
+			"unable to map response address to physical memory\n");
+		return -ENOMEM;
+	}
+	HWRM_PREP(req, NVM_GET_DIR_ENTRIES);
+	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
+	if (rc == 0)
+		memcpy(data, buf, len > buflen ? buflen : len);
+
+	rte_free(buf);
+
+	return rc;
+}
+
+int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
+			     uint32_t offset, uint32_t length,
+			     uint8_t *data)
+{
+	int rc;
+	uint8_t *buf;
+	phys_addr_t dma_handle;
+	struct hwrm_nvm_read_input req = {0};
+	struct hwrm_nvm_read_output *resp = bp->hwrm_cmd_resp_addr;
+
+	buf = rte_malloc("nvm_item", length, 0);
+	rte_mem_lock_page(buf);
+	if (!buf)
+		return -ENOMEM;
+
+	dma_handle = rte_mem_virt2phy(buf);
+	if (dma_handle == 0) {
+		RTE_LOG(ERR, PMD,
+			"unable to map response address to physical memory\n");
+		return -ENOMEM;
+	}
+	HWRM_PREP(req, NVM_READ);
+	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
+	req.dir_idx = rte_cpu_to_le_16(index);
+	req.offset = rte_cpu_to_le_32(offset);
+	req.len = rte_cpu_to_le_32(length);
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+	if (rc == 0)
+		memcpy(data, buf, length);
+
+	rte_free(buf);
+	return rc;
+}
+
+int bnxt_hwrm_erase_nvram_directory(struct bnxt *bp, uint8_t index)
+{
+	int rc;
+	struct hwrm_nvm_erase_dir_entry_input req = {0};
+	struct hwrm_nvm_erase_dir_entry_output *resp = bp->hwrm_cmd_resp_addr;
+
+	HWRM_PREP(req, NVM_ERASE_DIR_ENTRY);
+	req.dir_idx = rte_cpu_to_le_16(index);
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+
+int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
+			  uint16_t dir_ordinal, uint16_t dir_ext,
+			  uint16_t dir_attr, const uint8_t *data,
+			  size_t data_len)
+{
+	int rc;
+	struct hwrm_nvm_write_input req = {0};
+	struct hwrm_nvm_write_output *resp = bp->hwrm_cmd_resp_addr;
+	phys_addr_t dma_handle;
+	uint8_t *buf;
+
+	/* Allocate and map the DMA buffer before HWRM_PREP() takes the
+	 * HWRM lock, so the error paths do not return with it held.
+	 */
+	buf = rte_malloc("nvm_write", data_len, 0);
+	if (!buf)
+		return -ENOMEM;
+	rte_mem_lock_page(buf);
+	dma_handle = rte_mem_virt2phy(buf);
+	if (dma_handle == 0) {
+		rte_free(buf);
+		RTE_LOG(ERR, PMD,
+			"unable to map response address to physical memory\n");
+		return -ENOMEM;
+	}
+	memcpy(buf, data, data_len);
+
+	HWRM_PREP(req, NVM_WRITE);
+
+	req.dir_type = rte_cpu_to_le_16(dir_type);
+	req.dir_ordinal = rte_cpu_to_le_16(dir_ordinal);
+	req.dir_ext = rte_cpu_to_le_16(dir_ext);
+	req.dir_attr = rte_cpu_to_le_16(dir_attr);
+	req.dir_data_length = rte_cpu_to_le_32(data_len);
+	req.host_src_addr = rte_cpu_to_le_64(dma_handle);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+
+	rte_free(buf);
+	return rc;
+}
+
 static void
 bnxt_vnic_count(struct bnxt_vnic_info *vnic __rte_unused, void *cbdata)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index b41cc5af3..85083e619 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -164,4 +164,15 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp, uint16_t dst_id,
 			 struct bnxt_filter_info *filter);
 int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 				struct bnxt_filter_info *filter);
+int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data);
+int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
+			       uint32_t *length);
+int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
+			     uint32_t offset, uint32_t length,
+			     uint8_t *data);
+int bnxt_hwrm_erase_nvram_directory(struct bnxt *bp, uint8_t index);
+int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
+			  uint16_t dir_ordinal, uint16_t dir_ext,
+			  uint16_t dir_attr, const uint8_t *data,
+			  size_t data_len);
 #endif
diff --git a/drivers/net/bnxt/bnxt_nvm_defs.h b/drivers/net/bnxt/bnxt_nvm_defs.h
new file mode 100644
index 000000000..c5ccc9bc4
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_nvm_defs.h
@@ -0,0 +1,75 @@
+/* Broadcom NetXtreme-C/E network driver.
+ *
+ * Copyright (c) 2014-2016 Broadcom Corporation
+ * Copyright (c) 2016-2017 Broadcom Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef _BNXT_NVM_DEFS_H_
+#define _BNXT_NVM_DEFS_H_
+
+enum bnxt_nvm_directory_type {
+	BNX_DIR_TYPE_UNUSED = 0,
+	BNX_DIR_TYPE_PKG_LOG = 1,
+	BNX_DIR_TYPE_UPDATE = 2,
+	BNX_DIR_TYPE_CHIMP_PATCH = 3,
+	BNX_DIR_TYPE_BOOTCODE = 4,
+	BNX_DIR_TYPE_VPD = 5,
+	BNX_DIR_TYPE_EXP_ROM_MBA = 6,
+	BNX_DIR_TYPE_AVS = 7,
+	BNX_DIR_TYPE_PCIE = 8,
+	BNX_DIR_TYPE_PORT_MACRO = 9,
+	BNX_DIR_TYPE_APE_FW = 10,
+	BNX_DIR_TYPE_APE_PATCH = 11,
+	BNX_DIR_TYPE_KONG_FW = 12,
+	BNX_DIR_TYPE_KONG_PATCH = 13,
+	BNX_DIR_TYPE_BONO_FW = 14,
+	BNX_DIR_TYPE_BONO_PATCH = 15,
+	BNX_DIR_TYPE_TANG_FW = 16,
+	BNX_DIR_TYPE_TANG_PATCH = 17,
+	BNX_DIR_TYPE_BOOTCODE_2 = 18,
+	BNX_DIR_TYPE_CCM = 19,
+	BNX_DIR_TYPE_PCI_CFG = 20,
+	BNX_DIR_TYPE_TSCF_UCODE = 21,
+	BNX_DIR_TYPE_ISCSI_BOOT = 22,
+	BNX_DIR_TYPE_ISCSI_BOOT_IPV6 = 24,
+	BNX_DIR_TYPE_ISCSI_BOOT_IPV4N6 = 25,
+	BNX_DIR_TYPE_ISCSI_BOOT_CFG6 = 26,
+	BNX_DIR_TYPE_EXT_PHY = 27,
+	BNX_DIR_TYPE_SHARED_CFG = 40,
+	BNX_DIR_TYPE_PORT_CFG = 41,
+	BNX_DIR_TYPE_FUNC_CFG = 42,
+	BNX_DIR_TYPE_MGMT_CFG = 48,
+	BNX_DIR_TYPE_MGMT_DATA = 49,
+	BNX_DIR_TYPE_MGMT_WEB_DATA = 50,
+	BNX_DIR_TYPE_MGMT_WEB_META = 51,
+	BNX_DIR_TYPE_MGMT_EVENT_LOG = 52,
+	BNX_DIR_TYPE_MGMT_AUDIT_LOG = 53
+};
+
+#define BNX_DIR_ORDINAL_FIRST			0
+
+#define BNX_DIR_EXT_NONE			0
+#define BNX_DIR_EXT_INACTIVE			(1 << 0)
+#define BNX_DIR_EXT_UPDATE			(1 << 1)
+
+#define BNX_DIR_ATTR_NONE			0
+#define BNX_DIR_ATTR_NO_CHKSUM			(1 << 0)
+#define BNX_DIR_ATTR_PROP_STREAM		(1 << 1)
+
+#define BNX_PKG_LOG_MAX_LENGTH			4096
+
+enum bnxnvm_pkglog_field_index {
+	BNX_PKG_LOG_FIELD_IDX_INSTALLED_TIMESTAMP	= 0,
+	BNX_PKG_LOG_FIELD_IDX_PKG_DESCRIPTION		= 1,
+	BNX_PKG_LOG_FIELD_IDX_PKG_VERSION		= 2,
+	BNX_PKG_LOG_FIELD_IDX_PKG_TIMESTAMP		= 3,
+	BNX_PKG_LOG_FIELD_IDX_PKG_CHECKSUM		= 4,
+	BNX_PKG_LOG_FIELD_IDX_INSTALLED_ITEMS		= 5,
+	BNX_PKG_LOG_FIELD_IDX_INSTALLED_MASK		= 6
+};
+
+#endif				/* Don't add anything after this line */
diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index a5f871b8d..1b35466a9 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -11315,6 +11315,144 @@ struct hwrm_reject_fwd_resp_output {
 	 */
 } __attribute__((packed));
 
+/* hwrm_nvm_get_dir_entries */
+/* Input (24 bytes) */
+struct hwrm_nvm_get_dir_entries_input {
+	uint16_t req_type;
+	uint16_t cmpl_ring;
+	uint16_t seq_id;
+	uint16_t target_id;
+	uint64_t resp_addr;
+	uint64_t host_dest_addr;
+} __attribute__((packed));
+
+/* Output (16 bytes) */
+struct hwrm_nvm_get_dir_entries_output {
+	uint16_t error_code;
+	uint16_t req_type;
+	uint16_t seq_id;
+	uint16_t resp_len;
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+} __attribute__((packed));
+
+
+/* hwrm_nvm_erase_dir_entry */
+/* Input (24 bytes) */
+struct hwrm_nvm_erase_dir_entry_input {
+	uint16_t req_type;
+	uint16_t cmpl_ring;
+	uint16_t seq_id;
+	uint16_t target_id;
+	uint64_t resp_addr;
+	uint16_t dir_idx;
+	uint16_t unused_0[3];
+};
+
+/* Output (16 bytes) */
+struct hwrm_nvm_erase_dir_entry_output {
+	uint16_t error_code;
+	uint16_t req_type;
+	uint16_t seq_id;
+	uint16_t resp_len;
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+};
+
+/* hwrm_nvm_get_dir_info */
+/* Input (16 bytes) */
+struct hwrm_nvm_get_dir_info_input {
+	uint16_t req_type;
+	uint16_t cmpl_ring;
+	uint16_t seq_id;
+	uint16_t target_id;
+	uint64_t resp_addr;
+} __attribute__((packed));
+
+/* Output (24 bytes) */
+struct hwrm_nvm_get_dir_info_output {
+	uint16_t error_code;
+	uint16_t req_type;
+	uint16_t seq_id;
+	uint16_t resp_len;
+	uint32_t entries;
+	uint32_t entry_length;
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+} __attribute__((packed));
+
+
+/* hwrm_nvm_write */
+/* Input (48 bytes) */
+struct hwrm_nvm_write_input {
+	uint16_t req_type;
+	uint16_t cmpl_ring;
+	uint16_t seq_id;
+	uint16_t target_id;
+	uint64_t resp_addr;
+	uint64_t host_src_addr;
+	uint16_t dir_type;
+	uint16_t dir_ordinal;
+	uint16_t dir_ext;
+	uint16_t dir_attr;
+	uint32_t dir_data_length;
+	uint16_t option;
+	uint16_t flags;
+	#define NVM_WRITE_REQ_FLAGS_KEEP_ORIG_ACTIVE_IMG	    0x1UL
+	uint32_t dir_item_length;
+	uint32_t unused_0;
+};
+
+/* Output (16 bytes) */
+struct hwrm_nvm_write_output {
+	uint16_t error_code;
+	uint16_t req_type;
+	uint16_t seq_id;
+	uint16_t resp_len;
+	uint32_t dir_item_length;
+	uint16_t dir_idx;
+	uint8_t unused_0;
+	uint8_t valid;
+};
+/* hwrm_nvm_read */
+/* Input (40 bytes) */
+struct hwrm_nvm_read_input {
+	uint16_t req_type;
+	uint16_t cmpl_ring;
+	uint16_t seq_id;
+	uint16_t target_id;
+	uint64_t resp_addr;
+	uint64_t host_dest_addr;
+	uint16_t dir_idx;
+	uint8_t unused_0;
+	uint8_t unused_1;
+	uint32_t offset;
+	uint32_t len;
+	uint32_t unused_2;
+} __attribute__((packed));
+
+/* Output (16 bytes) */
+struct hwrm_nvm_read_output {
+	uint16_t error_code;
+	uint16_t req_type;
+	uint16_t seq_id;
+	uint16_t resp_len;
+	uint32_t unused_0;
+	uint8_t unused_1;
+	uint8_t unused_2;
+	uint8_t unused_3;
+	uint8_t valid;
+} __attribute__((packed));
+
 /* Hardware Resource Manager Specification */
 /* Description: This structure is used to specify port description. */
 /*
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4 24/24] net/bnxt: add support for rx_queue_intr_enable/disable APIs
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (22 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 23/24] net/bnxt: add support for get/set EEPROM Ajit Khaparde
@ 2017-09-28 21:43 ` Ajit Khaparde
  2017-10-02 21:28 ` [PATCH v4 00/24] bnxt patchset Ferruh Yigit
  24 siblings, 0 replies; 26+ messages in thread
From: Ajit Khaparde @ 2017-09-28 21:43 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur

From: Somnath Kotur <somnath.kotur@broadcom.com>

Implement the Rx queue interrupt enable/disable dev_ops and set up the
Rx queue to interrupt-vector mapping during device initialization.
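
For context, applications reach these through the generic ethdev Rx
interrupt API, typically to sleep when a queue goes idle instead of
busy polling. A minimal sketch (illustrative only; assumes
dev_conf.intr_conf.rxq was set before rte_eth_dev_configure() and
that port_id/queue_id are valid):

	/* arm the completion ring interrupt before sleeping */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);

	/* ... block on the event fd, e.g. via rte_epoll_wait() ... */

	/* disarm and return to pure polling */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);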

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/nics/features/bnxt.ini |  1 +
 drivers/net/bnxt/bnxt_ethdev.c    | 54 +++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_irq.h       |  3 +++
 drivers/net/bnxt/bnxt_rxq.c       | 38 +++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_rxq.h       |  4 +++
 5 files changed, 100 insertions(+)

diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini
index 0dcf07cc3..ae98de466 100644
--- a/doc/guides/nics/features/bnxt.ini
+++ b/doc/guides/nics/features/bnxt.ini
@@ -22,6 +22,7 @@ Extended stats       = Y
 FW version           = Y
 LED                  = Y
 EEPROM dump          = Y
+Rx interrupt         = Y
 Linux UIO            = Y
 Linux VFIO           = Y
 x86-64               = Y
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 81f66cadd..5490c2aaf 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -202,8 +202,16 @@ static int bnxt_init_chip(struct bnxt *bp)
 {
 	unsigned int i, rss_idx, fw_idx;
 	struct rte_eth_link new;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	uint32_t intr_vector = 0;
+	uint32_t queue_id, base = BNXT_MISC_VEC_ID;
+	uint32_t vec = BNXT_MISC_VEC_ID;
 	int rc;
 
+	/* disable uio/vfio intr/eventfd mapping */
+	rte_intr_disable(intr_handle);
+
 	if (bp->eth_dev->data->mtu > ETHER_MTU) {
 		bp->eth_dev->data->dev_conf.rxmode.jumbo_frame = 1;
 		bp->flags |= BNXT_FLAG_JUMBO;
@@ -306,6 +314,48 @@ static int bnxt_init_chip(struct bnxt *bp)
 		goto err_out;
 	}
 
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(bp->eth_dev).active) &&
+	    bp->eth_dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = bp->eth_dev->data->nb_rx_queues;
+		RTE_LOG(INFO, PMD, "%s(): intr_vector = %d\n", __func__,
+			intr_vector);
+		if (intr_vector > bp->rx_cp_nr_rings) {
+			RTE_LOG(ERR, PMD, "At most %d intr queues supported",
+					bp->rx_cp_nr_rings);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    bp->eth_dev->data->nb_rx_queues *
+				    sizeof(int), 0);
+		if (intr_handle->intr_vec == NULL) {
+			RTE_LOG(ERR, PMD, "Failed to allocate %d rx_queues"
+				" intr_vec", bp->eth_dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+		RTE_LOG(DEBUG, PMD, "%s(): intr_handle->intr_vec = %p "
+			"intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
+			 __func__, intr_handle->intr_vec, intr_handle->nb_efd,
+			intr_handle->max_intr);
+	}
+
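+	/* Assign an event fd to each Rx queue; once nb_efd vectors are
+	 * used up, the remaining queues share the last vector.
+	 */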
+	for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
+	     queue_id++) {
+		intr_handle->intr_vec[queue_id] = vec;
+		if (vec < base + intr_handle->nb_efd - 1)
+			vec++;
+	}
+
+	/* enable uio/vfio intr/eventfd mapping */
+	rte_intr_enable(intr_handle);
+
 	rc = bnxt_get_hwrm_link_config(bp, &new);
 	if (rc) {
 		RTE_LOG(ERR, PMD, "HWRM Get link config failure rc: %x\n", rc);
@@ -421,6 +471,8 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	};
 	eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
+	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+
 	/* *INDENT-ON* */
 
 	/*
@@ -2009,6 +2061,8 @@ static const struct eth_dev_ops bnxt_dev_ops = {
 	.rx_queue_release = bnxt_rx_queue_release_op,
 	.tx_queue_setup = bnxt_tx_queue_setup_op,
 	.tx_queue_release = bnxt_tx_queue_release_op,
+	.rx_queue_intr_enable = bnxt_rx_queue_intr_enable_op,
+	.rx_queue_intr_disable = bnxt_rx_queue_intr_disable_op,
 	.reta_update = bnxt_reta_update_op,
 	.reta_query = bnxt_reta_query_op,
 	.rss_hash_update = bnxt_rss_hash_update_op,
diff --git a/drivers/net/bnxt/bnxt_irq.h b/drivers/net/bnxt/bnxt_irq.h
index e21bec568..4d2f7af9f 100644
--- a/drivers/net/bnxt/bnxt_irq.h
+++ b/drivers/net/bnxt/bnxt_irq.h
@@ -34,6 +34,9 @@
 #ifndef _BNXT_IRQ_H_
 #define _BNXT_IRQ_H_
 
+#define BNXT_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
+#define BNXT_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
+
 struct bnxt_irq {
 	rte_intr_callback_fn	handler;
 	unsigned int		vector;
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 690a59987..0c4e0f6ec 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -362,3 +362,41 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 out:
 	return rc;
 }
+
+int
+bnxt_rx_queue_intr_enable_op(struct rte_eth_dev *eth_dev, uint16_t queue_id)
+{
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_cp_ring_info *cpr;
+	int rc = 0;
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_id];
+		if (!rxq) {
+			rc = -EINVAL;
+			return rc;
+		}
+		cpr = rxq->cp_ring;
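+		/* Re-arm the completion ring doorbell so the next
+		 * completion raises an interrupt.
+		 */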
+		B_CP_DB_ARM(cpr);
+	}
+	return rc;
+}
+
+int
+bnxt_rx_queue_intr_disable_op(struct rte_eth_dev *eth_dev, uint16_t queue_id)
+{
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_cp_ring_info *cpr;
+	int rc = 0;
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_id];
+		if (!rxq) {
+			rc = -EINVAL;
+			return rc;
+		}
+		cpr = rxq->cp_ring;
+		B_CP_DB_DISARM(cpr);
+	}
+	return rc;
+}
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 01aaa007f..a867a4b56 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -73,5 +73,9 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
 void bnxt_free_rx_mbufs(struct bnxt *bp);
+int bnxt_rx_queue_intr_enable_op(struct rte_eth_dev *eth_dev,
+				 uint16_t queue_id);
+int bnxt_rx_queue_intr_disable_op(struct rte_eth_dev *eth_dev,
+				  uint16_t queue_id);
 
 #endif
-- 
2.13.5 (Apple Git-94)

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v4 00/24] bnxt patchset
  2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
                   ` (23 preceding siblings ...)
  2017-09-28 21:43 ` [PATCH v4 24/24] net/bnxt: add support for rx_queue_intr_enable/disable APIs Ajit Khaparde
@ 2017-10-02 21:28 ` Ferruh Yigit
  24 siblings, 0 replies; 26+ messages in thread
From: Ferruh Yigit @ 2017-10-02 21:28 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: somnath.kotur, Stephen Hurd

On 9/28/2017 10:43 PM, Ajit Khaparde wrote:
> This patch set includes some bug fixes and also adds
> support for new dev_ops like rx_queue_count,
> rx/tx_descriptor_status, get/set_eeprom
> and rx_queue_intr_enable/disable.
> It also adds support for the flow_filter funciton to add
> Flow API functionality.
> 
> Please apply.
> 
> Ajit Khaparde (22):
>   net/bnxt: fix HWRM_*() macros and locking
>   net/bnxt: use 64-bits of address for vlan_table
>   net/bnxt: fix an issue with group id calculation
>   net/bnxt: fix calculation of number of pools
>   net/bnxt: handle multi queue mode properly
>   net/bnxt: fix rx handling and buffer allocation logic
>   net/bnxt: fix an issue with broadcast traffic
>   net/bnxt: fix usage of ETH_VMDQ_* flags
>   net/bnxt: set checksum offload flags correctly
>   net/bnxt: update status of Rx IP/L4 CKSUM
>   net/bnxt: add support for xstats get by id
>   net/bnxt: fix config rss update
>   net/bnxt: set the hash_key_size
>   net/bnxt: add support for rx_queue_count
>   net/bnxt: add support for rx_descriptor_status
>   net/bnxt: add support for tx_descriptor_status
>   net/bnxt: add new HWRM structs to support flow filtering
>   net/bnxt: add support for flow filter ops
>   doc: update release notes
>   net/bnxt: fix per queue stats display in xstats
>   net/bnxt: prevent interrupt handler from accessing freed memory
>   net/bnxt: add dev_supported_ptypes_get dev_op
> 
> Somnath Kotur (2):
>   net/bnxt: add support for get/set EEPROM
>   net/bnxt: add support for rx_queue_intr_enable/disable APIs

Series applied to dpdk-next-net/master, thanks.

Welcome Somnath!

(I have updated some patches to update bnxt.ini file, can you please
confirm final bnxt.ini file)

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2017-10-02 21:28 UTC | newest]

Thread overview: 26+ messages
-- links below jump to the message on this page --
2017-09-28 21:43 [PATCH v4 00/24] bnxt patchset Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 01/24] net/bnxt: fix HWRM_*() macros and locking Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 02/24] net/bnxt: use 64-bits of address for vlan_table Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 03/24] net/bnxt: fix an issue with group id calculation Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 04/24] net/bnxt: fix calculation of number of pools Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 05/24] net/bnxt: handle multi queue mode properly Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 06/24] net/bnxt: fix rx handling and buffer allocation logic Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 07/24] net/bnxt: fix an issue with broadcast traffic Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 08/24] net/bnxt: fix usage of ETH_VMDQ_* flags Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 09/24] net/bnxt: set checksum offload flags correctly Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 10/24] net/bnxt: update status of Rx IP/L4 CKSUM Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 11/24] net/bnxt: add support for xstats get by id Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 12/24] net/bnxt: fix config rss update Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 13/24] net/bnxt: set the hash_key_size Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 14/24] net/bnxt: add support for rx_queue_count Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 15/24] net/bnxt: add support for rx_descriptor_status Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 16/24] net/bnxt: add support for tx_descriptor_status Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 17/24] net/bnxt: add new HWRM structs to support flow filtering Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 18/24] net/bnxt: add support for flow filter ops Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 19/24] doc: update release notes Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 20/24] net/bnxt: fix per queue stats display in xstats Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 21/24] net/bnxt: prevent interrupt handler from accessing freed memory Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 22/24] net/bnxt: add dev_supported_ptypes_get dev_op Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 23/24] net/bnxt: add support for get/set EEPROM Ajit Khaparde
2017-09-28 21:43 ` [PATCH v4 24/24] net/bnxt: add support for rx_queue_intr_enable/disable APIs Ajit Khaparde
2017-10-02 21:28 ` [PATCH v4 00/24] bnxt patchset Ferruh Yigit
