* [PATCH 0/8] net/mrvl: add new features to PMD
@ 2018-02-21 14:14 Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
                   ` (8 more replies)
  0 siblings, 9 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

This patch series brings a set of new features,
documentation updates and fixes.

Below is a short summary of the introduced changes:

o Added support for selective Tx queue start and stop.
o Added support for Rx flow control.
o Added support for extended statistics counters.
o Added support for ingress policer, egress scheduler and egress rate
  limiter.
o Added support for configuring hardware classifier via a flow API.
o Documented new features and their usage.
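
The new QoS knobs map onto the configuration file syntax documented in
patch 4/8. An illustrative (untested) example combining them:

```ini
[port 0 default]
default_tc = 0
mapping_priority = ip

policer_enable = 1
token_unit = bytes
color = blind
cir = 100000
cbs = 64
ebs = 64

rate_limit_enable = 1
rate_limit = 1000
burst_size = 2000

[port 0 tc 0]
rxq = 0 1
default_color = green

[port 0 txq 0]
sched_mode = wrr
wrr_weight = 10
```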

Natalie Samsonov (1):
  net/mrvl: fix crash when port is closed without starting

Tomasz Duszynski (7):
  net/mrvl: add ingress policer support
  net/mrvl: add egress scheduler/rate limiter support
  net/mrvl: document policer/scheduler/rate limiter usage
  net/mrvl: add classifier support
  net/mrvl: add extended statistics
  net/mrvl: add Rx flow control
  net/mrvl: add Tx queue start/stop

 doc/guides/nics/features/mrvl.ini |    2 +
 doc/guides/nics/mrvl.rst          |  257 +++-
 drivers/net/mrvl/Makefile         |    1 +
 drivers/net/mrvl/mrvl_ethdev.c    |  447 +++++-
 drivers/net/mrvl/mrvl_ethdev.h    |   11 +
 drivers/net/mrvl/mrvl_flow.c      | 2787 +++++++++++++++++++++++++++++++++++++
 drivers/net/mrvl/mrvl_qos.c       |  301 +++-
 drivers/net/mrvl/mrvl_qos.h       |   22 +
 8 files changed, 3810 insertions(+), 18 deletions(-)
 create mode 100644 drivers/net/mrvl/mrvl_flow.c

--
2.7.4

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 1/8] net/mrvl: fix crash when port is closed without starting
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 2/8] net/mrvl: add ingress policer support Tomasz Duszynski
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, stable, Tomasz Duszynski

From: Natalie Samsonov <nsamsono@marvell.com>

Closing a port which was never started crashed in mrvl_dev_stop()
because pp2_ppio_deinit() was called with a NULL ppio handle.
Deinitialize the ppio only if it has actually been initialized.

Fixes: 0ddc9b815b11 ("net/mrvl: add net PMD skeleton")
Cc: stable@dpdk.org

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 705c4bd..3611a92 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -686,7 +686,8 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 		pp2_cls_qos_tbl_deinit(priv->qos_tbl);
 		priv->qos_tbl = NULL;
 	}
-	pp2_ppio_deinit(priv->ppio);
+	if (priv->ppio)
+		pp2_ppio_deinit(priv->ppio);
 	priv->ppio = NULL;
 }
 
-- 
2.7.4


* [PATCH 2/8] net/mrvl: add ingress policer support
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 3/8] net/mrvl: add egress scheduler/rate limiter support Tomasz Duszynski
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add ingress policer support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c |   6 ++
 drivers/net/mrvl/mrvl_ethdev.h |   1 +
 drivers/net/mrvl/mrvl_qos.c    | 160 +++++++++++++++++++++++++++++++++++++++--
 drivers/net/mrvl/mrvl_qos.h    |   3 +
 4 files changed, 166 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 3611a92..2d59fce 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -689,6 +689,12 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 	if (priv->ppio)
 		pp2_ppio_deinit(priv->ppio);
 	priv->ppio = NULL;
+
+	/* policer must be released after ppio deinitialization */
+	if (priv->policer) {
+		pp2_cls_plcr_deinit(priv->policer);
+		priv->policer = NULL;
+	}
 }
 
 /**
diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h
index f7afae5..0d152d6 100644
--- a/drivers/net/mrvl/mrvl_ethdev.h
+++ b/drivers/net/mrvl/mrvl_ethdev.h
@@ -113,6 +113,7 @@ struct mrvl_priv {
 	struct pp2_cls_qos_tbl_params qos_tbl_params;
 	struct pp2_cls_tbl *qos_tbl;
 	uint16_t nb_rx_queues;
+	struct pp2_cls_plcr *policer;
 };
 
 #endif /* _MRVL_ETHDEV_H_ */
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
index fbb3681..854eb4d 100644
--- a/drivers/net/mrvl/mrvl_qos.c
+++ b/drivers/net/mrvl/mrvl_qos.c
@@ -71,6 +71,22 @@
 #define MRVL_TOK_VLAN_IP "vlan/ip"
 #define MRVL_TOK_WEIGHT "weight"
 
+/* policer specific configuration tokens */
+#define MRVL_TOK_PLCR_ENABLE "policer_enable"
+#define MRVL_TOK_PLCR_UNIT "token_unit"
+#define MRVL_TOK_PLCR_UNIT_BYTES "bytes"
+#define MRVL_TOK_PLCR_UNIT_PACKETS "packets"
+#define MRVL_TOK_PLCR_COLOR "color_mode"
+#define MRVL_TOK_PLCR_COLOR_BLIND "blind"
+#define MRVL_TOK_PLCR_COLOR_AWARE "aware"
+#define MRVL_TOK_PLCR_CIR "cir"
+#define MRVL_TOK_PLCR_CBS "cbs"
+#define MRVL_TOK_PLCR_EBS "ebs"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR "default_color"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN "green"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW "yellow"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_RED "red"
+
 /** Number of tokens in range a-b = 2. */
 #define MAX_RNG_TOKENS 2
 
@@ -324,6 +340,25 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc,
 		}
 		cfg->port[port].tc[tc].dscps = n;
 	}
+
+	entry = rte_cfgfile_get_entry(file, sec_name,
+			MRVL_TOK_PLCR_DEFAULT_COLOR);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_GREEN;
+		} else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_YELLOW;
+		} else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_RED,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_RED))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_RED;
+		} else {
+			RTE_LOG(ERR, PMD, "Error while parsing: %s\n", entry);
+			return -1;
+		}
+	}
+
 	return 0;
 }
 
@@ -396,6 +431,88 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
 		}
 
 		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_PLCR_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].policer_enable = val;
+		}
+
+		if ((*cfg)->port[n].policer_enable) {
+			enum pp2_cls_plcr_token_unit unit;
+
+			/* Read policer token unit */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_UNIT);
+			if (entry) {
+				if (!strncmp(entry, MRVL_TOK_PLCR_UNIT_BYTES,
+					sizeof(MRVL_TOK_PLCR_UNIT_BYTES))) {
+					unit = PP2_CLS_PLCR_BYTES_TOKEN_UNIT;
+				} else if (!strncmp(entry,
+						MRVL_TOK_PLCR_UNIT_PACKETS,
+					sizeof(MRVL_TOK_PLCR_UNIT_PACKETS))) {
+					unit = PP2_CLS_PLCR_PACKETS_TOKEN_UNIT;
+				} else {
+					RTE_LOG(ERR, PMD, "Unknown token: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.token_unit =
+					unit;
+			}
+
+			/* Read policer color mode */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_COLOR);
+			if (entry) {
+				enum pp2_cls_plcr_color_mode mode;
+
+				if (!strncmp(entry, MRVL_TOK_PLCR_COLOR_BLIND,
+					sizeof(MRVL_TOK_PLCR_COLOR_BLIND))) {
+					mode = PP2_CLS_PLCR_COLOR_BLIND_MODE;
+				} else if (!strncmp(entry,
+						MRVL_TOK_PLCR_COLOR_AWARE,
+					sizeof(MRVL_TOK_PLCR_COLOR_AWARE))) {
+					mode = PP2_CLS_PLCR_COLOR_AWARE_MODE;
+				} else {
+					RTE_LOG(ERR, PMD,
+						"Error in parsing: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.color_mode =
+					mode;
+			}
+
+			/* Read policer cir */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_CIR);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cir = val;
+			}
+
+			/* Read policer cbs */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_CBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cbs = val;
+			}
+
+			/* Read policer ebs */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_EBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.ebs = val;
+			}
+		}
+
+		entry = rte_cfgfile_get_entry(file, sec_name,
 				MRVL_TOK_MAPPING_PRIORITY);
 		if (entry) {
 			if (!strncmp(entry, MRVL_TOK_VLAN_IP,
@@ -450,16 +567,18 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
  * @param param TC parameters entry.
  * @param inqs Number of MUSDK in-queues in this TC.
  * @param bpool Bpool for this TC.
+ * @param color Default color for this TC.
  * @returns 0 in case of success, exits otherwise.
  */
 static int
 setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
-	struct pp2_bpool *bpool)
+	struct pp2_bpool *bpool, enum pp2_ppio_color color)
 {
 	struct pp2_ppio_inq_params *inq_params;
 
 	param->pkt_offset = MRVL_PKT_OFFS;
 	param->pools[0] = bpool;
+	param->default_color = color;
 
 	inq_params = rte_zmalloc_socket("inq_params",
 		inqs * sizeof(*inq_params),
@@ -479,6 +598,33 @@ setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
 }
 
 /**
+ * Setup ingress policer.
+ *
+ * @param priv Port's private data.
+ * @param params Pointer to the policer's configuration.
+ * @returns 0 in case of success, negative values otherwise.
+ */
+static int
+setup_policer(struct mrvl_priv *priv, struct pp2_cls_plcr_params *params)
+{
+	char match[16];
+	int ret;
+
+	sprintf(match, "policer-%d:%d", priv->pp_id, priv->ppio_id);
+	params->match = match;
+
+	ret = pp2_cls_plcr_init(params, &priv->policer);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to setup %s\n", match);
+		return -1;
+	}
+
+	priv->ppio_params.inqs_params.plcr = priv->policer;
+
+	return 0;
+}
+
+/**
  * Configure RX Queues in a given port.
  *
  * Sets up RX queues, their Traffic Classes and DPDK rxq->(TC,inq) mapping.
@@ -496,10 +642,13 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 
 	if (mrvl_qos_cfg == NULL ||
 		mrvl_qos_cfg->port[portid].use_global_defaults) {
-		/* No port configuration, use default: 1 TC, no QoS. */
+		/*
+		 * No port configuration, use default: 1 TC, no QoS,
+		 * TC color set to green.
+		 */
 		priv->ppio_params.inqs_params.num_tcs = 1;
 		setup_tc(&priv->ppio_params.inqs_params.tcs_params[0],
-			max_queues, priv->bpool);
+			max_queues, priv->bpool, PP2_PPIO_COLOR_GREEN);
 
 		/* Direct mapping of queues i.e. 0->0, 1->1 etc. */
 		for (i = 0; i < max_queues; ++i) {
@@ -597,11 +746,14 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 			break;
 		setup_tc(&priv->ppio_params.inqs_params.tcs_params[i],
 				port_cfg->tc[i].inqs,
-				priv->bpool);
+				priv->bpool, port_cfg->tc[i].color);
 	}
 
 	priv->ppio_params.inqs_params.num_tcs = i;
 
+	if (port_cfg->policer_enable)
+		return setup_policer(priv, &port_cfg->policer_params);
+
 	return 0;
 }
 
diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h
index ae7508c..2ff50c1 100644
--- a/drivers/net/mrvl/mrvl_qos.h
+++ b/drivers/net/mrvl/mrvl_qos.h
@@ -55,6 +55,7 @@ struct mrvl_qos_cfg {
 			uint8_t inqs;
 			uint8_t dscps;
 			uint8_t pcps;
+			enum pp2_ppio_color color;
 		} tc[MRVL_PP2_TC_MAX];
 		struct {
 			uint8_t weight;
@@ -64,6 +65,8 @@ struct mrvl_qos_cfg {
 		uint16_t outqs;
 		uint8_t default_tc;
 		uint8_t use_global_defaults;
+		struct pp2_cls_plcr_params policer_params;
+		uint8_t policer_enable;
 	} port[RTE_MAX_ETHPORTS];
 };
 
-- 
2.7.4


* [PATCH 3/8] net/mrvl: add egress scheduler/rate limiter support
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 2/8] net/mrvl: add ingress policer support Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 4/8] net/mrvl: document policer/scheduler/rate limiter usage Tomasz Duszynski
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add egress scheduler and egress rate limiter support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c |   6 +-
 drivers/net/mrvl/mrvl_qos.c    | 141 +++++++++++++++++++++++++++++++++++++++--
 drivers/net/mrvl/mrvl_qos.h    |  19 ++++++
 3 files changed, 161 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 2d59fce..e42b787 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -348,6 +348,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 	if (ret < 0)
 		return ret;
 
+	ret = mrvl_configure_txqs(priv, dev->data->port_id,
+				  dev->data->nb_tx_queues);
+	if (ret < 0)
+		return ret;
+
 	priv->ppio_params.outqs_params.num_outqs = dev->data->nb_tx_queues;
 	priv->ppio_params.maintain_stats = 1;
 	priv->nb_rx_queues = dev->data->nb_rx_queues;
@@ -1565,7 +1570,6 @@ mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	dev->data->tx_queues[idx] = txq;
 
 	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
-	priv->ppio_params.outqs_params.outqs_params[idx].weight = 1;
 
 	return 0;
 }
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
index 854eb4d..e6d204a 100644
--- a/drivers/net/mrvl/mrvl_qos.c
+++ b/drivers/net/mrvl/mrvl_qos.c
@@ -64,12 +64,19 @@
 #define MRVL_TOK_PCP "pcp"
 #define MRVL_TOK_PORT "port"
 #define MRVL_TOK_RXQ "rxq"
-#define MRVL_TOK_SP "SP"
 #define MRVL_TOK_TC "tc"
 #define MRVL_TOK_TXQ "txq"
 #define MRVL_TOK_VLAN "vlan"
 #define MRVL_TOK_VLAN_IP "vlan/ip"
-#define MRVL_TOK_WEIGHT "weight"
+
+/* egress specific configuration tokens */
+#define MRVL_TOK_BURST_SIZE "burst_size"
+#define MRVL_TOK_RATE_LIMIT "rate_limit"
+#define MRVL_TOK_RATE_LIMIT_ENABLE "rate_limit_enable"
+#define MRVL_TOK_SCHED_MODE "sched_mode"
+#define MRVL_TOK_SCHED_MODE_SP "sp"
+#define MRVL_TOK_SCHED_MODE_WRR "wrr"
+#define MRVL_TOK_WRR_WEIGHT "wrr_weight"
 
 /* policer specific configuration tokens */
 #define MRVL_TOK_PLCR_ENABLE "policer_enable"
@@ -147,12 +154,69 @@ get_outq_cfg(struct rte_cfgfile *file, int port, int outq,
 	if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0)
 		return 0;
 
+	/* Read scheduling mode */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_SCHED_MODE);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_SCHED_MODE_SP,
+					strlen(MRVL_TOK_SCHED_MODE_SP))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_SP;
+		} else if (!strncmp(entry, MRVL_TOK_SCHED_MODE_WRR,
+					strlen(MRVL_TOK_SCHED_MODE_WRR))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_WRR;
+		} else {
+			RTE_LOG(ERR, PMD, "Unknown token: %s\n", entry);
+			return -1;
+		}
+	}
+
+	/* Read wrr weight */
+	if (cfg->port[port].outq[outq].sched_mode == PP2_PPIO_SCHED_M_WRR) {
+		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_WRR_WEIGHT);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			cfg->port[port].outq[outq].weight = val;
+		}
+	}
+
+	/*
+	 * There's no point in setting rate limiting for specific outq as
+	 * global port rate limiting has priority.
+	 */
+	if (cfg->port[port].rate_limit_enable) {
+		RTE_LOG(WARNING, PMD, "Port %d rate limiting already enabled\n",
+			port);
+		return 0;
+	}
+
 	entry = rte_cfgfile_get_entry(file, sec_name,
-			MRVL_TOK_WEIGHT);
+			MRVL_TOK_RATE_LIMIT_ENABLE);
 	if (entry) {
 		if (get_val_securely(entry, &val) < 0)
 			return -1;
-		cfg->port[port].outq[outq].weight = (uint8_t)val;
+		cfg->port[port].outq[outq].rate_limit_enable = val;
+	}
+
+	if (!cfg->port[port].outq[outq].rate_limit_enable)
+		return 0;
+
+	/* Read CBS (in kB) */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_BURST_SIZE);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cbs = val;
+	}
+
+	/* Read CIR (in kbps) */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RATE_LIMIT);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cir = val;
 	}
 
 	return 0;
@@ -512,6 +576,36 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
 			}
 		}
 
+		/*
+		 * Read per-port rate limiting. Setting that will
+		 * disable per-queue rate limiting.
+		 */
+		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_RATE_LIMIT_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].rate_limit_enable = val;
+		}
+
+		if ((*cfg)->port[n].rate_limit_enable) {
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_BURST_SIZE);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cbs = val;
+			}
+
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_RATE_LIMIT);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cir = val;
+			}
+		}
+
 		entry = rte_cfgfile_get_entry(file, sec_name,
 				MRVL_TOK_MAPPING_PRIORITY);
 		if (entry) {
@@ -758,6 +852,45 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 }
 
 /**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up the TX queue egress scheduler and rate limiter.
+ *
+ * @param priv Port's private data
+ * @param portid DPDK port ID
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		uint16_t max_queues)
+{
+	/* We need only a subset of configuration. */
+	struct port_cfg *port_cfg;
+	int i;
+
+	if (mrvl_qos_cfg == NULL)
+		return 0;
+	port_cfg = &mrvl_qos_cfg->port[portid];
+	priv->ppio_params.rate_limit_enable = port_cfg->rate_limit_enable;
+	if (port_cfg->rate_limit_enable)
+		priv->ppio_params.rate_limit_params =
+			port_cfg->rate_limit_params;
+
+	for (i = 0; i < max_queues; i++) {
+		struct pp2_ppio_outq_params *params =
+			&priv->ppio_params.outqs_params.outqs_params[i];
+
+		params->sched_mode = port_cfg->outq[i].sched_mode;
+		params->weight = port_cfg->outq[i].weight;
+		params->rate_limit_enable = port_cfg->outq[i].rate_limit_enable;
+		params->rate_limit_params = port_cfg->outq[i].rate_limit_params;
+	}
+
+	return 0;
+}
+
+/**
  * Start QoS mapping.
  *
  * Finalize QoS table configuration and initialize it in SDK. It can be done
diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h
index 2ff50c1..48ded5f 100644
--- a/drivers/net/mrvl/mrvl_qos.h
+++ b/drivers/net/mrvl/mrvl_qos.h
@@ -48,6 +48,8 @@
 /* QoS config. */
 struct mrvl_qos_cfg {
 	struct port_cfg {
+		int rate_limit_enable;
+		struct pp2_ppio_rate_limit_params rate_limit_params;
 		struct {
 			uint8_t inq[MRVL_PP2_RXQ_MAX];
 			uint8_t dscp[MRVL_CP_PER_TC];
@@ -58,7 +60,10 @@ struct mrvl_qos_cfg {
 			enum pp2_ppio_color color;
 		} tc[MRVL_PP2_TC_MAX];
 		struct {
+			enum pp2_ppio_outq_sched_mode sched_mode;
 			uint8_t weight;
+			int rate_limit_enable;
+			struct pp2_ppio_rate_limit_params rate_limit_params;
 		} outq[MRVL_PP2_RXQ_MAX];
 		enum pp2_cls_qos_tbl_type mapping_priority;
 		uint16_t inqs;
@@ -102,6 +107,20 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 		    uint16_t max_queues);
 
 /**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up the TX queue egress scheduler and rate limiter.
+ *
+ * @param priv Port's private data
+ * @param portid DPDK port ID
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues);
+
+/**
  * Start QoS mapping.
  *
  * Finalize QoS table configuration and initialize it in SDK. It can be done
-- 
2.7.4


* [PATCH 4/8] net/mrvl: document policer/scheduler/rate limiter usage
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                   ` (2 preceding siblings ...)
  2018-02-21 14:14 ` [PATCH 3/8] net/mrvl: add egress scheduler/rate limiter support Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 5/8] net/mrvl: add classifier support Tomasz Duszynski
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add documentation and examples for the ingress policer, egress scheduler
and egress rate limiter.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst | 86 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 80 insertions(+), 6 deletions(-)

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index b7f3292..6794cbb 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -149,17 +149,36 @@ Configuration syntax
    [port <portnum> default]
    default_tc = <default_tc>
    mapping_priority = <mapping_priority>
+   policer_enable = <policer_enable>
+   token_unit = <token_unit>
+   color = <color_mode>
+   cir = <cir>
+   ebs = <ebs>
+   cbs = <cbs>
+
+   rate_limit_enable = <rate_limit_enable>
+   rate_limit = <rate_limit>
+   burst_size = <burst_size>
 
    [port <portnum> tc <traffic_class>]
    rxq = <rx_queue_list>
    pcp = <pcp_list>
    dscp = <dscp_list>
+   default_color = <default_color>
 
    [port <portnum> tc <traffic_class>]
    rxq = <rx_queue_list>
    pcp = <pcp_list>
    dscp = <dscp_list>
 
+   [port <portnum> txq <txqnum>]
+   sched_mode = <sched_mode>
+   wrr_weight = <wrr_weight>
+
+   rate_limit_enable = <rate_limit_enable>
+   rate_limit = <rate_limit>
+   burst_size = <burst_size>
+
 Where:
 
 - ``<portnum>``: DPDK Port number (0..n).
@@ -176,6 +195,30 @@ Where:
 
 - ``<dscp_list>``: List of DSCP values to handle in particular TC (e.g. 0-12 32-48 63).
 
+- ``<policer_enable>``: Enable ingress policer.
+
+- ``<token_unit>``: Policer token unit (`bytes` or `packets`).
+
+- ``<color_mode>``: Policer color mode (`aware` or `blind`).
+
+- ``<cir>``: Committed information rate, in kilobits per second (data rate) or packets per second.
+
+- ``<cbs>``: Committed burst size, in kilobytes or number of packets.
+
+- ``<ebs>``: Excess burst size, in kilobytes or number of packets.
+
+- ``<default_color>``: Default color for a specific TC.
+
+- ``<rate_limit_enable>``: Enable per-port or per-txq rate limiting.
+
+- ``<rate_limit>``: Committed information rate, in kilobits per second.
+
+- ``<burst_size>``: Committed burst size, in kilobytes.
+
+- ``<sched_mode>``: Egress scheduler mode (`wrr` or `sp`).
+
+- ``<wrr_weight>``: Txq weight used by the WRR scheduler.
+
 Setting PCP/DSCP values for the default TC is not required. All PCP/DSCP
 values not assigned explicitly to particular TC will be handled by the
 default TC.
@@ -187,11 +230,26 @@ Configuration file example
 
    [port 0 default]
    default_tc = 0
-   qos_mode = ip
+   mapping_priority = ip
+
+   rate_limit_enable = 1
+   rate_limit = 1000
+   burst_size = 2000
 
    [port 0 tc 0]
    rxq = 0 1
 
+   [port 0 txq 0]
+   sched_mode = wrr
+   wrr_weight = 10
+
+   [port 0 txq 1]
+   sched_mode = wrr
+   wrr_weight = 100
+
+   [port 0 txq 2]
+   sched_mode = sp
+
    [port 0 tc 1]
    rxq = 2
    pcp = 5 6 7
@@ -199,15 +257,31 @@ Configuration file example
 
    [port 1 default]
    default_tc = 0
-   qos_mode = vlan/ip
+   mapping_priority = vlan/ip
+
+   policer_enable = 1
+   token_unit = bytes
+   color = blind
+   cir = 100000
+   ebs = 64
+   cbs = 64
 
    [port 1 tc 0]
    rxq = 0
+   dscp = 10
 
    [port 1 tc 1]
-   rxq = 1 2
-   pcp = 5 6 7
-   dscp = 26-38
+   rxq = 1
+   dscp = 11-20
+
+   [port 1 tc 2]
+   rxq = 2
+   dscp = 30
+
+   [port 1 txq 0]
+   rate_limit_enable = 1
+   rate_limit = 10000
+   burst_size = 2000
 
 Usage example
 ^^^^^^^^^^^^^
@@ -215,7 +289,7 @@ Usage example
 .. code-block:: console
 
    ./testpmd --vdev=eth_mrvl,iface=eth0,iface=eth2,cfg=/home/user/mrvl.conf \
-     -c 7 -- -i -a --rxq=2
+     -c 7 -- -i -a --disable-hw-vlan-strip --rxq=3 --txq=3
 
 
 Building DPDK
-- 
2.7.4


* [PATCH 5/8] net/mrvl: add classifier support
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                   ` (3 preceding siblings ...)
  2018-02-21 14:14 ` [PATCH 4/8] net/mrvl: document policer/scheduler/rate limiter usage Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-03-07 11:07   ` Ferruh Yigit
  2018-02-21 14:14 ` [PATCH 6/8] net/mrvl: add extended statistics Tomasz Duszynski
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add classifier configuration support via the rte_flow API.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst       |  168 +++
 drivers/net/mrvl/Makefile      |    1 +
 drivers/net/mrvl/mrvl_ethdev.c |   59 +
 drivers/net/mrvl/mrvl_ethdev.h |   10 +
 drivers/net/mrvl/mrvl_flow.c   | 2787 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 3025 insertions(+)
 create mode 100644 drivers/net/mrvl/mrvl_flow.c

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 6794cbb..9230d5e 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -113,6 +113,9 @@ Prerequisites
   approval has been granted, library can be found by typing ``musdk`` in
   the search box.
 
+  To get a better understanding of the library, consult the
+  documentation in the ``doc`` top-level directory of the MUSDK sources.
+
   MUSDK must be configured with the following features:
 
   .. code-block:: console
@@ -318,6 +321,171 @@ the path to the MUSDK installation directory needs to be exported.
    sed -ri 's,(MRVL_PMD=)n,\1y,' build/.config
    make
 
+Flow API
+--------
+
+PPv2 offers packet classification capabilities via a classifier engine which
+can be configured through the generic flow API offered by DPDK.
+
+Supported flow actions
+~~~~~~~~~~~~~~~~~~~~~~
+
+The following flow action items are supported by the driver:
+
+* DROP
+* QUEUE
+
+Supported flow items
+~~~~~~~~~~~~~~~~~~~~
+
+The following flow items and their respective fields are supported by the driver:
+
+* ETH
+
+  * source MAC
+  * destination MAC
+  * ethertype
+
+* VLAN
+
+  * PCP
+  * VID
+
+* IPV4
+
+  * DSCP
+  * protocol
+  * source address
+  * destination address
+
+* IPV6
+
+  * flow label
+  * next header
+  * source address
+  * destination address
+
+* UDP
+
+  * source port
+  * destination port
+
+* TCP
+
+  * source port
+  * destination port
+
+Classifier match engine
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Classifier has an internal match engine which can be configured to
+operate in either exact or maskable mode.
+
+The mode is selected upon creation of the first unique flow rule as follows:
+
+* maskable, if the key size is up to 8 bytes.
+* exact otherwise, i.e. for keys bigger than 8 bytes.
+
+The key size equals the total number of bytes of all fields specified
+in the flow items.
+
+.. table:: Examples of key size calculation
+
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | Flow pattern                                                               | Key size in bytes | Used engine |
+   +============================================================================+===================+=============+
+   | ETH (destination MAC) / VLAN (VID)                                         | 6 + 2 = 8         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (VID) / IPV4 (source address)                                         | 2 + 4 = 6         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | TCP (source port, destination port)                                        | 2 + 2 = 4         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (priority) / IPV4 (source address)                                    | 1 + 4 = 5         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV4 (destination address) / UDP (source port, destination port)           | 6 + 2 + 2 = 10    | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (VID) / IPV6 (flow label, destination address)                        | 2 + 3 + 16 = 21   | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV4 (DSCP, source address, destination address)                           | 1 + 4 + 4 = 9     | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV6 (flow label, source address, destination address)                     | 3 + 16 + 16 = 35  | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+
+From the user's perspective, maskable mode means that masks specified
+via flow rules are respected. In exact match mode, masks which do not
+provide exact matching (all bits masked) are ignored.
+
+If a flow matches more than one classifier rule, the first matched rule
+(the one with the lowest index) takes precedence.
+
+Flow rules usage example
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before proceeding run testpmd user application:
+
+.. code-block:: console
+
+   ./testpmd --vdev=net_mrvl,iface=eth0,iface=eth2 -c 3 -- -i --p 3 -a --disable-hw-vlan-strip
+
+Example #1
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern eth src is 10:11:12:13:14:15 / end actions drop / end
+
+In this case the key size is 6 bytes, thus the maskable type is selected.
+Testpmd will set the mask to ff:ff:ff:ff:ff:ff, i.e. only traffic exactly
+matching the above rule will be dropped.
+
+Example #2
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern ipv4 src spec 10.10.10.0 src mask 255.255.255.0 / tcp src spec 0x10 src mask 0x10 / end actions drop / end
+
+In this case the key size is 8 bytes, thus the maskable type is selected.
+Flows with an IPv4 source address in the 10.10.10.0 to 10.10.10.255 range
+and a TCP source port with the 0x10 bit set (e.g. port 16) will be dropped.
+
+Example #3
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern vlan vid spec 0x10 vid mask 0x10 / ipv4 src spec 10.10.1.1 src mask 255.255.0.0 dst spec 11.11.11.1 dst mask 255.255.255.0 / end actions drop / end
+
+In this case the key size is 10 bytes, thus the exact type is selected.
+Even though each item has a partial mask set, the masks will be ignored.
+As a result, only flows with the VID set to 16 and the IPv4 source and
+destination addresses set to 10.10.1.1 and 11.11.11.1 respectively will be
+dropped.
+
+Limitations
+~~~~~~~~~~~
+
+The following limitations need to be taken into account when creating flow rules:
+
+* For the IPv4 exact match type the key size must not exceed 12 bytes.
+* For the IPv6 exact match type the key size must not exceed 36 bytes.
+* The following fields cannot be partially masked (all masks are treated as
+  if they were exact):
+
+  * ETH: ethertype
+  * VLAN: PCP, VID
+  * IPv4: protocol
+  * IPv6: next header
+  * TCP/UDP: source port, destination port
+
+* Only one classifier table can be created, thus all rules in the table
+  have to match the table format. The table format is set when the
+  first unique flow rule is created.
+* Up to 5 fields can be specified per flow rule.
+* Up to 20 flow rules can be added.
+
+For additional information about the classifier please consult
+``doc/musdk_cls_user_guide.txt``.
+
 Usage Example
 -------------
 
diff --git a/drivers/net/mrvl/Makefile b/drivers/net/mrvl/Makefile
index f75e53c..8e7079f 100644
--- a/drivers/net/mrvl/Makefile
+++ b/drivers/net/mrvl/Makefile
@@ -64,5 +64,6 @@ LDLIBS += -lrte_bus_vdev
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_qos.c
+SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_flow.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index e42b787..2536ee5 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -687,6 +687,10 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 	mrvl_dev_set_link_down(dev);
 	mrvl_flush_rx_queues(dev);
 	mrvl_flush_tx_shadow_queues(dev);
+	if (priv->cls_tbl) {
+		pp2_cls_tbl_deinit(priv->cls_tbl);
+		priv->cls_tbl = NULL;
+	}
 	if (priv->qos_tbl) {
 		pp2_cls_qos_tbl_deinit(priv->qos_tbl);
 		priv->qos_tbl = NULL;
@@ -812,6 +816,9 @@ mrvl_promiscuous_enable(struct rte_eth_dev *dev)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_promisc(priv->ppio, 1);
 	if (ret)
 		RTE_LOG(ERR, PMD, "Failed to enable promiscuous mode\n");
@@ -832,6 +839,9 @@ mrvl_allmulticast_enable(struct rte_eth_dev *dev)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_mc_promisc(priv->ppio, 1);
 	if (ret)
 		RTE_LOG(ERR, PMD, "Failed enable all-multicast mode\n");
@@ -895,6 +905,9 @@ mrvl_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_remove_mac_addr(priv->ppio,
 				       dev->data->mac_addrs[index].addr_bytes);
 	if (ret) {
@@ -927,6 +940,9 @@ mrvl_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr,
 	char buf[ETHER_ADDR_FMT_SIZE];
 	int ret;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	if (index == 0)
 		/* For setting index 0, mrvl_mac_addr_set() should be used.*/
 		return -1;
@@ -974,6 +990,9 @@ mrvl_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_mac_addr(priv->ppio, mac_addr->addr_bytes);
 	if (ret) {
 		char buf[ETHER_ADDR_FMT_SIZE];
@@ -1255,6 +1274,9 @@ mrvl_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	if (!priv->ppio)
 		return -EPERM;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	return on ? pp2_ppio_add_vlan(priv->ppio, vlan_id) :
 		    pp2_ppio_remove_vlan(priv->ppio, vlan_id);
 }
@@ -1608,6 +1630,9 @@ mrvl_rss_hash_update(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	return mrvl_configure_rss(priv, rss_conf);
 }
 
@@ -1644,6 +1669,39 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * DPDK callback to get rte_flow callbacks.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param filter_type
+ *   Flow filter type.
+ * @param filter_op
+ *   Flow filter operation.
+ * @param arg
+ *   Pointer to pass the flow ops.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
+		     enum rte_filter_type filter_type,
+		     enum rte_filter_op filter_op, void *arg)
+{
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &mrvl_flow_ops;
+		return 0;
+	default:
+		RTE_LOG(WARNING, PMD, "Filter type (%d) not supported\n",
+				filter_type);
+		return -EINVAL;
+	}
+}
+
 static const struct eth_dev_ops mrvl_ops = {
 	.dev_configure = mrvl_dev_configure,
 	.dev_start = mrvl_dev_start,
@@ -1673,6 +1731,7 @@ static const struct eth_dev_ops mrvl_ops = {
 	.tx_queue_release = mrvl_tx_queue_release,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
+	.filter_ctrl = mrvl_eth_filter_ctrl
 };
 
 /**
diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h
index 0d152d6..c09f313 100644
--- a/drivers/net/mrvl/mrvl_ethdev.h
+++ b/drivers/net/mrvl/mrvl_ethdev.h
@@ -36,6 +36,7 @@
 #define _MRVL_ETHDEV_H_
 
 #include <rte_spinlock.h>
+#include <rte_flow_driver.h>
 
 #include <env/mv_autogen_comp_flags.h>
 #include <drivers/mv_pp2.h>
@@ -108,12 +109,21 @@ struct mrvl_priv {
 	uint8_t rss_hf_tcp;
 	uint8_t uc_mc_flushed;
 	uint8_t vlan_flushed;
+	uint8_t isolated;
 
 	struct pp2_ppio_params ppio_params;
 	struct pp2_cls_qos_tbl_params qos_tbl_params;
 	struct pp2_cls_tbl *qos_tbl;
 	uint16_t nb_rx_queues;
+
+	struct pp2_cls_tbl_params cls_tbl_params;
+	struct pp2_cls_tbl *cls_tbl;
+	uint32_t cls_tbl_pattern;
+	LIST_HEAD(mrvl_flows, rte_flow) flows;
+
 	struct pp2_cls_plcr *policer;
 };
 
+/** Flow operations forward declaration. */
+extern const struct rte_flow_ops mrvl_flow_ops;
 #endif /* _MRVL_ETHDEV_H_ */
diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
new file mode 100644
index 0000000..a2c25e6
--- /dev/null
+++ b/drivers/net/mrvl/mrvl_flow.c
@@ -0,0 +1,2787 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2018 Marvell International Ltd.
+ *   Copyright(c) 2018 Semihalf.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of the copyright holder nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include <arpa/inet.h>
+
+#ifdef container_of
+#undef container_of
+#endif
+
+#include "mrvl_ethdev.h"
+#include "mrvl_qos.h"
+#include "env/mv_common.h" /* for BIT() */
+
+/** Number of rules in the classifier table. */
+#define MRVL_CLS_MAX_NUM_RULES 20
+
+/** Size of the classifier key and mask strings. */
+#define MRVL_CLS_STR_SIZE_MAX 40
+
+/** Parsed fields in processed rte_flow_item. */
+enum mrvl_parsed_fields {
+	/* eth flags */
+	F_DMAC =         BIT(0),
+	F_SMAC =         BIT(1),
+	F_TYPE =         BIT(2),
+	/* vlan flags */
+	F_VLAN_ID =      BIT(3),
+	F_VLAN_PRI =     BIT(4),
+	F_VLAN_TCI =     BIT(5), /* not supported by MUSDK yet */
+	/* ip4 flags */
+	F_IP4_TOS =      BIT(6),
+	F_IP4_SIP =      BIT(7),
+	F_IP4_DIP =      BIT(8),
+	F_IP4_PROTO =    BIT(9),
+	/* ip6 flags */
+	F_IP6_TC =       BIT(10), /* not supported by MUSDK yet */
+	F_IP6_SIP =      BIT(11),
+	F_IP6_DIP =      BIT(12),
+	F_IP6_FLOW =     BIT(13),
+	F_IP6_NEXT_HDR = BIT(14),
+	/* tcp flags */
+	F_TCP_SPORT =    BIT(15),
+	F_TCP_DPORT =    BIT(16),
+	/* udp flags */
+	F_UDP_SPORT =    BIT(17),
+	F_UDP_DPORT =    BIT(18),
+};
+
+/** PMD-specific definition of a flow rule handle. */
+struct rte_flow {
+	LIST_ENTRY(rte_flow) next;
+
+	enum mrvl_parsed_fields pattern;
+
+	struct pp2_cls_tbl_rule rule;
+	struct pp2_cls_cos_desc cos;
+	struct pp2_cls_tbl_action action;
+};
+
+static const enum rte_flow_item_type pattern_eth[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan_ip[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4_udp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip_udp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip_udp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_udp[] = {
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+#define MRVL_VLAN_ID_MASK 0x0fff
+#define MRVL_VLAN_PRI_MASK 0x7000
+#define MRVL_IPV4_DSCP_MASK 0xfc
+#define MRVL_IPV4_ADDR_MASK 0xffffffff
+#define MRVL_IPV6_FLOW_MASK 0x0fffff
+
+/**
+ * Given a flow item, return the next non-void one.
+ *
+ * @param items Pointer to the item in the table.
+ * @returns Next non-void item, NULL if none is left.
+ */
+static const struct rte_flow_item *
+mrvl_next_item(const struct rte_flow_item *items)
+{
+	const struct rte_flow_item *item = items;
+
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type != RTE_FLOW_ITEM_TYPE_VOID)
+			return item;
+	}
+
+	return NULL;
+}
+
+/**
+ * Allocate memory for classifier rule key and mask fields.
+ *
+ * @param field Pointer to the classifier rule.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_alloc_key_mask(struct pp2_cls_rule_key_field *field)
+{
+	unsigned int id = rte_socket_id();
+
+	field->key = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
+	if (!field->key)
+		goto out;
+
+	field->mask = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
+	if (!field->mask)
+		goto out_mask;
+
+	return 0;
+out_mask:
+	rte_free(field->key);
+out:
+	field->key = NULL;
+	field->mask = NULL;
+	return -1;
+}
+
+/**
+ * Free memory allocated for classifier rule key and mask fields.
+ *
+ * @param field Pointer to the classifier rule.
+ */
+static void
+mrvl_free_key_mask(struct pp2_cls_rule_key_field *field)
+{
+	rte_free(field->key);
+	rte_free(field->mask);
+	field->key = NULL;
+	field->mask = NULL;
+}
+
+/**
+ * Free memory allocated for all classifier rule key and mask fields.
+ *
+ * @param rule Pointer to the classifier table rule.
+ */
+static void
+mrvl_free_all_key_mask(struct pp2_cls_tbl_rule *rule)
+{
+	int i;
+
+	for (i = 0; i < rule->num_fields; i++)
+		mrvl_free_key_mask(&rule->fields[i]);
+	rule->num_fields = 0;
+}
+
+/**
+ * Initialize rte flow item parsing.
+ *
+ * @param item Pointer to the flow item.
+ * @param spec_ptr Pointer to the specific item pointer.
+ * @param mask_ptr Pointer to the specific item's mask pointer.
+ * @param def_mask Pointer to the default mask.
+ * @param size Size of the flow item.
+ * @param error Pointer to the rte flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_init(const struct rte_flow_item *item,
+		const void **spec_ptr,
+		const void **mask_ptr,
+		const void *def_mask,
+		unsigned int size,
+		struct rte_flow_error *error)
+{
+	const uint8_t *spec;
+	const uint8_t *mask;
+	const uint8_t *last;
+	uint8_t zeros[size];
+
+	memset(zeros, 0, size);
+
+	if (item == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				   "NULL item\n");
+		return -rte_errno;
+	}
+
+	if ((item->last != NULL || item->mask != NULL) && item->spec == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM, item,
+				   "Mask or last is set without spec\n");
+		return -rte_errno;
+	}
+
+	/*
+	 * If "mask" is not set, default mask is used,
+	 * but if default mask is NULL, "mask" should be set.
+	 */
+	if (item->mask == NULL) {
+		if (def_mask == NULL) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Mask should be specified\n");
+			return -rte_errno;
+		}
+
+		mask = (const uint8_t *)def_mask;
+	} else {
+		mask = (const uint8_t *)item->mask;
+	}
+
+	spec = (const uint8_t *)item->spec;
+	last = (const uint8_t *)item->last;
+
+	if (spec == NULL) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Spec should be specified\n");
+		return -rte_errno;
+	}
+
+	/*
+	 * If field values in "last" are either 0 or equal to the corresponding
+	 * values in "spec" then they are ignored.
+	 */
+	if (last != NULL &&
+	    memcmp(last, zeros, size) != 0 &&
+	    memcmp(last, spec, size) != 0) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				   "Ranging is not supported\n");
+		return -rte_errno;
+	}
+
+	*spec_ptr = spec;
+	*mask_ptr = mask;
+
+	return 0;
+}
+
+/**
+ * Parse the eth flow item.
+ *
+ * This will create classifier rule that matches either destination or source
+ * mac.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) mac address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_mac(const struct rte_flow_item_eth *spec,
+	       const struct rte_flow_item_eth *mask,
+	       int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	const uint8_t *k, *m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	if (parse_dst) {
+		k = spec->dst.addr_bytes;
+		m = mask->dst.addr_bytes;
+
+		flow->pattern |= F_DMAC;
+	} else {
+		k = spec->src.addr_bytes;
+		m = mask->src.addr_bytes;
+
+		flow->pattern |= F_SMAC;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 6;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX,
+		 "%02x:%02x:%02x:%02x:%02x:%02x",
+		 k[0], k[1], k[2], k[3], k[4], k[5]);
+
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX,
+		 "%02x:%02x:%02x:%02x:%02x:%02x",
+		 m[0], m[1], m[2], m[3], m[4], m[5]);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the eth flow item destination mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_dmac(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask,
+		struct rte_flow *flow)
+{
+	return mrvl_parse_mac(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing the eth flow item source mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_smac(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask,
+		struct rte_flow *flow)
+{
+	return mrvl_parse_mac(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the ether type field of the eth flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_type(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask __rte_unused,
+		struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	k = rte_be_to_cpu_16(spec->type);
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_TYPE;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the vid field of the vlan rte flow item.
+ *
+ * This will create classifier rule that matches vid.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
+		   const struct rte_flow_item_vlan *mask __rte_unused,
+		   struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_VLAN_ID;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the pri field of the vlan rte flow item.
+ *
+ * This will create classifier rule that matches pri.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
+		    const struct rte_flow_item_vlan *mask __rte_unused,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_VLAN_PRI;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the dscp field of the ipv4 rte flow item.
+ *
+ * This will create classifier rule that matches dscp field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_dscp(const struct rte_flow_item_ipv4 *spec,
+		    const struct rte_flow_item_ipv4 *mask,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k, m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	k = (spec->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2;
+	m = (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m);
+
+	flow->pattern |= F_IP4_TOS;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse either source or destination ip addresses of the ipv4 flow item.
+ *
+ * This will create classifier rule that matches either destination
+ * or source ip field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) ip address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_addr(const struct rte_flow_item_ipv4 *spec,
+		    const struct rte_flow_item_ipv4 *mask,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	struct in_addr k;
+	uint32_t m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	memset(&k, 0, sizeof(k));
+	if (parse_dst) {
+		k.s_addr = spec->hdr.dst_addr;
+		m = rte_be_to_cpu_32(mask->hdr.dst_addr);
+
+		flow->pattern |= F_IP4_DIP;
+	} else {
+		k.s_addr = spec->hdr.src_addr;
+		m = rte_be_to_cpu_32(mask->hdr.src_addr);
+
+		flow->pattern |= F_IP4_SIP;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 4;
+
+	inet_ntop(AF_INET, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "0x%x", m);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing destination ip of the ipv4 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip4_dip(const struct rte_flow_item_ipv4 *spec,
+		   const struct rte_flow_item_ipv4 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip4_addr(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing source ip of the ipv4 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip4_sip(const struct rte_flow_item_ipv4 *spec,
+		   const struct rte_flow_item_ipv4 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip4_addr(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the proto field of the ipv4 rte flow item.
+ *
+ * This will create classifier rule that matches proto field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_proto(const struct rte_flow_item_ipv4 *spec,
+		     const struct rte_flow_item_ipv4 *mask __rte_unused,
+		     struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k = spec->hdr.next_proto_id;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_IP4_PROTO;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse either source or destination ip addresses of the ipv6 rte flow item.
+ *
+ * This will create classifier rule that matches either destination
+ * or source ip field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) ip address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_addr(const struct rte_flow_item_ipv6 *spec,
+		    const struct rte_flow_item_ipv6 *mask,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	int size = sizeof(spec->hdr.dst_addr);
+	struct in6_addr k, m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	memset(&k, 0, sizeof(k));
+	if (parse_dst) {
+		memcpy(k.s6_addr, spec->hdr.dst_addr, size);
+		memcpy(m.s6_addr, mask->hdr.dst_addr, size);
+
+		flow->pattern |= F_IP6_DIP;
+	} else {
+		memcpy(k.s6_addr, spec->hdr.src_addr, size);
+		memcpy(m.s6_addr, mask->hdr.src_addr, size);
+
+		flow->pattern |= F_IP6_SIP;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 16;
+
+	inet_ntop(AF_INET6, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX);
+	inet_ntop(AF_INET6, &m, (char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing destination ip of the ipv6 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip6_dip(const struct rte_flow_item_ipv6 *spec,
+		   const struct rte_flow_item_ipv6 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip6_addr(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing source ip of the ipv6 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip6_sip(const struct rte_flow_item_ipv6 *spec,
+		   const struct rte_flow_item_ipv6 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip6_addr(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the flow label of the ipv6 flow item.
+ *
+ * This will create classifier rule that matches flow field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_flow(const struct rte_flow_item_ipv6 *spec,
+		    const struct rte_flow_item_ipv6 *mask,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint32_t k = rte_be_to_cpu_32(spec->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK,
+		 m = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 3;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m);
+
+	flow->pattern |= F_IP6_FLOW;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the next header of the ipv6 flow item.
+ *
+ * This will create classifier rule that matches next header field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_next_hdr(const struct rte_flow_item_ipv6 *spec,
+			const struct rte_flow_item_ipv6 *mask __rte_unused,
+			struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k = spec->hdr.proto;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_IP6_NEXT_HDR;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse destination or source port of the tcp flow item.
+ *
+ * This will create classifier rule that matches either destination or
+ * source tcp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) port.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_tcp_port(const struct rte_flow_item_tcp *spec,
+		    const struct rte_flow_item_tcp *mask __rte_unused,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	if (parse_dst) {
+		k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+		flow->pattern |= F_TCP_DPORT;
+	} else {
+		k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+		flow->pattern |= F_TCP_SPORT;
+	}
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the tcp source port of the tcp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_sport(const struct rte_flow_item_tcp *spec,
+		     const struct rte_flow_item_tcp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_tcp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the tcp destination port of the tcp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_dport(const struct rte_flow_item_tcp *spec,
+		     const struct rte_flow_item_tcp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_tcp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse destination or source port of the udp flow item.
+ *
+ * This will create a classifier rule that matches either the destination
+ * or source udp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_udp_port(const struct rte_flow_item_udp *spec,
+		    const struct rte_flow_item_udp *mask __rte_unused,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	if (parse_dst) {
+		k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+		flow->pattern |= F_UDP_DPORT;
+	} else {
+		k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+		flow->pattern |= F_UDP_SPORT;
+	}
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the udp source port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_sport(const struct rte_flow_item_udp *spec,
+		     const struct rte_flow_item_udp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_udp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the udp destination port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_dport(const struct rte_flow_item_udp *spec,
+		     const struct rte_flow_item_udp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_udp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse eth flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_eth *spec = NULL, *mask = NULL;
+	struct ether_addr zero;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_eth_mask,
+			      sizeof(struct rte_flow_item_eth), error);
+	if (ret)
+		return ret;
+
+	memset(&zero, 0, sizeof(zero));
+
+	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+		ret = mrvl_parse_dmac(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+		ret = mrvl_parse_smac(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->type) {
+		RTE_LOG(WARNING, PMD, "eth type mask is ignored\n");
+		ret = mrvl_parse_type(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse vlan flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_vlan(const struct rte_flow_item *item,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_vlan *spec = NULL, *mask = NULL;
+	uint16_t m;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_vlan_mask,
+			      sizeof(struct rte_flow_item_vlan), error);
+	if (ret)
+		return ret;
+
+	if (mask->tpid) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	m = rte_be_to_cpu_16(mask->tci);
+	if (m & MRVL_VLAN_ID_MASK) {
+		RTE_LOG(WARNING, PMD, "vlan id mask is ignored\n");
+		ret = mrvl_parse_vlan_id(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (m & MRVL_VLAN_PRI_MASK) {
+		RTE_LOG(WARNING, PMD, "vlan pri mask is ignored\n");
+		ret = mrvl_parse_vlan_pri(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse ipv4 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip4(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_ipv4_mask,
+			      sizeof(struct rte_flow_item_ipv4), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.version_ihl ||
+	    mask->hdr.total_length ||
+	    mask->hdr.packet_id ||
+	    mask->hdr.fragment_offset ||
+	    mask->hdr.time_to_live ||
+	    mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) {
+		ret = mrvl_parse_ip4_dscp(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.src_addr) {
+		ret = mrvl_parse_ip4_sip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_addr) {
+		ret = mrvl_parse_ip4_dip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.next_proto_id) {
+		RTE_LOG(WARNING, PMD, "next proto id mask is ignored\n");
+		ret = mrvl_parse_ip4_proto(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse ipv6 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip6(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *spec = NULL, *mask = NULL;
+	struct ipv6_hdr zero;
+	uint32_t flow_mask;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec,
+			      (const void **)&mask,
+			      &rte_flow_item_ipv6_mask,
+			      sizeof(struct rte_flow_item_ipv6),
+			      error);
+	if (ret)
+		return ret;
+
+	memset(&zero, 0, sizeof(zero));
+
+	if (mask->hdr.payload_len ||
+	    mask->hdr.hop_limits) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (memcmp(mask->hdr.src_addr,
+		   zero.src_addr, sizeof(mask->hdr.src_addr))) {
+		ret = mrvl_parse_ip6_sip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (memcmp(mask->hdr.dst_addr,
+		   zero.dst_addr, sizeof(mask->hdr.dst_addr))) {
+		ret = mrvl_parse_ip6_dip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	flow_mask = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+	if (flow_mask) {
+		ret = mrvl_parse_ip6_flow(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.proto) {
+		RTE_LOG(WARNING, PMD, "next header mask is ignored\n");
+		ret = mrvl_parse_ip6_next_hdr(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse tcp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_tcp(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_tcp *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_tcp_mask,
+			      sizeof(struct rte_flow_item_tcp), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.sent_seq ||
+	    mask->hdr.recv_ack ||
+	    mask->hdr.data_off ||
+	    mask->hdr.tcp_flags ||
+	    mask->hdr.rx_win ||
+	    mask->hdr.cksum ||
+	    mask->hdr.tcp_urp) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.src_port) {
+		RTE_LOG(WARNING, PMD, "tcp sport mask is ignored\n");
+		ret = mrvl_parse_tcp_sport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_port) {
+		RTE_LOG(WARNING, PMD, "tcp dport mask is ignored\n");
+		ret = mrvl_parse_tcp_dport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse udp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_udp(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_udp_mask,
+			      sizeof(struct rte_flow_item_udp), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.dgram_len ||
+	    mask->hdr.dgram_cksum) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.src_port) {
+		RTE_LOG(WARNING, PMD, "udp sport mask is ignored\n");
+		ret = mrvl_parse_udp_sport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_port) {
+		RTE_LOG(WARNING, PMD, "udp dport mask is ignored\n");
+		ret = mrvl_parse_udp_dport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse flow pattern composed of the eth item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_eth(pattern, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and vlan items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_vlan(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip4_ip6(const struct rte_flow_item pattern[],
+				    struct rte_flow *flow,
+				    struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	ret = mrvl_parse_vlan(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip4(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip6(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip4_ip6(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ip4 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip4_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip6_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan(const struct rte_flow_item pattern[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_vlan(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip4_ip6(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_vlan(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip4(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip6_tcp_udp(const struct rte_flow_item pattern[],
+				    struct rte_flow *flow,
+				    struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6_tcp(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6_udp(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ip4/ip6 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the ip4/ip6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = ip6 ? mrvl_parse_ip6(item, flow, error) :
+		    mrvl_parse_ip4(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_tcp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4/ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = ip6 ? mrvl_parse_ip6(item, flow, error) :
+		    mrvl_parse_ip4(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the tcp item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_tcp(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_tcp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the udp item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_udp(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Structure used to map specific flow pattern to the pattern parse callback
+ * which will iterate over each pattern item and extract relevant data.
+ */
+static const struct {
+	const enum rte_flow_item_type *pattern;
+	int (*parse)(const struct rte_flow_item pattern[],
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
+} mrvl_patterns[] = {
+	{ pattern_eth, mrvl_parse_pattern_eth },
+	{ pattern_eth_vlan, mrvl_parse_pattern_eth_vlan },
+	{ pattern_eth_vlan_ip, mrvl_parse_pattern_eth_vlan_ip4 },
+	{ pattern_eth_vlan_ip6, mrvl_parse_pattern_eth_vlan_ip6 },
+	{ pattern_eth_ip4, mrvl_parse_pattern_eth_ip4 },
+	{ pattern_eth_ip4_tcp, mrvl_parse_pattern_eth_ip4_tcp },
+	{ pattern_eth_ip4_udp, mrvl_parse_pattern_eth_ip4_udp },
+	{ pattern_eth_ip6, mrvl_parse_pattern_eth_ip6 },
+	{ pattern_eth_ip6_tcp, mrvl_parse_pattern_eth_ip6_tcp },
+	{ pattern_eth_ip6_udp, mrvl_parse_pattern_eth_ip6_udp },
+	{ pattern_vlan, mrvl_parse_pattern_vlan },
+	{ pattern_vlan_ip, mrvl_parse_pattern_vlan_ip4 },
+	{ pattern_vlan_ip_tcp, mrvl_parse_pattern_vlan_ip_tcp },
+	{ pattern_vlan_ip_udp, mrvl_parse_pattern_vlan_ip_udp },
+	{ pattern_vlan_ip6, mrvl_parse_pattern_vlan_ip6 },
+	{ pattern_vlan_ip6_tcp, mrvl_parse_pattern_vlan_ip6_tcp },
+	{ pattern_vlan_ip6_udp, mrvl_parse_pattern_vlan_ip6_udp },
+	{ pattern_ip, mrvl_parse_pattern_ip4 },
+	{ pattern_ip_tcp, mrvl_parse_pattern_ip4_tcp },
+	{ pattern_ip_udp, mrvl_parse_pattern_ip4_udp },
+	{ pattern_ip6, mrvl_parse_pattern_ip6 },
+	{ pattern_ip6_tcp, mrvl_parse_pattern_ip6_tcp },
+	{ pattern_ip6_udp, mrvl_parse_pattern_ip6_udp },
+	{ pattern_tcp, mrvl_parse_pattern_tcp },
+	{ pattern_udp, mrvl_parse_pattern_udp }
+};
+
+/**
+ * Check whether provided pattern matches any of the supported ones.
+ *
+ * @param type_pattern Pointer to the pattern type.
+ * @param item_pattern Pointer to the flow pattern.
+ * @returns 1 if the patterns match, 0 otherwise.
+ */
+static int
+mrvl_patterns_match(const enum rte_flow_item_type *type_pattern,
+		    const struct rte_flow_item *item_pattern)
+{
+	const enum rte_flow_item_type *type = type_pattern;
+	const struct rte_flow_item *item = item_pattern;
+
+	for (;;) {
+		if (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+			item++;
+			continue;
+		}
+
+		if (*type == RTE_FLOW_ITEM_TYPE_END ||
+		    item->type == RTE_FLOW_ITEM_TYPE_END)
+			break;
+
+		if (*type != item->type)
+			break;
+
+		item++;
+		type++;
+	}
+
+	return *type == item->type;
+}
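
For reference, the lockstep walk above can be exercised in isolation. Below is a minimal sketch of the same matching loop using hypothetical stand-in item types (the `ITEM_*` enum and helper names are illustrative, not the actual rte_flow definitions):

```c
#include <assert.h>

/* Hypothetical stand-ins for the rte_flow item types used above. */
enum item_type { ITEM_END, ITEM_VOID, ITEM_ETH, ITEM_IPV4, ITEM_TCP };

/*
 * Walk a supported-pattern template and a user pattern in lockstep,
 * skipping VOID items in the user pattern; report a match only when
 * both sequences terminate at the same point.
 */
static int patterns_match(const enum item_type *type,
			  const enum item_type *item)
{
	for (;;) {
		if (*item == ITEM_VOID) {
			item++;
			continue;
		}
		if (*type == ITEM_END || *item == ITEM_END)
			break;
		if (*type != *item)
			break;
		item++;
		type++;
	}
	return *type == *item;
}

static const enum item_type tmpl_eth_ip4[] = {
	ITEM_ETH, ITEM_IPV4, ITEM_END
};
static const enum item_type flow_with_void[] = {
	ITEM_ETH, ITEM_VOID, ITEM_IPV4, ITEM_END
};
static const enum item_type flow_mismatch[] = {
	ITEM_ETH, ITEM_TCP, ITEM_END
};
```

Note that VOID items are transparent, so `ETH, VOID, IPV4` still matches the `ETH, IPV4` template.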
+
+/**
+ * Parse flow attribute.
+ *
+ * This will check whether the provided attribute's flags are supported.
+ *
+ * @param priv Unused
+ * @param attr Pointer to the flow attribute.
+ * @param flow Unused
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_attr(struct mrvl_priv *priv __rte_unused,
+		     const struct rte_flow_attr *attr,
+		     struct rte_flow *flow __rte_unused,
+		     struct rte_flow_error *error)
+{
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute");
+		return -rte_errno;
+	}
+
+	if (attr->group) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP, NULL,
+				   "Groups are not supported");
+		return -rte_errno;
+	}
+	if (attr->priority) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL,
+				   "Priorities are not supported");
+		return -rte_errno;
+	}
+	if (!attr->ingress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, NULL,
+				   "Only ingress is supported");
+		return -rte_errno;
+	}
+	if (attr->egress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+				   "Egress is not supported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse flow pattern.
+ *
+ * Specific classifier rule will be created as well.
+ *
+ * @param priv Unused
+ * @param pattern Pointer to the flow pattern.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_pattern(struct mrvl_priv *priv __rte_unused,
+			const struct rte_flow_item pattern[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < RTE_DIM(mrvl_patterns); i++) {
+		if (!mrvl_patterns_match(mrvl_patterns[i].pattern, pattern))
+			continue;
+
+		ret = mrvl_patterns[i].parse(pattern, flow, error);
+		if (ret)
+			mrvl_free_all_key_mask(&flow->rule);
+
+		return ret;
+	}
+
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			   "Unsupported pattern");
+
+	return -rte_errno;
+}
+
+/**
+ * Parse flow actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param actions Pointer to the action table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_actions(struct mrvl_priv *priv,
+			const struct rte_flow_action actions[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_action *action = actions;
+	int specified = 0;
+
+	for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+		if (action->type == RTE_FLOW_ACTION_TYPE_VOID)
+			continue;
+
+		if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			flow->cos.ppio = priv->ppio;
+			flow->cos.tc = 0;
+			flow->action.type = PP2_CLS_TBL_ACT_DROP;
+			flow->action.cos = &flow->cos;
+			specified++;
+		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			const struct rte_flow_action_queue *q =
+				(const struct rte_flow_action_queue *)
+				action->conf;
+
+			if (q->index >= priv->nb_rx_queues) {
+				rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ACTION,
+						NULL,
+						"Queue index out of range");
+				return -rte_errno;
+			}
+
+			if (priv->rxq_map[q->index].tc == MRVL_UNKNOWN_TC) {
+				/*
+				 * Unknown TC mapping, mapping will not have
+				 * a correct queue.
+				 */
+				RTE_LOG(ERR, PMD,
+					"Unknown TC mapping for queue %hu eth%hhu\n",
+					q->index, priv->ppio_id);
+
+				rte_flow_error_set(error, EFAULT,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						NULL, NULL);
+				return -rte_errno;
+			}
+
+			RTE_LOG(DEBUG, PMD,
+				"Action: Assign packets to queue %d, tc:%d, q:%d\n",
+				q->index, priv->rxq_map[q->index].tc,
+				priv->rxq_map[q->index].inq);
+
+			flow->cos.ppio = priv->ppio;
+			flow->cos.tc = priv->rxq_map[q->index].tc;
+			flow->action.type = PP2_CLS_TBL_ACT_DONE;
+			flow->action.cos = &flow->cos;
+			specified++;
+		} else {
+			rte_flow_error_set(error, ENOTSUP,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "Action not supported");
+			return -rte_errno;
+		}
+
+	}
+
+	if (!specified) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Action not specified");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse flow attribute, pattern and actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse(struct mrvl_priv *priv, const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = mrvl_flow_parse_attr(priv, attr, flow, error);
+	if (ret)
+		return ret;
+
+	ret = mrvl_flow_parse_pattern(priv, pattern, flow, error);
+	if (ret)
+		return ret;
+
+	return mrvl_flow_parse_actions(priv, actions, flow, error);
+}
+
+static inline enum pp2_cls_tbl_type
+mrvl_engine_type(const struct rte_flow *flow)
+{
+	int i, size = 0;
+
+	for (i = 0; i < flow->rule.num_fields; i++)
+		size += flow->rule.fields[i].size;
+
+	/*
+	 * For maskable engine type the key size must be up to 8 bytes.
+	 * For keys with size bigger than 8 bytes, engine type must
+	 * be set to exact match.
+	 */
+	if (size > 8)
+		return PP2_CLS_TBL_EXACT_MATCH;
+
+	return PP2_CLS_TBL_MASKABLE;
+}
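
The selection rule above reduces to a key-size threshold. A minimal sketch (constant and function names are illustrative, not the pp2 API):

```c
#include <assert.h>

#define TBL_MASKABLE	0
#define TBL_EXACT_MATCH	1

/*
 * Engine selection rule from mrvl_engine_type(): the maskable engine
 * only supports keys of up to 8 bytes; longer keys need exact match.
 */
static int engine_type(const unsigned int *field_sizes, int num_fields)
{
	int i, size = 0;

	for (i = 0; i < num_fields; i++)
		size += field_sizes[i];

	return size > 8 ? TBL_EXACT_MATCH : TBL_MASKABLE;
}

/* 6-byte MAC DA + 2-byte ethertype: exactly 8 bytes, still maskable. */
static const unsigned int key_da_type[] = { 6, 2 };
/* 4-byte IPv4 SA + 4-byte IPv4 DA + 2-byte L4 port: 10 bytes. */
static const unsigned int key_ip4_port[] = { 4, 4, 2 };
```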
+
+static int
+mrvl_create_cls_table(struct rte_eth_dev *dev, struct rte_flow *first_flow)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_cls_tbl_key *key = &priv->cls_tbl_params.key;
+	int ret;
+
+	if (priv->cls_tbl) {
+		pp2_cls_tbl_deinit(priv->cls_tbl);
+		priv->cls_tbl = NULL;
+	}
+
+	memset(&priv->cls_tbl_params, 0, sizeof(priv->cls_tbl_params));
+
+	priv->cls_tbl_params.type = mrvl_engine_type(first_flow);
+	RTE_LOG(INFO, PMD, "Setting cls search engine type to %s\n",
+			priv->cls_tbl_params.type == PP2_CLS_TBL_EXACT_MATCH ?
+			"exact" : "maskable");
+	priv->cls_tbl_params.max_num_rules = MRVL_CLS_MAX_NUM_RULES;
+	priv->cls_tbl_params.default_act.type = PP2_CLS_TBL_ACT_DONE;
+	priv->cls_tbl_params.default_act.cos = &first_flow->cos;
+
+	if (first_flow->pattern & F_DMAC) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_DA;
+		key->key_size += 6;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_SMAC) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_SA;
+		key->key_size += 6;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TYPE) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_TYPE;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_VLAN_ID) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN;
+		key->proto_field[key->num_fields].field.vlan = MV_NET_VLAN_F_ID;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_VLAN_PRI) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN;
+		key->proto_field[key->num_fields].field.vlan =
+			MV_NET_VLAN_F_PRI;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_TOS) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_TOS;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_SIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_SA;
+		key->key_size += 4;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_DIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_DA;
+		key->key_size += 4;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_PROTO) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 =
+			MV_NET_IP4_F_PROTO;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_SIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_SA;
+		key->key_size += 16;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_DIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_DA;
+		key->key_size += 16;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_FLOW) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 =
+			MV_NET_IP6_F_FLOW;
+		key->key_size += 3;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_NEXT_HDR) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 =
+			MV_NET_IP6_F_NEXT_HDR;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TCP_SPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP;
+		key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_SP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TCP_DPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP;
+		key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_DP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_UDP_SPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+		key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_SP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_UDP_DPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+		key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_DP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	ret = pp2_cls_tbl_init(&priv->cls_tbl_params, &priv->cls_tbl);
+	if (!ret)
+		priv->cls_tbl_pattern = first_flow->pattern;
+
+	return ret;
+}
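
The per-field `key_size` accumulation above can be sketched compactly; each pattern bit contributes the byte width of its header field. The `F_*` values and helper below are hypothetical stand-ins for a few of the driver's flags:

```c
#include <assert.h>

/* Hypothetical stand-ins for a few of the F_* pattern bits used above. */
#define F_DMAC    (1u << 0)	/* 6-byte destination MAC */
#define F_TYPE    (1u << 1)	/* 2-byte ethertype */
#define F_IP4_SIP (1u << 2)	/* 4-byte IPv4 source address */
#define F_IP6_SIP (1u << 3)	/* 16-byte IPv6 source address */

/*
 * Mirror the key_size accumulation in mrvl_create_cls_table(): each
 * set pattern bit adds the byte width of the matched header field.
 */
static unsigned int cls_key_size(unsigned int pattern)
{
	unsigned int size = 0;

	if (pattern & F_DMAC)
		size += 6;
	if (pattern & F_TYPE)
		size += 2;
	if (pattern & F_IP4_SIP)
		size += 4;
	if (pattern & F_IP6_SIP)
		size += 16;

	return size;
}
```

Combined with the engine rule, a DMAC+ethertype key (8 bytes) still fits the maskable engine, while any key containing an IPv6 address forces exact match.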
+
+/**
+ * Check whether new flow can be added to the table
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the new flow.
+ * @return 1 in case flow can be added, 0 otherwise.
+ */
+static inline int
+mrvl_flow_can_be_added(struct mrvl_priv *priv, const struct rte_flow *flow)
+{
+	return flow->pattern == priv->cls_tbl_pattern &&
+	       mrvl_engine_type(flow) == priv->cls_tbl_params.type;
+}
+
+/**
+ * DPDK flow create callback called when flow is to be created.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns Pointer to the created flow in case of success, NULL otherwise.
+ */
+static struct rte_flow *
+mrvl_flow_create(struct rte_eth_dev *dev,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *flow, *first;
+	int ret;
+
+	if (!dev->data->dev_started) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Port must be started first\n");
+		return NULL;
+	}
+
+	flow = rte_zmalloc_socket(NULL, sizeof(*flow), 0, rte_socket_id());
+	if (!flow)
+		return NULL;
+
+	ret = mrvl_flow_parse(priv, attr, pattern, actions, flow, error);
+	if (ret)
+		goto out;
+
+	/*
+	 * Four cases here:
+	 *
+	 * 1. In case table does not exist - create one.
+	 * 2. In case table exists, is empty and new flow cannot be added
+	 *    recreate table.
+	 * 3. In case table is not empty and new flow matches table format
+	 *    add it.
+	 * 4. Otherwise flow cannot be added.
+	 */
+	first = LIST_FIRST(&priv->flows);
+	if (!priv->cls_tbl) {
+		ret = mrvl_create_cls_table(dev, flow);
+	} else if (!first && !mrvl_flow_can_be_added(priv, flow)) {
+		ret = mrvl_create_cls_table(dev, flow);
+	} else if (mrvl_flow_can_be_added(priv, flow)) {
+		ret = 0;
+	} else {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Pattern does not match cls table format\n");
+		goto out;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to create cls table\n");
+		goto out;
+	}
+
+	ret = pp2_cls_tbl_add_rule(priv->cls_tbl, &flow->rule, &flow->action);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to add rule\n");
+		goto out;
+	}
+
+	LIST_INSERT_HEAD(&priv->flows, flow, next);
+
+	return flow;
+out:
+	rte_free(flow);
+	return NULL;
+}
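
The four-case comment in the function above boils down to a small decision table. A sketch with illustrative names (not part of the driver):

```c
#include <assert.h>

enum tbl_action { TBL_CREATE, TBL_RECREATE, TBL_ADD, TBL_REJECT };

/*
 * Decision table from mrvl_flow_create(): whether a new flow triggers
 * classifier table creation, a rebuild, a plain rule add, or rejection.
 */
static enum tbl_action table_action(int tbl_exists, int tbl_empty,
				    int flow_matches_tbl)
{
	if (!tbl_exists)
		return TBL_CREATE;	/* case 1: no table yet */
	if (tbl_empty && !flow_matches_tbl)
		return TBL_RECREATE;	/* case 2: empty table, new format */
	if (flow_matches_tbl)
		return TBL_ADD;		/* case 3: compatible flow */
	return TBL_REJECT;		/* case 4: incompatible flow */
}
```

A flow "matches" when its pattern bitmap and engine type equal the table's, per mrvl_flow_can_be_added().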
+
+/**
+ * Remove classifier rule associated with given flow.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_remove(struct mrvl_priv *priv, struct rte_flow *flow,
+		 struct rte_flow_error *error)
+{
+	int ret;
+
+	if (!priv->cls_tbl) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Classifier table not initialized");
+		return -rte_errno;
+	}
+
+	ret = pp2_cls_tbl_remove_rule(priv->cls_tbl, &flow->rule);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to remove rule");
+		return -rte_errno;
+	}
+
+	mrvl_free_all_key_mask(&flow->rule);
+
+	return 0;
+}
+
+/**
+ * DPDK flow destroy callback called when flow is to be removed.
+ *
+ * @param dev Pointer to the device.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *f;
+	int ret;
+
+	LIST_FOREACH(f, &priv->flows, next) {
+		if (f == flow)
+			break;
+	}
+
+	if (!f) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Rule was not found");
+		return -rte_errno;
+	}
+
+	LIST_REMOVE(f, next);
+
+	ret = mrvl_flow_remove(priv, flow, error);
+	if (ret)
+		return ret;
+
+	rte_free(flow);
+
+	return 0;
+}
+
+/**
+ * DPDK flow callback called to verify given attribute, pattern and actions.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct rte_flow *flow;
+
+	flow = mrvl_flow_create(dev, attr, pattern, actions, error);
+	if (!flow)
+		return -rte_errno;
+
+	mrvl_flow_destroy(dev, flow, error);
+
+	return 0;
+}
+
+/**
+ * DPDK flow flush callback called when flows are to be flushed.
+ *
+ * @param dev Pointer to the device.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	while (!LIST_EMPTY(&priv->flows)) {
+		struct rte_flow *flow = LIST_FIRST(&priv->flows);
+		int ret = mrvl_flow_remove(priv, flow, error);
+		if (ret)
+			return ret;
+
+		LIST_REMOVE(flow, next);
+		rte_free(flow);
+	}
+
+	return 0;
+}
+
+/**
+ * DPDK flow isolate callback called to isolate port.
+ *
+ * @param dev Pointer to the device.
+ * @param enable Pass 0/1 to disable/enable port isolation.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_isolate(struct rte_eth_dev *dev, int enable,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (dev->data->dev_started) {
+		rte_flow_error_set(error, EBUSY,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Port must be stopped first\n");
+		return -rte_errno;
+	}
+
+	priv->isolated = enable;
+
+	return 0;
+}
+
+const struct rte_flow_ops mrvl_flow_ops = {
+	.validate = mrvl_flow_validate,
+	.create = mrvl_flow_create,
+	.destroy = mrvl_flow_destroy,
+	.flush = mrvl_flow_flush,
+	.isolate = mrvl_flow_isolate
+};
-- 
2.7.4

* [PATCH 6/8] net/mrvl: add extended statistics
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                   ` (4 preceding siblings ...)
  2018-02-21 14:14 ` [PATCH 5/8] net/mrvl: add classifier support Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 7/8] net/mrvl: add Rx flow control Tomasz Duszynski
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add extended statistics implementation.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/features/mrvl.ini |   1 +
 doc/guides/nics/mrvl.rst          |   1 +
 drivers/net/mrvl/mrvl_ethdev.c    | 205 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 206 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
index 00d9621..120fd4d 100644
--- a/doc/guides/nics/features/mrvl.ini
+++ b/doc/guides/nics/features/mrvl.ini
@@ -19,5 +19,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+Extended stats       = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 9230d5e..7678265 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -70,6 +70,7 @@ Features of the MRVL PMD are:
 - L4 checksum offload
 - Packet type parsing
 - Basic stats
+- Extended stats
 - QoS
 
 
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 2536ee5..bccd2d0 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -173,6 +173,32 @@ static inline void mrvl_free_sent_buffers(struct pp2_ppio *ppio,
 			struct pp2_hif *hif, unsigned int core_id,
 			struct mrvl_shadow_txq *sq, int qid, int force);
 
+#define MRVL_XSTATS_TBL_ENTRY(name) { \
+	#name, offsetof(struct pp2_ppio_statistics, name),	\
+	sizeof(((struct pp2_ppio_statistics *)0)->name)		\
+}
+
+/* Table with xstats data */
+static struct {
+	const char *name;
+	unsigned int offset;
+	unsigned int size;
+} mrvl_xstats_tbl[] = {
+	MRVL_XSTATS_TBL_ENTRY(rx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(rx_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_errors),
+	MRVL_XSTATS_TBL_ENTRY(rx_fullq_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_bm_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_early_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_fifo_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_cls_dropped),
+	MRVL_XSTATS_TBL_ENTRY(tx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(tx_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_errors)
+};
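
The offsetof/sizeof table technique above lets one loop read heterogeneously sized counters generically. A self-contained sketch against a cut-down stand-in struct (names are illustrative, not the real pp2 layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Cut-down stand-in for struct pp2_ppio_statistics. */
struct ppio_stats {
	uint64_t rx_bytes;
	uint64_t rx_packets;
	uint32_t rx_errors;
};

/* Same offsetof/sizeof trick as MRVL_XSTATS_TBL_ENTRY above. */
#define XSTATS_ENTRY(name) { \
	#name, offsetof(struct ppio_stats, name), \
	sizeof(((struct ppio_stats *)0)->name) \
}

static const struct {
	const char *name;
	unsigned int offset;
	unsigned int size;
} xstats_tbl[] = {
	XSTATS_ENTRY(rx_bytes),
	XSTATS_ENTRY(rx_packets),
	XSTATS_ENTRY(rx_errors),
};

/* Read entry i generically, widening 32-bit counters to 64 bits. */
static uint64_t xstat_read(const struct ppio_stats *s, unsigned int i)
{
	const uint8_t *base = (const uint8_t *)s;

	if (xstats_tbl[i].size == sizeof(uint32_t))
		return *(const uint32_t *)(const void *)
			(base + xstats_tbl[i].offset);
	return *(const uint64_t *)(const void *)
		(base + xstats_tbl[i].offset);
}

static const struct ppio_stats sample = {
	.rx_bytes = 1500, .rx_packets = 10, .rx_errors = 2
};
```

This is the same dispatch mrvl_xstats_get() performs per entry, keyed on the recorded field size.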
+
 static inline int
 mrvl_get_bpool_size(int pp2_id, int pool_id)
 {
@@ -1138,6 +1164,90 @@ mrvl_stats_reset(struct rte_eth_dev *dev)
 }
 
 /**
+ * DPDK callback to get extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param stats
+ *   Pointer to xstats table.
+ * @param n
+ *   Number of entries in xstats table.
+ * @return
+ *   Negative value on error, number of read xstats otherwise.
+ */
+static int
+mrvl_xstats_get(struct rte_eth_dev *dev,
+		struct rte_eth_xstat *stats, unsigned int n)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_ppio_statistics ppio_stats;
+	unsigned int i;
+
+	if (!stats)
+		return 0;
+
+	pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0);
+	for (i = 0; i < n && i < RTE_DIM(mrvl_xstats_tbl); i++) {
+		uint64_t val;
+
+		if (mrvl_xstats_tbl[i].size == sizeof(uint32_t))
+			val = *(uint32_t *)((uint8_t *)&ppio_stats +
+					    mrvl_xstats_tbl[i].offset);
+		else if (mrvl_xstats_tbl[i].size == sizeof(uint64_t))
+			val = *(uint64_t *)((uint8_t *)&ppio_stats +
+					    mrvl_xstats_tbl[i].offset);
+		else
+			return -EINVAL;
+
+		stats[i].id = i;
+		stats[i].value = val;
+	}
+
+	return n;
+}
+
+/**
+ * DPDK callback to reset extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */
+static void
+mrvl_xstats_reset(struct rte_eth_dev *dev)
+{
+	mrvl_stats_reset(dev);
+}
+
+/**
+ * DPDK callback to get extended statistics names.
+ *
+ * @param dev (unused)
+ *   Pointer to Ethernet device structure.
+ * @param xstats_names
+ *   Pointer to xstats names table.
+ * @param size
+ *   Size of the xstats names table.
+ * @return
+ *   Number of read names.
+ */
+static int
+mrvl_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+		      struct rte_eth_xstat_name *xstats_names,
+		      unsigned int size)
+{
+	unsigned int i;
+
+	if (!xstats_names)
+		return RTE_DIM(mrvl_xstats_tbl);
+
+	for (i = 0; i < size && i < RTE_DIM(mrvl_xstats_tbl); i++)
+		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE, "%s",
+			 mrvl_xstats_tbl[i].name);
+
+	return size;
+}
+
+/**
  * DPDK callback to get information about the device.
  *
  * @param dev
@@ -1702,6 +1812,94 @@ mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
 	}
 }
 
+/**
+ * DPDK callback to get xstats by id.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param ids
+ *   Pointer to the ids table.
+ * @param values
+ *   Pointer to the values table.
+ * @param n
+ *   Values table size.
+ * @returns
+ *   Number of read values, negative value otherwise.
+ */
+static int
+mrvl_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+		      uint64_t *values, unsigned int n)
+{
+	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
+	uint64_t vals[num];
+	int ret;
+
+	if (!ids) {
+		struct rte_eth_xstat xstats[num];
+		int j;
+
+		ret = mrvl_xstats_get(dev, xstats, num);
+		for (j = 0; j < ret; j++)
+			values[j] = xstats[j].value;
+
+		return ret;
+	}
+
+	ret = mrvl_xstats_get_by_id(dev, NULL, vals, num);
+	if (ret < 0)
+		return ret;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= num) {
+			RTE_LOG(ERR, PMD, "id value is not valid\n");
+			return -1;
+		}
+
+		values[i] = vals[ids[i]];
+	}
+
+	return n;
+}
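
The by-id strategy above is: snapshot the full counter set once, then resolve each requested id against that snapshot, rejecting ids outside the table. A minimal sketch (all names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Resolve requested ids against a full counter snapshot, as
 * mrvl_xstats_get_by_id() does; any out-of-range id is an error.
 */
static int xstats_by_id(const uint64_t *all, unsigned int num,
			const uint64_t *ids, uint64_t *values,
			unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (ids[i] >= num)
			return -1;	/* invalid id */
		values[i] = all[ids[i]];
	}

	return (int)n;
}

/* Illustrative snapshot: rx_bytes, rx_packets, tx_bytes, tx_packets. */
static const uint64_t all_stats[4] = { 1500, 10, 3000, 20 };

/* Pick tx_bytes (id 2) and rx_bytes (id 0) out of the snapshot. */
static int demo_by_id(void)
{
	const uint64_t ids[2] = { 2, 0 };
	uint64_t values[2];

	if (xstats_by_id(all_stats, 4, ids, values, 2) != 2)
		return 0;
	return values[0] == 3000 && values[1] == 1500;
}

/* An id beyond the table is rejected with -1. */
static int demo_bad_id(void)
{
	const uint64_t ids[1] = { 7 };
	uint64_t values[1];

	return xstats_by_id(all_stats, 4, ids, values, 1);
}
```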
+
+/**
+ * DPDK callback to get xstats names by ids.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param xstats_names
+ *   Pointer to table with xstats names.
+ * @param ids
+ *   Pointer to table with ids.
+ * @param size
+ *   Xstats names table size.
+ * @returns
+ *   Number of names read, negative value otherwise.
+ */
+static int
+mrvl_xstats_get_names_by_id(struct rte_eth_dev *dev,
+			    struct rte_eth_xstat_name *xstats_names,
+			    const uint64_t *ids, unsigned int size)
+{
+	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
+	struct rte_eth_xstat_name names[num];
+
+	if (!ids)
+		return mrvl_xstats_get_names(dev, xstats_names, size);
+
+	mrvl_xstats_get_names(dev, names, num);
+	for (i = 0; i < size; i++) {
+		if (ids[i] >= num) {
+			RTE_LOG(ERR, PMD, "id value is not valid\n");
+			return -1;
+		}
+
+		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE,
+			 "%s", names[ids[i]].name);
+	}
+
+	return size;
+}
+
 static const struct eth_dev_ops mrvl_ops = {
 	.dev_configure = mrvl_dev_configure,
 	.dev_start = mrvl_dev_start,
@@ -1720,6 +1918,9 @@ static const struct eth_dev_ops mrvl_ops = {
 	.mtu_set = mrvl_mtu_set,
 	.stats_get = mrvl_stats_get,
 	.stats_reset = mrvl_stats_reset,
+	.xstats_get = mrvl_xstats_get,
+	.xstats_reset = mrvl_xstats_reset,
+	.xstats_get_names = mrvl_xstats_get_names,
 	.dev_infos_get = mrvl_dev_infos_get,
 	.dev_supported_ptypes_get = mrvl_dev_supported_ptypes_get,
 	.rxq_info_get = mrvl_rxq_info_get,
@@ -1731,7 +1932,9 @@ static const struct eth_dev_ops mrvl_ops = {
 	.tx_queue_release = mrvl_tx_queue_release,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
-	.filter_ctrl = mrvl_eth_filter_ctrl
+	.filter_ctrl = mrvl_eth_filter_ctrl,
+	.xstats_get_by_id = mrvl_xstats_get_by_id,
+	.xstats_get_names_by_id = mrvl_xstats_get_names_by_id
 };
 
 /**
-- 
2.7.4

* [PATCH 7/8] net/mrvl: add Rx flow control
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                   ` (5 preceding siblings ...)
  2018-02-21 14:14 ` [PATCH 6/8] net/mrvl: add extended statistics Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-02-21 14:14 ` [PATCH 8/8] net/mrvl: add Tx queue start/stop Tomasz Duszynski
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add Rx side flow control support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/features/mrvl.ini |  1 +
 doc/guides/nics/mrvl.rst          |  1 +
 drivers/net/mrvl/mrvl_ethdev.c    | 78 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+)

diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
index 120fd4d..8673a56 100644
--- a/doc/guides/nics/features/mrvl.ini
+++ b/doc/guides/nics/features/mrvl.ini
@@ -13,6 +13,7 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+Flow control         = Y
 VLAN filter          = Y
 CRC offload          = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 7678265..550bd79 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -72,6 +72,7 @@ Features of the MRVL PMD are:
 - Basic stats
 - Extended stats
 - QoS
+- Rx flow control
 
 
 Limitations
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index bccd2d0..08c0b03 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -1724,6 +1724,82 @@ mrvl_tx_queue_release(void *txq)
 }
 
 /**
+ * DPDK callback to get flow control configuration.
+ *
+ * @param dev
+ *  Pointer to Ethernet device structure.
+ * @param fc_conf
+ *  Pointer to the flow control configuration.
+ *
+ * @return
+ *  0 on success, negative error value otherwise.
+ */
+static int
+mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret, en;
+
+	if (!priv)
+		return -EPERM;
+
+	ret = pp2_ppio_get_rx_pause(priv->ppio, &en);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to read rx pause state\n");
+		return ret;
+	}
+
+	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to set flow control configuration.
+ *
+ * @param dev
+ *  Pointer to Ethernet device structure.
+ * @param fc_conf
+ *  Pointer to the flow control configuration.
+ *
+ * @return
+ *  0 on success, negative error value otherwise.
+ */
+static int
+mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (!priv)
+		return -EPERM;
+
+	if (fc_conf->high_water ||
+	    fc_conf->low_water ||
+	    fc_conf->pause_time ||
+	    fc_conf->mac_ctrl_frame_fwd ||
+	    fc_conf->autoneg) {
+		RTE_LOG(ERR, PMD, "Flowctrl parameter is not supported\n");
+
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE ||
+	    fc_conf->mode == RTE_FC_RX_PAUSE) {
+		int ret, en;
+
+		en = fc_conf->mode == RTE_FC_NONE ? 0 : 1;
+		ret = pp2_ppio_set_rx_pause(priv->ppio, en);
+		if (ret)
+			RTE_LOG(ERR, PMD,
+				"Failed to change flowctrl on RX side\n");
+
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
  * Update RSS hash configuration
  *
  * @param dev
@@ -1930,6 +2006,8 @@ static const struct eth_dev_ops mrvl_ops = {
 	.rx_queue_release = mrvl_rx_queue_release,
 	.tx_queue_setup = mrvl_tx_queue_setup,
 	.tx_queue_release = mrvl_tx_queue_release,
+	.flow_ctrl_get = mrvl_flow_ctrl_get,
+	.flow_ctrl_set = mrvl_flow_ctrl_set,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
 	.filter_ctrl = mrvl_eth_filter_ctrl,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 8/8] net/mrvl: add Tx queue start/stop
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                   ` (6 preceding siblings ...)
  2018-02-21 14:14 ` [PATCH 7/8] net/mrvl: add Rx flow control Tomasz Duszynski
@ 2018-02-21 14:14 ` Tomasz Duszynski
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-02-21 14:14 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, Tomasz Duszynski

Add Tx queue start/stop feature.
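Because the hardware starts every out-queue when the ppio is enabled, the
dev_start path has to stop queues marked as deferred afterwards. The interplay
can be sketched in isolation (plain C, no MUSDK/DPDK dependencies; all names
are illustrative):

```c
#include <errno.h>

#define NB_TXQ 4

enum q_state { Q_STOPPED, Q_STARTED };

struct txq {
	int tx_deferred_start;	/* copied from conf at queue setup time */
	enum q_state state;
};

/* stand-in for pp2_ppio_set_outq_state() plus the tx_queue_state update */
static int txq_set_state(struct txq *q, int enable)
{
	q->state = enable ? Q_STARTED : Q_STOPPED;
	return 0;
}

/*
 * Mirrors the loop added to mrvl_dev_start(): all queues come up
 * started, then queues with tx_deferred_start are stopped explicitly
 * so that tx_deferred_start behaves as the ethdev API expects.
 */
static int dev_start_txqs(struct txq q[NB_TXQ])
{
	for (int i = 0; i < NB_TXQ; i++) {
		q[i].state = Q_STARTED;
		if (q[i].tx_deferred_start && txq_set_state(&q[i], 0))
			return -EIO;
	}
	return 0;
}
```

Applications request deferred start by setting tx_deferred_start in
struct rte_eth_txconf passed to rte_eth_tx_queue_setup(), and later start the
queue with rte_eth_dev_tx_queue_start().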

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst       |  1 +
 drivers/net/mrvl/mrvl_ethdev.c | 92 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 550bd79..f9ec9d6 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -73,6 +73,7 @@ Features of the MRVL PMD are:
 - Extended stats
 - QoS
 - RX flow control
+- TX queue start/stop
 
 
 Limitations
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 08c0b03..20b045a 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -162,6 +162,7 @@ struct mrvl_txq {
 	int port_id;
 	uint64_t bytes_sent;
 	struct mrvl_shadow_txq shadow_txqs[RTE_MAX_LCORE];
+	int tx_deferred_start;
 };
 
 static int mrvl_lcore_first;
@@ -487,6 +488,70 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 }
 
 /**
+ * DPDK callback to start tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Transmit queue index.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret;
+
+	if (!priv)
+		return -EPERM;
+
+	/* passing 1 enables given tx queue */
+	ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 1);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to start txq %d\n", queue_id);
+		return ret;
+	}
+
+	dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to stop tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Transmit queue index.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret;
+
+	if (!priv->ppio)
+		return -EPERM;
+
+	/* passing 0 disables given tx queue */
+	ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 0);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to stop txq %d\n", queue_id);
+		return ret;
+	}
+
+	dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+/**
  * DPDK callback to start the device.
  *
  * @param dev
@@ -500,7 +565,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 	char match[MRVL_MATCH_LEN];
-	int ret = 0, def_init_size;
+	int ret = 0, i, def_init_size;
 
 	snprintf(match, sizeof(match), "ppio-%d:%d",
 		 priv->pp_id, priv->ppio_id);
@@ -587,6 +652,24 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		goto out;
 	}
 
+	/* start tx queues */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct mrvl_txq *txq = dev->data->tx_queues[i];
+
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+
+		if (!txq->tx_deferred_start)
+			continue;
+
+		/*
+		 * All txqs are started by default. Stop them
+		 * so that tx_deferred_start works as expected.
+		 */
+		ret = mrvl_tx_queue_stop(dev, i);
+		if (ret)
+			goto out;
+	}
+
 	return 0;
 out:
 	RTE_LOG(ERR, PMD, "Failed to start device\n");
@@ -1358,9 +1441,11 @@ static void mrvl_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      struct rte_eth_txq_info *qinfo)
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
+	struct mrvl_txq *txq = dev->data->tx_queues[tx_queue_id];
 
 	qinfo->nb_desc =
 		priv->ppio_params.outqs_params.outqs_params[tx_queue_id].size;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
 }
 
 /**
@@ -1671,7 +1756,7 @@ mrvl_tx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested)
  * @param socket
  *   NUMA socket on which memory must be allocated.
  * @param conf
- *   Thresholds parameters.
+ *   Tx queue configuration parameters.
  *
  * @return
  *   0 on success, negative error value otherwise.
@@ -1699,6 +1784,7 @@ mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	txq->priv = priv;
 	txq->queue_id = idx;
 	txq->port_id = dev->data->port_id;
+	txq->tx_deferred_start = conf->tx_deferred_start;
 	dev->data->tx_queues[idx] = txq;
 
 	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
@@ -2002,6 +2088,8 @@ static const struct eth_dev_ops mrvl_ops = {
 	.rxq_info_get = mrvl_rxq_info_get,
 	.txq_info_get = mrvl_txq_info_get,
 	.vlan_filter_set = mrvl_vlan_filter_set,
+	.tx_queue_start = mrvl_tx_queue_start,
+	.tx_queue_stop = mrvl_tx_queue_stop,
 	.rx_queue_setup = mrvl_rx_queue_setup,
 	.rx_queue_release = mrvl_rx_queue_release,
 	.tx_queue_setup = mrvl_tx_queue_setup,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/8] net/mrvl: add classifier support
  2018-02-21 14:14 ` [PATCH 5/8] net/mrvl: add classifier support Tomasz Duszynski
@ 2018-03-07 11:07   ` Ferruh Yigit
  2018-03-07 11:16     ` Tomasz Duszynski
  0 siblings, 1 reply; 34+ messages in thread
From: Ferruh Yigit @ 2018-03-07 11:07 UTC (permalink / raw)
  To: Tomasz Duszynski, dev; +Cc: mw, dima, nsamsono, jck

On 2/21/2018 2:14 PM, Tomasz Duszynski wrote:
> Add classifier configuration support via rte_flow api.
> 
> Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
> Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
> ---
>  doc/guides/nics/mrvl.rst       |  168 +++
>  drivers/net/mrvl/Makefile      |    1 +
>  drivers/net/mrvl/mrvl_ethdev.c |   59 +
>  drivers/net/mrvl/mrvl_ethdev.h |   10 +
>  drivers/net/mrvl/mrvl_flow.c   | 2787 ++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 3025 insertions(+)
>  create mode 100644 drivers/net/mrvl/mrvl_flow.c

<...>

> diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
> new file mode 100644
> index 0000000..a2c25e6
> --- /dev/null
> +++ b/drivers/net/mrvl/mrvl_flow.c
> @@ -0,0 +1,2787 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2018 Marvell International Ltd.
> + *   Copyright(c) 2018 Semihalf.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of the copyright holder nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */

Can you please use SPDX licensing tags for new files?

And marvell PMD seems not switched to SPDX tags yet, can you please plan the switch?

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/8] net/mrvl: add classifier support
  2018-03-07 11:07   ` Ferruh Yigit
@ 2018-03-07 11:16     ` Tomasz Duszynski
  2018-03-07 11:24       ` Ferruh Yigit
  0 siblings, 1 reply; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-07 11:16 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Tomasz Duszynski, dev, mw, dima, nsamsono, jck

On Wed, Mar 07, 2018 at 11:07:14AM +0000, Ferruh Yigit wrote:
> On 2/21/2018 2:14 PM, Tomasz Duszynski wrote:
> > Add classifier configuration support via rte_flow api.
> >
> > Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
> > Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
> > ---
> >  doc/guides/nics/mrvl.rst       |  168 +++
> >  drivers/net/mrvl/Makefile      |    1 +
> >  drivers/net/mrvl/mrvl_ethdev.c |   59 +
> >  drivers/net/mrvl/mrvl_ethdev.h |   10 +
> >  drivers/net/mrvl/mrvl_flow.c   | 2787 ++++++++++++++++++++++++++++++++++++++++
> >  5 files changed, 3025 insertions(+)
> >  create mode 100644 drivers/net/mrvl/mrvl_flow.c
>
> <...>
>
> > diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
> > new file mode 100644
> > index 0000000..a2c25e6
> > --- /dev/null
> > +++ b/drivers/net/mrvl/mrvl_flow.c
> > @@ -0,0 +1,2787 @@
> > +/*-
> > + *   BSD LICENSE
> > + *
> > + *   Copyright(c) 2018 Marvell International Ltd.
> > + *   Copyright(c) 2018 Semihalf.
> > + *   All rights reserved.
> > + *
> > + *   Redistribution and use in source and binary forms, with or without
> > + *   modification, are permitted provided that the following conditions
> > + *   are met:
> > + *
> > + *     * Redistributions of source code must retain the above copyright
> > + *       notice, this list of conditions and the following disclaimer.
> > + *     * Redistributions in binary form must reproduce the above copyright
> > + *       notice, this list of conditions and the following disclaimer in
> > + *       the documentation and/or other materials provided with the
> > + *       distribution.
> > + *     * Neither the name of the copyright holder nor the names of its
> > + *       contributors may be used to endorse or promote products derived
> > + *       from this software without specific prior written permission.
> > + *
> > + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> > + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> > + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> > + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> > + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> > + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> > + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> > + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> > + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > + */
>
> Can you please use SPDX licensing tags for new files?
>
> And marvell PMD seems not switched to SPDX tags yet, can you please plan the switch?

SPDX conversion patches are waiting for submission. Once Marvell
gives final approval they will be pushed out.

>
> Thanks,
> ferruh

--
- Tomasz Duszyński

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/8] net/mrvl: add classifier support
  2018-03-07 11:16     ` Tomasz Duszynski
@ 2018-03-07 11:24       ` Ferruh Yigit
  2018-03-08 13:23         ` Tomasz Duszynski
  0 siblings, 1 reply; 34+ messages in thread
From: Ferruh Yigit @ 2018-03-07 11:24 UTC (permalink / raw)
  To: Tomasz Duszynski; +Cc: dev, mw, dima, nsamsono, jck

On 3/7/2018 11:16 AM, Tomasz Duszynski wrote:
> On Wed, Mar 07, 2018 at 11:07:14AM +0000, Ferruh Yigit wrote:
>> On 2/21/2018 2:14 PM, Tomasz Duszynski wrote:
>>> Add classifier configuration support via rte_flow api.
>>>
>>> Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
>>> Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
>>> ---
>>>  doc/guides/nics/mrvl.rst       |  168 +++
>>>  drivers/net/mrvl/Makefile      |    1 +
>>>  drivers/net/mrvl/mrvl_ethdev.c |   59 +
>>>  drivers/net/mrvl/mrvl_ethdev.h |   10 +
>>>  drivers/net/mrvl/mrvl_flow.c   | 2787 ++++++++++++++++++++++++++++++++++++++++
>>>  5 files changed, 3025 insertions(+)
>>>  create mode 100644 drivers/net/mrvl/mrvl_flow.c
>>
>> <...>
>>
>>> diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
>>> new file mode 100644
>>> index 0000000..a2c25e6
>>> --- /dev/null
>>> +++ b/drivers/net/mrvl/mrvl_flow.c
>>> @@ -0,0 +1,2787 @@
>>> <...>
>>
>> Can you please use SPDX licensing tags for new files?
>>
>> And marvell PMD seems not switched to SPDX tags yet, can you please plan the switch?
> 
> SPDX conversion patches are waiting for submission. Once Marvell
> gives final approval they will be pushed out.

Good, thank you. Will it be possible to send this set with SPDX license?

> 
>>
>> Thanks,
>> ferruh
> 
> --
> - Tomasz Duszyński
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/8] net/mrvl: add classifier support
  2018-03-07 11:24       ` Ferruh Yigit
@ 2018-03-08 13:23         ` Tomasz Duszynski
  0 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-08 13:23 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Tomasz Duszynski, dev, mw, dima, nsamsono, jck

On Wed, Mar 07, 2018 at 11:24:11AM +0000, Ferruh Yigit wrote:
> On 3/7/2018 11:16 AM, Tomasz Duszynski wrote:
> > On Wed, Mar 07, 2018 at 11:07:14AM +0000, Ferruh Yigit wrote:
> >> On 2/21/2018 2:14 PM, Tomasz Duszynski wrote:
> >>> Add classifier configuration support via rte_flow api.
> >>>
> >>> Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
> >>> Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
> >>> ---
> >>>  doc/guides/nics/mrvl.rst       |  168 +++
> >>>  drivers/net/mrvl/Makefile      |    1 +
> >>>  drivers/net/mrvl/mrvl_ethdev.c |   59 +
> >>>  drivers/net/mrvl/mrvl_ethdev.h |   10 +
> >>>  drivers/net/mrvl/mrvl_flow.c   | 2787 ++++++++++++++++++++++++++++++++++++++++
> >>>  5 files changed, 3025 insertions(+)
> >>>  create mode 100644 drivers/net/mrvl/mrvl_flow.c
> >>
> >> <...>
> >>
> >>> diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
> >>> new file mode 100644
> >>> index 0000000..a2c25e6
> >>> --- /dev/null
> >>> +++ b/drivers/net/mrvl/mrvl_flow.c
> >>> @@ -0,0 +1,2787 @@
> >>> <...>
> >>
> >> Can you please use SPDX licensing tags for new files?
> >>
> >> And marvell PMD seems not switched to SPDX tags yet, can you please plan the switch?
> >
> > SPDX conversion patches are waiting for submission. Once Marvell
> > gives final approval they will be pushed out.
>
> Good, thank you. Will it be possible to send this set with SPDX license?

OK, so let's postpone this until SPDX license conversion gets fully
resolved.

>
> >
> >>
> >> Thanks,
> >> ferruh
> >
> > --
> > - Tomasz Duszyński
> >
>

--
- Tomasz Duszyński

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v2 0/8] net/mrvl: add new features to PMD
  2018-02-21 14:14 [PATCH 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                   ` (7 preceding siblings ...)
  2018-02-21 14:14 ` [PATCH 8/8] net/mrvl: add Tx queue start/stop Tomasz Duszynski
@ 2018-03-12  8:42 ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
                     ` (8 more replies)
  8 siblings, 9 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

This patch series introduces a set of new features,
documentation updates and fixes.

Below is a short summary of the changes:

o Added support for selective Tx queue start and stop.
o Added support for Rx flow control.
o Added support for extended statistics counters.
o Added support for ingress policer, egress scheduler and egress rate
  limiter.
o Added support for configuring hardware classifier via a flow API.
o Documented new features and their usage.

Natalie Samsonov (1):
  net/mrvl: fix crash when port is closed without starting

Tomasz Duszynski (7):
  net/mrvl: add ingress policer support
  net/mrvl: add egress scheduler/rate limiter support
  net/mrvl: document policer/scheduler/rate limiter usage
  net/mrvl: add classifier support
  net/mrvl: add extended statistics
  net/mrvl: add Rx flow control
  net/mrvl: add Tx queue start/stop

v2:
- Convert license header of a new file to SPDX tags.
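For new files the conversion replaces the full BSD boilerplate with a short
SPDX tag of roughly this shape (illustrative; the exact header used by the
driver may differ):

```c
/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2018 Marvell International Ltd.
 * Copyright(c) 2018 Semihalf.
 * All rights reserved.
 */
```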

 doc/guides/nics/features/mrvl.ini |    2 +
 doc/guides/nics/mrvl.rst          |  257 +++-
 drivers/net/mrvl/Makefile         |    1 +
 drivers/net/mrvl/mrvl_ethdev.c    |  447 +++++-
 drivers/net/mrvl/mrvl_ethdev.h    |   11 +
 drivers/net/mrvl/mrvl_flow.c      | 2759 +++++++++++++++++++++++++++++++++++++
 drivers/net/mrvl/mrvl_qos.c       |  301 +++-
 drivers/net/mrvl/mrvl_qos.h       |   22 +
 8 files changed, 3782 insertions(+), 18 deletions(-)
 create mode 100644 drivers/net/mrvl/mrvl_flow.c

--
2.7.4

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v2 1/8] net/mrvl: fix crash when port is closed without starting
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 2/8] net/mrvl: add ingress policer support Tomasz Duszynski
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, stable, Tomasz Duszynski

From: Natalie Samsonov <nsamsono@marvell.com>

Fixes: 0ddc9b815b11 ("net/mrvl: add net PMD skeleton")
Cc: stable@dpdk.org

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index ac8f2d6..e313768 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -658,7 +658,8 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 		pp2_cls_qos_tbl_deinit(priv->qos_tbl);
 		priv->qos_tbl = NULL;
 	}
-	pp2_ppio_deinit(priv->ppio);
+	if (priv->ppio)
+		pp2_ppio_deinit(priv->ppio);
 	priv->ppio = NULL;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 2/8] net/mrvl: add ingress policer support
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 3/8] net/mrvl: add egress scheduler/rate limiter support Tomasz Duszynski
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add ingress policer support.
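The policer is driven from the driver's existing QoS configuration file. A
fragment using the tokens parsed by this patch might look as follows; the
section naming follows the [port <portid> default] / [port <portid> tc <tc>]
convention documented in mrvl.rst, and all values are illustrative:

```ini
[port 0 default]
policer_enable = 1
token_unit = bytes
color_mode = blind
cir = 10000
cbs = 64
ebs = 64

[port 0 tc 0]
rxq = 0
default_color = yellow
```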

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c |   6 ++
 drivers/net/mrvl/mrvl_ethdev.h |   1 +
 drivers/net/mrvl/mrvl_qos.c    | 160 +++++++++++++++++++++++++++++++++++++++--
 drivers/net/mrvl/mrvl_qos.h    |   3 +
 4 files changed, 166 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index e313768..d1faa3d 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -661,6 +661,12 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 	if (priv->ppio)
 		pp2_ppio_deinit(priv->ppio);
 	priv->ppio = NULL;
+
+	/* policer must be released after ppio deinitialization */
+	if (priv->policer) {
+		pp2_cls_plcr_deinit(priv->policer);
+		priv->policer = NULL;
+	}
 }
 
 /**
diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h
index aee853a..2cc229e 100644
--- a/drivers/net/mrvl/mrvl_ethdev.h
+++ b/drivers/net/mrvl/mrvl_ethdev.h
@@ -85,6 +85,7 @@ struct mrvl_priv {
 	struct pp2_cls_qos_tbl_params qos_tbl_params;
 	struct pp2_cls_tbl *qos_tbl;
 	uint16_t nb_rx_queues;
+	struct pp2_cls_plcr *policer;
 };
 
 #endif /* _MRVL_ETHDEV_H_ */
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
index d206dc6..0205157 100644
--- a/drivers/net/mrvl/mrvl_qos.c
+++ b/drivers/net/mrvl/mrvl_qos.c
@@ -43,6 +43,22 @@
 #define MRVL_TOK_VLAN_IP "vlan/ip"
 #define MRVL_TOK_WEIGHT "weight"
 
+/* policer specific configuration tokens */
+#define MRVL_TOK_PLCR_ENABLE "policer_enable"
+#define MRVL_TOK_PLCR_UNIT "token_unit"
+#define MRVL_TOK_PLCR_UNIT_BYTES "bytes"
+#define MRVL_TOK_PLCR_UNIT_PACKETS "packets"
+#define MRVL_TOK_PLCR_COLOR "color_mode"
+#define MRVL_TOK_PLCR_COLOR_BLIND "blind"
+#define MRVL_TOK_PLCR_COLOR_AWARE "aware"
+#define MRVL_TOK_PLCR_CIR "cir"
+#define MRVL_TOK_PLCR_CBS "cbs"
+#define MRVL_TOK_PLCR_EBS "ebs"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR "default_color"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN "green"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW "yellow"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_RED "red"
+
 /** Number of tokens in range a-b = 2. */
 #define MAX_RNG_TOKENS 2
 
@@ -296,6 +312,25 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc,
 		}
 		cfg->port[port].tc[tc].dscps = n;
 	}
+
+	entry = rte_cfgfile_get_entry(file, sec_name,
+			MRVL_TOK_PLCR_DEFAULT_COLOR);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_GREEN;
+		} else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_YELLOW;
+		} else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_RED,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_RED))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_RED;
+		} else {
+			RTE_LOG(ERR, PMD, "Error while parsing: %s\n", entry);
+			return -1;
+		}
+	}
+
 	return 0;
 }
 
@@ -368,6 +403,88 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
 		}
 
 		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_PLCR_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].policer_enable = val;
+		}
+
+		if ((*cfg)->port[n].policer_enable) {
+			enum pp2_cls_plcr_token_unit unit;
+
+			/* Read policer token unit */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_UNIT);
+			if (entry) {
+				if (!strncmp(entry, MRVL_TOK_PLCR_UNIT_BYTES,
+					sizeof(MRVL_TOK_PLCR_UNIT_BYTES))) {
+					unit = PP2_CLS_PLCR_BYTES_TOKEN_UNIT;
+				} else if (!strncmp(entry,
+						MRVL_TOK_PLCR_UNIT_PACKETS,
+					sizeof(MRVL_TOK_PLCR_UNIT_PACKETS))) {
+					unit = PP2_CLS_PLCR_PACKETS_TOKEN_UNIT;
+				} else {
+					RTE_LOG(ERR, PMD, "Unknown token: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.token_unit =
+					unit;
+			}
+
+			/* Read policer color mode */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_COLOR);
+			if (entry) {
+				enum pp2_cls_plcr_color_mode mode;
+
+				if (!strncmp(entry, MRVL_TOK_PLCR_COLOR_BLIND,
+					sizeof(MRVL_TOK_PLCR_COLOR_BLIND))) {
+					mode = PP2_CLS_PLCR_COLOR_BLIND_MODE;
+				} else if (!strncmp(entry,
+						MRVL_TOK_PLCR_COLOR_AWARE,
+					sizeof(MRVL_TOK_PLCR_COLOR_AWARE))) {
+					mode = PP2_CLS_PLCR_COLOR_AWARE_MODE;
+				} else {
+					RTE_LOG(ERR, PMD,
+						"Error in parsing: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.color_mode =
+					mode;
+			}
+
+			/* Read policer cir */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_CIR);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cir = val;
+			}
+
+			/* Read policer cbs */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_CBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cbs = val;
+			}
+
+			/* Read policer ebs */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_EBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.ebs = val;
+			}
+		}
+
+		entry = rte_cfgfile_get_entry(file, sec_name,
 				MRVL_TOK_MAPPING_PRIORITY);
 		if (entry) {
 			if (!strncmp(entry, MRVL_TOK_VLAN_IP,
@@ -422,16 +539,18 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
  * @param param TC parameters entry.
  * @param inqs Number of MUSDK in-queues in this TC.
  * @param bpool Bpool for this TC.
+ * @param color Default color for this TC.
  * @returns 0 in case of success, exits otherwise.
  */
 static int
 setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
-	struct pp2_bpool *bpool)
+	struct pp2_bpool *bpool, enum pp2_ppio_color color)
 {
 	struct pp2_ppio_inq_params *inq_params;
 
 	param->pkt_offset = MRVL_PKT_OFFS;
 	param->pools[0] = bpool;
+	param->default_color = color;
 
 	inq_params = rte_zmalloc_socket("inq_params",
 		inqs * sizeof(*inq_params),
@@ -451,6 +570,33 @@ setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
 }
 
 /**
+ * Setup ingress policer.
+ *
+ * @param priv Port's private data.
+ * @param params Pointer to the policer's configuration.
+ * @returns 0 in case of success, negative values otherwise.
+ */
+static int
+setup_policer(struct mrvl_priv *priv, struct pp2_cls_plcr_params *params)
+{
+	char match[16];
+	int ret;
+
+	sprintf(match, "policer-%d:%d\n", priv->pp_id, priv->ppio_id);
+	params->match = match;
+
+	ret = pp2_cls_plcr_init(params, &priv->policer);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to setup %s\n", match);
+		return -1;
+	}
+
+	priv->ppio_params.inqs_params.plcr = priv->policer;
+
+	return 0;
+}
+
+/**
  * Configure RX Queues in a given port.
  *
  * Sets up RX queues, their Traffic Classes and DPDK rxq->(TC,inq) mapping.
@@ -468,10 +614,13 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 
 	if (mrvl_qos_cfg == NULL ||
 		mrvl_qos_cfg->port[portid].use_global_defaults) {
-		/* No port configuration, use default: 1 TC, no QoS. */
+		/*
+		 * No port configuration, use default: 1 TC, no QoS,
+		 * TC color set to green.
+		 */
 		priv->ppio_params.inqs_params.num_tcs = 1;
 		setup_tc(&priv->ppio_params.inqs_params.tcs_params[0],
-			max_queues, priv->bpool);
+			max_queues, priv->bpool, PP2_PPIO_COLOR_GREEN);
 
 		/* Direct mapping of queues i.e. 0->0, 1->1 etc. */
 		for (i = 0; i < max_queues; ++i) {
@@ -569,11 +718,14 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 			break;
 		setup_tc(&priv->ppio_params.inqs_params.tcs_params[i],
 				port_cfg->tc[i].inqs,
-				priv->bpool);
+				priv->bpool, port_cfg->tc[i].color);
 	}
 
 	priv->ppio_params.inqs_params.num_tcs = i;
 
+	if (port_cfg->policer_enable)
+		return setup_policer(priv, &port_cfg->policer_params);
+
 	return 0;
 }
 
diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h
index d2b6c83..bcf5bd3 100644
--- a/drivers/net/mrvl/mrvl_qos.h
+++ b/drivers/net/mrvl/mrvl_qos.h
@@ -27,6 +27,7 @@ struct mrvl_qos_cfg {
 			uint8_t inqs;
 			uint8_t dscps;
 			uint8_t pcps;
+			enum pp2_ppio_color color;
 		} tc[MRVL_PP2_TC_MAX];
 		struct {
 			uint8_t weight;
@@ -36,6 +37,8 @@ struct mrvl_qos_cfg {
 		uint16_t outqs;
 		uint8_t default_tc;
 		uint8_t use_global_defaults;
+		struct pp2_cls_plcr_params policer_params;
+		uint8_t policer_enable;
 	} port[RTE_MAX_ETHPORTS];
 };
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 3/8] net/mrvl: add egress scheduler/rate limiter support
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 2/8] net/mrvl: add ingress policer support Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 4/8] net/mrvl: document policer/scheduler/rate limiter usage Tomasz Duszynski
                     ` (5 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add egress scheduler and egress rate limiter support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c |   6 +-
 drivers/net/mrvl/mrvl_qos.c    | 141 +++++++++++++++++++++++++++++++++++++++--
 drivers/net/mrvl/mrvl_qos.h    |  19 ++++++
 3 files changed, 161 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index d1faa3d..7e00dbd 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -320,6 +320,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 	if (ret < 0)
 		return ret;
 
+	ret = mrvl_configure_txqs(priv, dev->data->port_id,
+				  dev->data->nb_tx_queues);
+	if (ret < 0)
+		return ret;
+
 	priv->ppio_params.outqs_params.num_outqs = dev->data->nb_tx_queues;
 	priv->ppio_params.maintain_stats = 1;
 	priv->nb_rx_queues = dev->data->nb_rx_queues;
@@ -1537,7 +1542,6 @@ mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	dev->data->tx_queues[idx] = txq;
 
 	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
-	priv->ppio_params.outqs_params.outqs_params[idx].weight = 1;
 
 	return 0;
 }
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
index 0205157..e9c4531 100644
--- a/drivers/net/mrvl/mrvl_qos.c
+++ b/drivers/net/mrvl/mrvl_qos.c
@@ -36,12 +36,19 @@
 #define MRVL_TOK_PCP "pcp"
 #define MRVL_TOK_PORT "port"
 #define MRVL_TOK_RXQ "rxq"
-#define MRVL_TOK_SP "SP"
 #define MRVL_TOK_TC "tc"
 #define MRVL_TOK_TXQ "txq"
 #define MRVL_TOK_VLAN "vlan"
 #define MRVL_TOK_VLAN_IP "vlan/ip"
-#define MRVL_TOK_WEIGHT "weight"
+
+/* egress specific configuration tokens */
+#define MRVL_TOK_BURST_SIZE "burst_size"
+#define MRVL_TOK_RATE_LIMIT "rate_limit"
+#define MRVL_TOK_RATE_LIMIT_ENABLE "rate_limit_enable"
+#define MRVL_TOK_SCHED_MODE "sched_mode"
+#define MRVL_TOK_SCHED_MODE_SP "sp"
+#define MRVL_TOK_SCHED_MODE_WRR "wrr"
+#define MRVL_TOK_WRR_WEIGHT "wrr_weight"
 
 /* policer specific configuration tokens */
 #define MRVL_TOK_PLCR_ENABLE "policer_enable"
@@ -119,12 +126,69 @@ get_outq_cfg(struct rte_cfgfile *file, int port, int outq,
 	if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0)
 		return 0;
 
+	/* Read scheduling mode */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_SCHED_MODE);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_SCHED_MODE_SP,
+					strlen(MRVL_TOK_SCHED_MODE_SP))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_SP;
+		} else if (!strncmp(entry, MRVL_TOK_SCHED_MODE_WRR,
+					strlen(MRVL_TOK_SCHED_MODE_WRR))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_WRR;
+		} else {
+			RTE_LOG(ERR, PMD, "Unknown token: %s\n", entry);
+			return -1;
+		}
+	}
+
+	/* Read wrr weight */
+	if (cfg->port[port].outq[outq].sched_mode == PP2_PPIO_SCHED_M_WRR) {
+		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_WRR_WEIGHT);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			cfg->port[port].outq[outq].weight = val;
+		}
+	}
+
+	/*
+	 * There's no point in setting rate limiting for a specific outq
+	 * as global port rate limiting has priority.
+	 */
+	if (cfg->port[port].rate_limit_enable) {
+		RTE_LOG(WARNING, PMD, "Port %d rate limiting already enabled\n",
+			port);
+		return 0;
+	}
+
 	entry = rte_cfgfile_get_entry(file, sec_name,
-			MRVL_TOK_WEIGHT);
+			MRVL_TOK_RATE_LIMIT_ENABLE);
 	if (entry) {
 		if (get_val_securely(entry, &val) < 0)
 			return -1;
-		cfg->port[port].outq[outq].weight = (uint8_t)val;
+		cfg->port[port].outq[outq].rate_limit_enable = val;
+	}
+
+	if (!cfg->port[port].outq[outq].rate_limit_enable)
+		return 0;
+
+	/* Read CBS (in kB) */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_BURST_SIZE);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cbs = val;
+	}
+
+	/* Read CIR (in kbps) */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RATE_LIMIT);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cir = val;
 	}
 
 	return 0;
@@ -484,6 +548,36 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
 			}
 		}
 
+		/*
+		 * Read per-port rate limiting. Setting that will
+		 * disable per-queue rate limiting.
+		 */
+		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_RATE_LIMIT_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].rate_limit_enable = val;
+		}
+
+		if ((*cfg)->port[n].rate_limit_enable) {
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_BURST_SIZE);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cbs = val;
+			}
+
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_RATE_LIMIT);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cir = val;
+			}
+		}
+
 		entry = rte_cfgfile_get_entry(file, sec_name,
 				MRVL_TOK_MAPPING_PRIORITY);
 		if (entry) {
@@ -730,6 +824,45 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 }
 
 /**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up the TX queues' egress scheduler and rate limiter.
+ *
+ * @param priv Port's private data
+ * @param portid DPDK port ID
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		uint16_t max_queues)
+{
+	struct port_cfg *port_cfg;
+	int i;
+
+	if (mrvl_qos_cfg == NULL)
+		return 0;
+
+	/* We need only a subset of the configuration. */
+	port_cfg = &mrvl_qos_cfg->port[portid];
+
+	priv->ppio_params.rate_limit_enable = port_cfg->rate_limit_enable;
+	if (port_cfg->rate_limit_enable)
+		priv->ppio_params.rate_limit_params =
+			port_cfg->rate_limit_params;
+
+	for (i = 0; i < max_queues; i++) {
+		struct pp2_ppio_outq_params *params =
+			&priv->ppio_params.outqs_params.outqs_params[i];
+
+		params->sched_mode = port_cfg->outq[i].sched_mode;
+		params->weight = port_cfg->outq[i].weight;
+		params->rate_limit_enable = port_cfg->outq[i].rate_limit_enable;
+		params->rate_limit_params = port_cfg->outq[i].rate_limit_params;
+	}
+
+	return 0;
+}
+
+/**
  * Start QoS mapping.
  *
  * Finalize QoS table configuration and initialize it in SDK. It can be done
diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h
index bcf5bd3..fa9ddec 100644
--- a/drivers/net/mrvl/mrvl_qos.h
+++ b/drivers/net/mrvl/mrvl_qos.h
@@ -20,6 +20,8 @@
 /* QoS config. */
 struct mrvl_qos_cfg {
 	struct port_cfg {
+		int rate_limit_enable;
+		struct pp2_ppio_rate_limit_params rate_limit_params;
 		struct {
 			uint8_t inq[MRVL_PP2_RXQ_MAX];
 			uint8_t dscp[MRVL_CP_PER_TC];
@@ -30,7 +32,10 @@ struct mrvl_qos_cfg {
 			enum pp2_ppio_color color;
 		} tc[MRVL_PP2_TC_MAX];
 		struct {
+			enum pp2_ppio_outq_sched_mode sched_mode;
 			uint8_t weight;
+			int rate_limit_enable;
+			struct pp2_ppio_rate_limit_params rate_limit_params;
 		} outq[MRVL_PP2_RXQ_MAX];
 		enum pp2_cls_qos_tbl_type mapping_priority;
 		uint16_t inqs;
@@ -74,6 +79,20 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 		    uint16_t max_queues);
 
 /**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up the TX queues' egress scheduler and rate limiter.
+ *
+ * @param priv Port's private data
+ * @param portid DPDK port ID
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues);
+
+/**
  * Start QoS mapping.
  *
  * Finalize QoS table configuration and initialize it in SDK. It can be done
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 4/8] net/mrvl: document policer/scheduler/rate limiter usage
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                     ` (2 preceding siblings ...)
  2018-03-12  8:42   ` [PATCH v2 3/8] net/mrvl: add egress scheduler/rate limiter support Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 5/8] net/mrvl: add classifier support Tomasz Duszynski
                     ` (4 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add documentation and example for ingress policer, egress scheduler
and egress rate limiter.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst | 86 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 80 insertions(+), 6 deletions(-)

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index b7f3292..6794cbb 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -149,17 +149,36 @@ Configuration syntax
    [port <portnum> default]
    default_tc = <default_tc>
    mapping_priority = <mapping_priority>
+   policer_enable = <policer_enable>
+   token_unit = <token_unit>
+   color = <color_mode>
+   cir = <cir>
+   ebs = <ebs>
+   cbs = <cbs>
+
+   rate_limit_enable = <rate_limit_enable>
+   rate_limit = <rate_limit>
+   burst_size = <burst_size>
 
    [port <portnum> tc <traffic_class>]
    rxq = <rx_queue_list>
    pcp = <pcp_list>
    dscp = <dscp_list>
+   default_color = <default_color>
 
    [port <portnum> tc <traffic_class>]
    rxq = <rx_queue_list>
    pcp = <pcp_list>
    dscp = <dscp_list>
 
+   [port <portnum> txq <txqnum>]
+   sched_mode = <sched_mode>
+   wrr_weight = <wrr_weight>
+
+   rate_limit_enable = <rate_limit_enable>
+   rate_limit = <rate_limit>
+   burst_size = <burst_size>
+
 Where:
 
 - ``<portnum>``: DPDK Port number (0..n).
@@ -176,6 +195,30 @@ Where:
 
 - ``<dscp_list>``: List of DSCP values to handle in particular TC (e.g. 0-12 32-48 63).
 
+- ``<policer_enable>``: Enable ingress policer.
+
+- ``<token_unit>``: Policer token unit (`bytes` or `packets`).
+
+- ``<color_mode>``: Policer color mode (`aware` or `blind`).
+
+- ``<cir>``: Committed information rate, in kilobits per second (data rate) or packets per second.
+
+- ``<cbs>``: Committed burst size, in kilobytes or number of packets.
+
+- ``<ebs>``: Excess burst size, in kilobytes or number of packets.
+
+- ``<default_color>``: Default color for a specific TC.
+
+- ``<rate_limit_enable>``: Enables per-port or per-txq rate limiting.
+
+- ``<rate_limit>``: Committed information rate, in kilobits per second.
+
+- ``<burst_size>``: Committed burst size, in kilobytes.
+
+- ``<sched_mode>``: Egress scheduler mode (`wrr` or `sp`).
+
+- ``<wrr_weight>``: Txq weight used by the WRR scheduler.
+
 Setting PCP/DSCP values for the default TC is not required. All PCP/DSCP
 values not assigned explicitly to particular TC will be handled by the
 default TC.
@@ -187,11 +230,26 @@ Configuration file example
 
    [port 0 default]
    default_tc = 0
-   qos_mode = ip
+   mapping_priority = ip
+
+   rate_limit_enable = 1
+   rate_limit = 1000
+   burst_size = 2000
 
    [port 0 tc 0]
    rxq = 0 1
 
+   [port 0 txq 0]
+   sched_mode = wrr
+   wrr_weight = 10
+
+   [port 0 txq 1]
+   sched_mode = wrr
+   wrr_weight = 100
+
+   [port 0 txq 2]
+   sched_mode = sp
+
    [port 0 tc 1]
    rxq = 2
    pcp = 5 6 7
@@ -199,15 +257,31 @@ Configuration file example
 
    [port 1 default]
    default_tc = 0
-   qos_mode = vlan/ip
+   mapping_priority = vlan/ip
+
+   policer_enable = 1
+   token_unit = bytes
+   color = blind
+   cir = 100000
+   ebs = 64
+   cbs = 64
 
    [port 1 tc 0]
    rxq = 0
+   dscp = 10
 
    [port 1 tc 1]
-   rxq = 1 2
-   pcp = 5 6 7
-   dscp = 26-38
+   rxq = 1
+   dscp = 11-20
+
+   [port 1 tc 2]
+   rxq = 2
+   dscp = 30
+
+   [port 1 txq 0]
+   rate_limit_enable = 1
+   rate_limit = 10000
+   burst_size = 2000
 
 Usage example
 ^^^^^^^^^^^^^
@@ -215,7 +289,7 @@ Usage example
 .. code-block:: console
 
    ./testpmd --vdev=eth_mrvl,iface=eth0,iface=eth2,cfg=/home/user/mrvl.conf \
-     -c 7 -- -i -a --rxq=2
+     -c 7 -- -i -a --disable-hw-vlan-strip --rxq=3 --txq=3
 
 
 Building DPDK
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 5/8] net/mrvl: add classifier support
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                     ` (3 preceding siblings ...)
  2018-03-12  8:42   ` [PATCH v2 4/8] net/mrvl: document policer/scheduler/rate limiter usage Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 6/8] net/mrvl: add extended statistics Tomasz Duszynski
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add classifier configuration support via rte_flow api.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst       |  168 +++
 drivers/net/mrvl/Makefile      |    1 +
 drivers/net/mrvl/mrvl_ethdev.c |   59 +
 drivers/net/mrvl/mrvl_ethdev.h |   10 +
 drivers/net/mrvl/mrvl_flow.c   | 2759 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 2997 insertions(+)
 create mode 100644 drivers/net/mrvl/mrvl_flow.c

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 6794cbb..9230d5e 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -113,6 +113,9 @@ Prerequisites
   approval has been granted, library can be found by typing ``musdk`` in
   the search box.
 
+  To get a better understanding of the library, consult the documentation
+  available in the ``doc`` top-level directory of the MUSDK sources.
+
   MUSDK must be configured with the following features:
 
   .. code-block:: console
@@ -318,6 +321,171 @@ the path to the MUSDK installation directory needs to be exported.
    sed -ri 's,(MRVL_PMD=)n,\1y,' build/.config
    make
 
+Flow API
+--------
+
+PPv2 offers packet classification capabilities via classifier engine which
+can be configured via generic flow API offered by DPDK.
+
+Supported flow actions
+~~~~~~~~~~~~~~~~~~~~~~
+
+The following flow actions are supported by the driver:
+
+* DROP
+* QUEUE
+
+Supported flow items
+~~~~~~~~~~~~~~~~~~~~
+
+The following flow items and their respective fields are supported by the driver:
+
+* ETH
+
+  * source MAC
+  * destination MAC
+  * ethertype
+
+* VLAN
+
+  * PCP
+  * VID
+
+* IPV4
+
+  * DSCP
+  * protocol
+  * source address
+  * destination address
+
+* IPV6
+
+  * flow label
+  * next header
+  * source address
+  * destination address
+
+* UDP
+
+  * source port
+  * destination port
+
+* TCP
+
+  * source port
+  * destination port
+
+Classifier match engine
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The classifier has an internal match engine which can be configured to
+operate in either exact or maskable mode.
+
+The mode is selected upon creation of the first unique flow rule, as follows:
+
+* maskable, if the key size is up to 8 bytes,
+* exact otherwise, i.e. for keys bigger than 8 bytes,
+
+where the key size equals the total number of bytes of all fields specified
+in the flow items.
+
+.. table:: Examples of key size calculation
+
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | Flow pattern                                                               | Key size in bytes | Used engine |
+   +============================================================================+===================+=============+
+   | ETH (destination MAC) / VLAN (VID)                                         | 6 + 2 = 8         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (VID) / IPV4 (source address)                                         | 2 + 4 = 6         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | TCP (source port, destination port)                                        | 2 + 2 = 4         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (priority) / IPV4 (source address)                                    | 1 + 4 = 5         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV4 (destination address) / UDP (source port, destination port)           | 6 + 2 + 2 = 10    | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (VID) / IPV6 (flow label, destination address)                        | 2 + 3 + 16 = 21   | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV4 (DSCP, source address, destination address)                           | 1 + 4 + 4 = 9     | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV6 (flow label, source address, destination address)                     | 3 + 16 + 16 = 35  | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+
+From the user perspective, maskable mode means that masks specified
+via flow rules are respected. In exact match mode, masks which do not
+provide exact matching (i.e. with all mask bits set) are ignored.
+
+If a flow matches more than one classifier rule, the first one matched
+(the rule with the lowest index) takes precedence.
+
+Flow rules usage example
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before proceeding run testpmd user application:
+
+.. code-block:: console
+
+   ./testpmd --vdev=net_mrvl,iface=eth0,iface=eth2 -c 3 -- -i -p 3 -a --disable-hw-vlan-strip
+
+Example #1
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern eth src is 10:11:12:13:14:15 / end actions drop / end
+
+In this case the key size is 6 bytes, thus the maskable engine is selected.
+Testpmd will set the mask to ff:ff:ff:ff:ff:ff, i.e. traffic exactly matching
+the above rule will be dropped.
+
+Example #2
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern ipv4 src spec 10.10.10.0 src mask 255.255.255.0 / tcp src spec 0x10 src mask 0x10 / end actions drop / end
+
+In this case the key size is 8 bytes, thus the maskable engine is selected.
+Flows which have IPv4 source addresses ranging from 10.10.10.0 to 10.10.10.255
+and tcp source port set to 16 will be dropped.
+
+Example #3
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern vlan vid spec 0x10 vid mask 0x10 / ipv4 src spec 10.10.1.1 src mask 255.255.0.0 dst spec 11.11.11.1 dst mask 255.255.255.0 / end actions drop / end
+
+In this case the key size is 10 bytes, thus the exact engine is selected.
+Even though each item has a partial mask set, the masks will be ignored.
+As a result only flows with VID set to 16 and IPv4 source and destination
+addresses set to 10.10.1.1 and 11.11.11.1 respectively will be dropped.
+
+Limitations
+~~~~~~~~~~~
+
+The following limitations need to be taken into account while creating flow rules:
+
+* For IPv4 exact match type the key size must be up to 12 bytes.
+* For IPv6 exact match type the key size must be up to 36 bytes.
+* Following fields cannot be partially masked (all masks are treated as
+  if they were exact):
+
+  * ETH: ethertype
+  * VLAN: PCP, VID
+  * IPv4: protocol
+  * IPv6: next header
+  * TCP/UDP: source port, destination port
+
+* Only one classifier table can be created, thus all rules in the table
+  have to match the table format. The table format is set during creation
+  of the first unique flow rule.
+* Up to 5 fields can be specified per flow rule.
+* Up to 20 flow rules can be added.
+
+For additional information about the classifier, please consult
+``doc/musdk_cls_user_guide.txt``.
+
 Usage Example
 -------------
 
diff --git a/drivers/net/mrvl/Makefile b/drivers/net/mrvl/Makefile
index bd3a96a..31a8fda 100644
--- a/drivers/net/mrvl/Makefile
+++ b/drivers/net/mrvl/Makefile
@@ -37,5 +37,6 @@ LDLIBS += -lrte_bus_vdev
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_qos.c
+SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_flow.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 7e00dbd..34b9ef7 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -659,6 +659,10 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 	mrvl_dev_set_link_down(dev);
 	mrvl_flush_rx_queues(dev);
 	mrvl_flush_tx_shadow_queues(dev);
+	if (priv->cls_tbl) {
+		pp2_cls_tbl_deinit(priv->cls_tbl);
+		priv->cls_tbl = NULL;
+	}
 	if (priv->qos_tbl) {
 		pp2_cls_qos_tbl_deinit(priv->qos_tbl);
 		priv->qos_tbl = NULL;
@@ -784,6 +788,9 @@ mrvl_promiscuous_enable(struct rte_eth_dev *dev)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_promisc(priv->ppio, 1);
 	if (ret)
 		RTE_LOG(ERR, PMD, "Failed to enable promiscuous mode\n");
@@ -804,6 +811,9 @@ mrvl_allmulticast_enable(struct rte_eth_dev *dev)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_mc_promisc(priv->ppio, 1);
 	if (ret)
 		RTE_LOG(ERR, PMD, "Failed to enable all-multicast mode\n");
@@ -867,6 +877,9 @@ mrvl_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_remove_mac_addr(priv->ppio,
 				       dev->data->mac_addrs[index].addr_bytes);
 	if (ret) {
@@ -899,6 +912,9 @@ mrvl_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr,
 	char buf[ETHER_ADDR_FMT_SIZE];
 	int ret;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	if (index == 0)
 		/* For setting index 0, mrvl_mac_addr_set() should be used.*/
 		return -1;
@@ -946,6 +962,9 @@ mrvl_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_mac_addr(priv->ppio, mac_addr->addr_bytes);
 	if (ret) {
 		char buf[ETHER_ADDR_FMT_SIZE];
@@ -1227,6 +1246,9 @@ mrvl_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	if (!priv->ppio)
 		return -EPERM;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	return on ? pp2_ppio_add_vlan(priv->ppio, vlan_id) :
 		    pp2_ppio_remove_vlan(priv->ppio, vlan_id);
 }
@@ -1580,6 +1602,9 @@ mrvl_rss_hash_update(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	return mrvl_configure_rss(priv, rss_conf);
 }
 
@@ -1616,6 +1641,39 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * DPDK callback to get rte_flow callbacks.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param filter_type
+ *   Flow filter type.
+ * @param filter_op
+ *   Flow filter operation.
+ * @param arg
+ *   Pointer to pass the flow ops.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
+		     enum rte_filter_type filter_type,
+		     enum rte_filter_op filter_op, void *arg)
+{
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &mrvl_flow_ops;
+		return 0;
+	default:
+		RTE_LOG(WARNING, PMD, "Filter type (%d) not supported\n",
+				filter_type);
+		return -EINVAL;
+	}
+}
+
 static const struct eth_dev_ops mrvl_ops = {
 	.dev_configure = mrvl_dev_configure,
 	.dev_start = mrvl_dev_start,
@@ -1645,6 +1703,7 @@ static const struct eth_dev_ops mrvl_ops = {
 	.tx_queue_release = mrvl_tx_queue_release,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
+	.filter_ctrl = mrvl_eth_filter_ctrl
 };
 
 /**
diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h
index 2cc229e..3a42809 100644
--- a/drivers/net/mrvl/mrvl_ethdev.h
+++ b/drivers/net/mrvl/mrvl_ethdev.h
@@ -8,6 +8,7 @@
 #define _MRVL_ETHDEV_H_
 
 #include <rte_spinlock.h>
+#include <rte_flow_driver.h>
 
 #include <env/mv_autogen_comp_flags.h>
 #include <drivers/mv_pp2.h>
@@ -80,12 +81,21 @@ struct mrvl_priv {
 	uint8_t rss_hf_tcp;
 	uint8_t uc_mc_flushed;
 	uint8_t vlan_flushed;
+	uint8_t isolated;
 
 	struct pp2_ppio_params ppio_params;
 	struct pp2_cls_qos_tbl_params qos_tbl_params;
 	struct pp2_cls_tbl *qos_tbl;
 	uint16_t nb_rx_queues;
+
+	struct pp2_cls_tbl_params cls_tbl_params;
+	struct pp2_cls_tbl *cls_tbl;
+	uint32_t cls_tbl_pattern;
+	LIST_HEAD(mrvl_flows, rte_flow) flows;
+
 	struct pp2_cls_plcr *policer;
 };
 
+/** Flow operations forward declaration. */
+extern const struct rte_flow_ops mrvl_flow_ops;
 #endif /* _MRVL_ETHDEV_H_ */
diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
new file mode 100644
index 0000000..8fd4dbf
--- /dev/null
+++ b/drivers/net/mrvl/mrvl_flow.c
@@ -0,0 +1,2759 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Marvell International Ltd.
+ * Copyright(c) 2018 Semihalf.
+ * All rights reserved.
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include <arpa/inet.h>
+
+#ifdef container_of
+#undef container_of
+#endif
+
+#include "mrvl_ethdev.h"
+#include "mrvl_qos.h"
+#include "env/mv_common.h" /* for BIT() */
+
+/** Number of rules in the classifier table. */
+#define MRVL_CLS_MAX_NUM_RULES 20
+
+/** Size of the classifier key and mask strings. */
+#define MRVL_CLS_STR_SIZE_MAX 40
+
+/** Parsed fields in processed rte_flow_item. */
+enum mrvl_parsed_fields {
+	/* eth flags */
+	F_DMAC =         BIT(0),
+	F_SMAC =         BIT(1),
+	F_TYPE =         BIT(2),
+	/* vlan flags */
+	F_VLAN_ID =      BIT(3),
+	F_VLAN_PRI =     BIT(4),
+	F_VLAN_TCI =     BIT(5), /* not supported by MUSDK yet */
+	/* ip4 flags */
+	F_IP4_TOS =      BIT(6),
+	F_IP4_SIP =      BIT(7),
+	F_IP4_DIP =      BIT(8),
+	F_IP4_PROTO =    BIT(9),
+	/* ip6 flags */
+	F_IP6_TC =       BIT(10), /* not supported by MUSDK yet */
+	F_IP6_SIP =      BIT(11),
+	F_IP6_DIP =      BIT(12),
+	F_IP6_FLOW =     BIT(13),
+	F_IP6_NEXT_HDR = BIT(14),
+	/* tcp flags */
+	F_TCP_SPORT =    BIT(15),
+	F_TCP_DPORT =    BIT(16),
+	/* udp flags */
+	F_UDP_SPORT =    BIT(17),
+	F_UDP_DPORT =    BIT(18),
+};
+
+/** PMD-specific definition of a flow rule handle. */
+struct rte_flow {
+	LIST_ENTRY(rte_flow) next;
+
+	enum mrvl_parsed_fields pattern;
+
+	struct pp2_cls_tbl_rule rule;
+	struct pp2_cls_cos_desc cos;
+	struct pp2_cls_tbl_action action;
+};
+
+static const enum rte_flow_item_type pattern_eth[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan_ip[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4_udp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip_udp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip_udp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_udp[] = {
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+#define MRVL_VLAN_ID_MASK 0x0fff
+#define MRVL_VLAN_PRI_MASK 0x7000
+#define MRVL_IPV4_DSCP_MASK 0xfc
+#define MRVL_IPV4_ADDR_MASK 0xffffffff
+#define MRVL_IPV6_FLOW_MASK 0x0fffff
+
+/**
+ * Given a flow item, return the next non-void one.
+ *
+ * @param items Pointer to the item in the table.
+ * @returns Next non-void item, NULL otherwise.
+ */
+static const struct rte_flow_item *
+mrvl_next_item(const struct rte_flow_item *items)
+{
+	const struct rte_flow_item *item = items;
+
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type != RTE_FLOW_ITEM_TYPE_VOID)
+			return item;
+	}
+
+	return NULL;
+}
+
+/**
+ * Allocate memory for classifier rule key and mask fields.
+ *
+ * @param field Pointer to the classifier rule.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_alloc_key_mask(struct pp2_cls_rule_key_field *field)
+{
+	unsigned int id = rte_socket_id();
+
+	field->key = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
+	if (!field->key)
+		goto out;
+
+	field->mask = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
+	if (!field->mask)
+		goto out_mask;
+
+	return 0;
+out_mask:
+	rte_free(field->key);
+out:
+	field->key = NULL;
+	field->mask = NULL;
+	return -1;
+}
+
+/**
+ * Free memory allocated for classifier rule key and mask fields.
+ *
+ * @param field Pointer to the classifier rule.
+ */
+static void
+mrvl_free_key_mask(struct pp2_cls_rule_key_field *field)
+{
+	rte_free(field->key);
+	rte_free(field->mask);
+	field->key = NULL;
+	field->mask = NULL;
+}
+
+/**
+ * Free memory allocated for all classifier rule key and mask fields.
+ *
+ * @param rule Pointer to the classifier table rule.
+ */
+static void
+mrvl_free_all_key_mask(struct pp2_cls_tbl_rule *rule)
+{
+	int i;
+
+	for (i = 0; i < rule->num_fields; i++)
+		mrvl_free_key_mask(&rule->fields[i]);
+	rule->num_fields = 0;
+}
+
+/**
+ * Initialize rte flow item parsing.
+ *
+ * @param item Pointer to the flow item.
+ * @param spec_ptr Pointer to the specific item pointer.
+ * @param mask_ptr Pointer to the specific item's mask pointer.
+ * @param def_mask Pointer to the default mask.
+ * @param size Size of the flow item.
+ * @param error Pointer to the rte flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+ */
+static int
+mrvl_parse_init(const struct rte_flow_item *item,
+		const void **spec_ptr,
+		const void **mask_ptr,
+		const void *def_mask,
+		unsigned int size,
+		struct rte_flow_error *error)
+{
+	const uint8_t *spec;
+	const uint8_t *mask;
+	const uint8_t *last;
+	uint8_t zeros[size];
+
+	memset(zeros, 0, size);
+
+	if (item == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				   "NULL item\n");
+		return -rte_errno;
+	}
+
+	if ((item->last != NULL || item->mask != NULL) && item->spec == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM, item,
+				   "Mask or last is set without spec\n");
+		return -rte_errno;
+	}
+
+	/*
+	 * If "mask" is not set, default mask is used,
+	 * but if default mask is NULL, "mask" should be set.
+	 */
+	if (item->mask == NULL) {
+		if (def_mask == NULL) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Mask should be specified\n");
+			return -rte_errno;
+		}
+
+		mask = (const uint8_t *)def_mask;
+	} else {
+		mask = (const uint8_t *)item->mask;
+	}
+
+	spec = (const uint8_t *)item->spec;
+	last = (const uint8_t *)item->last;
+
+	if (spec == NULL) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Spec should be specified\n");
+		return -rte_errno;
+	}
+
+	/*
+	 * If field values in "last" are either 0 or equal to the corresponding
+	 * values in "spec" then they are ignored.
+	 */
+	if (last != NULL &&
+	    memcmp(last, zeros, size) != 0 &&
+	    memcmp(last, spec, size) != 0) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				   "Ranging is not supported\n");
+		return -rte_errno;
+	}
+
+	*spec_ptr = spec;
+	*mask_ptr = mask;
+
+	return 0;
+}
+
+/**
+ * Parse the eth flow item.
+ *
+ * This will create classifier rule that matches either destination or source
+ * mac.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) mac address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_mac(const struct rte_flow_item_eth *spec,
+	       const struct rte_flow_item_eth *mask,
+	       int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	const uint8_t *k, *m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	if (parse_dst) {
+		k = spec->dst.addr_bytes;
+		m = mask->dst.addr_bytes;
+
+		flow->pattern |= F_DMAC;
+	} else {
+		k = spec->src.addr_bytes;
+		m = mask->src.addr_bytes;
+
+		flow->pattern |= F_SMAC;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 6;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX,
+		 "%02x:%02x:%02x:%02x:%02x:%02x",
+		 k[0], k[1], k[2], k[3], k[4], k[5]);
+
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX,
+		 "%02x:%02x:%02x:%02x:%02x:%02x",
+		 m[0], m[1], m[2], m[3], m[4], m[5]);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the eth flow item destination mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_dmac(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask,
+		struct rte_flow *flow)
+{
+	return mrvl_parse_mac(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing the eth flow item source mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_smac(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask,
+		struct rte_flow *flow)
+{
+	return mrvl_parse_mac(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the ether type field of the eth flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_type(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask __rte_unused,
+		struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	k = rte_be_to_cpu_16(spec->type);
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_TYPE;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the vid field of the vlan rte flow item.
+ *
+ * This will create classifier rule that matches vid.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
+		   const struct rte_flow_item_vlan *mask __rte_unused,
+		   struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_VLAN_ID;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the pri field of the vlan rte flow item.
+ *
+ * This will create classifier rule that matches pri.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
+		    const struct rte_flow_item_vlan *mask __rte_unused,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_VLAN_PRI;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the dscp field of the ipv4 rte flow item.
+ *
+ * This will create classifier rule that matches dscp field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_dscp(const struct rte_flow_item_ipv4 *spec,
+		    const struct rte_flow_item_ipv4 *mask,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k, m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	k = (spec->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2;
+	m = (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m);
+
+	flow->pattern |= F_IP4_TOS;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse either source or destination ip addresses of the ipv4 flow item.
+ *
+ * This will create classifier rule that matches either destination
+ * or source ip field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) ip address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_addr(const struct rte_flow_item_ipv4 *spec,
+		    const struct rte_flow_item_ipv4 *mask,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	struct in_addr k;
+	uint32_t m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	memset(&k, 0, sizeof(k));
+	if (parse_dst) {
+		k.s_addr = spec->hdr.dst_addr;
+		m = rte_be_to_cpu_32(mask->hdr.dst_addr);
+
+		flow->pattern |= F_IP4_DIP;
+	} else {
+		k.s_addr = spec->hdr.src_addr;
+		m = rte_be_to_cpu_32(mask->hdr.src_addr);
+
+		flow->pattern |= F_IP4_SIP;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 4;
+
+	inet_ntop(AF_INET, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "0x%x", m);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing destination ip of the ipv4 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip4_dip(const struct rte_flow_item_ipv4 *spec,
+		   const struct rte_flow_item_ipv4 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip4_addr(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing source ip of the ipv4 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip4_sip(const struct rte_flow_item_ipv4 *spec,
+		   const struct rte_flow_item_ipv4 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip4_addr(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the proto field of the ipv4 rte flow item.
+ *
+ * This will create classifier rule that matches proto field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_proto(const struct rte_flow_item_ipv4 *spec,
+		     const struct rte_flow_item_ipv4 *mask __rte_unused,
+		     struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k = spec->hdr.next_proto_id;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_IP4_PROTO;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse either source or destination ip addresses of the ipv6 rte flow item.
+ *
+ * This will create classifier rule that matches either destination
+ * or source ip field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) ip address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_addr(const struct rte_flow_item_ipv6 *spec,
+		    const struct rte_flow_item_ipv6 *mask,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	int size = sizeof(spec->hdr.dst_addr);
+	struct in6_addr k, m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	memset(&k, 0, sizeof(k));
+	if (parse_dst) {
+		memcpy(k.s6_addr, spec->hdr.dst_addr, size);
+		memcpy(m.s6_addr, mask->hdr.dst_addr, size);
+
+		flow->pattern |= F_IP6_DIP;
+	} else {
+		memcpy(k.s6_addr, spec->hdr.src_addr, size);
+		memcpy(m.s6_addr, mask->hdr.src_addr, size);
+
+		flow->pattern |= F_IP6_SIP;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 16;
+
+	inet_ntop(AF_INET6, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX);
+	inet_ntop(AF_INET6, &m, (char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing destination ip of the ipv6 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip6_dip(const struct rte_flow_item_ipv6 *spec,
+		   const struct rte_flow_item_ipv6 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip6_addr(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing source ip of the ipv6 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip6_sip(const struct rte_flow_item_ipv6 *spec,
+		   const struct rte_flow_item_ipv6 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip6_addr(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the flow label of the ipv6 flow item.
+ *
+ * This will create classifier rule that matches flow label field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_flow(const struct rte_flow_item_ipv6 *spec,
+		    const struct rte_flow_item_ipv6 *mask,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint32_t k = rte_be_to_cpu_32(spec->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK,
+		 m = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 3;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m);
+
+	flow->pattern |= F_IP6_FLOW;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the next header of the ipv6 flow item.
+ *
+ * This will create classifier rule that matches next header field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_next_hdr(const struct rte_flow_item_ipv6 *spec,
+			const struct rte_flow_item_ipv6 *mask __rte_unused,
+			struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k = spec->hdr.proto;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_IP6_NEXT_HDR;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse destination or source port of the tcp flow item.
+ *
+ * This will create classifier rule that matches either destination or
+ * source tcp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) port.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_tcp_port(const struct rte_flow_item_tcp *spec,
+		    const struct rte_flow_item_tcp *mask __rte_unused,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	if (parse_dst) {
+		k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+		flow->pattern |= F_TCP_DPORT;
+	} else {
+		k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+		flow->pattern |= F_TCP_SPORT;
+	}
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the tcp source port of the tcp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_sport(const struct rte_flow_item_tcp *spec,
+		     const struct rte_flow_item_tcp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_tcp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the tcp destination port of the tcp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_dport(const struct rte_flow_item_tcp *spec,
+		     const struct rte_flow_item_tcp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_tcp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse destination or source port of the udp flow item.
+ *
+ * This will create classifier rule that matches either destination or
+ * source udp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination (1) or source (0) port.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_udp_port(const struct rte_flow_item_udp *spec,
+		    const struct rte_flow_item_udp *mask __rte_unused,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	if (parse_dst) {
+		k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+		flow->pattern |= F_UDP_DPORT;
+	} else {
+		k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+		flow->pattern |= F_UDP_SPORT;
+	}
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the udp source port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_sport(const struct rte_flow_item_udp *spec,
+		     const struct rte_flow_item_udp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_udp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the udp destination port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_dport(const struct rte_flow_item_udp *spec,
+		     const struct rte_flow_item_udp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_udp_port(spec, mask, 1, flow);
+}
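The per-field parsers above back the flow patterns an application can request through the generic flow API. As a usage illustration (assuming the standard testpmd `flow` command syntax; the port id, addresses, port number and queue index are made up), a rule matching the eth/ipv4/udp pattern supported by this series could be created as:

```shell
testpmd> flow create 0 ingress pattern eth src is 00:50:43:02:10:aa / ipv4 / udp dst is 4789 / end actions queue index 2 / end
```

The eth src, ipv4 and udp dst items map onto mrvl_parse_smac(), mrvl_parse_ip4() and mrvl_parse_udp_dport() respectively.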
+
+/**
+ * Parse eth flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_eth *spec = NULL, *mask = NULL;
+	struct ether_addr zero;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_eth_mask,
+			      sizeof(struct rte_flow_item_eth), error);
+	if (ret)
+		return ret;
+
+	memset(&zero, 0, sizeof(zero));
+
+	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+		ret = mrvl_parse_dmac(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+		ret = mrvl_parse_smac(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->type) {
+		RTE_LOG(WARNING, PMD, "eth type mask is ignored\n");
+		ret = mrvl_parse_type(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse vlan flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_vlan(const struct rte_flow_item *item,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_vlan *spec = NULL, *mask = NULL;
+	uint16_t m;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_vlan_mask,
+			      sizeof(struct rte_flow_item_vlan), error);
+	if (ret)
+		return ret;
+
+	if (mask->tpid) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	m = rte_be_to_cpu_16(mask->tci);
+	if (m & MRVL_VLAN_ID_MASK) {
+		RTE_LOG(WARNING, PMD, "vlan id mask is ignored\n");
+		ret = mrvl_parse_vlan_id(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (m & MRVL_VLAN_PRI_MASK) {
+		RTE_LOG(WARNING, PMD, "vlan pri mask is ignored\n");
+		ret = mrvl_parse_vlan_pri(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse ipv4 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip4(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_ipv4_mask,
+			      sizeof(struct rte_flow_item_ipv4), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.version_ihl ||
+	    mask->hdr.total_length ||
+	    mask->hdr.packet_id ||
+	    mask->hdr.fragment_offset ||
+	    mask->hdr.time_to_live ||
+	    mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) {
+		ret = mrvl_parse_ip4_dscp(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.src_addr) {
+		ret = mrvl_parse_ip4_sip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_addr) {
+		ret = mrvl_parse_ip4_dip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.next_proto_id) {
+		RTE_LOG(WARNING, PMD, "next proto id mask is ignored\n");
+		ret = mrvl_parse_ip4_proto(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse ipv6 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip6(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *spec = NULL, *mask = NULL;
+	struct ipv6_hdr zero;
+	uint32_t flow_mask;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec,
+			      (const void **)&mask,
+			      &rte_flow_item_ipv6_mask,
+			      sizeof(struct rte_flow_item_ipv6),
+			      error);
+	if (ret)
+		return ret;
+
+	memset(&zero, 0, sizeof(zero));
+
+	if (mask->hdr.payload_len ||
+	    mask->hdr.hop_limits) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (memcmp(mask->hdr.src_addr,
+		   zero.src_addr, sizeof(mask->hdr.src_addr))) {
+		ret = mrvl_parse_ip6_sip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (memcmp(mask->hdr.dst_addr,
+		   zero.dst_addr, sizeof(mask->hdr.dst_addr))) {
+		ret = mrvl_parse_ip6_dip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	flow_mask = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+	if (flow_mask) {
+		ret = mrvl_parse_ip6_flow(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.proto) {
+		RTE_LOG(WARNING, PMD, "next header mask is ignored\n");
+		ret = mrvl_parse_ip6_next_hdr(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse tcp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_tcp(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_tcp *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_tcp_mask,
+			      sizeof(struct rte_flow_item_tcp), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.sent_seq ||
+	    mask->hdr.recv_ack ||
+	    mask->hdr.data_off ||
+	    mask->hdr.tcp_flags ||
+	    mask->hdr.rx_win ||
+	    mask->hdr.cksum ||
+	    mask->hdr.tcp_urp) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.src_port) {
+		RTE_LOG(WARNING, PMD, "tcp sport mask is ignored\n");
+		ret = mrvl_parse_tcp_sport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_port) {
+		RTE_LOG(WARNING, PMD, "tcp dport mask is ignored\n");
+		ret = mrvl_parse_tcp_dport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse udp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_udp(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_udp_mask,
+			      sizeof(struct rte_flow_item_udp), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.dgram_len ||
+	    mask->hdr.dgram_cksum) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.src_port) {
+		RTE_LOG(WARNING, PMD, "udp sport mask is ignored\n");
+		ret = mrvl_parse_udp_sport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_port) {
+		RTE_LOG(WARNING, PMD, "udp dport mask is ignored\n");
+		ret = mrvl_parse_udp_dport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse flow pattern composed of the eth item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_eth(pattern, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and vlan items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_vlan(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip4_ip6(const struct rte_flow_item pattern[],
+				    struct rte_flow *flow,
+				    struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	ret = mrvl_parse_vlan(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip4(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip6(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip4_ip6(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ip4 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip4_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip6_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_vlan(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip4_ip6(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_vlan(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip4(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip6_tcp_udp(const struct rte_flow_item pattern[],
+				    struct rte_flow *flow,
+				    struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6_tcp(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6_udp(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ip4/ip6 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the ip4/ip6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = ip6 ? mrvl_parse_ip6(item, flow, error) :
+		    mrvl_parse_ip4(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_tcp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4/ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = ip6 ? mrvl_parse_ip6(item, flow, error) :
+		    mrvl_parse_ip4(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the tcp item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_tcp(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_tcp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the udp item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_udp(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Structure used to map specific flow pattern to the pattern parse callback
+ * which will iterate over each pattern item and extract relevant data.
+ */
+static const struct {
+	const enum rte_flow_item_type *pattern;
+	int (*parse)(const struct rte_flow_item pattern[],
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
+} mrvl_patterns[] = {
+	{ pattern_eth, mrvl_parse_pattern_eth },
+	{ pattern_eth_vlan, mrvl_parse_pattern_eth_vlan },
+	{ pattern_eth_vlan_ip, mrvl_parse_pattern_eth_vlan_ip4 },
+	{ pattern_eth_vlan_ip6, mrvl_parse_pattern_eth_vlan_ip6 },
+	{ pattern_eth_ip4, mrvl_parse_pattern_eth_ip4 },
+	{ pattern_eth_ip4_tcp, mrvl_parse_pattern_eth_ip4_tcp },
+	{ pattern_eth_ip4_udp, mrvl_parse_pattern_eth_ip4_udp },
+	{ pattern_eth_ip6, mrvl_parse_pattern_eth_ip6 },
+	{ pattern_eth_ip6_tcp, mrvl_parse_pattern_eth_ip6_tcp },
+	{ pattern_eth_ip6_udp, mrvl_parse_pattern_eth_ip6_udp },
+	{ pattern_vlan, mrvl_parse_pattern_vlan },
+	{ pattern_vlan_ip, mrvl_parse_pattern_vlan_ip4 },
+	{ pattern_vlan_ip_tcp, mrvl_parse_pattern_vlan_ip_tcp },
+	{ pattern_vlan_ip_udp, mrvl_parse_pattern_vlan_ip_udp },
+	{ pattern_vlan_ip6, mrvl_parse_pattern_vlan_ip6 },
+	{ pattern_vlan_ip6_tcp, mrvl_parse_pattern_vlan_ip6_tcp },
+	{ pattern_vlan_ip6_udp, mrvl_parse_pattern_vlan_ip6_udp },
+	{ pattern_ip, mrvl_parse_pattern_ip4 },
+	{ pattern_ip_tcp, mrvl_parse_pattern_ip4_tcp },
+	{ pattern_ip_udp, mrvl_parse_pattern_ip4_udp },
+	{ pattern_ip6, mrvl_parse_pattern_ip6 },
+	{ pattern_ip6_tcp, mrvl_parse_pattern_ip6_tcp },
+	{ pattern_ip6_udp, mrvl_parse_pattern_ip6_udp },
+	{ pattern_tcp, mrvl_parse_pattern_tcp },
+	{ pattern_udp, mrvl_parse_pattern_udp }
+};
+
+/**
+ * Check whether provided pattern matches any of the supported ones.
+ *
+ * @param type_pattern Pointer to the pattern type.
+ * @param item_pattern Pointer to the flow pattern.
+ * @returns 1 if the pattern matches, 0 otherwise.
+ */
+static int
+mrvl_patterns_match(const enum rte_flow_item_type *type_pattern,
+		    const struct rte_flow_item *item_pattern)
+{
+	const enum rte_flow_item_type *type = type_pattern;
+	const struct rte_flow_item *item = item_pattern;
+
+	for (;;) {
+		if (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+			item++;
+			continue;
+		}
+
+		if (*type == RTE_FLOW_ITEM_TYPE_END ||
+		    item->type == RTE_FLOW_ITEM_TYPE_END)
+			break;
+
+		if (*type != item->type)
+			break;
+
+		item++;
+		type++;
+	}
+
+	return *type == item->type;
+}
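The matching walk in mrvl_patterns_match() can be exercised standalone. Below is a minimal sketch using a hypothetical `pat_match` helper and a simplified item-type enum (not the driver's actual rte_flow types), showing how VOID items in the flow pattern are skipped and how both sequences must reach END together for a match:

```c
#include <assert.h>

/* Simplified stand-ins for rte_flow_item_type values (hypothetical). */
enum item_type { T_END, T_VOID, T_ETH, T_IP4, T_TCP };

/* Walk both sequences like mrvl_patterns_match(): skip VOID entries in
 * the flow pattern, stop at the first mismatch or at either END, and
 * report a match only when both sequences ended at the same position. */
static int pat_match(const enum item_type *type, const enum item_type *item)
{
	for (;;) {
		if (*item == T_VOID) {
			item++;
			continue;
		}
		if (*type == T_END || *item == T_END)
			break;
		if (*type != *item)
			break;
		item++;
		type++;
	}
	return *type == *item;
}
```

Note that a flow pattern shorter than the candidate type pattern fails the final `*type == *item` check, so prefixes do not match.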
+
+/**
+ * Parse flow attribute.
+ *
+ * This will check whether the provided attribute's flags are supported.
+ *
+ * @param priv Unused
+ * @param attr Pointer to the flow attribute.
+ * @param flow Unused
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_attr(struct mrvl_priv *priv __rte_unused,
+		     const struct rte_flow_attr *attr,
+		     struct rte_flow *flow __rte_unused,
+		     struct rte_flow_error *error)
+{
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute");
+		return -rte_errno;
+	}
+
+	if (attr->group) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP, NULL,
+				   "Groups are not supported");
+		return -rte_errno;
+	}
+	if (attr->priority) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL,
+				   "Priorities are not supported");
+		return -rte_errno;
+	}
+	if (!attr->ingress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, NULL,
+				   "Only ingress is supported");
+		return -rte_errno;
+	}
+	if (attr->egress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+				   "Egress is not supported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse flow pattern.
+ *
+ * A specific classifier rule will be created as well.
+ *
+ * @param priv Unused
+ * @param pattern Pointer to the flow pattern.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_pattern(struct mrvl_priv *priv __rte_unused,
+			const struct rte_flow_item pattern[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < RTE_DIM(mrvl_patterns); i++) {
+		if (!mrvl_patterns_match(mrvl_patterns[i].pattern, pattern))
+			continue;
+
+		ret = mrvl_patterns[i].parse(pattern, flow, error);
+		if (ret)
+			mrvl_free_all_key_mask(&flow->rule);
+
+		return ret;
+	}
+
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			   "Unsupported pattern");
+
+	return -rte_errno;
+}
+
+/**
+ * Parse flow actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param actions Pointer to the action table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_actions(struct mrvl_priv *priv,
+			const struct rte_flow_action actions[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_action *action = actions;
+	int specified = 0;
+
+	for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+		if (action->type == RTE_FLOW_ACTION_TYPE_VOID)
+			continue;
+
+		if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			flow->cos.ppio = priv->ppio;
+			flow->cos.tc = 0;
+			flow->action.type = PP2_CLS_TBL_ACT_DROP;
+			flow->action.cos = &flow->cos;
+			specified++;
+		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			const struct rte_flow_action_queue *q =
+				(const struct rte_flow_action_queue *)
+				action->conf;
+
+			if (q->index >= priv->nb_rx_queues) {
+				rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ACTION,
+						NULL,
+						"Queue index out of range");
+				return -rte_errno;
+			}
+
+			if (priv->rxq_map[q->index].tc == MRVL_UNKNOWN_TC) {
+				/*
+				 * Unknown TC mapping; this queue cannot be
+				 * mapped to a correct hardware TC and in-queue.
+				 */
+				RTE_LOG(ERR, PMD,
+					"Unknown TC mapping for queue %hu eth%hhu\n",
+					q->index, priv->ppio_id);
+
+				rte_flow_error_set(error, EFAULT,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						NULL, NULL);
+				return -rte_errno;
+			}
+
+			RTE_LOG(DEBUG, PMD,
+				"Action: Assign packets to queue %d, tc:%d, q:%d\n",
+				q->index, priv->rxq_map[q->index].tc,
+				priv->rxq_map[q->index].inq);
+
+			flow->cos.ppio = priv->ppio;
+			flow->cos.tc = priv->rxq_map[q->index].tc;
+			flow->action.type = PP2_CLS_TBL_ACT_DONE;
+			flow->action.cos = &flow->cos;
+			specified++;
+		} else {
+			rte_flow_error_set(error, ENOTSUP,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "Action not supported");
+			return -rte_errno;
+		}
+	}
+
+	if (!specified) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Action not specified");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse flow attribute, pattern and actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse(struct mrvl_priv *priv, const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = mrvl_flow_parse_attr(priv, attr, flow, error);
+	if (ret)
+		return ret;
+
+	ret = mrvl_flow_parse_pattern(priv, pattern, flow, error);
+	if (ret)
+		return ret;
+
+	return mrvl_flow_parse_actions(priv, actions, flow, error);
+}
+
+static inline enum pp2_cls_tbl_type
+mrvl_engine_type(const struct rte_flow *flow)
+{
+	int i, size = 0;
+
+	for (i = 0; i < flow->rule.num_fields; i++)
+		size += flow->rule.fields[i].size;
+
+	/*
+	 * For maskable engine type the key size must be up to 8 bytes.
+	 * For keys with size bigger than 8 bytes, engine type must
+	 * be set to exact match.
+	 */
+	if (size > 8)
+		return PP2_CLS_TBL_EXACT_MATCH;
+
+	return PP2_CLS_TBL_MASKABLE;
+}
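The engine selection in mrvl_engine_type() boils down to summing the sizes of all key fields and comparing against the 8-byte maskable-engine limit. A minimal standalone sketch of that decision (hypothetical `pick_engine` helper and enum, not the MUSDK pp2_cls_tbl_type API) would be:

```c
#include <assert.h>

/* Hypothetical, simplified analogue of mrvl_engine_type(): sum the
 * sizes of all classifier key fields and pick the search engine.
 * Keys up to 8 bytes fit the maskable engine; anything larger must
 * fall back to exact match. */
enum engine { ENG_MASKABLE, ENG_EXACT_MATCH };

static enum engine pick_engine(const int *field_sizes, int num_fields)
{
	int i, size = 0;

	for (i = 0; i < num_fields; i++)
		size += field_sizes[i];

	return size > 8 ? ENG_EXACT_MATCH : ENG_MASKABLE;
}
```

For example, a key built from TCP source and destination ports (2 + 2 bytes) stays maskable, while a single IPv6 source address (16 bytes) already forces exact match.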
+
+static int
+mrvl_create_cls_table(struct rte_eth_dev *dev, struct rte_flow *first_flow)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_cls_tbl_key *key = &priv->cls_tbl_params.key;
+	int ret;
+
+	if (priv->cls_tbl) {
+		pp2_cls_tbl_deinit(priv->cls_tbl);
+		priv->cls_tbl = NULL;
+	}
+
+	memset(&priv->cls_tbl_params, 0, sizeof(priv->cls_tbl_params));
+
+	priv->cls_tbl_params.type = mrvl_engine_type(first_flow);
+	RTE_LOG(INFO, PMD, "Setting cls search engine type to %s\n",
+			priv->cls_tbl_params.type == PP2_CLS_TBL_EXACT_MATCH ?
+			"exact" : "maskable");
+	priv->cls_tbl_params.max_num_rules = MRVL_CLS_MAX_NUM_RULES;
+	priv->cls_tbl_params.default_act.type = PP2_CLS_TBL_ACT_DONE;
+	priv->cls_tbl_params.default_act.cos = &first_flow->cos;
+
+	if (first_flow->pattern & F_DMAC) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_DA;
+		key->key_size += 6;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_SMAC) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_SA;
+		key->key_size += 6;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TYPE) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_TYPE;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_VLAN_ID) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN;
+		key->proto_field[key->num_fields].field.vlan = MV_NET_VLAN_F_ID;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_VLAN_PRI) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN;
+		key->proto_field[key->num_fields].field.vlan =
+			MV_NET_VLAN_F_PRI;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_TOS) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_TOS;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_SIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_SA;
+		key->key_size += 4;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_DIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_DA;
+		key->key_size += 4;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_PROTO) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 =
+			MV_NET_IP4_F_PROTO;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_SIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_SA;
+		key->key_size += 16;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_DIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_DA;
+		key->key_size += 16;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_FLOW) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 =
+			MV_NET_IP6_F_FLOW;
+		key->key_size += 3;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_NEXT_HDR) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 =
+			MV_NET_IP6_F_NEXT_HDR;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TCP_SPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP;
+		key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_SP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TCP_DPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP;
+		key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_DP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_UDP_SPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+		key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_SP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_UDP_DPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+		key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_DP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	ret = pp2_cls_tbl_init(&priv->cls_tbl_params, &priv->cls_tbl);
+	if (!ret)
+		priv->cls_tbl_pattern = first_flow->pattern;
+
+	return ret;
+}
+
+/**
+ * Check whether new flow can be added to the table
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the new flow.
+ * @return 1 in case flow can be added, 0 otherwise.
+ */
+static inline int
+mrvl_flow_can_be_added(struct mrvl_priv *priv, const struct rte_flow *flow)
+{
+	return flow->pattern == priv->cls_tbl_pattern &&
+	       mrvl_engine_type(flow) == priv->cls_tbl_params.type;
+}
+
+/**
+ * DPDK flow create callback called when flow is to be created.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns Pointer to the created flow in case of success, NULL otherwise.
+ */
+static struct rte_flow *
+mrvl_flow_create(struct rte_eth_dev *dev,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *flow, *first;
+	int ret;
+
+	if (!dev->data->dev_started) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Port must be started first\n");
+		return NULL;
+	}
+
+	flow = rte_zmalloc_socket(NULL, sizeof(*flow), 0, rte_socket_id());
+	if (!flow)
+		return NULL;
+
+	ret = mrvl_flow_parse(priv, attr, pattern, actions, flow, error);
+	if (ret)
+		goto out;
+
+	/*
+	 * Four cases here:
+	 *
+	 * 1. If the table does not exist - create one.
+	 * 2. If the table exists and is empty but the new flow does not
+	 *    match its format - recreate the table.
+	 * 3. If the table is not empty and the new flow matches the table
+	 *    format - add it.
+	 * 4. Otherwise the flow cannot be added.
+	 */
+	first = LIST_FIRST(&priv->flows);
+	if (!priv->cls_tbl) {
+		ret = mrvl_create_cls_table(dev, flow);
+	} else if (!first && !mrvl_flow_can_be_added(priv, flow)) {
+		ret = mrvl_create_cls_table(dev, flow);
+	} else if (mrvl_flow_can_be_added(priv, flow)) {
+		ret = 0;
+	} else {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Pattern does not match cls table format\n");
+		goto out;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to create cls table\n");
+		goto out;
+	}
+
+	ret = pp2_cls_tbl_add_rule(priv->cls_tbl, &flow->rule, &flow->action);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to add rule\n");
+		goto out;
+	}
+
+	LIST_INSERT_HEAD(&priv->flows, flow, next);
+
+	return flow;
+out:
+	rte_free(flow);
+	return NULL;
+}
+
+/**
+ * Remove classifier rule associated with given flow.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_remove(struct mrvl_priv *priv, struct rte_flow *flow,
+		 struct rte_flow_error *error)
+{
+	int ret;
+
+	if (!priv->cls_tbl) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Classifier table not initialized");
+		return -rte_errno;
+	}
+
+	ret = pp2_cls_tbl_remove_rule(priv->cls_tbl, &flow->rule);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to remove rule");
+		return -rte_errno;
+	}
+
+	mrvl_free_all_key_mask(&flow->rule);
+
+	return 0;
+}
+
+/**
+ * DPDK flow destroy callback called when flow is to be removed.
+ *
+ * @param dev Pointer to the device.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *f;
+	int ret;
+
+	LIST_FOREACH(f, &priv->flows, next) {
+		if (f == flow)
+			break;
+	}
+
+	if (!f) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Rule was not found");
+		return -rte_errno;
+	}
+
+	LIST_REMOVE(f, next);
+
+	ret = mrvl_flow_remove(priv, flow, error);
+	if (ret)
+		return ret;
+
+	rte_free(flow);
+
+	return 0;
+}
+
+/**
+ * DPDK flow callback called to verify given attribute, pattern and actions.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct rte_flow *flow;
+
+	flow = mrvl_flow_create(dev, attr, pattern, actions, error);
+	if (!flow)
+		return -rte_errno;
+
+	mrvl_flow_destroy(dev, flow, error);
+
+	return 0;
+}
+
+/**
+ * DPDK flow flush callback called when flows are to be flushed.
+ *
+ * @param dev Pointer to the device.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	while (!LIST_EMPTY(&priv->flows)) {
+		struct rte_flow *flow = LIST_FIRST(&priv->flows);
+		int ret = mrvl_flow_remove(priv, flow, error);
+		if (ret)
+			return ret;
+
+		LIST_REMOVE(flow, next);
+		rte_free(flow);
+	}
+
+	return 0;
+}
+
+/**
+ * DPDK flow isolate callback called to isolate port.
+ *
+ * @param dev Pointer to the device.
+ * @param enable Pass 0/1 to disable/enable port isolation.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_isolate(struct rte_eth_dev *dev, int enable,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (dev->data->dev_started) {
+		rte_flow_error_set(error, EBUSY,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Port must be stopped first\n");
+		return -rte_errno;
+	}
+
+	priv->isolated = enable;
+
+	return 0;
+}
+
+const struct rte_flow_ops mrvl_flow_ops = {
+	.validate = mrvl_flow_validate,
+	.create = mrvl_flow_create,
+	.destroy = mrvl_flow_destroy,
+	.flush = mrvl_flow_flush,
+	.isolate = mrvl_flow_isolate
+};
-- 
2.7.4


* [PATCH v2 6/8] net/mrvl: add extended statistics
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                     ` (4 preceding siblings ...)
  2018-03-12  8:42   ` [PATCH v2 5/8] net/mrvl: add classifier support Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-14 17:21     ` Ferruh Yigit
  2018-03-12  8:42   ` [PATCH v2 7/8] net/mrvl: add Rx flow control Tomasz Duszynski
                     ` (2 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add extended statistics implementation.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/features/mrvl.ini |   1 +
 doc/guides/nics/mrvl.rst          |   1 +
 drivers/net/mrvl/mrvl_ethdev.c    | 205 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 206 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
index 00d9621..120fd4d 100644
--- a/doc/guides/nics/features/mrvl.ini
+++ b/doc/guides/nics/features/mrvl.ini
@@ -19,5 +19,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+Extended stats       = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 9230d5e..7678265 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -70,6 +70,7 @@ Features of the MRVL PMD are:
 - L4 checksum offload
 - Packet type parsing
 - Basic stats
+- Extended stats
 - QoS
 
 
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 34b9ef7..cbe3f4d 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -145,6 +145,32 @@ static inline void mrvl_free_sent_buffers(struct pp2_ppio *ppio,
 			struct pp2_hif *hif, unsigned int core_id,
 			struct mrvl_shadow_txq *sq, int qid, int force);
 
+#define MRVL_XSTATS_TBL_ENTRY(name) { \
+	#name, offsetof(struct pp2_ppio_statistics, name),	\
+	sizeof(((struct pp2_ppio_statistics *)0)->name)		\
+}
+
+/* Table with xstats data */
+static struct {
+	const char *name;
+	unsigned int offset;
+	unsigned int size;
+} mrvl_xstats_tbl[] = {
+	MRVL_XSTATS_TBL_ENTRY(rx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(rx_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_errors),
+	MRVL_XSTATS_TBL_ENTRY(rx_fullq_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_bm_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_early_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_fifo_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_cls_dropped),
+	MRVL_XSTATS_TBL_ENTRY(tx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(tx_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_errors)
+};
+
 static inline int
 mrvl_get_bpool_size(int pp2_id, int pool_id)
 {
@@ -1110,6 +1136,90 @@ mrvl_stats_reset(struct rte_eth_dev *dev)
 }
 
 /**
+ * DPDK callback to get extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param stats
+ *   Pointer to xstats table.
+ * @param n
+ *   Number of entries in xstats table.
+ * @return
+ *   Negative value on error, number of read xstats otherwise.
+ */
+static int
+mrvl_xstats_get(struct rte_eth_dev *dev,
+		struct rte_eth_xstat *stats, unsigned int n)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_ppio_statistics ppio_stats;
+	unsigned int i;
+
+	if (!stats)
+		return 0;
+
+	pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0);
+	for (i = 0; i < n && i < RTE_DIM(mrvl_xstats_tbl); i++) {
+		uint64_t val;
+
+		if (mrvl_xstats_tbl[i].size == sizeof(uint32_t))
+			val = *(uint32_t *)((uint8_t *)&ppio_stats +
+					    mrvl_xstats_tbl[i].offset);
+		else if (mrvl_xstats_tbl[i].size == sizeof(uint64_t))
+			val = *(uint64_t *)((uint8_t *)&ppio_stats +
+					    mrvl_xstats_tbl[i].offset);
+		else
+			return -EINVAL;
+
+		stats[i].id = i;
+		stats[i].value = val;
+	}
+
+	return i;
+}
+
+/**
+ * DPDK callback to reset extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */
+static void
+mrvl_xstats_reset(struct rte_eth_dev *dev)
+{
+	mrvl_stats_reset(dev);
+}
+
+/**
+ * DPDK callback to get extended statistics names.
+ *
+ * @param dev (unused)
+ *   Pointer to Ethernet device structure.
+ * @param xstats_names
+ *   Pointer to xstats names table.
+ * @param size
+ *   Size of the xstats names table.
+ * @return
+ *   Number of names read.
+ */
+static int
+mrvl_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+		      struct rte_eth_xstat_name *xstats_names,
+		      unsigned int size)
+{
+	unsigned int i;
+
+	if (!xstats_names)
+		return RTE_DIM(mrvl_xstats_tbl);
+
+	for (i = 0; i < size && i < RTE_DIM(mrvl_xstats_tbl); i++)
+		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE, "%s",
+			 mrvl_xstats_tbl[i].name);
+
+	return i;
+}
+
+/**
  * DPDK callback to get information about the device.
  *
  * @param dev
@@ -1674,6 +1784,94 @@ mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
 	}
 }
 
+/**
+ * DPDK callback to get xstats by id.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param ids
+ *   Pointer to the ids table.
+ * @param values
+ *   Pointer to the values table.
+ * @param n
+ *   Values table size.
+ * @returns
+ *   Number of read values, negative value otherwise.
+ */
+static int
+mrvl_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+		      uint64_t *values, unsigned int n)
+{
+	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
+	uint64_t vals[n];
+	int ret;
+
+	if (!ids) {
+		struct rte_eth_xstat xstats[num];
+		int j;
+
+		ret = mrvl_xstats_get(dev, xstats, num);
+		for (j = 0; j < ret; i++)
+			values[j] = xstats[j].value;
+
+		return ret;
+	}
+
+	ret = mrvl_xstats_get_by_id(dev, NULL, vals, n);
+	if (ret < 0)
+		return ret;
+
+	for (i = 0; i < n; i++) {
+		if (ids[i] >= num) {
+			RTE_LOG(ERR, PMD, "id value is not valid\n");
+			return -1;
+		}
+
+		values[i] = vals[ids[i]];
+	}
+
+	return n;
+}
+
+/**
+ * DPDK callback to get xstats names by ids.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param xstats_names
+ *   Pointer to table with xstats names.
+ * @param ids
+ *   Pointer to table with ids.
+ * @param size
+ *   Xstats names table size.
+ * @returns
+ *   Number of names read, negative value otherwise.
+ */
+static int
+mrvl_xstats_get_names_by_id(struct rte_eth_dev *dev,
+			    struct rte_eth_xstat_name *xstats_names,
+			    const uint64_t *ids, unsigned int size)
+{
+	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
+	struct rte_eth_xstat_name names[num];
+
+	if (!ids)
+		return mrvl_xstats_get_names(dev, xstats_names, size);
+
+	mrvl_xstats_get_names(dev, names, num);
+	for (i = 0; i < size; i++) {
+		if (ids[i] >= num) {
+			RTE_LOG(ERR, PMD, "id value is not valid\n");
+			return -1;
+		}
+
+		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE,
+			 "%s", names[ids[i]].name);
+	}
+
+	return size;
+}
+
 static const struct eth_dev_ops mrvl_ops = {
 	.dev_configure = mrvl_dev_configure,
 	.dev_start = mrvl_dev_start,
@@ -1692,6 +1890,9 @@ static const struct eth_dev_ops mrvl_ops = {
 	.mtu_set = mrvl_mtu_set,
 	.stats_get = mrvl_stats_get,
 	.stats_reset = mrvl_stats_reset,
+	.xstats_get = mrvl_xstats_get,
+	.xstats_reset = mrvl_xstats_reset,
+	.xstats_get_names = mrvl_xstats_get_names,
 	.dev_infos_get = mrvl_dev_infos_get,
 	.dev_supported_ptypes_get = mrvl_dev_supported_ptypes_get,
 	.rxq_info_get = mrvl_rxq_info_get,
@@ -1703,7 +1904,9 @@ static const struct eth_dev_ops mrvl_ops = {
 	.tx_queue_release = mrvl_tx_queue_release,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
-	.filter_ctrl = mrvl_eth_filter_ctrl
+	.filter_ctrl = mrvl_eth_filter_ctrl,
+	.xstats_get_by_id = mrvl_xstats_get_by_id,
+	.xstats_get_names_by_id = mrvl_xstats_get_names_by_id
 };
 
 /**
-- 
2.7.4


* [PATCH v2 7/8] net/mrvl: add Rx flow control
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                     ` (5 preceding siblings ...)
  2018-03-12  8:42   ` [PATCH v2 6/8] net/mrvl: add extended statistics Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-12  8:42   ` [PATCH v2 8/8] net/mrvl: add Tx queue start/stop Tomasz Duszynski
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add Rx side flow control support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/features/mrvl.ini |  1 +
 doc/guides/nics/mrvl.rst          |  1 +
 drivers/net/mrvl/mrvl_ethdev.c    | 78 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+)

diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
index 120fd4d..8673a56 100644
--- a/doc/guides/nics/features/mrvl.ini
+++ b/doc/guides/nics/features/mrvl.ini
@@ -13,6 +13,7 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+Flow control         = Y
 VLAN filter          = Y
 CRC offload          = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 7678265..550bd79 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -72,6 +72,7 @@ Features of the MRVL PMD are:
 - Basic stats
 - Extended stats
 - QoS
+- RX flow control
 
 
 Limitations
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index cbe3f4d..21a83c9 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -1696,6 +1696,82 @@ mrvl_tx_queue_release(void *txq)
 }
 
 /**
+ * DPDK callback to get flow control configuration.
+ *
+ * @param dev
+ *  Pointer to Ethernet device structure.
+ * @param fc_conf
+ *  Pointer to the flow control configuration.
+ *
+ * @return
+ *  0 on success, negative error value otherwise.
+ */
+static int
+mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret, en;
+
+	if (!priv)
+		return -EPERM;
+
+	ret = pp2_ppio_get_rx_pause(priv->ppio, &en);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to read rx pause state\n");
+		return ret;
+	}
+
+	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to set flow control configuration.
+ *
+ * @param dev
+ *  Pointer to Ethernet device structure.
+ * @param fc_conf
+ *  Pointer to the flow control configuration.
+ *
+ * @return
+ *  0 on success, negative error value otherwise.
+ */
+static int
+mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (!priv)
+		return -EPERM;
+
+	if (fc_conf->high_water ||
+	    fc_conf->low_water ||
+	    fc_conf->pause_time ||
+	    fc_conf->mac_ctrl_frame_fwd ||
+	    fc_conf->autoneg) {
+		RTE_LOG(ERR, PMD, "Flowctrl parameter is not supported\n");
+
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE ||
+	    fc_conf->mode == RTE_FC_RX_PAUSE) {
+		int ret, en;
+
+		en = fc_conf->mode == RTE_FC_NONE ? 0 : 1;
+		ret = pp2_ppio_set_rx_pause(priv->ppio, en);
+		if (ret)
+			RTE_LOG(ERR, PMD,
+				"Failed to change flowctrl on RX side\n");
+
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
  * Update RSS hash configuration
  *
  * @param dev
@@ -1902,6 +1978,8 @@ static const struct eth_dev_ops mrvl_ops = {
 	.rx_queue_release = mrvl_rx_queue_release,
 	.tx_queue_setup = mrvl_tx_queue_setup,
 	.tx_queue_release = mrvl_tx_queue_release,
+	.flow_ctrl_get = mrvl_flow_ctrl_get,
+	.flow_ctrl_set = mrvl_flow_ctrl_set,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
 	.filter_ctrl = mrvl_eth_filter_ctrl,
-- 
2.7.4


* [PATCH v2 8/8] net/mrvl: add Tx queue start/stop
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                     ` (6 preceding siblings ...)
  2018-03-12  8:42   ` [PATCH v2 7/8] net/mrvl: add Rx flow control Tomasz Duszynski
@ 2018-03-12  8:42   ` Tomasz Duszynski
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-12  8:42 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add Tx queue start/stop feature.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst       |  1 +
 drivers/net/mrvl/mrvl_ethdev.c | 92 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 550bd79..f9ec9d6 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -73,6 +73,7 @@ Features of the MRVL PMD are:
 - Extended stats
 - QoS
 - RX flow control
+- TX queue start/stop
 
 
 Limitations
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 21a83c9..a6b77a2 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -134,6 +134,7 @@ struct mrvl_txq {
 	int port_id;
 	uint64_t bytes_sent;
 	struct mrvl_shadow_txq shadow_txqs[RTE_MAX_LCORE];
+	int tx_deferred_start;
 };
 
 static int mrvl_lcore_first;
@@ -459,6 +460,70 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 }
 
 /**
+ * DPDK callback to start tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Transmit queue index.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret;
+
+	if (!priv->ppio)
+		return -EPERM;
+
+	/* passing 1 enables given tx queue */
+	ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 1);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to start txq %d\n", queue_id);
+		return ret;
+	}
+
+	dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to stop tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Transmit queue index.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret;
+
+	if (!priv->ppio)
+		return -EPERM;
+
+	/* passing 0 disables given tx queue */
+	ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 0);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to stop txq %d\n", queue_id);
+		return ret;
+	}
+
+	dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+/**
  * DPDK callback to start the device.
  *
  * @param dev
@@ -472,7 +537,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 	char match[MRVL_MATCH_LEN];
-	int ret = 0, def_init_size;
+	int ret = 0, i, def_init_size;
 
 	snprintf(match, sizeof(match), "ppio-%d:%d",
 		 priv->pp_id, priv->ppio_id);
@@ -559,6 +624,24 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		goto out;
 	}
 
+	/* start tx queues */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct mrvl_txq *txq = dev->data->tx_queues[i];
+
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+
+		if (!txq->tx_deferred_start)
+			continue;
+
+		/*
+		 * All txqs are started by default. Stop them
+		 * so that tx_deferred_start works as expected.
+		 */
+		ret = mrvl_tx_queue_stop(dev, i);
+		if (ret)
+			goto out;
+	}
+
 	return 0;
 out:
 	RTE_LOG(ERR, PMD, "Failed to start device\n");
@@ -1330,9 +1413,11 @@ static void mrvl_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      struct rte_eth_txq_info *qinfo)
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
+	struct mrvl_txq *txq = dev->data->tx_queues[tx_queue_id];
 
 	qinfo->nb_desc =
 		priv->ppio_params.outqs_params.outqs_params[tx_queue_id].size;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
 }
 
 /**
@@ -1643,7 +1728,7 @@ mrvl_tx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested)
  * @param socket
  *   NUMA socket on which memory must be allocated.
  * @param conf
- *   Thresholds parameters.
+ *   Tx queue configuration parameters.
  *
  * @return
  *   0 on success, negative error value otherwise.
@@ -1671,6 +1756,7 @@ mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	txq->priv = priv;
 	txq->queue_id = idx;
 	txq->port_id = dev->data->port_id;
+	txq->tx_deferred_start = conf->tx_deferred_start;
 	dev->data->tx_queues[idx] = txq;
 
 	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
@@ -1974,6 +2060,8 @@ static const struct eth_dev_ops mrvl_ops = {
 	.rxq_info_get = mrvl_rxq_info_get,
 	.txq_info_get = mrvl_txq_info_get,
 	.vlan_filter_set = mrvl_vlan_filter_set,
+	.tx_queue_start = mrvl_tx_queue_start,
+	.tx_queue_stop = mrvl_tx_queue_stop,
 	.rx_queue_setup = mrvl_rx_queue_setup,
 	.rx_queue_release = mrvl_rx_queue_release,
 	.tx_queue_setup = mrvl_tx_queue_setup,
-- 
2.7.4


* Re: [PATCH v2 6/8] net/mrvl: add extended statistics
  2018-03-12  8:42   ` [PATCH v2 6/8] net/mrvl: add extended statistics Tomasz Duszynski
@ 2018-03-14 17:21     ` Ferruh Yigit
  2018-03-15  7:09       ` Tomasz Duszynski
  0 siblings, 1 reply; 34+ messages in thread
From: Ferruh Yigit @ 2018-03-14 17:21 UTC (permalink / raw)
  To: Tomasz Duszynski, dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu

On 3/12/2018 8:42 AM, Tomasz Duszynski wrote:
> Add extended statistics implementation.
> 
> Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
> Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>

<...>

> @@ -1674,6 +1784,94 @@ mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
>  	}
>  }
>  
> +/**
> + * DPDK callback to get xstats by id.
> + *
> + * @param dev
> + *   Pointer to the device structure.
> + * @param ids
> + *   Pointer to the ids table.
> + * @param values
> + *   Pointer to the values table.
> + * @param n
> + *   Values table size.
> + * @returns
> + *   Number of read values, negative value otherwise.
> + */
> +static int
> +mrvl_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
> +		      uint64_t *values, unsigned int n)
> +{
> +	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
> +	uint64_t vals[n];
> +	int ret;
> +
> +	if (!ids) {

You will not get NULL ids, this case covered by ethdev layer, for both by_id()
functions.

> +		struct rte_eth_xstat xstats[num];
> +		int j;
> +
> +		ret = mrvl_xstats_get(dev, xstats, num);
> +		for (j = 0; j < ret; i++)
> +			values[j] = xstats[j].value;
> +
> +		return ret;
> +	}
> +
> +	ret = mrvl_xstats_get_by_id(dev, NULL, vals, n);
> +	if (ret < 0)
> +		return ret;
> +
> +	for (i = 0; i < n; i++) {
> +		if (ids[i] >= num) {
> +			RTE_LOG(ERR, PMD, "id value is not valid\n");
> +			return -1;
> +		}
> +
> +		values[i] = vals[ids[i]];
> +	}
> +
> +	return n;
> +}
> +
> +/**
> + * DPDK callback to get xstats names by ids.
> + *
> + * @param dev
> + *   Pointer to the device structure.
> + * @param xstats_names
> + *   Pointer to table with xstats names.
> + * @param ids
> + *   Pointer to table with ids.
> + * @param size
> + *   Xstats names table size.
> + * @returns
> + *   Number of names read, negative value otherwise.
> + */
> +static int
> +mrvl_xstats_get_names_by_id(struct rte_eth_dev *dev,
> +			    struct rte_eth_xstat_name *xstats_names,
> +			    const uint64_t *ids, unsigned int size)
> +{
> +	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
> +	struct rte_eth_xstat_name names[num];
> +
> +	if (!ids)
> +		return mrvl_xstats_get_names(dev, xstats_names, size);
> +
> +	mrvl_xstats_get_names(dev, names, size);
> +	for (i = 0; i < size; i++) {
> +		if (ids[i] >= num) {
> +			RTE_LOG(ERR, PMD, "id value is not valid");
> +			return -1;
> +		}
> +
> +		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE,
> +			 "%s", names[ids[i]].name);
> +	}
> +
> +	return size;
> +}

Specific to *_by_id() implementations, please check ethdev layer APIs for these
devops, they already do same thing as you did here.

These devops are to access specific ids efficiently with support of PMD, if you
don't have a quick way to access to an extended stat by id, you may just left
these unimplemented and abstraction layer will do the work for you, it is up to you.


Thanks,
ferruh


* Re: [PATCH v2 6/8] net/mrvl: add extended statistics
  2018-03-14 17:21     ` Ferruh Yigit
@ 2018-03-15  7:09       ` Tomasz Duszynski
  0 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:09 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Tomasz Duszynski, dev, mw, dima, nsamsono, jck, jianbo.liu

On Wed, Mar 14, 2018 at 05:21:07PM +0000, Ferruh Yigit wrote:
> On 3/12/2018 8:42 AM, Tomasz Duszynski wrote:
> > Add extended statistics implementation.
> >
> > Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
> > Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
>
> <...>
>
> > @@ -1674,6 +1784,94 @@ mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
> >  	}
> >  }
> >
> > +/**
> > + * DPDK callback to get xstats by id.
> > + *
> > + * @param dev
> > + *   Pointer to the device structure.
> > + * @param ids
> > + *   Pointer to the ids table.
> > + * @param values
> > + *   Pointer to the values table.
> > + * @param n
> > + *   Values table size.
> > + * @returns
> > + *   Number of read values, negative value otherwise.
> > + */
> > +static int
> > +mrvl_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
> > +		      uint64_t *values, unsigned int n)
> > +{
> > +	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
> > +	uint64_t vals[n];
> > +	int ret;
> > +
> > +	if (!ids) {
>
> You will not get NULL ids; this case is covered by the ethdev layer for both
> by_id() functions.
>
> > +		struct rte_eth_xstat xstats[num];
> > +		int j;
> > +
> > +		ret = mrvl_xstats_get(dev, xstats, num);
> > +		for (j = 0; j < ret; j++)
> > +			values[j] = xstats[j].value;
> > +
> > +		return ret;
> > +	}
> > +
> > +	ret = mrvl_xstats_get_by_id(dev, NULL, vals, n);
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	for (i = 0; i < n; i++) {
> > +		if (ids[i] >= num) {
> > +			RTE_LOG(ERR, PMD, "id value is not valid\n");
> > +			return -1;
> > +		}
> > +
> > +		values[i] = vals[ids[i]];
> > +	}
> > +
> > +	return n;
> > +}
> > +
> > +/**
> > + * DPDK callback to get xstats names by ids.
> > + *
> > + * @param dev
> > + *   Pointer to the device structure.
> > + * @param xstats_names
> > + *   Pointer to table with xstats names.
> > + * @param ids
> > + *   Pointer to table with ids.
> > + * @param size
> > + *   Xstats names table size.
> > + * @returns
> > + *   Number of names read, negative value otherwise.
> > + */
> > +static int
> > +mrvl_xstats_get_names_by_id(struct rte_eth_dev *dev,
> > +			    struct rte_eth_xstat_name *xstats_names,
> > +			    const uint64_t *ids, unsigned int size)
> > +{
> > +	unsigned int i, num = RTE_DIM(mrvl_xstats_tbl);
> > +	struct rte_eth_xstat_name names[num];
> > +
> > +	if (!ids)
> > +		return mrvl_xstats_get_names(dev, xstats_names, size);
> > +
> > +	mrvl_xstats_get_names(dev, names, size);
> > +	for (i = 0; i < size; i++) {
> > +		if (ids[i] >= num) {
> > +			RTE_LOG(ERR, PMD, "id value is not valid");
> > +			return -1;
> > +		}
> > +
> > +		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE,
> > +			 "%s", names[ids[i]].name);
> > +	}
> > +
> > +	return size;
> > +}
>
> Specific to the *_by_id() implementations: please check the ethdev layer APIs
> for these devops; they already do the same thing as you did here.
>
> These devops exist to access specific ids efficiently with support from the PMD.
> If you don't have a quick way to access an extended stat by id, you may just
> leave these unimplemented and the abstraction layer will do the work for you; it is up to you.

Good point. Since the *_by_id() callbacks use xstats_get_names()/xstats_get()
anyway, they do not provide a real speedup. I'll drop them in v3.

>
>
> Thanks,
> ferruh

--
- Tomasz Duszyński


* [PATCH v3 0/8] net/mrvl: add new features to PMD
  2018-03-12  8:42 ` [PATCH v2 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                     ` (7 preceding siblings ...)
  2018-03-12  8:42   ` [PATCH v2 8/8] net/mrvl: add Tx queue start/stop Tomasz Duszynski
@ 2018-03-15  7:51   ` Tomasz Duszynski
  2018-03-15  7:51     ` [PATCH v3 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
                       ` (8 more replies)
  8 siblings, 9 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:51 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

This patch series brings a set of new features,
documentation updates and fixes.

Below is a short summary of the introduced changes:

o Added support for selective Tx queue start and stop.
o Added support for Rx flow control.
o Added support for extended statistics counters.
o Added support for ingress policer, egress scheduler and egress rate
  limiter.
o Added support for configuring hardware classifier via a flow API.
o Documented new features and their usage.
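
As an illustration of the classifier feature, flow rules using only fields and
actions the driver supports could be created from testpmd (hypothetical
addresses; syntax as per testpmd's flow command):

```console
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern ipv4 src is 10.0.0.1 / end actions drop / end
```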

Natalie Samsonov (1):
  net/mrvl: fix crash when port is closed without starting

Tomasz Duszynski (7):
  net/mrvl: add ingress policer support
  net/mrvl: add egress scheduler/rate limiter support
  net/mrvl: document policer/scheduler/rate limiter usage
  net/mrvl: add classifier support
  net/mrvl: add extended statistics
  net/mrvl: add Rx flow control
  net/mrvl: add Tx queue start/stop

v3:
- Remove *_by_id() ops from xstats since they are handled by the ethdev layer.

v2:
- Convert license header of a new file to SPDX tags.

 doc/guides/nics/features/mrvl.ini |    2 +
 doc/guides/nics/mrvl.rst          |  257 +++-
 drivers/net/mrvl/Makefile         |    1 +
 drivers/net/mrvl/mrvl_ethdev.c    |  357 ++++-
 drivers/net/mrvl/mrvl_ethdev.h    |   11 +
 drivers/net/mrvl/mrvl_flow.c      | 2759 +++++++++++++++++++++++++++++++++++++
 drivers/net/mrvl/mrvl_qos.c       |  301 +++-
 drivers/net/mrvl/mrvl_qos.h       |   22 +
 8 files changed, 3692 insertions(+), 18 deletions(-)
 create mode 100644 drivers/net/mrvl/mrvl_flow.c

--
2.7.4


* [PATCH v3 1/8] net/mrvl: fix crash when port is closed without starting
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
@ 2018-03-15  7:51     ` Tomasz Duszynski
  2018-03-15  7:51     ` [PATCH v3 2/8] net/mrvl: add ingress policer support Tomasz Duszynski
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:51 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, stable, Tomasz Duszynski

From: Natalie Samsonov <nsamsono@marvell.com>

Fixes: 0ddc9b815b11 ("net/mrvl: add net PMD skeleton")
Cc: stable@dpdk.org

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index ac8f2d6..e313768 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -658,7 +658,8 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 		pp2_cls_qos_tbl_deinit(priv->qos_tbl);
 		priv->qos_tbl = NULL;
 	}
-	pp2_ppio_deinit(priv->ppio);
+	if (priv->ppio)
+		pp2_ppio_deinit(priv->ppio);
 	priv->ppio = NULL;
 }
 
-- 
2.7.4


* [PATCH v3 2/8] net/mrvl: add ingress policer support
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  2018-03-15  7:51     ` [PATCH v3 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
@ 2018-03-15  7:51     ` Tomasz Duszynski
  2018-03-15  7:51     ` [PATCH v3 3/8] net/mrvl: add egress scheduler/rate limiter support Tomasz Duszynski
                       ` (6 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:51 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add ingress policer support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c |   6 ++
 drivers/net/mrvl/mrvl_ethdev.h |   1 +
 drivers/net/mrvl/mrvl_qos.c    | 160 +++++++++++++++++++++++++++++++++++++++--
 drivers/net/mrvl/mrvl_qos.h    |   3 +
 4 files changed, 166 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index e313768..d1faa3d 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -661,6 +661,12 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 	if (priv->ppio)
 		pp2_ppio_deinit(priv->ppio);
 	priv->ppio = NULL;
+
+	/* policer must be released after ppio deinitialization */
+	if (priv->policer) {
+		pp2_cls_plcr_deinit(priv->policer);
+		priv->policer = NULL;
+	}
 }
 
 /**
diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h
index aee853a..2cc229e 100644
--- a/drivers/net/mrvl/mrvl_ethdev.h
+++ b/drivers/net/mrvl/mrvl_ethdev.h
@@ -85,6 +85,7 @@ struct mrvl_priv {
 	struct pp2_cls_qos_tbl_params qos_tbl_params;
 	struct pp2_cls_tbl *qos_tbl;
 	uint16_t nb_rx_queues;
+	struct pp2_cls_plcr *policer;
 };
 
 #endif /* _MRVL_ETHDEV_H_ */
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
index d206dc6..0205157 100644
--- a/drivers/net/mrvl/mrvl_qos.c
+++ b/drivers/net/mrvl/mrvl_qos.c
@@ -43,6 +43,22 @@
 #define MRVL_TOK_VLAN_IP "vlan/ip"
 #define MRVL_TOK_WEIGHT "weight"
 
+/* policer specific configuration tokens */
+#define MRVL_TOK_PLCR_ENABLE "policer_enable"
+#define MRVL_TOK_PLCR_UNIT "token_unit"
+#define MRVL_TOK_PLCR_UNIT_BYTES "bytes"
+#define MRVL_TOK_PLCR_UNIT_PACKETS "packets"
+#define MRVL_TOK_PLCR_COLOR "color_mode"
+#define MRVL_TOK_PLCR_COLOR_BLIND "blind"
+#define MRVL_TOK_PLCR_COLOR_AWARE "aware"
+#define MRVL_TOK_PLCR_CIR "cir"
+#define MRVL_TOK_PLCR_CBS "cbs"
+#define MRVL_TOK_PLCR_EBS "ebs"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR "default_color"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN "green"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW "yellow"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_RED "red"
+
 /** Number of tokens in range a-b = 2. */
 #define MAX_RNG_TOKENS 2
 
@@ -296,6 +312,25 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc,
 		}
 		cfg->port[port].tc[tc].dscps = n;
 	}
+
+	entry = rte_cfgfile_get_entry(file, sec_name,
+			MRVL_TOK_PLCR_DEFAULT_COLOR);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_GREEN;
+		} else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_YELLOW;
+		} else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_RED,
+				sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_RED))) {
+			cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_RED;
+		} else {
+			RTE_LOG(ERR, PMD, "Error while parsing: %s\n", entry);
+			return -1;
+		}
+	}
+
 	return 0;
 }
 
@@ -368,6 +403,88 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
 		}
 
 		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_PLCR_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].policer_enable = val;
+		}
+
+		if ((*cfg)->port[n].policer_enable) {
+			enum pp2_cls_plcr_token_unit unit;
+
+			/* Read policer token unit */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_UNIT);
+			if (entry) {
+				if (!strncmp(entry, MRVL_TOK_PLCR_UNIT_BYTES,
+					sizeof(MRVL_TOK_PLCR_UNIT_BYTES))) {
+					unit = PP2_CLS_PLCR_BYTES_TOKEN_UNIT;
+				} else if (!strncmp(entry,
+						MRVL_TOK_PLCR_UNIT_PACKETS,
+					sizeof(MRVL_TOK_PLCR_UNIT_PACKETS))) {
+					unit = PP2_CLS_PLCR_PACKETS_TOKEN_UNIT;
+				} else {
+					RTE_LOG(ERR, PMD, "Unknown token: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.token_unit =
+					unit;
+			}
+
+			/* Read policer color mode */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_COLOR);
+			if (entry) {
+				enum pp2_cls_plcr_color_mode mode;
+
+				if (!strncmp(entry, MRVL_TOK_PLCR_COLOR_BLIND,
+					sizeof(MRVL_TOK_PLCR_COLOR_BLIND))) {
+					mode = PP2_CLS_PLCR_COLOR_BLIND_MODE;
+				} else if (!strncmp(entry,
+						MRVL_TOK_PLCR_COLOR_AWARE,
+					sizeof(MRVL_TOK_PLCR_COLOR_AWARE))) {
+					mode = PP2_CLS_PLCR_COLOR_AWARE_MODE;
+				} else {
+					RTE_LOG(ERR, PMD,
+						"Error in parsing: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.color_mode =
+					mode;
+			}
+
+			/* Read policer cir */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_CIR);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cir = val;
+			}
+
+			/* Read policer cbs */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_CBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cbs = val;
+			}
+
+			/* Read policer ebs */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_PLCR_EBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.ebs = val;
+			}
+		}
+
+		entry = rte_cfgfile_get_entry(file, sec_name,
 				MRVL_TOK_MAPPING_PRIORITY);
 		if (entry) {
 			if (!strncmp(entry, MRVL_TOK_VLAN_IP,
@@ -422,16 +539,18 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
  * @param param TC parameters entry.
  * @param inqs Number of MUSDK in-queues in this TC.
  * @param bpool Bpool for this TC.
+ * @param color Default color for this TC.
  * @returns 0 in case of success, exits otherwise.
  */
 static int
 setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
-	struct pp2_bpool *bpool)
+	struct pp2_bpool *bpool, enum pp2_ppio_color color)
 {
 	struct pp2_ppio_inq_params *inq_params;
 
 	param->pkt_offset = MRVL_PKT_OFFS;
 	param->pools[0] = bpool;
+	param->default_color = color;
 
 	inq_params = rte_zmalloc_socket("inq_params",
 		inqs * sizeof(*inq_params),
@@ -451,6 +570,33 @@ setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
 }
 
 /**
+ * Setup ingress policer.
+ *
+ * @param priv Port's private data.
+ * @param params Pointer to the policer's configuration.
+ * @returns 0 in case of success, negative values otherwise.
+ */
+static int
+setup_policer(struct mrvl_priv *priv, struct pp2_cls_plcr_params *params)
+{
+	char match[16];
+	int ret;
+
+	sprintf(match, "policer-%d:%d\n", priv->pp_id, priv->ppio_id);
+	params->match = match;
+
+	ret = pp2_cls_plcr_init(params, &priv->policer);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to setup %s\n", match);
+		return -1;
+	}
+
+	priv->ppio_params.inqs_params.plcr = priv->policer;
+
+	return 0;
+}
+
+/**
  * Configure RX Queues in a given port.
  *
  * Sets up RX queues, their Traffic Classes and DPDK rxq->(TC,inq) mapping.
@@ -468,10 +614,13 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 
 	if (mrvl_qos_cfg == NULL ||
 		mrvl_qos_cfg->port[portid].use_global_defaults) {
-		/* No port configuration, use default: 1 TC, no QoS. */
+		/*
+		 * No port configuration, use default: 1 TC, no QoS,
+		 * TC color set to green.
+		 */
 		priv->ppio_params.inqs_params.num_tcs = 1;
 		setup_tc(&priv->ppio_params.inqs_params.tcs_params[0],
-			max_queues, priv->bpool);
+			max_queues, priv->bpool, PP2_PPIO_COLOR_GREEN);
 
 		/* Direct mapping of queues i.e. 0->0, 1->1 etc. */
 		for (i = 0; i < max_queues; ++i) {
@@ -569,11 +718,14 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 			break;
 		setup_tc(&priv->ppio_params.inqs_params.tcs_params[i],
 				port_cfg->tc[i].inqs,
-				priv->bpool);
+				priv->bpool, port_cfg->tc[i].color);
 	}
 
 	priv->ppio_params.inqs_params.num_tcs = i;
 
+	if (port_cfg->policer_enable)
+		return setup_policer(priv, &port_cfg->policer_params);
+
 	return 0;
 }
 
diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h
index d2b6c83..bcf5bd3 100644
--- a/drivers/net/mrvl/mrvl_qos.h
+++ b/drivers/net/mrvl/mrvl_qos.h
@@ -27,6 +27,7 @@ struct mrvl_qos_cfg {
 			uint8_t inqs;
 			uint8_t dscps;
 			uint8_t pcps;
+			enum pp2_ppio_color color;
 		} tc[MRVL_PP2_TC_MAX];
 		struct {
 			uint8_t weight;
@@ -36,6 +37,8 @@ struct mrvl_qos_cfg {
 		uint16_t outqs;
 		uint8_t default_tc;
 		uint8_t use_global_defaults;
+		struct pp2_cls_plcr_params policer_params;
+		uint8_t policer_enable;
 	} port[RTE_MAX_ETHPORTS];
 };
 
-- 
2.7.4


* [PATCH v3 3/8] net/mrvl: add egress scheduler/rate limiter support
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
  2018-03-15  7:51     ` [PATCH v3 1/8] net/mrvl: fix crash when port is closed without starting Tomasz Duszynski
  2018-03-15  7:51     ` [PATCH v3 2/8] net/mrvl: add ingress policer support Tomasz Duszynski
@ 2018-03-15  7:51     ` Tomasz Duszynski
  2018-03-15  7:52     ` [PATCH v3 4/8] net/mrvl: document policer/scheduler/rate limiter usage Tomasz Duszynski
                       ` (5 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:51 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add egress scheduler and egress rate limiter support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 drivers/net/mrvl/mrvl_ethdev.c |   6 +-
 drivers/net/mrvl/mrvl_qos.c    | 141 +++++++++++++++++++++++++++++++++++++++--
 drivers/net/mrvl/mrvl_qos.h    |  19 ++++++
 3 files changed, 161 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index d1faa3d..7e00dbd 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -320,6 +320,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 	if (ret < 0)
 		return ret;
 
+	ret = mrvl_configure_txqs(priv, dev->data->port_id,
+				  dev->data->nb_tx_queues);
+	if (ret < 0)
+		return ret;
+
 	priv->ppio_params.outqs_params.num_outqs = dev->data->nb_tx_queues;
 	priv->ppio_params.maintain_stats = 1;
 	priv->nb_rx_queues = dev->data->nb_rx_queues;
@@ -1537,7 +1542,6 @@ mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	dev->data->tx_queues[idx] = txq;
 
 	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
-	priv->ppio_params.outqs_params.outqs_params[idx].weight = 1;
 
 	return 0;
 }
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
index 0205157..e9c4531 100644
--- a/drivers/net/mrvl/mrvl_qos.c
+++ b/drivers/net/mrvl/mrvl_qos.c
@@ -36,12 +36,19 @@
 #define MRVL_TOK_PCP "pcp"
 #define MRVL_TOK_PORT "port"
 #define MRVL_TOK_RXQ "rxq"
-#define MRVL_TOK_SP "SP"
 #define MRVL_TOK_TC "tc"
 #define MRVL_TOK_TXQ "txq"
 #define MRVL_TOK_VLAN "vlan"
 #define MRVL_TOK_VLAN_IP "vlan/ip"
-#define MRVL_TOK_WEIGHT "weight"
+
+/* egress specific configuration tokens */
+#define MRVL_TOK_BURST_SIZE "burst_size"
+#define MRVL_TOK_RATE_LIMIT "rate_limit"
+#define MRVL_TOK_RATE_LIMIT_ENABLE "rate_limit_enable"
+#define MRVL_TOK_SCHED_MODE "sched_mode"
+#define MRVL_TOK_SCHED_MODE_SP "sp"
+#define MRVL_TOK_SCHED_MODE_WRR "wrr"
+#define MRVL_TOK_WRR_WEIGHT "wrr_weight"
 
 /* policer specific configuration tokens */
 #define MRVL_TOK_PLCR_ENABLE "policer_enable"
@@ -119,12 +126,69 @@ get_outq_cfg(struct rte_cfgfile *file, int port, int outq,
 	if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0)
 		return 0;
 
+	/* Read scheduling mode */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_SCHED_MODE);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_SCHED_MODE_SP,
+					strlen(MRVL_TOK_SCHED_MODE_SP))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_SP;
+		} else if (!strncmp(entry, MRVL_TOK_SCHED_MODE_WRR,
+					strlen(MRVL_TOK_SCHED_MODE_WRR))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_WRR;
+		} else {
+			RTE_LOG(ERR, PMD, "Unknown token: %s\n", entry);
+			return -1;
+		}
+	}
+
+	/* Read wrr weight */
+	if (cfg->port[port].outq[outq].sched_mode == PP2_PPIO_SCHED_M_WRR) {
+		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_WRR_WEIGHT);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			cfg->port[port].outq[outq].weight = val;
+		}
+	}
+
+	/*
+	 * There's no point in setting rate limiting for specific outq as
+	 * global port rate limiting has priority.
+	 */
+	if (cfg->port[port].rate_limit_enable) {
+		RTE_LOG(WARNING, PMD, "Port %d rate limiting already enabled\n",
+			port);
+		return 0;
+	}
+
 	entry = rte_cfgfile_get_entry(file, sec_name,
-			MRVL_TOK_WEIGHT);
+			MRVL_TOK_RATE_LIMIT_ENABLE);
 	if (entry) {
 		if (get_val_securely(entry, &val) < 0)
 			return -1;
-		cfg->port[port].outq[outq].weight = (uint8_t)val;
+		cfg->port[port].outq[outq].rate_limit_enable = val;
+	}
+
+	if (!cfg->port[port].outq[outq].rate_limit_enable)
+		return 0;
+
+	/* Read CBS (in kB) */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_BURST_SIZE);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cbs = val;
+	}
+
+	/* Read CIR (in kbps) */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RATE_LIMIT);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cir = val;
 	}
 
 	return 0;
@@ -484,6 +548,36 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
 			}
 		}
 
+		/*
+		 * Read per-port rate limiting. Setting that will
+		 * disable per-queue rate limiting.
+		 */
+		entry = rte_cfgfile_get_entry(file, sec_name,
+				MRVL_TOK_RATE_LIMIT_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].rate_limit_enable = val;
+		}
+
+		if ((*cfg)->port[n].rate_limit_enable) {
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_BURST_SIZE);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cbs = val;
+			}
+
+			entry = rte_cfgfile_get_entry(file, sec_name,
+					MRVL_TOK_RATE_LIMIT);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cir = val;
+			}
+		}
+
 		entry = rte_cfgfile_get_entry(file, sec_name,
 				MRVL_TOK_MAPPING_PRIORITY);
 		if (entry) {
@@ -730,6 +824,45 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 }
 
 /**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up TX queues egress scheduler and limiter.
+ *
+ * @param priv Port's private data
+ * @param portid DPDK port ID
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		uint16_t max_queues)
+{
+	/* We need only a subset of configuration. */
+	struct port_cfg *port_cfg = &mrvl_qos_cfg->port[portid];
+	int i;
+
+	if (mrvl_qos_cfg == NULL)
+		return 0;
+
+	priv->ppio_params.rate_limit_enable = port_cfg->rate_limit_enable;
+	if (port_cfg->rate_limit_enable)
+		priv->ppio_params.rate_limit_params =
+			port_cfg->rate_limit_params;
+
+	for (i = 0; i < max_queues; i++) {
+		struct pp2_ppio_outq_params *params =
+			&priv->ppio_params.outqs_params.outqs_params[i];
+
+		params->sched_mode = port_cfg->outq[i].sched_mode;
+		params->weight = port_cfg->outq[i].weight;
+		params->rate_limit_enable = port_cfg->outq[i].rate_limit_enable;
+		params->rate_limit_params = port_cfg->outq[i].rate_limit_params;
+	}
+
+	return 0;
+}
+
+/**
  * Start QoS mapping.
  *
  * Finalize QoS table configuration and initialize it in SDK. It can be done
diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h
index bcf5bd3..fa9ddec 100644
--- a/drivers/net/mrvl/mrvl_qos.h
+++ b/drivers/net/mrvl/mrvl_qos.h
@@ -20,6 +20,8 @@
 /* QoS config. */
 struct mrvl_qos_cfg {
 	struct port_cfg {
+		int rate_limit_enable;
+		struct pp2_ppio_rate_limit_params rate_limit_params;
 		struct {
 			uint8_t inq[MRVL_PP2_RXQ_MAX];
 			uint8_t dscp[MRVL_CP_PER_TC];
@@ -30,7 +32,10 @@ struct mrvl_qos_cfg {
 			enum pp2_ppio_color color;
 		} tc[MRVL_PP2_TC_MAX];
 		struct {
+			enum pp2_ppio_outq_sched_mode sched_mode;
 			uint8_t weight;
+			int rate_limit_enable;
+			struct pp2_ppio_rate_limit_params rate_limit_params;
 		} outq[MRVL_PP2_RXQ_MAX];
 		enum pp2_cls_qos_tbl_type mapping_priority;
 		uint16_t inqs;
@@ -74,6 +79,20 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
 		    uint16_t max_queues);
 
 /**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up TX queues egress scheduler and limiter.
+ *
+ * @param priv Port's private data
+ * @param portid DPDK port ID
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues);
+
+/**
  * Start QoS mapping.
  *
  * Finalize QoS table configuration and initialize it in SDK. It can be done
-- 
2.7.4


* [PATCH v3 4/8] net/mrvl: document policer/scheduler/rate limiter usage
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                       ` (2 preceding siblings ...)
  2018-03-15  7:51     ` [PATCH v3 3/8] net/mrvl: add egress scheduler/rate limiter support Tomasz Duszynski
@ 2018-03-15  7:52     ` Tomasz Duszynski
  2018-03-15  7:52     ` [PATCH v3 5/8] net/mrvl: add classifier support Tomasz Duszynski
                       ` (4 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:52 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add documentation and example for ingress policer, egress scheduler
and egress rate limiter.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst | 86 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 80 insertions(+), 6 deletions(-)

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index b7f3292..6794cbb 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -149,17 +149,36 @@ Configuration syntax
    [port <portnum> default]
    default_tc = <default_tc>
    mapping_priority = <mapping_priority>
+   policer_enable = <policer_enable>
+   token_unit = <token_unit>
+   color_mode = <color_mode>
+   cir = <cir>
+   ebs = <ebs>
+   cbs = <cbs>
+
+   rate_limit_enable = <rate_limit_enable>
+   rate_limit = <rate_limit>
+   burst_size = <burst_size>
 
    [port <portnum> tc <traffic_class>]
    rxq = <rx_queue_list>
    pcp = <pcp_list>
    dscp = <dscp_list>
+   default_color = <default_color>
 
    [port <portnum> tc <traffic_class>]
    rxq = <rx_queue_list>
    pcp = <pcp_list>
    dscp = <dscp_list>
 
+   [port <portnum> txq <txqnum>]
+   sched_mode = <sched_mode>
+   wrr_weight = <wrr_weight>
+
+   rate_limit_enable = <rate_limit_enable>
+   rate_limit = <rate_limit>
+   burst_size = <burst_size>
+
 Where:
 
 - ``<portnum>``: DPDK Port number (0..n).
@@ -176,6 +195,30 @@ Where:
 
 - ``<dscp_list>``: List of DSCP values to handle in particular TC (e.g. 0-12 32-48 63).
 
+- ``<policer_enable>``: Enable ingress policer.
+
+- ``<token_unit>``: Policer token unit (`bytes` or `packets`).
+
+- ``<color_mode>``: Policer color mode (`aware` or `blind`).
+
+- ``<cir>``: Committed information rate in kilobits per second (data rate) or packets per second.
+
+- ``<cbs>``: Committed burst size in kilobytes or number of packets.
+
+- ``<ebs>``: Excess burst size in kilobytes or number of packets.
+
+- ``<default_color>``: Default color for a specific TC.
+
+- ``<rate_limit_enable>``: Enable per-port or per-txq rate limiting.
+
+- ``<rate_limit>``: Committed information rate, in kilobits per second.
+
+- ``<burst_size>``: Committed burst size, in kilobytes.
+
+- ``<sched_mode>``: Egress scheduler mode (`wrr` or `sp`).
+
+- ``<wrr_weight>``: Txq weight.
+
 Setting PCP/DSCP values for the default TC is not required. All PCP/DSCP
 values not assigned explicitly to particular TC will be handled by the
 default TC.
@@ -187,11 +230,26 @@ Configuration file example
 
    [port 0 default]
    default_tc = 0
-   qos_mode = ip
+   mapping_priority = ip
+
+   rate_limit_enable = 1
+   rate_limit = 1000
+   burst_size = 2000
 
    [port 0 tc 0]
    rxq = 0 1
 
+   [port 0 txq 0]
+   sched_mode = wrr
+   wrr_weight = 10
+
+   [port 0 txq 1]
+   sched_mode = wrr
+   wrr_weight = 100
+
+   [port 0 txq 2]
+   sched_mode = sp
+
    [port 0 tc 1]
    rxq = 2
    pcp = 5 6 7
@@ -199,15 +257,31 @@ Configuration file example
 
    [port 1 default]
    default_tc = 0
-   qos_mode = vlan/ip
+   mapping_priority = vlan/ip
+
+   policer_enable = 1
+   token_unit = bytes
+   color_mode = blind
+   cir = 100000
+   ebs = 64
+   cbs = 64
 
    [port 1 tc 0]
    rxq = 0
+   dscp = 10
 
    [port 1 tc 1]
-   rxq = 1 2
-   pcp = 5 6 7
-   dscp = 26-38
+   rxq = 1
+   dscp = 11-20
+
+   [port 1 tc 2]
+   rxq = 2
+   dscp = 30
+
+   [port 1 txq 0]
+   rate_limit_enable = 1
+   rate_limit = 10000
+   burst_size = 2000
 
 Usage example
 ^^^^^^^^^^^^^
@@ -215,7 +289,7 @@ Usage example
 .. code-block:: console
 
    ./testpmd --vdev=eth_mrvl,iface=eth0,iface=eth2,cfg=/home/user/mrvl.conf \
-     -c 7 -- -i -a --rxq=2
+     -c 7 -- -i -a --disable-hw-vlan-strip --rxq=3 --txq=3
 
 
 Building DPDK
-- 
2.7.4


* [PATCH v3 5/8] net/mrvl: add classifier support
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                       ` (3 preceding siblings ...)
  2018-03-15  7:52     ` [PATCH v3 4/8] net/mrvl: document policer/scheduler/rate limiter usage Tomasz Duszynski
@ 2018-03-15  7:52     ` Tomasz Duszynski
  2018-03-15  7:52     ` [PATCH v3 6/8] net/mrvl: add extended statistics Tomasz Duszynski
                       ` (3 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:52 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add classifier configuration support via the rte_flow API.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst       |  168 +++
 drivers/net/mrvl/Makefile      |    1 +
 drivers/net/mrvl/mrvl_ethdev.c |   59 +
 drivers/net/mrvl/mrvl_ethdev.h |   10 +
 drivers/net/mrvl/mrvl_flow.c   | 2759 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 2997 insertions(+)
 create mode 100644 drivers/net/mrvl/mrvl_flow.c

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 6794cbb..9230d5e 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -113,6 +113,9 @@ Prerequisites
   approval has been granted, library can be found by typing ``musdk`` in
   the search box.
 
+  To get a better understanding of the library, one can consult the
+  documentation in the ``doc`` top-level directory of the MUSDK sources.
+
   MUSDK must be configured with the following features:
 
   .. code-block:: console
@@ -318,6 +321,171 @@ the path to the MUSDK installation directory needs to be exported.
    sed -ri 's,(MRVL_PMD=)n,\1y,' build/.config
    make
 
+Flow API
+--------
+
+PPv2 offers packet classification capabilities via a classifier engine which
+can be configured via the generic flow API offered by DPDK.
+
+Supported flow actions
+~~~~~~~~~~~~~~~~~~~~~~
+
+The following flow action items are supported by the driver:
+
+* DROP
+* QUEUE
+
+Supported flow items
+~~~~~~~~~~~~~~~~~~~~
+
+The following flow items and their respective fields are supported by the driver:
+
+* ETH
+
+  * source MAC
+  * destination MAC
+  * ethertype
+
+* VLAN
+
+  * PCP
+  * VID
+
+* IPV4
+
+  * DSCP
+  * protocol
+  * source address
+  * destination address
+
+* IPV6
+
+  * flow label
+  * next header
+  * source address
+  * destination address
+
+* UDP
+
+  * source port
+  * destination port
+
+* TCP
+
+  * source port
+  * destination port
+
+Classifier match engine
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The classifier has an internal match engine which can be configured to
+operate in either exact or maskable mode.
+
+Mode is selected upon creation of the first unique flow rule as follows:
+
+* maskable, if key size is up to 8 bytes.
+* exact otherwise, i.e. for keys bigger than 8 bytes.
+
+Where the key size equals the number of bytes of all fields specified
+in the flow items.
+
+.. table:: Examples of key size calculation
+
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | Flow pattern                                                               | Key size in bytes | Used engine |
+   +============================================================================+===================+=============+
+   | ETH (destination MAC) / VLAN (VID)                                         | 6 + 2 = 8         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (VID) / IPV4 (source address)                                         | 2 + 4 = 6         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | TCP (source port, destination port)                                        | 2 + 2 = 4         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (priority) / IPV4 (source address)                                    | 1 + 4 = 5         | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV4 (destination address) / UDP (source port, destination port)           | 4 + 2 + 2 = 8     | Maskable    |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | VLAN (VID) / IPV6 (flow label, destination address)                        | 2 + 3 + 16 = 21   | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV4 (DSCP, source address, destination address)                           | 1 + 4 + 4 = 9     | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+   | IPV6 (flow label, source address, destination address)                     | 3 + 16 + 16 = 35  | Exact       |
+   +----------------------------------------------------------------------------+-------------------+-------------+
+
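The key-size rule described by the table can be sketched as a small standalone helper (names are hypothetical, not part of the driver):

```c
/* Hypothetical sketch of the engine selection described above: the key
 * size is the sum of the sizes (in bytes) of all fields in the flow
 * pattern; keys of up to 8 bytes use the maskable engine, larger keys
 * use the exact engine. */
enum cls_engine { ENGINE_MASKABLE, ENGINE_EXACT };

static enum cls_engine
select_engine(const unsigned int *field_sizes, unsigned int num_fields)
{
	unsigned int i, key_size = 0;

	for (i = 0; i < num_fields; i++)
		key_size += field_sizes[i];

	return key_size <= 8 ? ENGINE_MASKABLE : ENGINE_EXACT;
}
```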
+From the user perspective maskable mode means that masks specified
+via flow rules are respected. In exact match mode, masks which do not
+cover all bits (i.e. which are not fully set) are ignored.
+
+If a flow matches more than one classifier rule, the first matching
+rule (the one with the lowest index) takes precedence.
+
+Flow rules usage example
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before proceeding, run the testpmd application:
+
+.. code-block:: console
+
+   ./testpmd --vdev=net_mrvl,iface=eth0,iface=eth2 -c 3 -- -i --portmask=3 -a --disable-hw-vlan-strip
+
+Example #1
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern eth src is 10:11:12:13:14:15 / end actions drop / end
+
+In this case the key size is 6 bytes, thus the maskable engine is
+selected. Testpmd will set the mask to ff:ff:ff:ff:ff:ff, i.e. only
+traffic explicitly matching the above rule will be dropped.
+
+Example #2
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern ipv4 src spec 10.10.10.0 src mask 255.255.255.0 / tcp src spec 0x10 src mask 0x10 / end actions drop / end
+
+In this case the key size is 6 bytes (4 bytes of IPv4 source address plus
+2 bytes of TCP source port), thus the maskable engine is selected.
+Flows with an IPv4 source address ranging from 10.10.10.0 to 10.10.10.255
+and a TCP source port with bit 4 set (e.g. 16) will be dropped.
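The partial-mask match in this example can be sketched as follows (a minimal helper, not part of the driver): a packet field matches when its masked value equals the masked spec.

```c
#include <stdint.h>

/* rte_flow-style masked comparison as used in this example: a packet
 * field matches when (field & mask) == (spec & mask). */
static int
masked_match(uint16_t field, uint16_t spec, uint16_t mask)
{
	return (field & mask) == (spec & mask);
}
```

With spec 0x10 and mask 0x10, any source port with bit 4 set matches, not just port 16.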
+
+Example #3
+^^^^^^^^^^
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern vlan vid spec 0x10 vid mask 0x10 / ipv4 src spec 10.10.1.1 src mask 255.255.0.0 dst spec 11.11.11.1 dst mask 255.255.255.0 / end actions drop / end
+
+In this case the key size is 10 bytes, thus the exact engine is selected.
+Even though each item has a partial mask set, the masks will be ignored.
+As a result only flows with the VID set to 16 and the IPv4 source and
+destination addresses set to 10.10.1.1 and 11.11.11.1, respectively, will
+be dropped.
+
+Limitations
+~~~~~~~~~~~
+
+The following limitations need to be taken into account while creating flow rules:
+
+* For IPv4 exact match type the key size must be up to 12 bytes.
+* For IPv6 exact match type the key size must be up to 36 bytes.
+* Following fields cannot be partially masked (all masks are treated as
+  if they were exact):
+
+  * ETH: ethertype
+  * VLAN: PCP, VID
+  * IPv4: protocol
+  * IPv6: next header
+  * TCP/UDP: source port, destination port
+
+* Only one classifier table can be created thus all rules in the table
+  have to match table format. Table format is set during creation of
+  the first unique flow rule.
+* Up to 5 fields can be specified per flow rule.
+* Up to 20 flow rules can be added.
+
+For additional information about the classifier please consult
+``doc/musdk_cls_user_guide.txt``.
+
 Usage Example
 -------------
 
diff --git a/drivers/net/mrvl/Makefile b/drivers/net/mrvl/Makefile
index bd3a96a..31a8fda 100644
--- a/drivers/net/mrvl/Makefile
+++ b/drivers/net/mrvl/Makefile
@@ -37,5 +37,6 @@ LDLIBS += -lrte_bus_vdev
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_qos.c
+SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_flow.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 7e00dbd..34b9ef7 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -659,6 +659,10 @@ mrvl_dev_stop(struct rte_eth_dev *dev)
 	mrvl_dev_set_link_down(dev);
 	mrvl_flush_rx_queues(dev);
 	mrvl_flush_tx_shadow_queues(dev);
+	if (priv->cls_tbl) {
+		pp2_cls_tbl_deinit(priv->cls_tbl);
+		priv->cls_tbl = NULL;
+	}
 	if (priv->qos_tbl) {
 		pp2_cls_qos_tbl_deinit(priv->qos_tbl);
 		priv->qos_tbl = NULL;
@@ -784,6 +788,9 @@ mrvl_promiscuous_enable(struct rte_eth_dev *dev)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_promisc(priv->ppio, 1);
 	if (ret)
 		RTE_LOG(ERR, PMD, "Failed to enable promiscuous mode\n");
@@ -804,6 +811,9 @@ mrvl_allmulticast_enable(struct rte_eth_dev *dev)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_mc_promisc(priv->ppio, 1);
 	if (ret)
 		RTE_LOG(ERR, PMD, "Failed enable all-multicast mode\n");
@@ -867,6 +877,9 @@ mrvl_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_remove_mac_addr(priv->ppio,
 				       dev->data->mac_addrs[index].addr_bytes);
 	if (ret) {
@@ -899,6 +912,9 @@ mrvl_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr,
 	char buf[ETHER_ADDR_FMT_SIZE];
 	int ret;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	if (index == 0)
 		/* For setting index 0, mrvl_mac_addr_set() should be used.*/
 		return -1;
@@ -946,6 +962,9 @@ mrvl_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr)
 	if (!priv->ppio)
 		return;
 
+	if (priv->isolated)
+		return;
+
 	ret = pp2_ppio_set_mac_addr(priv->ppio, mac_addr->addr_bytes);
 	if (ret) {
 		char buf[ETHER_ADDR_FMT_SIZE];
@@ -1227,6 +1246,9 @@ mrvl_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	if (!priv->ppio)
 		return -EPERM;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	return on ? pp2_ppio_add_vlan(priv->ppio, vlan_id) :
 		    pp2_ppio_remove_vlan(priv->ppio, vlan_id);
 }
@@ -1580,6 +1602,9 @@ mrvl_rss_hash_update(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
+	if (priv->isolated)
+		return -ENOTSUP;
+
 	return mrvl_configure_rss(priv, rss_conf);
 }
 
@@ -1616,6 +1641,39 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * DPDK callback to get rte_flow callbacks.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param filter_type
+ *   Flow filter type.
+ * @param filter_op
+ *   Flow filter operation.
+ * @param arg
+ *   Pointer to pass the flow ops.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
+		     enum rte_filter_type filter_type,
+		     enum rte_filter_op filter_op, void *arg)
+{
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &mrvl_flow_ops;
+		return 0;
+	default:
+		RTE_LOG(WARNING, PMD, "Filter type (%d) not supported\n",
+				filter_type);
+		return -EINVAL;
+	}
+}
+
 static const struct eth_dev_ops mrvl_ops = {
 	.dev_configure = mrvl_dev_configure,
 	.dev_start = mrvl_dev_start,
@@ -1645,6 +1703,7 @@ static const struct eth_dev_ops mrvl_ops = {
 	.tx_queue_release = mrvl_tx_queue_release,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
+	.filter_ctrl = mrvl_eth_filter_ctrl
 };
 
 /**
diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h
index 2cc229e..3a42809 100644
--- a/drivers/net/mrvl/mrvl_ethdev.h
+++ b/drivers/net/mrvl/mrvl_ethdev.h
@@ -8,6 +8,7 @@
 #define _MRVL_ETHDEV_H_
 
 #include <rte_spinlock.h>
+#include <rte_flow_driver.h>
 
 #include <env/mv_autogen_comp_flags.h>
 #include <drivers/mv_pp2.h>
@@ -80,12 +81,21 @@ struct mrvl_priv {
 	uint8_t rss_hf_tcp;
 	uint8_t uc_mc_flushed;
 	uint8_t vlan_flushed;
+	uint8_t isolated;
 
 	struct pp2_ppio_params ppio_params;
 	struct pp2_cls_qos_tbl_params qos_tbl_params;
 	struct pp2_cls_tbl *qos_tbl;
 	uint16_t nb_rx_queues;
+
+	struct pp2_cls_tbl_params cls_tbl_params;
+	struct pp2_cls_tbl *cls_tbl;
+	uint32_t cls_tbl_pattern;
+	LIST_HEAD(mrvl_flows, rte_flow) flows;
+
 	struct pp2_cls_plcr *policer;
 };
 
+/** Flow operations forward declaration. */
+extern const struct rte_flow_ops mrvl_flow_ops;
 #endif /* _MRVL_ETHDEV_H_ */
diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c
new file mode 100644
index 0000000..8fd4dbf
--- /dev/null
+++ b/drivers/net/mrvl/mrvl_flow.c
@@ -0,0 +1,2759 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Marvell International Ltd.
+ * Copyright(c) 2018 Semihalf.
+ * All rights reserved.
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include <arpa/inet.h>
+
+#ifdef container_of
+#undef container_of
+#endif
+
+#include "mrvl_ethdev.h"
+#include "mrvl_qos.h"
+#include "env/mv_common.h" /* for BIT() */
+
+/** Number of rules in the classifier table. */
+#define MRVL_CLS_MAX_NUM_RULES 20
+
+/** Size of the classifier key and mask strings. */
+#define MRVL_CLS_STR_SIZE_MAX 40
+
+/** Parsed fields in processed rte_flow_item. */
+enum mrvl_parsed_fields {
+	/* eth flags */
+	F_DMAC =         BIT(0),
+	F_SMAC =         BIT(1),
+	F_TYPE =         BIT(2),
+	/* vlan flags */
+	F_VLAN_ID =      BIT(3),
+	F_VLAN_PRI =     BIT(4),
+	F_VLAN_TCI =     BIT(5), /* not supported by MUSDK yet */
+	/* ip4 flags */
+	F_IP4_TOS =      BIT(6),
+	F_IP4_SIP =      BIT(7),
+	F_IP4_DIP =      BIT(8),
+	F_IP4_PROTO =    BIT(9),
+	/* ip6 flags */
+	F_IP6_TC =       BIT(10), /* not supported by MUSDK yet */
+	F_IP6_SIP =      BIT(11),
+	F_IP6_DIP =      BIT(12),
+	F_IP6_FLOW =     BIT(13),
+	F_IP6_NEXT_HDR = BIT(14),
+	/* tcp flags */
+	F_TCP_SPORT =    BIT(15),
+	F_TCP_DPORT =    BIT(16),
+	/* udp flags */
+	F_UDP_SPORT =    BIT(17),
+	F_UDP_DPORT =    BIT(18),
+};
+
+/** PMD-specific definition of a flow rule handle. */
+struct rte_flow {
+	LIST_ENTRY(rte_flow) next;
+
+	enum mrvl_parsed_fields pattern;
+
+	struct pp2_cls_tbl_rule rule;
+	struct pp2_cls_cos_desc cos;
+	struct pp2_cls_tbl_action action;
+};
+
+static const enum rte_flow_item_type pattern_eth[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan_ip[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_vlan_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip4_udp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_eth_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip_udp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_vlan_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip_udp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_ip6_udp[] = {
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_tcp[] = {
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+static const enum rte_flow_item_type pattern_udp[] = {
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_END
+};
+
+#define MRVL_VLAN_ID_MASK 0x0fff
+#define MRVL_VLAN_PRI_MASK 0xe000
+#define MRVL_IPV4_DSCP_MASK 0xfc
+#define MRVL_IPV4_ADDR_MASK 0xffffffff
+#define MRVL_IPV6_FLOW_MASK 0x0fffff
+
+/**
+ * Given a flow item, return the next non-void one.
+ *
+ * @param items Pointer to the item in the table.
+ * @returns Next non-void item, NULL otherwise.
+ */
+static const struct rte_flow_item *
+mrvl_next_item(const struct rte_flow_item *items)
+{
+	const struct rte_flow_item *item = items;
+
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type != RTE_FLOW_ITEM_TYPE_VOID)
+			return item;
+	}
+
+	return NULL;
+}
+
+/**
+ * Allocate memory for classifier rule key and mask fields.
+ *
+ * @param field Pointer to the classifier rule.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_alloc_key_mask(struct pp2_cls_rule_key_field *field)
+{
+	unsigned int id = rte_socket_id();
+
+	field->key = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
+	if (!field->key)
+		goto out;
+
+	field->mask = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
+	if (!field->mask)
+		goto out_mask;
+
+	return 0;
+out_mask:
+	rte_free(field->key);
+out:
+	field->key = NULL;
+	field->mask = NULL;
+	return -1;
+}
+
+/**
+ * Free memory allocated for classifier rule key and mask fields.
+ *
+ * @param field Pointer to the classifier rule.
+ */
+static void
+mrvl_free_key_mask(struct pp2_cls_rule_key_field *field)
+{
+	rte_free(field->key);
+	rte_free(field->mask);
+	field->key = NULL;
+	field->mask = NULL;
+}
+
+/**
+ * Free memory allocated for all classifier rule key and mask fields.
+ *
+ * @param rule Pointer to the classifier table rule.
+ */
+static void
+mrvl_free_all_key_mask(struct pp2_cls_tbl_rule *rule)
+{
+	int i;
+
+	for (i = 0; i < rule->num_fields; i++)
+		mrvl_free_key_mask(&rule->fields[i]);
+	rule->num_fields = 0;
+}
+
+/**
+ * Initialize rte flow item parsing.
+ *
+ * @param item Pointer to the flow item.
+ * @param spec_ptr Pointer to the specific item pointer.
+ * @param mask_ptr Pointer to the specific item's mask pointer.
+ * @param def_mask Pointer to the default mask.
+ * @param size Size of the flow item.
+ * @param error Pointer to the rte flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_init(const struct rte_flow_item *item,
+		const void **spec_ptr,
+		const void **mask_ptr,
+		const void *def_mask,
+		unsigned int size,
+		struct rte_flow_error *error)
+{
+	const uint8_t *spec;
+	const uint8_t *mask;
+	const uint8_t *last;
+	uint8_t zeros[size];
+
+	memset(zeros, 0, size);
+
+	if (item == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				   "NULL item\n");
+		return -rte_errno;
+	}
+
+	if ((item->last != NULL || item->mask != NULL) && item->spec == NULL) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM, item,
+				   "Mask or last is set without spec\n");
+		return -rte_errno;
+	}
+
+	/*
+	 * If "mask" is not set, default mask is used,
+	 * but if default mask is NULL, "mask" should be set.
+	 */
+	if (item->mask == NULL) {
+		if (def_mask == NULL) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Mask should be specified\n");
+			return -rte_errno;
+		}
+
+		mask = (const uint8_t *)def_mask;
+	} else {
+		mask = (const uint8_t *)item->mask;
+	}
+
+	spec = (const uint8_t *)item->spec;
+	last = (const uint8_t *)item->last;
+
+	if (spec == NULL) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Spec should be specified\n");
+		return -rte_errno;
+	}
+
+	/*
+	 * If field values in "last" are either 0 or equal to the corresponding
+	 * values in "spec" then they are ignored. Any other "last" requests
+	 * ranging, which is not supported.
+	 */
+	if (last != NULL &&
+	    memcmp(last, zeros, size) != 0 &&
+	    memcmp(last, spec, size) != 0) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				   "Ranging is not supported\n");
+		return -rte_errno;
+	}
+
+	*spec_ptr = spec;
+	*mask_ptr = mask;
+
+	return 0;
+}
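The "last" handling above can be illustrated with a standalone sketch consistent with the comment in mrvl_parse_init() (helper name is hypothetical; size is capped at 64 bytes for simplicity): entries in ``last`` that are all zeros or equal to ``spec`` are ignored; any other ``last`` requests ranging, which the classifier rejects.

```c
#include <string.h>

/* Ranging is requested (and must be rejected) when "last" is set,
 * is not all zeros, and differs from "spec". */
static int
range_requested(const void *spec, const void *last, size_t size)
{
	unsigned char zeros[64] = { 0 };

	return last != NULL &&
	       memcmp(last, zeros, size) != 0 &&
	       memcmp(last, spec, size) != 0;
}
```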
+
+/**
+ * Parse the eth flow item.
+ *
+ * This will create a classifier rule that matches either destination
+ * or source mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination or source mac address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_mac(const struct rte_flow_item_eth *spec,
+	       const struct rte_flow_item_eth *mask,
+	       int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	const uint8_t *k, *m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	if (parse_dst) {
+		k = spec->dst.addr_bytes;
+		m = mask->dst.addr_bytes;
+
+		flow->pattern |= F_DMAC;
+	} else {
+		k = spec->src.addr_bytes;
+		m = mask->src.addr_bytes;
+
+		flow->pattern |= F_SMAC;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 6;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX,
+		 "%02x:%02x:%02x:%02x:%02x:%02x",
+		 k[0], k[1], k[2], k[3], k[4], k[5]);
+
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX,
+		 "%02x:%02x:%02x:%02x:%02x:%02x",
+		 m[0], m[1], m[2], m[3], m[4], m[5]);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the eth flow item destination mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_dmac(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask,
+		struct rte_flow *flow)
+{
+	return mrvl_parse_mac(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing the eth flow item source mac address.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_smac(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask,
+		struct rte_flow *flow)
+{
+	return mrvl_parse_mac(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the ether type field of the eth flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_type(const struct rte_flow_item_eth *spec,
+		const struct rte_flow_item_eth *mask __rte_unused,
+		struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	k = rte_be_to_cpu_16(spec->type);
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_TYPE;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the vid field of the vlan rte flow item.
+ *
+ * This will create classifier rule that matches vid.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
+		   const struct rte_flow_item_vlan *mask __rte_unused,
+		   struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_VLAN_ID;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the pri field of the vlan rte flow item.
+ *
+ * This will create classifier rule that matches pri.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
+		    const struct rte_flow_item_vlan *mask __rte_unused,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_VLAN_PRI;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
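The VLAN parsers above decompose the TCI field; a minimal sketch of the standard 802.1Q TCI layout assumed here (macro and helper names are hypothetical): PCP in bits 15-13, DEI in bit 12, VID in bits 11-0.

```c
#include <stdint.h>

/* Sketch of 802.1Q TCI decomposition: PCP occupies the top 3 bits,
 * VID the bottom 12 bits. */
#define VLAN_ID_MASK  0x0fff
#define VLAN_PRI_MASK 0xe000

static uint16_t tci_vid(uint16_t tci) { return tci & VLAN_ID_MASK; }
static uint8_t  tci_pcp(uint16_t tci) { return (tci & VLAN_PRI_MASK) >> 13; }
```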
+
+/**
+ * Parse the dscp field of the ipv4 rte flow item.
+ *
+ * This will create classifier rule that matches dscp field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_dscp(const struct rte_flow_item_ipv4 *spec,
+		    const struct rte_flow_item_ipv4 *mask,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k, m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	k = (spec->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2;
+	m = (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2;
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m);
+
+	flow->pattern |= F_IP4_TOS;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse either source or destination ip addresses of the ipv4 flow item.
+ *
+ * This will create a classifier rule that matches either the destination
+ * or the source ip field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination or source ip address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_addr(const struct rte_flow_item_ipv4 *spec,
+		    const struct rte_flow_item_ipv4 *mask,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	struct in_addr k;
+	uint32_t m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	memset(&k, 0, sizeof(k));
+	if (parse_dst) {
+		k.s_addr = spec->hdr.dst_addr;
+		m = rte_be_to_cpu_32(mask->hdr.dst_addr);
+
+		flow->pattern |= F_IP4_DIP;
+	} else {
+		k.s_addr = spec->hdr.src_addr;
+		m = rte_be_to_cpu_32(mask->hdr.src_addr);
+
+		flow->pattern |= F_IP4_SIP;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 4;
+
+	inet_ntop(AF_INET, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "0x%x", m);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
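The key/mask string encoding performed above can be sketched in isolation (function name is hypothetical): the IPv4 key is rendered with inet_ntop() from the network-order address, the mask as a host-order hex string, into buffers sized like MRVL_CLS_STR_SIZE_MAX.

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

/* Render an IPv4 classifier key ("a.b.c.d") and mask ("0x...") the way
 * the parser above does. addr_be is in network byte order, mask_host in
 * host byte order. */
static void
encode_ip4_key_mask(uint32_t addr_be, uint32_t mask_host,
		    char *key, char *mask, size_t len)
{
	struct in_addr k = { .s_addr = addr_be };

	inet_ntop(AF_INET, &k, key, len);
	snprintf(mask, len, "0x%x", mask_host);
}
```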
+
+/**
+ * Helper for parsing destination ip of the ipv4 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip4_dip(const struct rte_flow_item_ipv4 *spec,
+		   const struct rte_flow_item_ipv4 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip4_addr(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing source ip of the ipv4 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip4_sip(const struct rte_flow_item_ipv4 *spec,
+		   const struct rte_flow_item_ipv4 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip4_addr(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the proto field of the ipv4 rte flow item.
+ *
+ * This will create classifier rule that matches proto field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip4_proto(const struct rte_flow_item_ipv4 *spec,
+		     const struct rte_flow_item_ipv4 *mask __rte_unused,
+		     struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k = spec->hdr.next_proto_id;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_IP4_PROTO;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse either source or destination ip addresses of the ipv6 rte flow item.
+ *
+ * This will create a classifier rule that matches either the destination
+ * or the source ip field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst Parse either destination or source ip address.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_addr(const struct rte_flow_item_ipv6 *spec,
+		    const struct rte_flow_item_ipv6 *mask,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	int size = sizeof(spec->hdr.dst_addr);
+	struct in6_addr k, m;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	memset(&k, 0, sizeof(k));
+	if (parse_dst) {
+		memcpy(k.s6_addr, spec->hdr.dst_addr, size);
+		memcpy(m.s6_addr, mask->hdr.dst_addr, size);
+
+		flow->pattern |= F_IP6_DIP;
+	} else {
+		memcpy(k.s6_addr, spec->hdr.src_addr, size);
+		memcpy(m.s6_addr, mask->hdr.src_addr, size);
+
+		flow->pattern |= F_IP6_SIP;
+	}
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 16;
+
+	inet_ntop(AF_INET6, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX);
+	inet_ntop(AF_INET6, &m, (char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing destination ip of the ipv6 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip6_dip(const struct rte_flow_item_ipv6 *spec,
+		   const struct rte_flow_item_ipv6 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip6_addr(spec, mask, 1, flow);
+}
+
+/**
+ * Helper for parsing source ip of the ipv6 flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_ip6_sip(const struct rte_flow_item_ipv6 *spec,
+		   const struct rte_flow_item_ipv6 *mask,
+		   struct rte_flow *flow)
+{
+	return mrvl_parse_ip6_addr(spec, mask, 0, flow);
+}
+
+/**
+ * Parse the flow label of the ipv6 flow item.
+ *
+ * This will create classifier rule that matches flow field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_flow(const struct rte_flow_item_ipv6 *spec,
+		    const struct rte_flow_item_ipv6 *mask,
+		    struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint32_t k = rte_be_to_cpu_32(spec->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK,
+		 m = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 3;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m);
+
+	flow->pattern |= F_IP6_FLOW;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse the next header of the ipv6 flow item.
+ *
+ * This will create classifier rule that matches next header field.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_ip6_next_hdr(const struct rte_flow_item_ipv6 *spec,
+			const struct rte_flow_item_ipv6 *mask __rte_unused,
+			struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint8_t k = spec->hdr.proto;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 1;
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->pattern |= F_IP6_NEXT_HDR;
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Parse destination or source port of the tcp flow item.
+ *
+ * This will create a classifier rule that matches either the destination
+ * or the source tcp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst 1 to parse destination port, 0 to parse source port.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_tcp_port(const struct rte_flow_item_tcp *spec,
+		    const struct rte_flow_item_tcp *mask __rte_unused,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	if (parse_dst) {
+		k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+		flow->pattern |= F_TCP_DPORT;
+	} else {
+		k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+		flow->pattern |= F_TCP_SPORT;
+	}
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the tcp source port of the tcp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_sport(const struct rte_flow_item_tcp *spec,
+		     const struct rte_flow_item_tcp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_tcp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the tcp destination port of the tcp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_dport(const struct rte_flow_item_tcp *spec,
+		     const struct rte_flow_item_tcp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_tcp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse destination or source port of the udp flow item.
+ *
+ * This will create a classifier rule that matches either the destination
+ * or the source udp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst 1 to parse destination port, 0 to parse source port.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_udp_port(const struct rte_flow_item_udp *spec,
+		    const struct rte_flow_item_udp *mask __rte_unused,
+		    int parse_dst, struct rte_flow *flow)
+{
+	struct pp2_cls_rule_key_field *key_field;
+	uint16_t k;
+
+	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+		return -ENOSPC;
+
+	key_field = &flow->rule.fields[flow->rule.num_fields];
+	mrvl_alloc_key_mask(key_field);
+	key_field->size = 2;
+
+	if (parse_dst) {
+		k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+		flow->pattern |= F_UDP_DPORT;
+	} else {
+		k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+		flow->pattern |= F_UDP_SPORT;
+	}
+
+	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+	flow->rule.num_fields += 1;
+
+	return 0;
+}
+
+/**
+ * Helper for parsing the udp source port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_sport(const struct rte_flow_item_udp *spec,
+		     const struct rte_flow_item_udp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_udp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the udp destination port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_dport(const struct rte_flow_item_udp *spec,
+		     const struct rte_flow_item_udp *mask,
+		     struct rte_flow *flow)
+{
+	return mrvl_parse_udp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse eth flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_eth *spec = NULL, *mask = NULL;
+	struct ether_addr zero;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_eth_mask,
+			      sizeof(struct rte_flow_item_eth), error);
+	if (ret)
+		return ret;
+
+	memset(&zero, 0, sizeof(zero));
+
+	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+		ret = mrvl_parse_dmac(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+		ret = mrvl_parse_smac(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->type) {
+		RTE_LOG(WARNING, PMD, "eth type mask is ignored\n");
+		ret = mrvl_parse_type(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse vlan flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_vlan(const struct rte_flow_item *item,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_vlan *spec = NULL, *mask = NULL;
+	uint16_t m;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_vlan_mask,
+			      sizeof(struct rte_flow_item_vlan), error);
+	if (ret)
+		return ret;
+
+	if (mask->tpid) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	m = rte_be_to_cpu_16(mask->tci);
+	if (m & MRVL_VLAN_ID_MASK) {
+		RTE_LOG(WARNING, PMD, "vlan id mask is ignored\n");
+		ret = mrvl_parse_vlan_id(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (m & MRVL_VLAN_PRI_MASK) {
+		RTE_LOG(WARNING, PMD, "vlan pri mask is ignored\n");
+		ret = mrvl_parse_vlan_pri(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse ipv4 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip4(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_ipv4_mask,
+			      sizeof(struct rte_flow_item_ipv4), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.version_ihl ||
+	    mask->hdr.total_length ||
+	    mask->hdr.packet_id ||
+	    mask->hdr.fragment_offset ||
+	    mask->hdr.time_to_live ||
+	    mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) {
+		ret = mrvl_parse_ip4_dscp(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.src_addr) {
+		ret = mrvl_parse_ip4_sip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_addr) {
+		ret = mrvl_parse_ip4_dip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.next_proto_id) {
+		RTE_LOG(WARNING, PMD, "next proto id mask is ignored\n");
+		ret = mrvl_parse_ip4_proto(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse ipv6 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip6(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *spec = NULL, *mask = NULL;
+	struct ipv6_hdr zero;
+	uint32_t flow_mask;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec,
+			      (const void **)&mask,
+			      &rte_flow_item_ipv6_mask,
+			      sizeof(struct rte_flow_item_ipv6),
+			      error);
+	if (ret)
+		return ret;
+
+	memset(&zero, 0, sizeof(zero));
+
+	if (mask->hdr.payload_len ||
+	    mask->hdr.hop_limits) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (memcmp(mask->hdr.src_addr,
+		   zero.src_addr, sizeof(mask->hdr.src_addr))) {
+		ret = mrvl_parse_ip6_sip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (memcmp(mask->hdr.dst_addr,
+		   zero.dst_addr, sizeof(mask->hdr.dst_addr))) {
+		ret = mrvl_parse_ip6_dip(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	flow_mask = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+	if (flow_mask) {
+		ret = mrvl_parse_ip6_flow(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.proto) {
+		RTE_LOG(WARNING, PMD, "next header mask is ignored\n");
+		ret = mrvl_parse_ip6_next_hdr(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse tcp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_tcp(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_tcp *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_tcp_mask,
+			      sizeof(struct rte_flow_item_tcp), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.sent_seq ||
+	    mask->hdr.recv_ack ||
+	    mask->hdr.data_off ||
+	    mask->hdr.tcp_flags ||
+	    mask->hdr.rx_win ||
+	    mask->hdr.cksum ||
+	    mask->hdr.tcp_urp) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.src_port) {
+		RTE_LOG(WARNING, PMD, "tcp sport mask is ignored\n");
+		ret = mrvl_parse_tcp_sport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_port) {
+		RTE_LOG(WARNING, PMD, "tcp dport mask is ignored\n");
+		ret = mrvl_parse_tcp_dport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse udp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_udp(const struct rte_flow_item *item,
+	       struct rte_flow *flow,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *spec = NULL, *mask = NULL;
+	int ret;
+
+	ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+			      &rte_flow_item_udp_mask,
+			      sizeof(struct rte_flow_item_udp), error);
+	if (ret)
+		return ret;
+
+	if (mask->hdr.dgram_len ||
+	    mask->hdr.dgram_cksum) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by classifier\n");
+		return -rte_errno;
+	}
+
+	if (mask->hdr.src_port) {
+		RTE_LOG(WARNING, PMD, "udp sport mask is ignored\n");
+		ret = mrvl_parse_udp_sport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	if (mask->hdr.dst_port) {
+		RTE_LOG(WARNING, PMD, "udp dport mask is ignored\n");
+		ret = mrvl_parse_udp_dport(spec, mask, flow);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Reached maximum number of fields in cls tbl key\n");
+	return -rte_errno;
+}
+
+/**
+ * Parse flow pattern composed of the eth item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_eth(pattern, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and vlan items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_vlan(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip4_ip6(const struct rte_flow_item pattern[],
+				    struct rte_flow *flow,
+				    struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	ret = mrvl_parse_vlan(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip4(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth, vlan and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_vlan_ip6(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip4_ip6(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_eth(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ip4 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip4_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip4_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth_ip6_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the eth, ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_eth_ip6_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_vlan(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ip4/ip6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip4_ip6(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_vlan(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ipv4 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip4(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip_tcp_udp(const struct rte_flow_item pattern[],
+				   struct rte_flow *flow,
+				   struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip_tcp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip_udp(const struct rte_flow_item pattern[],
+			       struct rte_flow *flow,
+			       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the vlan and ipv6 items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6(const struct rte_flow_item pattern[],
+			    struct rte_flow *flow,
+			    struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and tcp/udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param tcp 1 to parse tcp item, 0 to parse udp item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_vlan_ip6_tcp_udp(const struct rte_flow_item pattern[],
+				    struct rte_flow *flow,
+				    struct rte_flow_error *error, int tcp)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+	item = mrvl_next_item(item + 1);
+
+	if (tcp)
+		return mrvl_parse_tcp(item, flow, error);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6_tcp(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the vlan, ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_vlan_ip6_udp(const struct rte_flow_item pattern[],
+				struct rte_flow *flow,
+				struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ip4/ip6 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return ip6 ? mrvl_parse_ip6(item, flow, error) :
+		     mrvl_parse_ip4(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the ip4/ip6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = ip6 ? mrvl_parse_ip6(item, flow, error) :
+		    mrvl_parse_ip4(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_tcp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 and tcp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6_tcp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4/ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @param ip6 1 to parse ip6 item, 0 to parse ip4 item.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_ip4_ip6_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error, int ip6)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+	int ret;
+
+	ret = ip6 ? mrvl_parse_ip6(item, flow, error) :
+		    mrvl_parse_ip4(item, flow, error);
+	if (ret)
+		return ret;
+
+	item = mrvl_next_item(item + 1);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the ipv4 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip4_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 0);
+}
+
+/**
+ * Parse flow pattern composed of the ipv6 and udp items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static inline int
+mrvl_parse_pattern_ip6_udp(const struct rte_flow_item pattern[],
+			   struct rte_flow *flow,
+			   struct rte_flow_error *error)
+{
+	return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 1);
+}
+
+/**
+ * Parse flow pattern composed of the tcp item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_tcp(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_tcp(item, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the udp item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_udp(const struct rte_flow_item pattern[],
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item = mrvl_next_item(pattern);
+
+	return mrvl_parse_udp(item, flow, error);
+}
+
+/**
+ * Structure used to map specific flow pattern to the pattern parse callback
+ * which will iterate over each pattern item and extract relevant data.
+ */
+static const struct {
+	const enum rte_flow_item_type *pattern;
+	int (*parse)(const struct rte_flow_item pattern[],
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
+} mrvl_patterns[] = {
+	{ pattern_eth, mrvl_parse_pattern_eth },
+	{ pattern_eth_vlan, mrvl_parse_pattern_eth_vlan },
+	{ pattern_eth_vlan_ip, mrvl_parse_pattern_eth_vlan_ip4 },
+	{ pattern_eth_vlan_ip6, mrvl_parse_pattern_eth_vlan_ip6 },
+	{ pattern_eth_ip4, mrvl_parse_pattern_eth_ip4 },
+	{ pattern_eth_ip4_tcp, mrvl_parse_pattern_eth_ip4_tcp },
+	{ pattern_eth_ip4_udp, mrvl_parse_pattern_eth_ip4_udp },
+	{ pattern_eth_ip6, mrvl_parse_pattern_eth_ip6 },
+	{ pattern_eth_ip6_tcp, mrvl_parse_pattern_eth_ip6_tcp },
+	{ pattern_eth_ip6_udp, mrvl_parse_pattern_eth_ip6_udp },
+	{ pattern_vlan, mrvl_parse_pattern_vlan },
+	{ pattern_vlan_ip, mrvl_parse_pattern_vlan_ip4 },
+	{ pattern_vlan_ip_tcp, mrvl_parse_pattern_vlan_ip_tcp },
+	{ pattern_vlan_ip_udp, mrvl_parse_pattern_vlan_ip_udp },
+	{ pattern_vlan_ip6, mrvl_parse_pattern_vlan_ip6 },
+	{ pattern_vlan_ip6_tcp, mrvl_parse_pattern_vlan_ip6_tcp },
+	{ pattern_vlan_ip6_udp, mrvl_parse_pattern_vlan_ip6_udp },
+	{ pattern_ip, mrvl_parse_pattern_ip4 },
+	{ pattern_ip_tcp, mrvl_parse_pattern_ip4_tcp },
+	{ pattern_ip_udp, mrvl_parse_pattern_ip4_udp },
+	{ pattern_ip6, mrvl_parse_pattern_ip6 },
+	{ pattern_ip6_tcp, mrvl_parse_pattern_ip6_tcp },
+	{ pattern_ip6_udp, mrvl_parse_pattern_ip6_udp },
+	{ pattern_tcp, mrvl_parse_pattern_tcp },
+	{ pattern_udp, mrvl_parse_pattern_udp }
+};
+
+/**
+ * Check whether provided pattern matches any of the supported ones.
+ *
+ * @param type_pattern Pointer to the table of supported pattern item types.
+ * @param item_pattern Pointer to the flow pattern.
+ * @returns 1 if the patterns match, 0 otherwise.
+ */
+static int
+mrvl_patterns_match(const enum rte_flow_item_type *type_pattern,
+		    const struct rte_flow_item *item_pattern)
+{
+	const enum rte_flow_item_type *type = type_pattern;
+	const struct rte_flow_item *item = item_pattern;
+
+	for (;;) {
+		if (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+			item++;
+			continue;
+		}
+
+		if (*type == RTE_FLOW_ITEM_TYPE_END ||
+		    item->type == RTE_FLOW_ITEM_TYPE_END)
+			break;
+
+		if (*type != item->type)
+			break;
+
+		item++;
+		type++;
+	}
+
+	return *type == item->type;
+}
+
+/**
+ * Parse flow attribute.
+ *
+ * This will check whether the provided attribute's flags are supported.
+ *
+ * @param priv Unused
+ * @param attr Pointer to the flow attribute.
+ * @param flow Unused
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_attr(struct mrvl_priv *priv __rte_unused,
+		     const struct rte_flow_attr *attr,
+		     struct rte_flow *flow __rte_unused,
+		     struct rte_flow_error *error)
+{
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute");
+		return -rte_errno;
+	}
+
+	if (attr->group) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP, NULL,
+				   "Groups are not supported");
+		return -rte_errno;
+	}
+	if (attr->priority) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL,
+				   "Priorities are not supported");
+		return -rte_errno;
+	}
+	if (!attr->ingress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, NULL,
+				   "Only ingress is supported");
+		return -rte_errno;
+	}
+	if (attr->egress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+				   "Egress is not supported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse flow pattern.
+ *
+ * A classifier rule matching the pattern will be created as well.
+ *
+ * @param priv Unused
+ * @param pattern Pointer to the flow pattern.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_pattern(struct mrvl_priv *priv __rte_unused,
+			const struct rte_flow_item pattern[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < RTE_DIM(mrvl_patterns); i++) {
+		if (!mrvl_patterns_match(mrvl_patterns[i].pattern, pattern))
+			continue;
+
+		ret = mrvl_patterns[i].parse(pattern, flow, error);
+		if (ret)
+			mrvl_free_all_key_mask(&flow->rule);
+
+		return ret;
+	}
+
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			   "Unsupported pattern");
+
+	return -rte_errno;
+}
+
+/**
+ * Parse flow actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param actions Pointer to the flow action table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_actions(struct mrvl_priv *priv,
+			const struct rte_flow_action actions[],
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_action *action = actions;
+	int specified = 0;
+
+	for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+		if (action->type == RTE_FLOW_ACTION_TYPE_VOID)
+			continue;
+
+		if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			flow->cos.ppio = priv->ppio;
+			flow->cos.tc = 0;
+			flow->action.type = PP2_CLS_TBL_ACT_DROP;
+			flow->action.cos = &flow->cos;
+			specified++;
+		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			const struct rte_flow_action_queue *q =
+				(const struct rte_flow_action_queue *)
+				action->conf;
+
+			if (q->index >= priv->nb_rx_queues) {
+				rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ACTION,
+						NULL,
+						"Queue index out of range");
+				return -rte_errno;
+			}
+
+			if (priv->rxq_map[q->index].tc == MRVL_UNKNOWN_TC) {
+				/*
+				 * Unknown TC mapping, mapping will not have
+				 * a correct queue.
+				 */
+				RTE_LOG(ERR, PMD,
+					"Unknown TC mapping for queue %hu eth%hhu\n",
+					q->index, priv->ppio_id);
+
+				rte_flow_error_set(error, EFAULT,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						NULL, NULL);
+				return -rte_errno;
+			}
+
+			RTE_LOG(DEBUG, PMD,
+				"Action: Assign packets to queue %d, tc:%d, q:%d\n",
+				q->index, priv->rxq_map[q->index].tc,
+				priv->rxq_map[q->index].inq);
+
+			flow->cos.ppio = priv->ppio;
+			flow->cos.tc = priv->rxq_map[q->index].tc;
+			flow->action.type = PP2_CLS_TBL_ACT_DONE;
+			flow->action.cos = &flow->cos;
+			specified++;
+		} else {
+			rte_flow_error_set(error, ENOTSUP,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "Action not supported");
+			return -rte_errno;
+		}
+
+	}
+
+	if (!specified) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Action not specified");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse flow attribute, pattern and actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse(struct mrvl_priv *priv, const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = mrvl_flow_parse_attr(priv, attr, flow, error);
+	if (ret)
+		return ret;
+
+	ret = mrvl_flow_parse_pattern(priv, pattern, flow, error);
+	if (ret)
+		return ret;
+
+	return mrvl_flow_parse_actions(priv, actions, flow, error);
+}
+
+static inline enum pp2_cls_tbl_type
+mrvl_engine_type(const struct rte_flow *flow)
+{
+	int i, size = 0;
+
+	for (i = 0; i < flow->rule.num_fields; i++)
+		size += flow->rule.fields[i].size;
+
+	/*
+	 * The maskable engine supports keys up to 8 bytes long.
+	 * Longer keys require the exact match engine type.
+	 */
+	if (size > 8)
+		return PP2_CLS_TBL_EXACT_MATCH;
+
+	return PP2_CLS_TBL_MASKABLE;
+}
+
+static int
+mrvl_create_cls_table(struct rte_eth_dev *dev, struct rte_flow *first_flow)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_cls_tbl_key *key = &priv->cls_tbl_params.key;
+	int ret;
+
+	if (priv->cls_tbl) {
+		pp2_cls_tbl_deinit(priv->cls_tbl);
+		priv->cls_tbl = NULL;
+	}
+
+	memset(&priv->cls_tbl_params, 0, sizeof(priv->cls_tbl_params));
+
+	priv->cls_tbl_params.type = mrvl_engine_type(first_flow);
+	RTE_LOG(INFO, PMD, "Setting cls search engine type to %s\n",
+			priv->cls_tbl_params.type == PP2_CLS_TBL_EXACT_MATCH ?
+			"exact" : "maskable");
+	priv->cls_tbl_params.max_num_rules = MRVL_CLS_MAX_NUM_RULES;
+	priv->cls_tbl_params.default_act.type = PP2_CLS_TBL_ACT_DONE;
+	priv->cls_tbl_params.default_act.cos = &first_flow->cos;
+
+	if (first_flow->pattern & F_DMAC) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_DA;
+		key->key_size += 6;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_SMAC) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_SA;
+		key->key_size += 6;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TYPE) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH;
+		key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_TYPE;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_VLAN_ID) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN;
+		key->proto_field[key->num_fields].field.vlan = MV_NET_VLAN_F_ID;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_VLAN_PRI) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN;
+		key->proto_field[key->num_fields].field.vlan =
+			MV_NET_VLAN_F_PRI;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_TOS) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_TOS;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_SIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_SA;
+		key->key_size += 4;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_DIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_DA;
+		key->key_size += 4;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP4_PROTO) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4;
+		key->proto_field[key->num_fields].field.ipv4 =
+			MV_NET_IP4_F_PROTO;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_SIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_SA;
+		key->key_size += 16;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_DIP) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_DA;
+		key->key_size += 16;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_FLOW) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 =
+			MV_NET_IP6_F_FLOW;
+		key->key_size += 3;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_IP6_NEXT_HDR) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6;
+		key->proto_field[key->num_fields].field.ipv6 =
+			MV_NET_IP6_F_NEXT_HDR;
+		key->key_size += 1;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TCP_SPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP;
+		key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_SP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_TCP_DPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP;
+		key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_DP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_UDP_SPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+		key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_SP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	if (first_flow->pattern & F_UDP_DPORT) {
+		key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+		key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_DP;
+		key->key_size += 2;
+		key->num_fields += 1;
+	}
+
+	ret = pp2_cls_tbl_init(&priv->cls_tbl_params, &priv->cls_tbl);
+	if (!ret)
+		priv->cls_tbl_pattern = first_flow->pattern;
+
+	return ret;
+}
+
+/**
+ * Check whether new flow can be added to the table
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the new flow.
+ * @return 1 in case flow can be added, 0 otherwise.
+ */
+static inline int
+mrvl_flow_can_be_added(struct mrvl_priv *priv, const struct rte_flow *flow)
+{
+	return flow->pattern == priv->cls_tbl_pattern &&
+	       mrvl_engine_type(flow) == priv->cls_tbl_params.type;
+}
+
+/**
+ * DPDK flow create callback called when flow is to be created.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns Pointer to the created flow in case of success, NULL otherwise.
+ */
+static struct rte_flow *
+mrvl_flow_create(struct rte_eth_dev *dev,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *flow, *first;
+	int ret;
+
+	if (!dev->data->dev_started) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Port must be started first\n");
+		return NULL;
+	}
+
+	flow = rte_zmalloc_socket(NULL, sizeof(*flow), 0, rte_socket_id());
+	if (!flow)
+		return NULL;
+
+	ret = mrvl_flow_parse(priv, attr, pattern, actions, flow, error);
+	if (ret)
+		goto out;
+
+	/*
+	 * Four cases here:
+	 *
+	 * 1. In case table does not exist - create one.
+	 * 2. In case table exists, is empty and the new flow cannot be
+	 *    added - recreate the table.
+	 * 3. In case table is not empty and the new flow matches the
+	 *    table format - add it.
+	 * 4. Otherwise the flow cannot be added.
+	 */
+	first = LIST_FIRST(&priv->flows);
+	if (!priv->cls_tbl) {
+		ret = mrvl_create_cls_table(dev, flow);
+	} else if (!first && !mrvl_flow_can_be_added(priv, flow)) {
+		ret = mrvl_create_cls_table(dev, flow);
+	} else if (mrvl_flow_can_be_added(priv, flow)) {
+		ret = 0;
+	} else {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Pattern does not match cls table format\n");
+		goto out;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to create cls table\n");
+		goto out;
+	}
+
+	ret = pp2_cls_tbl_add_rule(priv->cls_tbl, &flow->rule, &flow->action);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to add rule\n");
+		goto out;
+	}
+
+	LIST_INSERT_HEAD(&priv->flows, flow, next);
+
+	return flow;
+out:
+	rte_free(flow);
+	return NULL;
+}
+
+/**
+ * Remove classifier rule associated with given flow.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_remove(struct mrvl_priv *priv, struct rte_flow *flow,
+		 struct rte_flow_error *error)
+{
+	int ret;
+
+	if (!priv->cls_tbl) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Classifier table not initialized");
+		return -rte_errno;
+	}
+
+	ret = pp2_cls_tbl_remove_rule(priv->cls_tbl, &flow->rule);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to remove rule");
+		return -rte_errno;
+	}
+
+	mrvl_free_all_key_mask(&flow->rule);
+
+	return 0;
+}
+
+/**
+ * DPDK flow destroy callback called when flow is to be removed.
+ *
+ * @param dev Pointer to the device.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *f;
+	int ret;
+
+	LIST_FOREACH(f, &priv->flows, next) {
+		if (f == flow)
+			break;
+	}
+
+	if (!f) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Rule was not found");
+		return -rte_errno;
+	}
+
+	LIST_REMOVE(f, next);
+
+	ret = mrvl_flow_remove(priv, flow, error);
+	if (ret)
+		return ret;
+
+	rte_free(flow);
+
+	return 0;
+}
+
+/**
+ * DPDK flow callback called to verify given attribute, pattern and actions.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct rte_flow *flow;
+
+	flow = mrvl_flow_create(dev, attr, pattern, actions, error);
+	if (!flow)
+		return -rte_errno;
+
+	mrvl_flow_destroy(dev, flow, error);
+
+	return 0;
+}
+
+/**
+ * DPDK flow flush callback called when flows are to be flushed.
+ *
+ * @param dev Pointer to the device.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	while (!LIST_EMPTY(&priv->flows)) {
+		struct rte_flow *flow = LIST_FIRST(&priv->flows);
+		int ret = mrvl_flow_remove(priv, flow, error);
+		if (ret)
+			return ret;
+
+		LIST_REMOVE(flow, next);
+		rte_free(flow);
+	}
+
+	return 0;
+}
+
+/**
+ * DPDK flow isolate callback called to isolate port.
+ *
+ * @param dev Pointer to the device.
+ * @param enable Pass 0/1 to disable/enable port isolation.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_isolate(struct rte_eth_dev *dev, int enable,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (dev->data->dev_started) {
+		rte_flow_error_set(error, EBUSY,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Port must be stopped first\n");
+		return -rte_errno;
+	}
+
+	priv->isolated = enable;
+
+	return 0;
+}
+
+const struct rte_flow_ops mrvl_flow_ops = {
+	.validate = mrvl_flow_validate,
+	.create = mrvl_flow_create,
+	.destroy = mrvl_flow_destroy,
+	.flush = mrvl_flow_flush,
+	.isolate = mrvl_flow_isolate
+};
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v3 6/8] net/mrvl: add extended statistics
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                       ` (4 preceding siblings ...)
  2018-03-15  7:52     ` [PATCH v3 5/8] net/mrvl: add classifier support Tomasz Duszynski
@ 2018-03-15  7:52     ` Tomasz Duszynski
  2018-03-15  7:52     ` [PATCH v3 7/8] net/mrvl: add Rx flow control Tomasz Duszynski
                       ` (2 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:52 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add extended statistics implementation.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/features/mrvl.ini |   1 +
 doc/guides/nics/mrvl.rst          |   1 +
 drivers/net/mrvl/mrvl_ethdev.c    | 115 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
index 00d9621..120fd4d 100644
--- a/doc/guides/nics/features/mrvl.ini
+++ b/doc/guides/nics/features/mrvl.ini
@@ -19,5 +19,6 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 Basic stats          = Y
+Extended stats       = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 9230d5e..7678265 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -70,6 +70,7 @@ Features of the MRVL PMD are:
 - L4 checksum offload
 - Packet type parsing
 - Basic stats
+- Extended stats
 - QoS
 
 
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 34b9ef7..880bd0b 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -145,6 +145,32 @@ static inline void mrvl_free_sent_buffers(struct pp2_ppio *ppio,
 			struct pp2_hif *hif, unsigned int core_id,
 			struct mrvl_shadow_txq *sq, int qid, int force);
 
+#define MRVL_XSTATS_TBL_ENTRY(name) { \
+	#name, offsetof(struct pp2_ppio_statistics, name),	\
+	sizeof(((struct pp2_ppio_statistics *)0)->name)		\
+}
+
+/* Table with xstats data */
+static struct {
+	const char *name;
+	unsigned int offset;
+	unsigned int size;
+} mrvl_xstats_tbl[] = {
+	MRVL_XSTATS_TBL_ENTRY(rx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(rx_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_errors),
+	MRVL_XSTATS_TBL_ENTRY(rx_fullq_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_bm_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_early_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_fifo_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_cls_dropped),
+	MRVL_XSTATS_TBL_ENTRY(tx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(tx_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_errors)
+};
+
 static inline int
 mrvl_get_bpool_size(int pp2_id, int pool_id)
 {
@@ -1110,6 +1136,90 @@ mrvl_stats_reset(struct rte_eth_dev *dev)
 }
 
 /**
+ * DPDK callback to get extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param stats
+ *   Pointer to xstats table.
+ * @param n
+ *   Number of entries in xstats table.
+ * @return
+ *   Negative value on error, number of read xstats otherwise.
+ */
+static int
+mrvl_xstats_get(struct rte_eth_dev *dev,
+		struct rte_eth_xstat *stats, unsigned int n)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_ppio_statistics ppio_stats;
+	unsigned int i;
+
+	if (!stats)
+		return RTE_DIM(mrvl_xstats_tbl);
+
+	pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0);
+	for (i = 0; i < n && i < RTE_DIM(mrvl_xstats_tbl); i++) {
+		uint64_t val;
+
+		if (mrvl_xstats_tbl[i].size == sizeof(uint32_t))
+			val = *(uint32_t *)((uint8_t *)&ppio_stats +
+					    mrvl_xstats_tbl[i].offset);
+		else if (mrvl_xstats_tbl[i].size == sizeof(uint64_t))
+			val = *(uint64_t *)((uint8_t *)&ppio_stats +
+					    mrvl_xstats_tbl[i].offset);
+		else
+			return -EINVAL;
+
+		stats[i].id = i;
+		stats[i].value = val;
+	}
+
+	return RTE_DIM(mrvl_xstats_tbl);
+}
+
+/**
+ * DPDK callback to reset extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */
+static void
+mrvl_xstats_reset(struct rte_eth_dev *dev)
+{
+	mrvl_stats_reset(dev);
+}
+
+/**
+ * DPDK callback to get extended statistics names.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure (unused).
+ * @param xstats_names
+ *   Pointer to xstats names table.
+ * @param size
+ *   Size of the xstats names table.
+ * @return
+ *   Number of read names.
+ */
+static int
+mrvl_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+		      struct rte_eth_xstat_name *xstats_names,
+		      unsigned int size)
+{
+	unsigned int i;
+
+	if (!xstats_names)
+		return RTE_DIM(mrvl_xstats_tbl);
+
+	for (i = 0; i < size && i < RTE_DIM(mrvl_xstats_tbl); i++)
+		snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE, "%s",
+			 mrvl_xstats_tbl[i].name);
+
+	return RTE_DIM(mrvl_xstats_tbl);
+}
+
+/**
  * DPDK callback to get information about the device.
  *
  * @param dev
@@ -1692,6 +1802,9 @@ static const struct eth_dev_ops mrvl_ops = {
 	.mtu_set = mrvl_mtu_set,
 	.stats_get = mrvl_stats_get,
 	.stats_reset = mrvl_stats_reset,
+	.xstats_get = mrvl_xstats_get,
+	.xstats_reset = mrvl_xstats_reset,
+	.xstats_get_names = mrvl_xstats_get_names,
 	.dev_infos_get = mrvl_dev_infos_get,
 	.dev_supported_ptypes_get = mrvl_dev_supported_ptypes_get,
 	.rxq_info_get = mrvl_rxq_info_get,
@@ -1703,7 +1816,7 @@ static const struct eth_dev_ops mrvl_ops = {
 	.tx_queue_release = mrvl_tx_queue_release,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
-	.filter_ctrl = mrvl_eth_filter_ctrl
+	.filter_ctrl = mrvl_eth_filter_ctrl,
 };
 
 /**
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v3 7/8] net/mrvl: add Rx flow control
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                       ` (5 preceding siblings ...)
  2018-03-15  7:52     ` [PATCH v3 6/8] net/mrvl: add extended statistics Tomasz Duszynski
@ 2018-03-15  7:52     ` Tomasz Duszynski
  2018-03-15  7:52     ` [PATCH v3 8/8] net/mrvl: add Tx queue start/stop Tomasz Duszynski
  2018-03-15 15:09     ` [PATCH v3 0/8] net/mrvl: add new features to PMD Ferruh Yigit
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:52 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add Rx side flow control support.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/features/mrvl.ini |  1 +
 doc/guides/nics/mrvl.rst          |  1 +
 drivers/net/mrvl/mrvl_ethdev.c    | 78 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+)

diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
index 120fd4d..8673a56 100644
--- a/doc/guides/nics/features/mrvl.ini
+++ b/doc/guides/nics/features/mrvl.ini
@@ -13,6 +13,7 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+Flow control         = Y
 VLAN filter          = Y
 CRC offload          = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 7678265..550bd79 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -72,6 +72,7 @@ Features of the MRVL PMD are:
 - Basic stats
 - Extended stats
 - QoS
+- RX flow control
 
 
 Limitations
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index 880bd0b..bf596f4 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -1696,6 +1696,82 @@ mrvl_tx_queue_release(void *txq)
 }
 
 /**
+ * DPDK callback to get flow control configuration.
+ *
+ * @param dev
+ *  Pointer to Ethernet device structure.
+ * @param fc_conf
+ *  Pointer to the flow control configuration.
+ *
+ * @return
+ *  0 on success, negative error value otherwise.
+ */
+static int
+mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret, en;
+
+	if (!priv)
+		return -EPERM;
+
+	ret = pp2_ppio_get_rx_pause(priv->ppio, &en);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to read rx pause state\n");
+		return ret;
+	}
+
+	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to set flow control configuration.
+ *
+ * @param dev
+ *  Pointer to Ethernet device structure.
+ * @param fc_conf
+ *  Pointer to the flow control configuration.
+ *
+ * @return
+ *  0 on success, negative error value otherwise.
+ */
+static int
+mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (!priv)
+		return -EPERM;
+
+	if (fc_conf->high_water ||
+	    fc_conf->low_water ||
+	    fc_conf->pause_time ||
+	    fc_conf->mac_ctrl_frame_fwd ||
+	    fc_conf->autoneg) {
+		RTE_LOG(ERR, PMD, "Flowctrl parameters are not supported\n");
+
+		return -EINVAL;
+	}
+
+	if (fc_conf->mode == RTE_FC_NONE ||
+	    fc_conf->mode == RTE_FC_RX_PAUSE) {
+		int ret, en;
+
+		en = fc_conf->mode == RTE_FC_NONE ? 0 : 1;
+		ret = pp2_ppio_set_rx_pause(priv->ppio, en);
+		if (ret)
+			RTE_LOG(ERR, PMD,
+				"Failed to change flowctrl on RX side\n");
+
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
  * Update RSS hash configuration
  *
  * @param dev
@@ -1814,6 +1890,8 @@ static const struct eth_dev_ops mrvl_ops = {
 	.rx_queue_release = mrvl_rx_queue_release,
 	.tx_queue_setup = mrvl_tx_queue_setup,
 	.tx_queue_release = mrvl_tx_queue_release,
+	.flow_ctrl_get = mrvl_flow_ctrl_get,
+	.flow_ctrl_set = mrvl_flow_ctrl_set,
 	.rss_hash_update = mrvl_rss_hash_update,
 	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
 	.filter_ctrl = mrvl_eth_filter_ctrl,
-- 
2.7.4


* [PATCH v3 8/8] net/mrvl: add Tx queue start/stop
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                       ` (6 preceding siblings ...)
  2018-03-15  7:52     ` [PATCH v3 7/8] net/mrvl: add Rx flow control Tomasz Duszynski
@ 2018-03-15  7:52     ` Tomasz Duszynski
  2018-03-15 15:09     ` [PATCH v3 0/8] net/mrvl: add new features to PMD Ferruh Yigit
  8 siblings, 0 replies; 34+ messages in thread
From: Tomasz Duszynski @ 2018-03-15  7:52 UTC (permalink / raw)
  To: dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu, Tomasz Duszynski

Add Tx queue start/stop feature.

Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
---
 doc/guides/nics/mrvl.rst       |  1 +
 drivers/net/mrvl/mrvl_ethdev.c | 92 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst
index 550bd79..f9ec9d6 100644
--- a/doc/guides/nics/mrvl.rst
+++ b/doc/guides/nics/mrvl.rst
@@ -73,6 +73,7 @@ Features of the MRVL PMD are:
 - Extended stats
 - QoS
 - RX flow control
+- TX queue start/stop
 
 
 Limitations
diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c
index bf596f4..fac924f 100644
--- a/drivers/net/mrvl/mrvl_ethdev.c
+++ b/drivers/net/mrvl/mrvl_ethdev.c
@@ -134,6 +134,7 @@ struct mrvl_txq {
 	int port_id;
 	uint64_t bytes_sent;
 	struct mrvl_shadow_txq shadow_txqs[RTE_MAX_LCORE];
+	int tx_deferred_start;
 };
 
 static int mrvl_lcore_first;
@@ -459,6 +460,70 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 }
 
 /**
+ * DPDK callback to start tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Transmit queue index.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret;
+
+	if (!priv)
+		return -EPERM;
+
+	/* passing 1 enables the given tx queue */

+	ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 1);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to start txq %d\n", queue_id);
+		return ret;
+	}
+
+	dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to stop tx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param queue_id
+ *   Transmit queue index.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int ret;
+
+	if (!priv || !priv->ppio)
+		return -EPERM;
+
+	/* passing 0 disables the given tx queue */
+	ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 0);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to stop txq %d\n", queue_id);
+		return ret;
+	}
+
+	dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+/**
  * DPDK callback to start the device.
  *
  * @param dev
@@ -472,7 +537,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 	char match[MRVL_MATCH_LEN];
-	int ret = 0, def_init_size;
+	int ret = 0, i, def_init_size;
 
 	snprintf(match, sizeof(match), "ppio-%d:%d",
 		 priv->pp_id, priv->ppio_id);
@@ -559,6 +624,24 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		goto out;
 	}
 
+	/* start tx queues */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct mrvl_txq *txq = dev->data->tx_queues[i];
+
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+
+		if (!txq->tx_deferred_start)
+			continue;
+
+		/*
+		 * All txqs are started by default. Stop them
+		 * so that tx_deferred_start works as expected.
+		 */
+		ret = mrvl_tx_queue_stop(dev, i);
+		if (ret)
+			goto out;
+	}
+
 	return 0;
 out:
 	RTE_LOG(ERR, PMD, "Failed to start device\n");
@@ -1330,9 +1413,11 @@ static void mrvl_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      struct rte_eth_txq_info *qinfo)
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
+	struct mrvl_txq *txq = dev->data->tx_queues[tx_queue_id];
 
 	qinfo->nb_desc =
 		priv->ppio_params.outqs_params.outqs_params[tx_queue_id].size;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
 }
 
 /**
@@ -1643,7 +1728,7 @@ mrvl_tx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested)
  * @param socket
  *   NUMA socket on which memory must be allocated.
  * @param conf
- *   Thresholds parameters.
+ *   Tx queue configuration parameters.
  *
  * @return
  *   0 on success, negative error value otherwise.
@@ -1671,6 +1756,7 @@ mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	txq->priv = priv;
 	txq->queue_id = idx;
 	txq->port_id = dev->data->port_id;
+	txq->tx_deferred_start = conf->tx_deferred_start;
 	dev->data->tx_queues[idx] = txq;
 
 	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
@@ -1886,6 +1972,8 @@ static const struct eth_dev_ops mrvl_ops = {
 	.rxq_info_get = mrvl_rxq_info_get,
 	.txq_info_get = mrvl_txq_info_get,
 	.vlan_filter_set = mrvl_vlan_filter_set,
+	.tx_queue_start = mrvl_tx_queue_start,
+	.tx_queue_stop = mrvl_tx_queue_stop,
 	.rx_queue_setup = mrvl_rx_queue_setup,
 	.rx_queue_release = mrvl_rx_queue_release,
 	.tx_queue_setup = mrvl_tx_queue_setup,
-- 
2.7.4


* Re: [PATCH v3 0/8] net/mrvl: add new features to PMD
  2018-03-15  7:51   ` [PATCH v3 0/8] net/mrvl: add new features to PMD Tomasz Duszynski
                       ` (7 preceding siblings ...)
  2018-03-15  7:52     ` [PATCH v3 8/8] net/mrvl: add Tx queue start/stop Tomasz Duszynski
@ 2018-03-15 15:09     ` Ferruh Yigit
  8 siblings, 0 replies; 34+ messages in thread
From: Ferruh Yigit @ 2018-03-15 15:09 UTC (permalink / raw)
  To: Tomasz Duszynski, dev; +Cc: mw, dima, nsamsono, jck, jianbo.liu

On 3/15/2018 7:51 AM, Tomasz Duszynski wrote:
> This patch series comes along with a set of features,
> documentation updates and fixes.
> 
> Below one can find a short summary of introduced changes:
> 
> o Added support for selective Tx queue start and stop.
> o Added support for Rx flow control.
> o Added support for extended statistics counters.
> o Added support for ingress policer, egress scheduler and egress rate
>   limiter.
> o Added support for configuring hardware classifier via a flow API.
> o Documented new features and their usage.
> 
> Natalie Samsonov (1):
>   net/mrvl: fix crash when port is closed without starting
> 
> Tomasz Duszynski (7):
>   net/mrvl: add ingress policer support
>   net/mrvl: add egress scheduler/rate limiter support
>   net/mrvl: document policer/scheduler/rate limiter usage
>   net/mrvl: add classifier support
>   net/mrvl: add extended statistics
>   net/mrvl: add Rx flow control
>   net/mrvl: add Tx queue start/stop

Series applied to dpdk-next-net/master, thanks.
