* [PATCH v2 00/18] net/ixgbe: Consistent filter API
@ 2016-12-30  7:52 Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
                   ` (17 more replies)
  0 siblings, 18 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev

This patch set mainly implements the following functions (a usage
sketch of the resulting API follows the list):
1) Store and restore all kinds of filters.
2) Parse all kinds of filters.
3) Add a flow validate function.
4) Add a flow create function.
5) Add a flow destroy function.
6) Add a flow flush function.

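For reference, a minimal usage sketch of the resulting API through the
generic rte_flow entry points; the port id, queue index and match
values below are illustrative only, not taken from the patches:

	/* Steer TCP packets with dst port 80 on port 0 to Rx queue 1. */
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_tcp tcp_spec = {
		.hdr = { .dst_port = rte_cpu_to_be_16(80) },
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr = { .dst_port = rte_cpu_to_be_16(0xffff) },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	struct rte_flow *flow = NULL;

	/* The PMD parse functions back both validate and create. */
	if (rte_flow_validate(0, &attr, pattern, actions, &err) == 0)
		flow = rte_flow_create(0, &attr, pattern, actions, &err);

	/* Tear down one rule, or every rule on the port. */
	if (flow)
		rte_flow_destroy(0, flow, &err);
	rte_flow_flush(0, &err);
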
v2 changes:
 Fix git log errors.
 Modify some function call relationships.
 Change the return value type of all flow parse functions.
 Update error info for all flow ops.
 Add ixgbe_filterlist_flush to flush the flows and rules created.


zhao wei (18):
  net/ixgbe: store SYN filter
  net/ixgbe: store flow director filter
  net/ixgbe: store L2 tunnel filter
  net/ixgbe: restore n-tuple filter
  net/ixgbe: restore ether type filter
  net/ixgbe: restore TCP SYN filter
  net/ixgbe: restore flow director filter
  net/ixgbe: restore L2 tunnel filter
  net/ixgbe: store and restore L2 tunnel configuration
  net/ixgbe: flush all the filters
  net/ixgbe: parse n-tuple filter
  net/ixgbe: parse ethertype filter
  net/ixgbe: parse TCP SYN filter
  net/ixgbe: parse L2 tunnel filter
  net/ixgbe: parse flow director filter
  net/ixgbe: create consistent filter
  net/ixgbe: destroy consistent filter
  net/ixgbe: flush all the filter list

 drivers/net/ixgbe/ixgbe_ethdev.c | 3285 ++++++++++++++++++++++++++++++++++++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  136 +-
 drivers/net/ixgbe/ixgbe_fdir.c   |  403 ++++-
 drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
 lib/librte_ether/rte_flow.h      |   55 +
 5 files changed, 3713 insertions(+), 192 deletions(-)

-- 
2.5.5


* [PATCH v2 01/18] net/ixgbe: store SYN filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2017-01-03 14:33   ` Dai, Wei
                     ` (3 more replies)
  2016-12-30  7:52 ` [PATCH v2 02/18] net/ixgbe: store flow director filter Wei Zhao
                   ` (16 subsequent siblings)
  17 siblings, 4 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---

v2:
--synqf assignment location change
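
Note for reviewers: a device reset clears SYNQF, so the SW copy kept in
filter_info->syn_info is what the restore path added later in this
series (patch 06) replays on dev_start, roughly:

	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, filter_info->syn_info);
		IXGBE_WRITE_FLUSH(hw);
	}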
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 14 +++++++++++---
 drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a25bac8..316e560 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1274,6 +1274,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	memset(filter_info->fivetuple_mask, 0,
 	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
 
+	/* initialize SYN filter */
+	filter_info->syn_info = 0;
 	return 0;
 }
 
@@ -5580,15 +5582,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			bool add)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t syn_info;
 	uint32_t synqf;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
 
-	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+	syn_info = filter_info->syn_info;
 
 	if (add) {
-		if (synqf & IXGBE_SYN_FILTER_ENABLE)
+		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
 			return -EINVAL;
 		synqf = (uint32_t)(((filter->queue << IXGBE_SYN_FILTER_QUEUE_SHIFT) &
 			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
@@ -5598,10 +5603,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 		else
 			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
 	} else {
-		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
+		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
 			return -ENOENT;
 		synqf &= ~(IXGBE_SYN_FILTER_QUEUE | IXGBE_SYN_FILTER_ENABLE);
 	}
+
+	filter_info->syn_info = synqf;
 	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
 	IXGBE_WRITE_FLUSH(hw);
 	return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4ff6338..827026c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -262,6 +262,8 @@ struct ixgbe_filter_info {
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
+	/* store the SYN filter info */
+	uint32_t syn_info;
 };
 
 /*
-- 
2.5.5


* [PATCH v2 02/18] net/ixgbe: store flow director filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2017-01-02  9:59   ` Xing, Beilei
                     ` (2 more replies)
  2016-12-30  7:52 ` [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
                   ` (15 subsequent siblings)
  17 siblings, 3 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---

v2:
--add an fdir initialization function in the device init process
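
Note on the librte_hash pattern used below: rte_hash_add_key(),
rte_hash_lookup() and rte_hash_del_key() all return the key's slot
index (>= 0) on success, which is why a flat array of filter pointers
(hash_map) can sit beside the hash handle. A minimal sketch, with
my_filter, map, f, key and MAX_NUM as illustrative names only:

	struct rte_hash *h;                /* from rte_hash_create() */
	struct my_filter *map[MAX_NUM];    /* slot index -> filter */

	int slot = rte_hash_add_key(h, &key);
	if (slot >= 0)
		map[slot] = f;             /* remember the SW filter */

	slot = rte_hash_lookup(h, &key);   /* O(1) retrieval */
	if (slot >= 0)
		f = map[slot];

	slot = rte_hash_del_key(h, &key);  /* slot becomes reusable */
	if (slot >= 0)
		map[slot] = NULL;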
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  55 ++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
 drivers/net/ixgbe/ixgbe_fdir.c   | 105 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 176 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 316e560..de27a73 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -60,6 +60,7 @@
 #include <rte_malloc.h>
 #include <rte_random.h>
 #include <rte_dev.h>
+#include <rte_hash_crc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -165,6 +166,7 @@ enum ixgbevf_xcast_modes {
 
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1276,6 +1278,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* initialize SYN filter */
 	filter_info->syn_info = 0;
+	/* initialize flow director filter list & hash */
+	ixgbe_fdir_filter_init(eth_dev);
+
 	return 0;
 }
 
@@ -1284,6 +1289,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
 	struct ixgbe_hw *hw;
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1317,9 +1325,56 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->hash_mac_addrs);
 	eth_dev->data->hash_mac_addrs = NULL;
 
+	/* remove all the fdir filters & hash */
+	if (fdir_info->hash_map)
+		rte_free(fdir_info->hash_map);
+	if (fdir_info->hash_handle)
+		rte_hash_free(fdir_info->hash_handle);
+
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+
 	return 0;
 }
 
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	char fdir_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters fdir_hash_params = {
+		.name = fdir_hash_name,
+		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
+		.key_len = sizeof(union ixgbe_atr_input),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&fdir_info->fdir_list);
+	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
+		 "fdir_%s", eth_dev->data->name);
+	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
+	if (!fdir_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
+		return -EINVAL;
+	}
+	fdir_info->hash_map = rte_zmalloc("ixgbe",
+					  sizeof(struct ixgbe_fdir_filter *) *
+					  IXGBE_MAX_FDIR_FILTER_NUM,
+					  0);
+	if (!fdir_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for fdir hash map!");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
 /*
  * Negotiate mailbox API version with the PF.
  * After reset API version is always set to the basic one (ixgbe_mbox_api_10).
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 827026c..8310220 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
 #include "base/ixgbe_dcb_82598.h"
 #include "ixgbe_bypass.h"
 #include <rte_time.h>
+#include <rte_hash.h>
 
 /* need update link, bit flag */
 #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -130,10 +131,11 @@
 #define IXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
+#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+
 /*
  * Information about the fdir mode.
  */
-
 struct ixgbe_hw_fdir_mask {
 	uint16_t vlan_tci_mask;
 	uint32_t src_ipv4_mask;
@@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
 	uint8_t  tunnel_type_mask;
 };
 
+struct ixgbe_fdir_filter {
+	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t fdirhash; /* hash value for fdir */
+	uint8_t queue; /* assigned rx queue */
+};
+
+/* list of fdir filters */
+TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
 	uint64_t    remove;
 	uint64_t    f_add;
 	uint64_t    f_remove;
+	struct ixgbe_fdir_filter_list fdir_list; /* filter list*/
+	/* store the pointers of the filters, index is the hash value. */
+	struct ixgbe_fdir_filter **hash_map;
+	struct rte_hash *hash_handle; /* cuckoo hash handler */
 };
 
 /* structure for interrupt relative data */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 4b81ee3..bfcd294 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -43,6 +43,7 @@
 #include <rte_pci.h>
 #include <rte_ether.h>
 #include <rte_ethdev.h>
+#include <rte_malloc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash)
 
 }
 
+static inline struct ixgbe_fdir_filter *
+ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret = 0;
+
+	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return fdir_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 struct ixgbe_fdir_filter *fdir_filter)
+{
+	int ret = 0;
+
+	ret = rte_hash_add_key(fdir_info->hash_handle,
+			       &fdir_filter->ixgbe_fdir);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert fdir filter to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	fdir_info->hash_map[ret] = fdir_filter;
+
+	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret = 0;
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	ret = rte_hash_del_key(fdir_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
+		return ret;
+	}
+
+	fdir_filter = fdir_info->hash_map[ret];
+	fdir_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_free(fdir_filter);
+
+	return 0;
+}
+
 /*
  * ixgbe_add_del_fdir_filter - add or remove a flow director filter.
  * @dev: pointer to the structure rte_eth_dev
@@ -1098,6 +1158,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	struct ixgbe_fdir_filter *node;
+	bool add_node = FALSE;
 
 	if (fdir_mode == RTE_FDIR_MODE_NONE)
 		return -ENOTSUP;
@@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
+		err = ixgbe_remove_fdir_filter(info, &input);
+		if (err < 0)
+			return err;
+
 		err = fdir_erase_filter_82599(hw, fdirhash);
 		if (err < 0)
 			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
@@ -1172,6 +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	else
 		return -EINVAL;
 
+	node = ixgbe_fdir_filter_lookup(info, &input);
+	if (node) {
+		if (update) {
+			node->fdirflags = fdircmd_flags;
+			node->fdirhash = fdirhash;
+			node->queue = queue;
+		} else {
+			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
+			return -EINVAL;
+		}
+	} else {
+		add_node = TRUE;
+		node = rte_zmalloc("ixgbe_fdir",
+				   sizeof(struct ixgbe_fdir_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
+		(void)rte_memcpy(&node->ixgbe_fdir,
+				 &input,
+				 sizeof(union ixgbe_atr_input));
+		node->fdirflags = fdircmd_flags;
+		node->fdirhash = fdirhash;
+		node->queue = queue;
+
+		err = ixgbe_insert_fdir_filter(info, node);
+		if (err < 0) {
+			rte_free(node);
+			return err;
+		}
+	}
+
 	if (is_perfect) {
 		err = fdir_write_perfect_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash,
@@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		err = fdir_add_signature_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash);
 	}
-	if (err < 0)
+	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
-	else
+
+		if (add_node)
+			(void)ixgbe_remove_fdir_filter(info, &input);
+	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
+	}
 
 	return err;
 }
-- 
2.5.5


* [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 02/18] net/ixgbe: store flow director filter Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2017-01-02 10:06   ` Xing, Beilei
  2016-12-30  7:52 ` [PATCH v2 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---

v2:
--add an L2 tunnel initialization function in the device init process
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 157 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  24 ++++++
 2 files changed, 181 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index de27a73..204e531 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -167,6 +167,7 @@ enum ixgbevf_xcast_modes {
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1281,6 +1282,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow director filter list & hash */
 	ixgbe_fdir_filter_init(eth_dev);
 
+	/* initialize l2 tunnel filter list & hash */
+	ixgbe_l2_tn_filter_init(eth_dev);
 	return 0;
 }
 
@@ -1291,7 +1294,10 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	struct ixgbe_hw *hw;
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1338,6 +1344,19 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 		rte_free(fdir_filter);
 	}
 
+	/* remove all the L2 tunnel filters & hash */
+	if (l2_tn_info->hash_map)
+		rte_free(l2_tn_info->hash_map);
+	if (l2_tn_info->hash_handle)
+		rte_hash_free(l2_tn_info->hash_handle);
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		TAILQ_REMOVE(&l2_tn_info->l2_tn_list,
+			     l2_tn_filter,
+			     entries);
+		rte_free(l2_tn_filter);
+	}
+
 	return 0;
 }
 
@@ -1372,6 +1391,40 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	return 0;
+}
+
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	char l2_tn_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters l2_tn_hash_params = {
+		.name = l2_tn_hash_name,
+		.entries = IXGBE_MAX_L2_TN_FILTER_NUM,
+		.key_len = sizeof(struct ixgbe_l2_tn_key),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&l2_tn_info->l2_tn_list);
+	snprintf(l2_tn_hash_name, RTE_HASH_NAMESIZE,
+		 "l2_tn_%s", eth_dev->data->name);
+	l2_tn_info->hash_handle = rte_hash_create(&l2_tn_hash_params);
+	if (!l2_tn_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create L2 TN hash table!");
+		return -EINVAL;
+	}
+	l2_tn_info->hash_map = rte_zmalloc("ixgbe",
+				   sizeof(struct ixgbe_l2_tn_filter *) *
+				   IXGBE_MAX_L2_TN_FILTER_NUM,
+				   0);
+	if (!l2_tn_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			"Failed to allocate memory for L2 TN hash map!");
+		return -ENOMEM;
+	}
 
 	return 0;
 }
@@ -7178,12 +7231,104 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+static inline struct ixgbe_l2_tn_filter *
+ixgbe_l2_tn_filter_lookup(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret = 0;
+
+	ret = rte_hash_lookup(l2_tn_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return l2_tn_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_filter *l2_tn_filter)
+{
+	int ret = 0;
+
+	ret = rte_hash_add_key(l2_tn_info->hash_handle,
+			       &l2_tn_filter->key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert L2 tunnel filter"
+			    " to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_info->hash_map[ret] = l2_tn_filter;
+
+	TAILQ_INSERT_TAIL(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret = 0;
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	ret = rte_hash_del_key(l2_tn_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "No such L2 tunnel filter to delete %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_filter = l2_tn_info->hash_map[ret];
+	l2_tn_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+	rte_free(l2_tn_filter);
+
+	return 0;
+}
+
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
 	int ret = 0;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+	struct ixgbe_l2_tn_filter *node;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+
+	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+
+	if (node) {
+		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
+		return -EINVAL;
+	}
+
+	node = rte_zmalloc("ixgbe_l2_tn",
+			   sizeof(struct ixgbe_l2_tn_filter),
+			   0);
+	if (!node)
+		return -ENOMEM;
+
+	(void)rte_memcpy(&node->key,
+			 &key,
+			 sizeof(struct ixgbe_l2_tn_key));
+	node->pool = l2_tunnel->pool;
+	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+	if (ret < 0) {
+		rte_free(node);
+		return ret;
+	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7195,6 +7340,9 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
+	if (ret < 0)
+		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+
 	return ret;
 }
 
@@ -7204,6 +7352,15 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
 	int ret = 0;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+	ret = ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+	if (ret < 0)
+		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 8310220..6663fc9 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -132,6 +132,7 @@
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
 #define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+#define IXGBE_MAX_L2_TN_FILTER_NUM      128
 
 /*
  * Information about the fdir mode.
@@ -283,6 +284,25 @@ struct ixgbe_filter_info {
 	uint32_t syn_info;
 };
 
+struct ixgbe_l2_tn_key {
+	enum rte_eth_tunnel_type          l2_tn_type;
+	uint32_t                          tn_id;
+};
+
+struct ixgbe_l2_tn_filter {
+	TAILQ_ENTRY(ixgbe_l2_tn_filter)    entries;
+	struct ixgbe_l2_tn_key             key;
+	uint32_t                           pool;
+};
+
+TAILQ_HEAD(ixgbe_l2_tn_filter_list, ixgbe_l2_tn_filter);
+
+struct ixgbe_l2_tn_info {
+	struct ixgbe_l2_tn_filter_list      l2_tn_list;
+	struct ixgbe_l2_tn_filter         **hash_map;
+	struct rte_hash                    *hash_handle;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -302,6 +322,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_NIC_BYPASS */
 	struct ixgbe_filter_info    filter;
+	struct ixgbe_l2_tn_info     l2_tn;
 
 	bool rx_bulk_alloc_allowed;
 	bool rx_vec_allowed;
@@ -346,6 +367,9 @@ struct ixgbe_adapter {
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
+#define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
+	(&((struct ixgbe_adapter *)adapter)->l2_tn)
+
 /*
  * RX/TX function prototypes
  */
-- 
2.5.5


* [PATCH v2 04/18] net/ixgbe: restore n-tuple filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (2 preceding siblings ...)
  2016-12-30  7:52 ` [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 05/18] net/ixgbe: restore ether type filter Wei Zhao
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring n-tuple filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 131 +++++++++++++++++++++++++--------------
 1 file changed, 83 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 204e531..4dd8d52 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -384,6 +384,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
+static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1292,6 +1293,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
 	struct ixgbe_hw *hw;
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
 	struct ixgbe_l2_tn_info *l2_tn_info =
@@ -1331,6 +1335,16 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->hash_mac_addrs);
 	eth_dev->data->hash_mac_addrs = NULL;
 
+	/* Remove all ntuple filters of the device */
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list))) {
+		TAILQ_REMOVE(&filter_info->fivetuple_list,
+			     p_5tuple,
+			     entries);
+		rte_free(p_5tuple);
+	}
+	memset(filter_info->fivetuple_mask, 0,
+	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
+
 	/* remove all the fdir filters & hash */
 	if (fdir_info->hash_map)
 		rte_free(fdir_info->hash_map);
@@ -2483,6 +2497,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_filter_restore(dev);
 
 	return 0;
 
@@ -2503,9 +2518,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
-	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
-	struct ixgbe_5tuple_filter *p_5tuple, *p_5tuple_next;
 	struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
 	int vf;
 
@@ -2543,17 +2555,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_ixgbe_dev_atomic_write_link_status(dev, &link);
 
-	/* Remove all ntuple filters of the device */
-	for (p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list);
-	     p_5tuple != NULL; p_5tuple = p_5tuple_next) {
-		p_5tuple_next = TAILQ_NEXT(p_5tuple, entries);
-		TAILQ_REMOVE(&filter_info->fivetuple_list,
-			     p_5tuple, entries);
-		rte_free(p_5tuple);
-	}
-	memset(filter_info->fivetuple_mask, 0,
-		sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-
 	if (!rte_intr_allow_others(intr_handle))
 		/* resume to the default handler */
 		rte_intr_callback_register(intr_handle,
@@ -5795,6 +5796,52 @@ convert_protocol_type(uint8_t protocol_value)
 		return IXGBE_FILTER_PROTOCOL_NONE;
 }
 
+/* inject a 5-tuple filter to HW */
+static inline void
+ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+			   struct ixgbe_5tuple_filter *filter)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+	uint32_t ftqf, sdpqf;
+	uint32_t l34timir = 0;
+	uint8_t mask = 0xff;
+
+	i = filter->index;
+
+	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
+				IXGBE_SDPQF_DSTPORT_SHIFT);
+	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
+
+	ftqf = (uint32_t)(filter->filter_info.proto &
+		IXGBE_FTQF_PROTOCOL_MASK);
+	ftqf |= (uint32_t)((filter->filter_info.priority &
+		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
+	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
+		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= IXGBE_FTQF_DEST_PORT_MASK;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
+	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
+	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
+	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
+
+	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
+	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+
+	l34timir |= IXGBE_L34T_IMIR_RESERVE;
+	l34timir |= (uint32_t)(filter->queue <<
+				IXGBE_L34T_IMIR_QUEUE_SHIFT);
+	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
+}
+
 /*
  * add a 5tuple filter
  *
@@ -5812,13 +5859,9 @@ static int
 ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	int i, idx, shift;
-	uint32_t ftqf, sdpqf;
-	uint32_t l34timir = 0;
-	uint8_t mask = 0xff;
 
 	/*
 	 * look for an unused 5tuple filter index,
@@ -5841,37 +5884,8 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
-				IXGBE_SDPQF_DSTPORT_SHIFT);
-	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
-
-	ftqf = (uint32_t)(filter->filter_info.proto &
-		IXGBE_FTQF_PROTOCOL_MASK);
-	ftqf |= (uint32_t)((filter->filter_info.priority &
-		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
-	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
-		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
-	if (filter->filter_info.dst_ip_mask == 0)
-		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
-	if (filter->filter_info.src_port_mask == 0)
-		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
-	if (filter->filter_info.dst_port_mask == 0)
-		mask &= IXGBE_FTQF_DEST_PORT_MASK;
-	if (filter->filter_info.proto_mask == 0)
-		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
-	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
-	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
-	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
-
-	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
-	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+	ixgbe_inject_5tuple_filter(dev, filter);
 
-	l34timir |= IXGBE_L34T_IMIR_RESERVE;
-	l34timir |= (uint32_t)(filter->queue <<
-				IXGBE_L34T_IMIR_QUEUE_SHIFT);
-	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
 	return 0;
 }
 
@@ -7883,6 +7897,27 @@ ixgbevf_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle,
 	ixgbevf_dev_interrupt_action(dev);
 }
 
+/* restore n-tuple filter */
+static inline void
+ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *node;
+
+	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
+		ixgbe_inject_5tuple_filter(dev, node);
+	}
+}
+
+static int
+ixgbe_filter_restore(struct rte_eth_dev *dev)
+{
+	ixgbe_ntuple_filter_restore(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5


* [PATCH v2 05/18] net/ixgbe: restore ether type filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (3 preceding siblings ...)
  2016-12-30  7:52 ` [PATCH v2 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring ether type filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 84 ++++++++++++++++++----------------------
 drivers/net/ixgbe/ixgbe_ethdev.h | 57 ++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_pf.c     | 25 +++++++-----
 3 files changed, 109 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4dd8d52..e6e5f72 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1273,6 +1273,11 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* enable support intr */
 	ixgbe_enable_intr(eth_dev);
 
+	/* initialize ether type filter */
+	filter_info->ethertype_mask = 0;
+	memset(filter_info->ethertype_filters, 0,
+	       sizeof(struct ixgbe_ethertype_filter) * IXGBE_MAX_ETQF_FILTERS);
+
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
 	memset(filter_info->fivetuple_mask, 0,
@@ -6209,47 +6214,6 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static inline int
-ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (filter_info->ethertype_filters[i] == ethertype &&
-		    (filter_info->ethertype_mask & (1 << i)))
-			return i;
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] = ethertype;
-			return i;
-		}
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
-			uint8_t idx)
-{
-	if (idx >= IXGBE_MAX_ETQF_FILTERS)
-		return -1;
-	filter_info->ethertype_mask &= ~(1 << idx);
-	filter_info->ethertype_filters[idx] = 0;
-	return idx;
-}
-
 static int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
@@ -6261,6 +6225,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
@@ -6294,18 +6259,22 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	}
 
 	if (add) {
-		ret = ixgbe_ethertype_filter_insert(filter_info,
-			filter->ether_type);
-		if (ret < 0) {
-			PMD_DRV_LOG(ERR, "ethertype filters are full.");
-			return -ENOSYS;
-		}
 		etqf = IXGBE_ETQF_FILTER_EN;
 		etqf |= (uint32_t)filter->ether_type;
 		etqs |= (uint32_t)((filter->queue <<
 				    IXGBE_ETQS_RX_QUEUE_SHIFT) &
 				    IXGBE_ETQS_RX_QUEUE);
 		etqs |= IXGBE_ETQS_QUEUE_EN;
+
+		ethertype_filter.ethertype = filter->ether_type;
+		ethertype_filter.etqf = etqf;
+		ethertype_filter.etqs = etqs;
+		ret = ixgbe_ethertype_filter_insert(filter_info,
+						    &ethertype_filter);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "ethertype filters are full.");
+			return -ENOSPC;
+		}
 	} else {
 		ret = ixgbe_ethertype_filter_remove(filter_info, (uint8_t)ret);
 		if (ret < 0)
@@ -7910,10 +7879,31 @@ ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore ethernet type filter */
+static inline void
+ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i)) {
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i),
+					filter_info->ethertype_filters[i].etqf);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i),
+					filter_info->ethertype_filters[i].etqs);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
+	ixgbe_ethertype_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 6663fc9..c30a8c0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -270,13 +270,19 @@ struct ixgbe_5tuple_filter {
 	(RTE_ALIGN(IXGBE_MAX_FTQF_FILTERS, (sizeof(uint32_t) * NBBY)) / \
 	 (sizeof(uint32_t) * NBBY))
 
+struct ixgbe_ethertype_filter {
+	uint16_t ethertype;
+	uint32_t etqf;
+	uint32_t etqs;
+};
+
 /*
  * Structure to store filters' info.
  */
 struct ixgbe_filter_info {
 	uint8_t ethertype_mask;  /* Bit mask for every used ethertype filter */
 	/* store used ethertype filters*/
-	uint16_t ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
+	struct ixgbe_ethertype_filter ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
@@ -485,4 +491,53 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+
+static inline int
+ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
+			      uint16_t ethertype)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_filters[i].ethertype == ethertype &&
+		    (filter_info->ethertype_mask & (1 << i)))
+			return i;
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
+			      struct ixgbe_ethertype_filter *ethertype_filter)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (!(filter_info->ethertype_mask & (1 << i))) {
+			filter_info->ethertype_mask |= 1 << i;
+			filter_info->ethertype_filters[i].ethertype =
+				ethertype_filter->ethertype;
+			filter_info->ethertype_filters[i].etqf =
+				ethertype_filter->etqf;
+			filter_info->ethertype_filters[i].etqs =
+				ethertype_filter->etqs;
+			return i;
+		}
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
+			      uint8_t idx)
+{
+	if (idx >= IXGBE_MAX_ETQF_FILTERS)
+		return -1;
+	filter_info->ethertype_mask &= ~(1 << idx);
+	filter_info->ethertype_filters[idx].ethertype = 0;
+	filter_info->ethertype_filters[idx].etqf = 0;
+	filter_info->ethertype_filters[idx].etqs = 0;
+	return idx;
+}
+
 #endif /* _IXGBE_ETHDEV_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 26395e4..6139915 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -176,6 +176,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
 	uint16_t vf_num;
 	int i;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (!hw->mac.ops.set_ethertype_anti_spoofing) {
 		RTE_LOG(INFO, PMD, "ether type anti-spoofing is not"
@@ -183,16 +184,22 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		return;
 	}
 
-	/* occupy an entity of ether type filter */
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] =
-				IXGBE_ETHERTYPE_FLOW_CTRL;
-			break;
-		}
+	i = ixgbe_ethertype_filter_lookup(filter_info,
+					  IXGBE_ETHERTYPE_FLOW_CTRL);
+	if (i >= 0) {
+		RTE_LOG(ERR, PMD, "An ether type filter"
+			" entity for flow control already exists!\n");
+		return;
 	}
-	if (i == IXGBE_MAX_ETQF_FILTERS) {
+
+	ethertype_filter.ethertype = IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqf = IXGBE_ETQF_FILTER_EN |
+				IXGBE_ETQF_TX_ANTISPOOF |
+				IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqs = 0;
+	i = ixgbe_ethertype_filter_insert(filter_info,
+					  &ethertype_filter);
+	if (i < 0) {
 		RTE_LOG(ERR, PMD, "Cannot find an unused ether type filter"
 			" entity for flow control.\n");
 		return;
-- 
2.5.5


* [PATCH v2 06/18] net/ixgbe: restore TCP SYN filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (4 preceding siblings ...)
  2016-12-30  7:52 ` [PATCH v2 05/18] net/ixgbe: restore ether type filter Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2016-12-30  7:52 ` [PATCH v2 07/18] net/ixgbe: restore flow director filter Wei Zhao
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>

---

v2:
--reword the commit log message
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index e6e5f72..7281ba5 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7899,11 +7899,29 @@ ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore SYN filter */
+static inline void
+ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t synqf;
+
+	synqf = filter_info->syn_info;
+
+	if (synqf & IXGBE_SYN_FILTER_ENABLE) {
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
+	ixgbe_syn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5


* [PATCH v2 07/18] net/ixgbe: restore flow director filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (5 preceding siblings ...)
  2016-12-30  7:52 ` [PATCH v2 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
@ 2016-12-30  7:52 ` Wei Zhao
  2016-12-30  7:53 ` [PATCH v2 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:52 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_fdir.c   | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7281ba5..83485a2 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7922,6 +7922,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
+	ixgbe_fdir_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c30a8c0..d6253ad 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -491,6 +491,7 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index bfcd294..d390972 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1479,3 +1479,38 @@ ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 	}
 	return ret;
 }
+
+/* restore flow director filter */
+void
+ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *node;
+	bool is_perfect = FALSE;
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
+	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		is_perfect = TRUE;
+
+	if (is_perfect) {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_write_perfect_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash,
+							      fdir_mode);
+		}
+	} else {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_add_signature_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash);
+		}
+	}
+}
-- 
2.5.5


* [PATCH v2 08/18] net/ixgbe: restore L2 tunnel filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (6 preceding siblings ...)
  2016-12-30  7:52 ` [PATCH v2 07/18] net/ixgbe: restore flow director filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2016-12-30  7:53 ` [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 69 ++++++++++++++++++++++++++--------------
 1 file changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 83485a2..5c39ffa 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7279,7 +7279,8 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
-			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore)
 {
 	int ret = 0;
 	struct ixgbe_l2_tn_info *l2_tn_info =
@@ -7287,30 +7288,33 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
-	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
-	key.tn_id = l2_tunnel->tunnel_id;
+	if (!restore) {
+		key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+		key.tn_id = l2_tunnel->tunnel_id;
 
-	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+		node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
 
-	if (node) {
-		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
-		return -EINVAL;
-	}
+		if (node) {
+			PMD_DRV_LOG(ERR,
+				    "The L2 tunnel filter already exists!");
+			return -EINVAL;
+		}
 
-	node = rte_zmalloc("ixgbe_l2_tn",
-			   sizeof(struct ixgbe_l2_tn_filter),
-			   0);
-	if (!node)
-		return -ENOMEM;
+		node = rte_zmalloc("ixgbe_l2_tn",
+				   sizeof(struct ixgbe_l2_tn_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
 
-	(void)rte_memcpy(&node->key,
-			 &key,
-			 sizeof(struct ixgbe_l2_tn_key));
-	node->pool = l2_tunnel->pool;
-	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
-	if (ret < 0) {
-		rte_free(node);
-		return ret;
+		(void)rte_memcpy(&node->key,
+				 &key,
+				 sizeof(struct ixgbe_l2_tn_key));
+		node->pool = l2_tunnel->pool;
+		ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+		if (ret < 0) {
+			rte_free(node);
+			return ret;
+		}
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
@@ -7323,7 +7327,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
-	if (ret < 0)
+	if ((!restore) && (ret < 0))
 		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
 
 	return ret;
@@ -7384,7 +7388,8 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_ADD:
 		ret = ixgbe_dev_l2_tunnel_filter_add
 			(dev,
-			 (struct rte_eth_l2_tunnel_conf *)arg);
+			 (struct rte_eth_l2_tunnel_conf *)arg,
+			 FALSE);
 		break;
 	case RTE_ETH_FILTER_DELETE:
 		ret = ixgbe_dev_l2_tunnel_filter_del
@@ -7916,6 +7921,23 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore L2 tunnel filter */
+static inline void
+ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *node;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+
+	TAILQ_FOREACH(node, &l2_tn_info->l2_tn_list, entries) {
+		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = node->key.tn_id;
+		l2_tn_conf.pool           = node->pool;
+		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
@@ -7923,6 +7945,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
 	ixgbe_fdir_filter_restore(dev);
+	ixgbe_l2_tn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5


* [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel configuration
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (7 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-02 10:18   ` Xing, Beilei
  2016-12-30  7:53 ` [PATCH v2 10/18] net/ixgbe: flush all the filters Wei Zhao
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing and restoring L2 tunnel configuration in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  3 +++
 2 files changed, 39 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 5c39ffa..d68de65 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -385,6 +385,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
+static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1444,6 +1445,9 @@ static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
 			"Failed to allocate memory for L2 TN hash map!");
 		return -ENOMEM;
 	}
+	l2_tn_info->e_tag_en = FALSE;
+	l2_tn_info->e_tag_fwd_en = FALSE;
+	l2_tn_info->e_tag_ether_type = DEFAULT_ETAG_ETYPE;
 
 	return 0;
 }
@@ -2502,6 +2506,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_l2_tunnel_conf(dev);
 	ixgbe_filter_restore(dev);
 
 	return 0;
@@ -7038,12 +7043,15 @@ ixgbe_dev_l2_tunnel_eth_type_conf(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	if (l2_tunnel == NULL)
 		return -EINVAL;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_ether_type = l2_tunnel->ether_type;
 		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
 		break;
 	default:
@@ -7082,9 +7090,12 @@ ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = TRUE;
 		ret = ixgbe_e_tag_enable(hw);
 		break;
 	default:
@@ -7123,9 +7134,12 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = FALSE;
 		ret = ixgbe_e_tag_disable(hw);
 		break;
 	default:
@@ -7432,10 +7446,13 @@ ixgbe_dev_l2_tunnel_forwarding_enable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = TRUE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
 		break;
 	default:
@@ -7453,10 +7470,13 @@ ixgbe_dev_l2_tunnel_forwarding_disable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = FALSE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
 		break;
 	default:
@@ -7950,6 +7970,22 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tn_info->e_tag_en)
+		(void)ixgbe_e_tag_enable(hw);
+
+	if (l2_tn_info->e_tag_fwd_en)
+		(void)ixgbe_e_tag_forwarding_en_dis(dev, 1);
+
+	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index d6253ad..6327962 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -307,6 +307,9 @@ struct ixgbe_l2_tn_info {
 	struct ixgbe_l2_tn_filter_list      l2_tn_list;
 	struct ixgbe_l2_tn_filter         **hash_map;
 	struct rte_hash                    *hash_handle;
+	bool e_tag_en; /* e-tag enabled */
+	bool e_tag_fwd_en; /* e-tag based forwarding enabled */
+	uint16_t e_tag_ether_type; /* ether type for e-tag */
 };
 
 /*
-- 
2.5.5


* [PATCH v2 10/18] net/ixgbe: flush all the filters
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (8 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-06 16:40   ` Ferruh Yigit
  2016-12-30  7:53 ` [PATCH v2 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for flushing all the filters in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>

---

v2:
--change flush filter function call relationship
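
Note for reviewers: a rough sketch of how an application call reaches
the ops table registered below. rte_flow resolves the PMD ops through
the existing filter_ctrl callback with the new RTE_ETH_FILTER_GENERIC
type, so rte_flow_flush() ends up in ixgbe_flow_flush(); simplified
after lib/librte_ether/rte_flow.c:

	/* inside rte_flow_flush(port_id, error), simplified */
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
	const struct rte_flow_ops *ops = NULL;

	if (dev->dev_ops->filter_ctrl(dev, RTE_ETH_FILTER_GENERIC,
				      RTE_ETH_FILTER_GET, &ops) == 0 &&
	    ops != NULL)
		return ops->flush(dev, error);  /* -> ixgbe_flow_flush() */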
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 118 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |   9 +++
 drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   1 +
 4 files changed, 150 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index d68de65..0de1318 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -61,6 +61,8 @@
 #include <rte_random.h>
 #include <rte_dev.h>
 #include <rte_hash_crc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -386,7 +388,8 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
-
+static int ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error);
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
  */
@@ -765,7 +768,13 @@ static const struct rte_ixgbe_xstats_name_off rte_ixgbevf_stats_strings[] = {
 
 #define IXGBEVF_NB_XSTATS (sizeof(rte_ixgbevf_stats_strings) /	\
 		sizeof(rte_ixgbevf_stats_strings[0]))
-
+static const struct rte_flow_ops ixgbe_flow_ops = {
+	NULL,
+	NULL,
+	NULL,
+	ixgbe_flow_flush,
+	NULL,
+};
 /**
  * Atomically reads the link status information from global
  * structure rte_eth_dev.
@@ -6274,6 +6283,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 		ethertype_filter.ethertype = filter->ether_type;
 		ethertype_filter.etqf = etqf;
 		ethertype_filter.etqs = etqs;
+		ethertype_filter.conf = FALSE;
 		ret = ixgbe_ethertype_filter_insert(filter_info,
 						    &ethertype_filter);
 		if (ret < 0) {
@@ -6393,6 +6403,11 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_L2_TUNNEL:
 		ret = ixgbe_dev_l2_tunnel_filter_handle(dev, filter_op, arg);
 		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &ixgbe_flow_ops;
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 							filter_type);
@@ -7986,6 +8001,105 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
 }
 
+/* remove all the n-tuple filters */
+static void
+ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
+		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+}
+
+/* remove all the ether type filters */
+static void
+ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i) &&
+		    !filter_info->ethertype_filters[i].conf) {
+			(void)ixgbe_ethertype_filter_remove(filter_info,
+							    (uint8_t)i);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i), 0);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
+/* remove the SYN filter */
+static void
+ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+
+	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
+		filter_info->syn_info = 0;
+
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, 0);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
+/* remove all the L2 tunnel filters */
+static int
+ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+	int ret = 0;
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = l2_tn_filter->key.tn_id;
+		l2_tn_conf.pool           = l2_tn_filter->pool;
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+/*  Destroy all flow rules associated with a port on ixgbe. */
+static int
+ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	ixgbe_clear_all_ntuple_filter(dev);
+	ixgbe_clear_all_ethertype_filter(dev);
+	ixgbe_clear_syn_filter(dev);
+
+	ret = ixgbe_clear_all_fdir_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	ret = ixgbe_clear_all_l2_tn_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 6327962..9ed5f45 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -274,6 +274,11 @@ struct ixgbe_ethertype_filter {
 	uint16_t ethertype;
 	uint32_t etqf;
 	uint32_t etqs;
+	/**
+	 * If this filter is added by configuration,
+	 * it should not be removed.
+	 */
+	bool     conf;
 };
 
 /*
@@ -495,6 +500,7 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
 void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
+int ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
@@ -525,6 +531,8 @@ ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
 				ethertype_filter->etqf;
 			filter_info->ethertype_filters[i].etqs =
 				ethertype_filter->etqs;
+			filter_info->ethertype_filters[i].conf =
+				ethertype_filter->conf;
 			return i;
 		}
 	}
@@ -541,6 +549,7 @@ ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
 	filter_info->ethertype_filters[idx].ethertype = 0;
 	filter_info->ethertype_filters[idx].etqf = 0;
 	filter_info->ethertype_filters[idx].etqs = 0;
+	filter_info->ethertype_filters[idx].conf = FALSE;
 	return idx;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index d390972..7097dca 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1514,3 +1514,27 @@ ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 		}
 	}
 }
+
+/* remove all the flow director filters */
+int
+ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+	int ret = 0;
+
+	/* flush flow director */
+	rte_hash_reset(fdir_info->hash_handle);
+	memset(fdir_info->hash_map, 0,
+	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+	ret = ixgbe_fdir_flush(dev);
+
+	return ret;
+}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 6139915..5f017eb 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -197,6 +197,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 				IXGBE_ETQF_TX_ANTISPOOF |
 				IXGBE_ETHERTYPE_FLOW_CTRL;
 	ethertype_filter.etqs = 0;
+	ethertype_filter.conf = TRUE;
 	i = ixgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (9 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 10/18] net/ixgbe: flush all the filters Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-02 10:41   ` Xing, Beilei
                     ` (2 more replies)
  2016-12-30  7:53 ` [PATCH v2 12/18] net/ixgbe: parse ethertype filter Wei Zhao
                   ` (6 subsequent siblings)
  17 siblings, 3 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Add the rule validate function: check if the rule is an n-tuple rule,
and get the n-tuple info.
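
To illustrate the kind of rule this parser accepts, here is a sketch of
an ingress IPv4/UDP 5-tuple rule steered to queue 3. It is illustrative
only, not part of the patch, and assumes <rte_flow.h>, <rte_ip.h>,
<netinet/in.h>, <stdint.h> and a configured port_id:

struct rte_flow_attr attr = { .ingress = 1, .priority = 1 };
struct rte_flow_item_ipv4 ipv4_spec = {
	.hdr = {
		.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
		.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 2)),
		.next_proto_id = IPPROTO_UDP,
	},
};
struct rte_flow_item_ipv4 ipv4_mask = {
	.hdr = {
		/* only src/dst/proto may be unmasked for this parser */
		.src_addr = rte_cpu_to_be_32(UINT32_MAX),
		.dst_addr = rte_cpu_to_be_32(UINT32_MAX),
		.next_proto_id = UINT8_MAX,
	},
};
struct rte_flow_item_udp udp_spec = {
	.hdr = {
		.src_port = rte_cpu_to_be_16(32),
		.dst_port = rte_cpu_to_be_16(33),
	},
};
struct rte_flow_item_udp udp_mask = {
	.hdr = {
		.src_port = rte_cpu_to_be_16(UINT16_MAX),
		.dst_port = rte_cpu_to_be_16(UINT16_MAX),
	},
};
struct rte_flow_action_queue queue = { .index = 3 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* spec/mask must stay NULL */
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ipv4_spec, .mask = &ipv4_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP,
	  .spec = &udp_spec, .mask = &udp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_error error;
int ret = rte_flow_validate(port_id, &attr, pattern, actions, &error);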

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2: add new error set function
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 414 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 409 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0de1318..198cc4b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -388,6 +388,24 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
 /*
@@ -769,7 +787,7 @@ static const struct rte_ixgbe_xstats_name_off rte_ixgbevf_stats_strings[] = {
 #define IXGBEVF_NB_XSTATS (sizeof(rte_ixgbevf_stats_strings) /	\
 		sizeof(rte_ixgbevf_stats_strings[0]))
 static const struct rte_flow_ops ixgbe_flow_ops = {
-	NULL,
+	ixgbe_flow_validate,
 	NULL,
 	NULL,
 	ixgbe_flow_flush,
@@ -8072,6 +8090,390 @@ ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static inline uint32_t
+rte_be_to_cpu_24(uint32_t x)
+{
+	return  ((x & 0x000000ffUL) << 16) |
+		(x & 0x0000ff00UL) |
+		((x & 0x00ff0000UL) >> 16);
+}
+
+#define IXGBE_MIN_N_TUPLE_PRIO 1
+#define IXGBE_MAX_N_TUPLE_PRIO 7
+#define NEXT_ITEM_OF_PATTERN(item, pattern, index)		\
+	do {							\
+		item = pattern + index;				\
+		while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {	\
+			index++;				\
+			item = pattern + index;			\
+		}						\
+	} while (0)
+
+#define NEXT_ITEM_OF_ACTION(act, actions, index)		\
+	do {							\
+		act = actions + index;				\
+		while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\
+			index++;				\
+			act = actions + index;			\
+		}						\
+	} while (0)
+
+/**
+ * Please be aware there's an assumption for all the parsers.
+ * rte_flow_item is using big endian, rte_flow_attr and
+ * rte_flow_action are using CPU order.
+ * Because the pattern is used to describe the packets,
+ * normally the packets should use network order.
+ */
+
+/**
+ * Parse the rule to see if it is an n-tuple rule.
+ * And get the n-tuple filter info BTW.
+ */
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[],
+			 struct rte_eth_ntuple_filter *filter,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item can be MAC or IPv4 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+
+		}
+		/* if the first item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is IPv4 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+	}
+
+	/* get the IPv4 info */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
+	/**
+	 * Only support src & dst addresses, protocol,
+	 * others should be masked.
+	 */
+	if (ipv4_mask->hdr.version_ihl ||
+	    ipv4_mask->hdr.type_of_service ||
+	    ipv4_mask->hdr.total_length ||
+	    ipv4_mask->hdr.packet_id ||
+	    ipv4_mask->hdr.fragment_offset ||
+	    ipv4_mask->hdr.time_to_live ||
+	    ipv4_mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
+	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
+	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
+
+	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
+	filter->dst_ip = ipv4_spec->hdr.dst_addr;
+	filter->src_ip = ipv4_spec->hdr.src_addr;
+	filter->proto  = ipv4_spec->hdr.next_proto_id;
+
+	/* check if the next not void item is TCP or UDP */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* get the TCP/UDP info */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+
+		/**
+		 * Only support src & dst ports, tcp flags,
+		 * others should be masked.
+		 */
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
+		filter->src_port_mask  = tcp_mask->hdr.src_port;
+		if (tcp_mask->hdr.tcp_flags == 0xFF) {
+			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else if (!tcp_mask->hdr.tcp_flags) {
+			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+		filter->dst_port  = tcp_spec->hdr.dst_port;
+		filter->src_port  = tcp_spec->hdr.src_port;
+		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
+	} else {
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+
+		/**
+		 * Only support src & dst ports,
+		 * others should be masked.
+		 */
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask = udp_mask->hdr.dst_port;
+		filter->src_port_mask = udp_mask->hdr.src_port;
+
+		udp_spec = (const struct rte_flow_item_udp *)item->spec;
+		filter->dst_port = udp_spec->hdr.dst_port;
+		filter->src_port = udp_spec->hdr.src_port;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	/**
+	 * n-tuple only supports forwarding,
+	 * check if the first not void action is QUEUE.
+	 */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			item, "Not supported action.");
+		return -rte_errno;
+	}
+	filter->queue =
+		((const struct rte_flow_action_queue *)act->conf)->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	if (attr->priority > 0xFFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr, "Error priority.");
+		return -rte_errno;
+	}
+	filter->priority = (uint16_t)attr->priority;
+
+	return 0;
+}
+
+/* a specific function for ixgbe because the flags is specific */
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			  const struct rte_flow_item pattern[],
+			  const struct rte_flow_action actions[],
+			  struct rte_eth_ntuple_filter *filter,
+			  struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support tcp flags. */
+	if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* ixgbe only supports priorities from 1 to 7. */
+	if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    filter->priority > IXGBE_MAX_N_TUPLE_PRIO) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Priority not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM ||
+		filter->priority > IXGBE_5TUPLE_MAX_PRI ||
+		filter->priority < IXGBE_5TUPLE_MIN_PRI)
+		return -rte_errno;
+
+	/* fixed value for ixgbe */
+	filter->flags = RTE_5TUPLE_FLAGS;
+	return 0;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee the rule can be programmed
+ * into the HW, because there may not be enough room for the rule.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
@@ -8085,15 +8487,17 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 
 	ret = ixgbe_clear_all_fdir_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
 	ret = ixgbe_clear_all_l2_tn_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v2 12/18] net/ixgbe: parse ethertype filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (10 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-06 17:11   ` Ferruh Yigit
  2016-12-30  7:53 ` [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an ethertype rule, and get the ethertype info.
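
As an illustration (not part of the patch), a rule this parser is meant
to accept: steer ARP frames to queue 2. Note the ixgbe-specific checks
below reject IPv4/IPv6 ethertypes, MAC compare and the DROP action:

struct rte_flow_item_eth eth_spec = {
	.type = rte_cpu_to_be_16(ETHER_TYPE_ARP),
};
struct rte_flow_item_eth eth_mask = {
	.type = 0xFFFF,	/* the ethertype must be fully masked */
};
struct rte_flow_action_queue queue = { .index = 2 };
struct rte_flow_attr attr = { .ingress = 1 };	/* no priority, no group */
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH,
	  .spec = &eth_spec, .mask = &eth_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
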
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2: add new error set function
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 251 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 251 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 198cc4b..f2f6cc7 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -401,6 +401,18 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 					struct rte_eth_ntuple_filter *filter,
 					struct rte_flow_error *error);
 static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error);
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_ethertype_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -8451,6 +8463,238 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an ethertype rule.
+ * And get the ethertype filter info BTW.
+ */
+static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	/* Parse pattern */
+	index = 0;
+
+	/* The first non-void item should be MAC. */
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	eth_spec = (const struct rte_flow_item_eth *)item->spec;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Mask bits of source MAC address must be full of 0.
+	 * Mask bits of destination MAC address must be full
+	 * of 1 or full of 0.
+	 */
+	if (!is_zero_ether_addr(&eth_mask->src) ||
+	    (!is_zero_ether_addr(&eth_mask->dst) &&
+	     !is_broadcast_ether_addr(&eth_mask->dst))) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ether address mask");
+		return -rte_errno;
+	}
+
+	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ethertype mask");
+		return -rte_errno;
+	}
+
+	/* If mask bits of destination MAC address
+	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
+	 */
+	if (is_broadcast_ether_addr(&eth_mask->dst)) {
+		filter->mac_addr = eth_spec->dst;
+		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
+	} else {
+		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
+	}
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+
+	/* Check if the next non-void item is END. */
+	index++;
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter.");
+		return -rte_errno;
+	}
+
+	/* Parse action */
+
+	index = 0;
+	/* Check if the first non-void action is QUEUE or DROP. */
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->queue = act_q->index;
+	} else {
+		filter->flags |= RTE_ETHTYPE_FLAGS_DROP;
+	}
+
+	/* Check if the next non-void item is END */
+	index++;
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* Parse attr */
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				attr, "Not support group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_ethertype_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ethertype_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* ixgbe doesn't support MAC address compare. */
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				NULL, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
+		return -rte_errno;
+
+	if (filter->ether_type == ETHER_TYPE_IPv4 ||
+		filter->ether_type == ETHER_TYPE_IPv6) {
+		PMD_DRV_LOG(ERR, "unsupported ether_type(0x%04x) in"
+			" ethertype filter.", filter->ether_type);
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		PMD_DRV_LOG(ERR, "mac compare is unsupported.");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+		PMD_DRV_LOG(ERR, "drop option is unsupported.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for the rule.
@@ -8463,6 +8707,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -8471,6 +8716,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (11 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-06 17:19   ` Ferruh Yigit
  2016-12-30  7:53 ` [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a TCP SYN rule, and get the SYN info.
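
For illustration (not part of the patch), a SYN rule this parser is
meant to accept: TCP SYN packets to queue 1, at the low priority
(0x02 is the TCP SYN bit):

struct rte_flow_item_tcp tcp_spec = {
	.hdr = { .tcp_flags = 0x02 },	/* the SYN bit must be set */
};
struct rte_flow_item_tcp tcp_mask = {
	.hdr = { .tcp_flags = 0x02 },	/* match on the SYN bit only */
};
struct rte_flow_action_queue queue = { .index = 1 };
struct rte_flow_attr attr = { .ingress = 1 };	/* priority 0: low */
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_TCP,
	  .spec = &tcp_spec, .mask = &tcp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};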

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2: add new error set function
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 251 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 251 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index f2f6cc7..ffb1962 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -413,6 +413,18 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_ethertype_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_syn_filter *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -8695,6 +8707,238 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is a TCP SYN rule.
+ * And get the TCP SYN filter info BTW.
+ */
+static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item should be MAC or IPv4 or IPv6 or TCP */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* if the item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN address mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is IPv4 or IPv6 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* if the item is IP, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is TCP */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. Only support SYN. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+	tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+	if (!(tcp_spec->hdr.tcp_flags & TCP_SYN_FLAG) ||
+	    tcp_mask->hdr.src_port ||
+	    tcp_mask->hdr.dst_port ||
+	    tcp_mask->hdr.sent_seq ||
+	    tcp_mask->hdr.recv_ack ||
+	    tcp_mask->hdr.data_off ||
+	    tcp_mask->hdr.tcp_flags != TCP_SYN_FLAG ||
+	    tcp_mask->hdr.rx_win ||
+	    tcp_mask->hdr.cksum ||
+	    tcp_mask->hdr.tcp_urp) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->queue = act_q->index;
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Support 2 priorities, the lowest or highest. */
+	if (!attr->priority) {
+		filter->hig_pri = 0;
+	} else if (attr->priority == (uint32_t)~0U) {
+		filter->hig_pri = 1;
+	} else {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_syn_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_syn_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for the rule.
@@ -8708,6 +8952,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -8722,6 +8967,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (12 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-03 14:07   ` Adrien Mazarguil
  2016-12-30  7:53 ` [PATCH v2 15/18] net/ixgbe: parse flow director filter Wei Zhao
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.
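
For illustration (not part of the patch), an E-tag rule this parser is
meant to accept, using the rte_flow_item_e_tag layout added below:
GRP 0x1 and E-CID base 0x309 directed to pool 0. Only X550-class NICs
pass the extra validation added here:

struct rte_flow_item_e_tag e_tag_spec = {
	.grp = 0x1,
	.e_cid_base = 0x309,	/* see the endianness note in the parser */
};
struct rte_flow_item_e_tag e_tag_mask = {
	.grp = 0x3,		/* GRP must be fully masked */
	.e_cid_base = 0xFFF,	/* E-CID base must be fully masked */
};
struct rte_flow_action_queue queue = { .index = 0 };	/* pool 0 */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
	  .spec = &e_tag_spec, .mask = &e_tag_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};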

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2:
--add new error set function
--change return value type of parser function
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 269 +++++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_flow.h      |  32 +++++
 2 files changed, 273 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ffb1962..0a5ac4f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -425,6 +425,19 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_syn_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_l2_tunnel_conf *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *rule,
+			struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -8939,41 +8952,175 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 }
 
 /**
- * Check if the flow rule is supported by ixgbe.
- * It only checks the format. It doesn't guarantee the rule can be programmed
- * into the HW, because there may not be enough room for the rule.
+ * Parse the rule to see if it is an L2 tunnel rule.
+ * And get the L2 tunnel filter info BTW.
+ * Only support E-tag now.
  */
 static int
-ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *filter,
+			struct rte_flow_error *error)
 {
-	struct rte_eth_ntuple_filter ntuple_filter;
-	struct rte_eth_ethertype_filter ethertype_filter;
-	struct rte_eth_syn_filter syn_filter;
-	int ret;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_e_tag *e_tag_spec;
+	const struct rte_flow_item_e_tag *e_tag_mask;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index, j;
 
-	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
-	ret = ixgbe_parse_ntuple_filter(attr, pattern,
-				actions, &ntuple_filter, error);
-	if (!ret)
-		return 0;
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
 
-	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
-	ret = ixgbe_parse_ethertype_filter(attr, pattern,
-				actions, &ethertype_filter, error);
-	if (!ret)
-		return 0;
+	/* parse pattern */
+	index = 0;
 
-	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
-	ret = ixgbe_parse_syn_filter(attr, pattern,
-				actions, &syn_filter, error);
-	if (!ret)
-		return 0;
+	/* The first not void item should be e-tag. */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
 
-	return ret;
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
+	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
+
+	/* Src & dst MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (e_tag_mask->src.addr_bytes[j] ||
+		    e_tag_mask->dst.addr_bytes[j]) {
+			memset(filter, 0,
+			       sizeof(struct rte_eth_l2_tunnel_conf));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by L2 tunnel filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Only care about GRP and E cid base. */
+	if (e_tag_mask->e_tag_ethertype ||
+	    e_tag_mask->e_pcp ||
+	    e_tag_mask->dei ||
+	    e_tag_mask->in_e_cid_base ||
+	    e_tag_mask->in_e_cid_ext ||
+	    e_tag_mask->e_cid_ext ||
+	    e_tag_mask->type ||
+	    e_tag_mask->tags ||
+	    e_tag_mask->grp != 0x3 ||
+	    e_tag_mask->e_cid_base != 0xFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	/**
+	 * grp and e_cid_base are bit fields and only use 14 bits.
+	 * e-tag id is taken as little endian by HW.
+	 */
+	filter->tunnel_id = e_tag_spec->grp << 12;
+	filter->tunnel_id |= rte_be_to_cpu_16(e_tag_spec->e_cid_base);
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->pool = act_q->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
 }
 
 /*  Destroy all flow rules associated with a port on ixgbe. */
@@ -9006,6 +9153,72 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+				actions, l2_tn_filter, error);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+		hw->mac.type != ixgbe_mac_X550EM_x &&
+		hw->mac.type != ixgbe_mac_X550EM_a) {
+		return -ENOTSUP;
+	}
+
+	return ret;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee the rule can be programmed
+ * into the HW, because there may not be enough room for the rule.
+ */
+static int
+ixgbe_flow_validate(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = ixgbe_validate_l2_tn_filter(dev, attr, pattern,
+				actions, &l2_tn_filter, error);
+
+	return ret;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 98084ac..e9e6220 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -268,6 +268,13 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_vxlan.
 	 */
 	RTE_FLOW_ITEM_TYPE_VXLAN,
+
+	/**
+	 * Matches an E_TAG header.
+	 *
+	 * See struct rte_flow_item_e_tag.
+	 */
+	RTE_FLOW_ITEM_TYPE_E_TAG,
 };
 
 /**
@@ -454,6 +461,31 @@ struct rte_flow_item_vxlan {
 };
 
 /**
+ * RTE_FLOW_ITEM_TYPE_E_TAG.
+ *
+ * Matches an E-tag header.
+ */
+struct rte_flow_item_e_tag {
+	struct ether_addr dst; /**< Destination MAC. */
+	struct ether_addr src; /**< Source MAC. */
+	uint16_t e_tag_ethertype; /**< E-tag EtherType, 0x893F. */
+	uint16_t e_pcp:3; /**<  E-PCP */
+	uint16_t dei:1; /**< DEI */
+	uint16_t in_e_cid_base:12; /**< Ingress E-CID base */
+	uint16_t rsv:2; /**< reserved */
+	uint16_t grp:2; /**< GRP */
+	uint16_t e_cid_base:12; /**< E-CID base */
+	uint16_t in_e_cid_ext:8; /**< Ingress E-CID extend */
+	uint16_t e_cid_ext:8; /**< E-CID extend */
+	uint16_t type; /**< MAC type. */
+	unsigned int tags; /**< Number of 802.1Q/ad tags defined. */
+	struct {
+		uint16_t tpid; /**< Tag protocol identifier. */
+		uint16_t tci; /**< Tag control information. */
+	} tag[]; /**< 802.1Q/ad tag definitions, outermost first. */
+};
+
+/**
  * Matching pattern item definition.
  *
  * A pattern is formed by stacking items starting from the lowest protocol
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v2 15/18] net/ixgbe: parse flow director filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (13 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-02 15:24   ` Xing, Beilei
  2017-01-03 14:08   ` Adrien Mazarguil
  2016-12-30  7:53 ` [PATCH v2 16/18] net/ixgbe: create consistent filter Wei Zhao
                   ` (2 subsequent siblings)
  17 siblings, 2 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a flow director rule, and get the flow director info.
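
For the action side handled by ixgbe_parse_fdir_act_attr() below, here
is a sketch (not part of the patch) of the queue plus mark combination
a flow director rule can carry; the accepted pattern items are spelled
out in the parser bodies:

struct rte_flow_action_queue queue = { .index = 2 };
struct rte_flow_action_mark mark = { .id = 0x1234 };	/* id reported in the mbuf */
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};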

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2: add new error set function
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 1467 +++++++++++++++++++++++++++++++++-----
 drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
 drivers/net/ixgbe/ixgbe_fdir.c   |  247 ++++---
 lib/librte_ether/rte_flow.h      |   23 +
 4 files changed, 1495 insertions(+), 258 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0a5ac4f..c98aa0d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -438,6 +438,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 			struct rte_eth_l2_tunnel_conf *rule,
 			struct rte_flow_error *error);
 static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1475,6 +1500,8 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	fdir_info->mask_added = FALSE;
+
 	return 0;
 }
 
@@ -8951,117 +8978,22 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 	return 0;
 }
 
-/**
- * Parse the rule to see if it is an L2 tunnel rule.
- * And get the L2 tunnel filter info BTW.
- * Only support E-tag now.
- */
+/* Parse to get the attr and action info of a flow director rule. */
 static int
-cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_eth_l2_tunnel_conf *filter,
-			struct rte_flow_error *error)
+ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  struct ixgbe_fdir_rule *rule,
+			  struct rte_flow_error *error)
 {
-	const struct rte_flow_item *item;
-	const struct rte_flow_item_e_tag *e_tag_spec;
-	const struct rte_flow_item_e_tag *e_tag_mask;
 	const struct rte_flow_action *act;
 	const struct rte_flow_action_queue *act_q;
-	uint32_t index, j;
-
-	if (!pattern) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
-			NULL, "NULL pattern.");
-		return -rte_errno;
-	}
-
-	/* parse pattern */
-	index = 0;
-
-	/* The first not void item should be e-tag. */
-	NEXT_ITEM_OF_PATTERN(item, pattern, index);
-	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by L2 tunnel filter");
-		return -rte_errno;
-	}
-
-	if (!item->spec || !item->mask) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by L2 tunnel filter");
-		return -rte_errno;
-	}
-
-	/*Not supported last point for range*/
-	if (item->last) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			item, "Not supported last point for range");
-		return -rte_errno;
-	}
-
-	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
-	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
-
-	/* Src & dst MAC address should be masked. */
-	for (j = 0; j < ETHER_ADDR_LEN; j++) {
-		if (e_tag_mask->src.addr_bytes[j] ||
-		    e_tag_mask->dst.addr_bytes[j]) {
-			memset(filter, 0,
-			       sizeof(struct rte_eth_l2_tunnel_conf));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by L2 tunnel filter");
-			return -rte_errno;
-		}
-	}
-
-	/* Only care about GRP and E cid base. */
-	if (e_tag_mask->e_tag_ethertype ||
-	    e_tag_mask->e_pcp ||
-	    e_tag_mask->dei ||
-	    e_tag_mask->in_e_cid_base ||
-	    e_tag_mask->in_e_cid_ext ||
-	    e_tag_mask->e_cid_ext ||
-	    e_tag_mask->type ||
-	    e_tag_mask->tags ||
-	    e_tag_mask->grp != 0x3 ||
-	    e_tag_mask->e_cid_base != 0xFFF) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by L2 tunnel filter");
-		return -rte_errno;
-	}
-
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
-	/**
-	 * grp and e_cid_base are bit fields and only use 14 bits.
-	 * e-tag id is taken as little endian by HW.
-	 */
-	filter->tunnel_id = e_tag_spec->grp << 12;
-	filter->tunnel_id |= rte_be_to_cpu_16(e_tag_spec->e_cid_base);
-
-	/* check if the next not void item is END */
-	index++;
-	NEXT_ITEM_OF_PATTERN(item, pattern, index);
-	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by L2 tunnel filter");
-		return -rte_errno;
-	}
+	const struct rte_flow_action_mark *mark;
+	uint32_t index;
 
 	/* parse attr */
 	/* must be input direction */
 	if (!attr->ingress) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
 			attr, "Only support ingress.");
@@ -9070,7 +9002,7 @@ cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
 
 	/* not supported */
 	if (attr->egress) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
 			attr, "Not support egress.");
@@ -9079,7 +9011,7 @@ cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
 
 	/* not supported */
 	if (attr->priority) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
 			attr, "Not support priority.");
@@ -9096,24 +9028,43 @@ cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
 		return -rte_errno;
 	}
 
-	/* check if the first not void action is QUEUE. */
+	/* check if the first not void action is QUEUE or DROP. */
 	NEXT_ITEM_OF_ACTION(act, actions, index);
-	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION,
 			act, "Not supported action.");
 		return -rte_errno;
 	}
 
-	act_q = (const struct rte_flow_action_queue *)act->conf;
-	filter->pool = act_q->index;
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		rule->queue = act_q->index;
+	} else { /* drop */
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	}
+
+	/* check if the next not void item is MARK */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_MARK) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	mark = (const struct rte_flow_action_mark *)act->conf;
+	rule->soft_id = mark->id;
 
 	/* check if the next not void item is END */
 	index++;
 	NEXT_ITEM_OF_ACTION(act, actions, index);
 	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
-		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION,
 			act, "Not supported action.");
@@ -9123,87 +9074,1245 @@ cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
 	return 0;
 }
 
-/*  Destroy all flow rules associated with a port on ixgbe. */
+/**
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule.
+ * And get the flow director filter info BTW.
+ */
 static int
-ixgbe_flow_flush(struct rte_eth_dev *dev,
-		struct rte_flow_error *error)
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
 {
-	int ret = 0;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
 
-	ixgbe_clear_all_ntuple_filter(dev);
-	ixgbe_clear_all_ethertype_filter(dev);
-	ixgbe_clear_syn_filter(dev);
+	uint32_t index, j;
 
-	ret = ixgbe_clear_all_fdir_filter(dev);
-	if (ret < 0) {
+	if (!pattern) {
 		rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_HANDLE,
-				NULL, "Failed to flush rule");
-		return ret;
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
 	}
 
-	ret = ixgbe_clear_all_l2_tn_filter(dev);
-	if (ret < 0) {
+	/**
+	 * Some fields may not be provided. Set the spec to 0 and the mask to
+	 * its default value, so nothing more needs to be done later for the
+	 * fields that are not provided.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or TCP or UDP or SCTP.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_HANDLE,
-				NULL, "Failed to flush rule");
-		return ret;
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
 	}
 
-	return 0;
-}
-
-static int
-ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
-			struct rte_flow_error *error)
-{
-	int ret = 0;
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	rule->mode = RTE_FDIR_MODE_PERFECT;
 
-	ret = cons_parse_l2_tn_filter(attr, pattern,
-				actions, l2_tn_filter, error);
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
-	if (hw->mac.type != ixgbe_mac_X550 &&
-		hw->mac.type != ixgbe_mac_X550EM_x &&
-		hw->mac.type != ixgbe_mac_X550EM_a) {
-		return -ENOTSUP;
 	}
 
-	return ret;
-}
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only support vlan and dst MAC address,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
 
-/**
- * Check if the flow rule is supported by ixgbe.
- * It only checkes the format. Don't guarantee the rule can be programmed into
- * the HW. Because there can be no enough room for the rule.
- */
-static int
-ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
-{
-	struct rte_eth_ntuple_filter ntuple_filter;
-	struct rte_eth_ethertype_filter ethertype_filter;
-	struct rte_eth_syn_filter syn_filter;
-	struct rte_eth_l2_tunnel_conf l2_tn_filter;
-	int ret;
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
 
-	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
-	ret = ixgbe_parse_ntuple_filter(attr, pattern,
-				actions, &ntuple_filter, error);
-	if (!ret)
-		return 0;
+			/* Get the dst MAC. */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				rule->ixgbe_fdir.formatted.inner_mac[j] =
+					eth_spec->dst.addr_bytes[j];
+			}
+		}
 
-	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
-	ret = ixgbe_parse_ethertype_filter(attr, pattern,
-				actions, &ethertype_filter, error);
-	if (!ret)
+
+		if (item->mask) {
+			/* If ethernet has meaning, it means MAC VLAN mode. */
+			rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+
+			rule->b_mask = TRUE;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Ether type should be masked. */
+			if (eth_mask->type) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/**
+			 * src MAC address must be masked,
+			 * and don't support dst MAC address mask.
+			 */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				if (eth_mask->src.addr_bytes[j] ||
+					eth_mask->dst.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					sizeof(struct ixgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* When no VLAN, considered as full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+		/**
+		 * If both spec and mask are NULL,
+		 * it means we don't care about ETH.
+		 * Do nothing.
+		 */
+
+		/**
+		 * Check if the next not void item is vlan or ipv4.
+		 * IPv6 is not supported.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		}
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one VLAN tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the IP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_IPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		ipv4_mask =
+			(const struct rte_flow_item_ipv4 *)item->mask;
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.next_proto_id ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			rule->ixgbe_fdir.formatted.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->ixgbe_fdir.formatted.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_TCPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				tcp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_UDPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				udp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				udp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_SCTPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask =
+			(const struct rte_flow_item_sctp *)item->mask;
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec =
+				(const struct rte_flow_item_sctp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				sctp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+	}
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
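+/* Protocol type of Transparent Ethernet Bridging, carried in NVGRE. */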
+#define NVGRE_PROTOCOL 0x6558
+
+/**
+ * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
+ * And get the flow director filter info BTW.
+ */
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+	uint32_t index, j;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set the spec to 0 and the mask to
+	 * its default value, so nothing more needs to be done later for the
+	 * fields that are not provided.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC, IPv4, IPv6, UDP, VxLAN or NVGRE.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+	/* Skip MAC. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is IPv4 or IPv6. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is UDP or NVGRE. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip UDP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is VxLAN. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the VxLAN info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_VXLAN;
+
+		/* Only care about VNI, others should be masked. */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		vxlan_mask =
+			(const struct rte_flow_item_vxlan *)item->mask;
+		if (vxlan_mask->flags) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* VNI must be totally masked or not. */
+		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
+			vxlan_mask->vni[2]) &&
+			((vxlan_mask->vni[0] != 0xFF) ||
+			(vxlan_mask->vni[1] != 0xFF) ||
+				(vxlan_mask->vni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
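+		/* The 24-bit VNI occupies the upper three bytes of
+		 * tunnel_id_mask after the shift below.
+		 */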
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
+			RTE_DIM(vxlan_mask->vni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			vxlan_spec = (const struct rte_flow_item_vxlan *)
+					item->spec;
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* Get the NVGRE info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_NVGRE;
+
+		/**
+		 * Only care about flags0, flags1, protocol and TNI,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		nvgre_mask =
+			(const struct rte_flow_item_nvgre *)item->mask;
+		if (nvgre_mask->ver ||
+		    nvgre_mask->flow_id) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (!nvgre_mask->flags0 ||
+		    nvgre_mask->flags1 != 0x3 ||
+		    nvgre_mask->protocol != 0xFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI must be totally masked or not. */
+		if ((nvgre_mask->tni[0] || nvgre_mask->tni[1] ||
+		    nvgre_mask->tni[2]) &&
+		    ((nvgre_mask->tni[0] != 0xFF) ||
+		    (nvgre_mask->tni[1] != 0xFF) ||
+		    (nvgre_mask->tni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* tni is a 24-bit field */
+		rte_memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni,
+			RTE_DIM(nvgre_mask->tni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			if (nvgre_spec->flags0 ||
+			    nvgre_spec->flags1 != 2 ||
+			    nvgre_spec->protocol !=
+			    rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			/* tni is a 24-bit field */
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+			nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* check if the next not void item is MAC */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/**
+	 * Only support vlan and dst MAC address,
+	 * others should be masked.
+	 */
+
+	if (!item->mask) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+	rule->b_mask = TRUE;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Ether type should be masked. */
+	if (eth_mask->type) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/* src MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (eth_mask->src.addr_bytes[j]) {
+			memset(rule, 0,
+			       sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+	rule->mask.mac_addr_byte_mask = 0;
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		/* It's a per byte mask. */
+		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+			rule->mask.mac_addr_byte_mask |= 0x1 << j;
+		} else if (eth_mask->dst.addr_bytes[j]) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* When no vlan, considered as full mask. */
+	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+
+	if (item->spec) {
+		rule->b_spec = TRUE;
+		eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+		/* Get the dst MAC. */
+		for (j = 0; j < ETHER_ADDR_LEN; j++) {
+			rule->ixgbe_fdir.formatted.inner_mac[j] =
+				eth_spec->dst.addr_bytes[j];
+		}
+	}
+
+	/**
+	 * Check if the next not void item is vlan or ipv4.
+	 * IPv6 is not supported.
+	 */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
+		(item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one VLAN tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/**
+	 * If there is no VLAN item, it means we don't care about the VLAN.
+	 * Do nothing.
+	 */
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
+static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	ret = ixgbe_parse_fdir_filter(attr, pattern, actions,
+				      rule, error);
+	if (ret)
+		return ret;
+
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
+		return -ENOTSUP;
+
+	return ret;
+}
+
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = ixgbe_parse_fdir_filter_normal(attr, pattern,
+					actions, rule, error);
+
+	if (!ret)
+		return 0;
+
+	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
+					actions, rule, error);
+
+	return ret;
+}
+
+/**
+ * Parse the rule to see if it is a L2 tunnel rule.
+ * And get the L2 tunnel filter info BTW.
+ * Only support E-tag now.
+ */
+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *filter,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_e_tag *e_tag_spec;
+	const struct rte_flow_item_e_tag *e_tag_mask;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index, j;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* The first not void item should be e-tag. */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
+	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
+
+	/* Src & dst MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (e_tag_mask->src.addr_bytes[j] ||
+		    e_tag_mask->dst.addr_bytes[j]) {
+			memset(filter, 0,
+			       sizeof(struct rte_eth_l2_tunnel_conf));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by L2 tunnel filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Only care about GRP and E cid base. */
+	if (e_tag_mask->e_tag_ethertype ||
+	    e_tag_mask->e_pcp ||
+	    e_tag_mask->dei ||
+	    e_tag_mask->in_e_cid_base ||
+	    e_tag_mask->in_e_cid_ext ||
+	    e_tag_mask->e_cid_ext ||
+	    e_tag_mask->type ||
+	    e_tag_mask->tags ||
+	    e_tag_mask->grp != 0x3 ||
+	    e_tag_mask->e_cid_base != 0xFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	/**
+	 * grp and e_cid_base are bit fields and only use 14 bits.
+	 * e-tag id is taken as little endian by HW.
+	 */
+	filter->tunnel_id = e_tag_spec->grp << 12;
+	filter->tunnel_id |= rte_be_to_cpu_16(e_tag_spec->e_cid_base);
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->pool = act_q->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/*  Destroy all flow rules associated with a port on ixgbe. */
+static int
+ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	ixgbe_clear_all_ntuple_filter(dev);
+	ixgbe_clear_all_ethertype_filter(dev);
+	ixgbe_clear_syn_filter(dev);
+
+	ret = ixgbe_clear_all_fdir_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	ret = ixgbe_clear_all_l2_tn_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+				actions, l2_tn_filter, error);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+		hw->mac.type != ixgbe_mac_X550EM_x &&
+		hw->mac.type != ixgbe_mac_X550EM_a) {
+		return -ENOTSUP;
+	}
+
+	return ret;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
 		return 0;
 
 	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
@@ -9212,7 +10321,13 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
-		memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_validate_fdir_filter(dev, attr, pattern,
+				actions, &fdir_rule, error);
+	if (!ret)
+		return 0;
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
 	ret = ixgbe_validate_l2_tn_filter(dev, attr, pattern,
 				actions, &l2_tn_filter, error);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 9ed5f45..676d45f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -162,6 +162,17 @@ struct ixgbe_fdir_filter {
 /* list of fdir filters */
 TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
 
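+/* SW representation of a fdir rule created via the consistent API. */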
+struct ixgbe_fdir_rule {
+	struct ixgbe_hw_fdir_mask mask;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter */
+	bool b_spec; /* If TRUE, ixgbe_fdir, fdirflags, queue have meaning. */
+	bool b_mask; /* If TRUE, mask has meaning. */
+	enum rte_fdir_mode mode; /* IP, MAC VLAN, Tunnel */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t soft_id; /* a unique value for this rule */
+	uint8_t queue; /* assigned rx queue */
+};
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -177,6 +188,7 @@ struct ixgbe_hw_fdir_info {
 	/* store the pointers of the filters, index is the hash value. */
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
+	bool mask_added; /* If already got mask from consistent filter */
 };
 
 /* structure for interrupt relative data */
@@ -473,6 +485,10 @@ bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
  * Flow director function prototypes
  */
 int ixgbe_fdir_configure(struct rte_eth_dev *dev);
+int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			      struct ixgbe_fdir_rule *rule,
+			      bool del, bool update);
 
 void ixgbe_configure_dcb(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 7097dca..0c6464f 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -112,10 +112,8 @@
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
 static int fdir_set_input_mask(struct rte_eth_dev *dev,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-				    const struct rte_eth_fdir_masks *input_mask);
+static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
+static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -295,8 +293,7 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -308,8 +305,6 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 | IXGBE_FDIRM_FLEX;
 	uint32_t fdirtcpm;  /* TCP source and destination port masks. */
 	uint32_t fdiripv6m; /* IPv6 source and destination masks. */
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
 	volatile uint32_t *reg;
 
 	PMD_INIT_FUNC_TRACE();
@@ -320,31 +315,30 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
 	 * cannot be masked out in this implementation.
 	 */
-	if (input_mask->dst_port_mask == 0 && input_mask->src_port_mask == 0)
+	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
 		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(input_mask->dst_port_mask),
-			rte_be_to_cpu_16(input_mask->src_port_mask));
+			rte_be_to_cpu_16(info->mask.dst_port_mask),
+			rte_be_to_cpu_16(info->mask.src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -352,30 +346,23 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(input_mask->ipv4_mask.src_ip);
+	*reg = ~(info->mask.src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(input_mask->ipv4_mask.dst_ip);
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	*reg = ~(info->mask.dst_ipv4_mask);
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-		fdiripv6m = (dst_ipv6m << 16) | src_ipv6m;
+		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
+			    info->mask.src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
-		info->mask.src_ipv6_mask = src_ipv6m;
-		info->mask.dst_ipv6_mask = dst_ipv6m;
 	}
 
 	return IXGBE_SUCCESS;
@@ -386,8 +373,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-			 const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -410,20 +396,19 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -434,12 +419,11 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 				IXGBE_FDIRIP6M_TNI_VNI;
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
-		mac_mask = input_mask->mac_addr_byte_mask;
+		mac_mask = info->mask.mac_addr_byte_mask;
 		fdiripv6m |= (mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT)
 				& IXGBE_FDIRIP6M_INNER_MAC;
-		info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
 
-		switch (input_mask->tunnel_type_mask) {
+		switch (info->mask.tunnel_type_mask) {
 		case 0:
 			/* Mask turnnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -450,10 +434,8 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_type_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_type_mask =
-			input_mask->tunnel_type_mask;
 
-		switch (rte_be_to_cpu_32(input_mask->tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -467,8 +449,6 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_id_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_id_mask =
-			input_mask->tunnel_id_mask;
 	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, fdiripv6m);
@@ -482,22 +462,90 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 }
 
 static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
+ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
+				  const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	uint16_t dst_ipv6m = 0;
+	uint16_t src_ipv6m = 0;
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.src_port_mask = input_mask->src_port_mask;
+	info->mask.dst_port_mask = input_mask->dst_port_mask;
+	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
+	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
+	info->mask.src_ipv6_mask = src_ipv6m;
+	info->mask.dst_ipv6_mask = dst_ipv6m;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
+				 const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
+	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
+	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_masks *input_mask)
+{
+	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
+	    mode <= RTE_FDIR_MODE_PERFECT)
+		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
+	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
+		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
+
+	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
+	return -ENOTSUP;
+}
+
+int
+ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 {
 	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev, input_mask);
+		return fdir_set_input_mask_82599(dev);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev, input_mask);
+		return fdir_set_input_mask_x550(dev);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
+static int
+fdir_set_input_mask(struct rte_eth_dev *dev,
+		    const struct rte_eth_fdir_masks *input_mask)
+{
+	int ret;
+
+	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
+	if (ret)
+		return ret;
+
+	return ixgbe_fdir_set_input_mask(dev);
+}
+
 /*
  * ixgbe_check_fdir_flex_conf -check if the flex payload and mask configuration
  * arguments are valid
@@ -1135,23 +1183,40 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 	return 0;
 }
 
-/*
- * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
- * @dev: pointer to the structure rte_eth_dev
- * @fdir_filter: fdir filter entry
- * @del: 1 - delete, 0 - add
- * @update: 1 - update
- */
 static int
-ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
-			  const struct rte_eth_fdir_filter *fdir_filter,
+ixgbe_interpret_fdir_filter(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_filter *fdir_filter,
+			    struct ixgbe_fdir_rule *rule)
+{
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	int err;
+
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+
+	err = ixgbe_fdir_filter_to_atr_input(fdir_filter,
+					     &rule->ixgbe_fdir,
+					     fdir_mode);
+	if (err)
+		return err;
+
+	rule->mode = fdir_mode;
+	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT)
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	rule->queue = fdir_filter->action.rx_queue;
+	rule->soft_id = fdir_filter->soft_id;
+
+	return 0;
+}
+
+int
+ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
-	union ixgbe_atr_input input;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
@@ -1161,7 +1226,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
-	if (fdir_mode == RTE_FDIR_MODE_NONE)
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
 		return -ENOTSUP;
 
 	/*
@@ -1174,7 +1240,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    (hw->mac.type == ixgbe_mac_X550 ||
 	     hw->mac.type == ixgbe_mac_X550EM_x ||
 	     hw->mac.type == ixgbe_mac_X550EM_a) &&
-	    (fdir_filter->input.flow_type ==
+	    (rule->ixgbe_fdir.formatted.flow_type ==
 	     RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) &&
 	    (info->mask.src_port_mask != 0 ||
 	     info->mask.dst_port_mask != 0)) {
@@ -1188,29 +1254,23 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
 		is_perfect = TRUE;
 
-	memset(&input, 0, sizeof(input));
-
-	err = ixgbe_fdir_filter_to_atr_input(fdir_filter, &input,
-					     fdir_mode);
-	if (err)
-		return err;
-
 	if (is_perfect) {
-		if (input.formatted.flow_type & IXGBE_ATR_L4TYPE_IPV6_MASK) {
+		if (rule->ixgbe_fdir.formatted.flow_type &
+		    IXGBE_ATR_L4TYPE_IPV6_MASK) {
 			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
 				    " perfect mode!");
 			return -ENOTSUP;
 		}
-		fdirhash = atr_compute_perfect_hash_82599(&input,
+		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
 							  dev->data->dev_conf.fdir_conf.pballoc);
-		fdirhash |= fdir_filter->soft_id <<
+		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
-		fdirhash = atr_compute_sig_hash_82599(&input,
+		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
-		err = ixgbe_remove_fdir_filter(info, &input);
+		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 		if (err < 0)
 			return err;
 
@@ -1223,7 +1283,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 	/* add or update an fdir filter*/
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
-	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT) {
+	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
 			queue = dev->data->dev_conf.fdir_conf.drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
@@ -1232,13 +1292,12 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 				    " signature mode.");
 			return -EINVAL;
 		}
-	} else if (fdir_filter->action.behavior == RTE_ETH_FDIR_ACCEPT &&
-		   fdir_filter->action.rx_queue < IXGBE_MAX_RX_QUEUE_NUM)
-		queue = (uint8_t)fdir_filter->action.rx_queue;
+	} else if (rule->queue < IXGBE_MAX_RX_QUEUE_NUM)
+		queue = (uint8_t)rule->queue;
 	else
 		return -EINVAL;
 
-	node = ixgbe_fdir_filter_lookup(info, &input);
+	node = ixgbe_fdir_filter_lookup(info, &rule->ixgbe_fdir);
 	if (node) {
 		if (update) {
 			node->fdirflags = fdircmd_flags;
@@ -1256,7 +1315,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		if (!node)
 			return -ENOMEM;
 		(void)rte_memcpy(&node->ixgbe_fdir,
-				 &input,
+				 &rule->ixgbe_fdir,
 				 sizeof(union ixgbe_atr_input));
 		node->fdirflags = fdircmd_flags;
 		node->fdirhash = fdirhash;
@@ -1270,18 +1329,19 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 
 	if (is_perfect) {
-		err = fdir_write_perfect_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash,
-						      fdir_mode);
+		err = fdir_write_perfect_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash, fdir_mode);
 	} else {
-		err = fdir_add_signature_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash);
+		err = fdir_add_signature_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash);
 	}
 	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
 
 		if (add_node)
-			(void)ixgbe_remove_fdir_filter(info, &input);
+			(void)ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
 	}
@@ -1289,6 +1349,29 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return err;
 }
 
+/* ixgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @fdir_filter: fdir filter entry
+ * @del: 1 - delete, 0 - add
+ * @update: 1 - update
+ */
+static int
+ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+			  const struct rte_eth_fdir_filter *fdir_filter,
+			  bool del,
+			  bool update)
+{
+	struct ixgbe_fdir_rule rule;
+	int err;
+
+	err = ixgbe_interpret_fdir_filter(dev, fdir_filter, &rule);
+
+	if (err)
+		return err;
+
+	return ixgbe_fdir_filter_program(dev, &rule, del, update);
+}
+
 static int
 ixgbe_fdir_flush(struct rte_eth_dev *dev)
 {
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index e9e6220..e59f458 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -275,6 +275,13 @@ enum rte_flow_item_type {
 	  * See struct rte_flow_item_e_tag.
 	  */
 	RTE_FLOW_ITEM_TYPE_E_TAG,
+
+	/**
+	 * Matches a NVGRE header.
+	 *
+	 * See struct rte_flow_item_nvgre.
+	 */
+	RTE_FLOW_ITEM_TYPE_NVGRE,
 };
 
 /**
@@ -486,6 +493,22 @@ struct rte_flow_item_e_tag {
 };
 
 /**
+ * RTE_FLOW_ITEM_TYPE_NVGRE.
+ *
+ * Matches a NVGRE header.
+ */
+struct rte_flow_item_nvgre {
+	uint32_t flags0:1; /**< 0 */
+	uint32_t rsvd1:1; /**< 1 bit not defined */
+	uint32_t flags1:2; /**< 2 bits, 1 0 */
+	uint32_t rsvd0:9; /**< Reserved0 */
+	uint32_t ver:3; /**< version */
+	uint32_t protocol:16; /**< protocol type, 0x6558 */
+	uint8_t tni[3]; /**< tenant network ID or virtual subnet ID */
+	uint8_t flow_id; /**< flow ID or Reserved */
+};
+
+/**
  * Matching pattern item definition.
  *
  * A pattern is formed by stacking items starting from the lowest protocol
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
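
A note on the refactoring above: the old fdir_set_input_mask() is split into a SW store step (ixgbe_fdir_store_input_mask) and a HW apply step (ixgbe_fdir_set_input_mask), with a thin wrapper keeping the old behavior. The point of the split is that the apply step can later be replayed from the stored SW state alone, which the flow create path in patch 16/18 relies on. Below is a minimal sketch of the resulting call pattern, written as if it sat in ixgbe_fdir.c next to the quoted functions; example_set_mask is a hypothetical name, and it is essentially what the new fdir_set_input_mask() wrapper does:

	#include <rte_ethdev.h>
	#include "ixgbe_ethdev.h"	/* driver-internal declarations assumed */

	static int
	example_set_mask(struct rte_eth_dev *dev,
			 const struct rte_eth_fdir_masks *masks)
	{
		int ret;

		/* Step 1: record the masks in the SW state only
		 * (info->mask); no register access happens here.
		 */
		ret = ixgbe_fdir_store_input_mask(dev, masks);
		if (ret)
			return ret;

		/* Step 2: program the HW from the stored SW state. This
		 * step can be repeated later without the original
		 * rte_eth_fdir_masks, e.g. when the first flow carrying
		 * a mask is created.
		 */
		return ixgbe_fdir_set_input_mask(dev);
	}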

* [PATCH v2 16/18] net/ixgbe: create consistent filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (14 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 15/18] net/ixgbe: parse flow director filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2017-01-03  2:04   ` Xing, Beilei
                     ` (3 more replies)
  2016-12-30  7:53 ` [PATCH v2 17/18] net/ixgbe: destroy " Wei Zhao
  2016-12-30  7:53 ` [PATCH v2 18/18] net/ixgbe: flush all the filter list Wei Zhao
  17 siblings, 4 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to create a consistent filter (flow rule).

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2:
--add new error set function
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 240 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |   5 +
 2 files changed, 244 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index c98aa0d..1c857fc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -468,6 +468,11 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
 /*
@@ -850,11 +855,55 @@ static const struct rte_ixgbe_xstats_name_off rte_ixgbevf_stats_strings[] = {
 		sizeof(rte_ixgbevf_stats_strings[0]))
 static const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
-	NULL,
+	ixgbe_flow_create,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
 };
+/* ntuple filter list structure */
+struct ixgbe_ntuple_filter_ele {
+	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
+	struct rte_eth_ntuple_filter filter_info;
+};
+/* ethertype filter list structure */
+struct ixgbe_ethertype_filter_ele {
+	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
+	struct rte_eth_ethertype_filter filter_info;
+};
+/* syn filter list structure */
+struct ixgbe_eth_syn_filter_ele {
+	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
+	struct rte_eth_syn_filter filter_info;
+};
+/* fdir filter list structure */
+struct ixgbe_fdir_rule_ele {
+	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
+	struct ixgbe_fdir_rule filter_info;
+};
+/* l2_tunnel filter list structure */
+struct ixgbe_eth_l2_tunnel_conf_ele {
+	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
+	struct rte_eth_l2_tunnel_conf filter_info;
+};
+/* ixgbe_flow memory list structure */
+struct ixgbe_flow_mem {
+	TAILQ_ENTRY(ixgbe_flow_mem) entries;
+	struct ixgbe_flow *flow;
+};
+
+TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
+struct ixgbe_ntuple_filter_list filter_ntuple_list;
+TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
+struct ixgbe_ethertype_filter_list filter_ethertype_list;
+TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
+struct ixgbe_syn_filter_list filter_syn_list;
+TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
+struct ixgbe_fdir_rule_filter_list filter_fdir_list;
+TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
+struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
+TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
+struct ixgbe_flow_mem_list ixgbe_flow_list;
+
 /**
  * Atomically reads the link status information from global
  * structure rte_eth_dev.
@@ -1380,6 +1429,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* initialize l2 tunnel filter list & hash */
 	ixgbe_l2_tn_filter_init(eth_dev);
+
+	TAILQ_INIT(&filter_ntuple_list);
+	TAILQ_INIT(&filter_ethertype_list);
+	TAILQ_INIT(&filter_syn_list);
+	TAILQ_INIT(&filter_fdir_list);
+	TAILQ_INIT(&filter_l2_tunnel_list);
+	TAILQ_INIT(&ixgbe_flow_list);
+
 	return 0;
 }
 
@@ -10334,6 +10391,187 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	return ret;
 }
 
+/**
+ * Create or destroy a flow rule.
+ * Theoretically one rule can match more than one filter.
+ * We will let it use the first filter it hits.
+ * So, the sequence matters.
+ */
+static struct rte_flow *
+ixgbe_flow_create(struct rte_eth_dev *dev,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_flow *flow = NULL;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	flow = rte_zmalloc("ixgbe_flow", sizeof(struct ixgbe_flow), 0);
+	if (!flow) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return (struct rte_flow *)flow;
+	}
+	ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
+			sizeof(struct ixgbe_flow_mem), 0);
+	if (!ixgbe_flow_mem_ptr) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr->flow = flow;
+	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+			actions, &ntuple_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
+		if (!ret) {
+			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
+				sizeof(struct ixgbe_ntuple_filter_ele), 0);
+			(void)rte_memcpy(&ntuple_filter_ptr->filter_info,
+				&ntuple_filter,
+				sizeof(struct rte_eth_ntuple_filter));
+			TAILQ_INSERT_TAIL(&filter_ntuple_list,
+				ntuple_filter_ptr, entries);
+			flow->rule = ntuple_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
+		}
+		return (struct rte_flow *)flow;
+	}
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, TRUE);
+		if (!ret) {
+			ethertype_filter_ptr = rte_zmalloc(
+				"ixgbe_ethertype_filter",
+				sizeof(struct ixgbe_ethertype_filter_ele), 0);
+			(void)rte_memcpy(&ethertype_filter_ptr->filter_info,
+				&ethertype_filter,
+				sizeof(struct rte_eth_ethertype_filter));
+			TAILQ_INSERT_TAIL(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			flow->rule = ethertype_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+		}
+		return (struct rte_flow *)flow;
+	}
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter, error);
+	if (!ret) {
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
+		if (!ret) {
+			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
+				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
+			(void)rte_memcpy(&syn_filter_ptr->filter_info,
+				&syn_filter,
+				sizeof(struct rte_eth_syn_filter));
+			TAILQ_INSERT_TAIL(&filter_syn_list,
+				syn_filter_ptr,
+				entries);
+			flow->rule = syn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_SYN;
+		}
+		return (struct rte_flow *)flow;
+	}
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_parse_fdir_filter(attr, pattern,
+				actions, &fdir_rule, error);
+	if (!ret) {
+		/* A mask cannot be deleted. */
+		if (fdir_rule.b_mask) {
+			if (!fdir_info->mask_added) {
+				/* It's the first time the mask is set. */
+				rte_memcpy(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				ret = ixgbe_fdir_set_input_mask(dev);
+				if (ret)
+					return NULL;
+
+				fdir_info->mask_added = TRUE;
+			} else {
+				/**
+				 * Only support one global mask,
+				 * all the masks should be the same.
+				 */
+				ret = memcmp(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				if (ret)
+					return NULL;
+			}
+		}
+
+		if (fdir_rule.b_spec) {
+			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
+					FALSE, FALSE);
+			if (!ret) {
+				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
+					sizeof(struct ixgbe_fdir_rule_ele), 0);
+				(void)rte_memcpy(&fdir_rule_ptr->filter_info,
+					&fdir_rule,
+					sizeof(struct ixgbe_fdir_rule));
+				TAILQ_INSERT_TAIL(&filter_fdir_list,
+					fdir_rule_ptr, entries);
+				flow->rule = fdir_rule_ptr;
+				flow->filter_type = RTE_ETH_FILTER_FDIR;
+
+				return (struct rte_flow *)flow;
+			}
+
+			if (ret)
+				return NULL;
+		}
+
+		return NULL;
+	}
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+					actions, &l2_tn_filter, error);
+	if (!ret) {
+		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
+		if (!ret) {
+			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
+				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
+			(void)rte_memcpy(&l2_tn_filter_ptr->filter_info,
+				&l2_tn_filter,
+				sizeof(struct rte_eth_l2_tunnel_conf));
+			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			flow->rule = l2_tn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
+
+			return (struct rte_flow *)flow;
+		}
+	}
+
+	rte_free(ixgbe_flow_mem_ptr);
+	rte_free(flow);
+	return NULL;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 676d45f..0000138 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -329,6 +329,11 @@ struct ixgbe_l2_tn_info {
 	bool e_tag_ether_type; /* ether type for e-tag */
 };
 
+struct ixgbe_flow {
+	enum rte_filter_type filter_type;
+	void *rule;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
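
For context, ixgbe_flow_create() is not called directly by applications; it is reached through the generic rte_flow API, which dispatches via the ixgbe_flow_ops table shown above. A minimal usage sketch follows, using the API signatures of this period (uint8_t port IDs). The pattern and queue index are illustrative only, and the PMD's parse functions decide which filter type, if any, the rule maps to:

	#include <rte_flow.h>

	/* Hypothetical helper: steer matching traffic to rx_queue. */
	static struct rte_flow *
	example_create_rule(uint8_t port_id, uint16_t rx_queue)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_action_queue queue = { .index = rx_queue };
		struct rte_flow_error error;

		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_UDP },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		/* Dispatches to ixgbe_flow_create(); on success the
		 * returned handle owns a list element on one of the
		 * filter lists initialized in eth_ixgbe_dev_init().
		 */
		return rte_flow_create(port_id, &attr, pattern, actions,
				       &error);
	}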

* [PATCH v2 17/18] net/ixgbe: destroy consistent filter
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (15 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 16/18] net/ixgbe: create consistent filter Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  2016-12-30  7:53 ` [PATCH v2 18/18] net/ixgbe: flush all the filter list Wei Zhao
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to destroy a flow filter.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---

v2:
--fix git log error
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 117 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 116 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1c857fc..8f2182c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -473,6 +473,9 @@ static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static int ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
 /*
@@ -856,7 +859,7 @@ static const struct rte_ixgbe_xstats_name_off rte_ixgbevf_stats_strings[] = {
 static const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
 	ixgbe_flow_create,
-	NULL,
+	ixgbe_flow_destroy,
 	ixgbe_flow_flush,
 	NULL,
 };
@@ -10572,6 +10575,118 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	return NULL;
 }
 
+/* Destroy a flow rule on ixgbe. */
+static int
+ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+	struct ixgbe_flow *pmd_flow = (struct ixgbe_flow *)flow;
+	enum rte_filter_type filter_type = pmd_flow->filter_type;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_NTUPLE:
+		ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ntuple_filter,
+			&ntuple_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ntuple_filter));
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ntuple_list,
+			ntuple_filter_ptr, entries);
+			rte_free(ntuple_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ethertype_filter,
+			&ethertype_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ethertype_filter));
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			rte_free(ethertype_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_SYN:
+		syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&syn_filter,
+			&syn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_syn_filter));
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_syn_list,
+				syn_filter_ptr, entries);
+			rte_free(syn_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_FDIR:
+		fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
+		(void)rte_memcpy(&fdir_rule,
+			&fdir_rule_ptr->filter_info,
+			sizeof(struct ixgbe_fdir_rule));
+		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_fdir_list,
+				fdir_rule_ptr, entries);
+			rte_free(fdir_rule_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_L2_TUNNEL:
+		l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_l2_tunnel_conf));
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			rte_free(l2_tn_filter_ptr);
+		}
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to destroy flow");
+		return ret;
+	}
+
+	TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
+		if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
+			TAILQ_REMOVE(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+			rte_free(ixgbe_flow_mem_ptr);
+		}
+	}
+	rte_free(flow);
+
+	return ret;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
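
The destroy path is reached the same way, through rte_flow_destroy(). A short sketch, assuming port_id and flow come from a successful rte_flow_create(); example_destroy_rule is a hypothetical helper:

	#include <stdio.h>
	#include <rte_flow.h>

	static int
	example_destroy_rule(uint8_t port_id, struct rte_flow *flow)
	{
		struct rte_flow_error error;
		int ret;

		/* Dispatches to ixgbe_flow_destroy(): removes the rule
		 * from HW, unlinks it from its filter list, and frees
		 * both the list element and the flow handle.
		 */
		ret = rte_flow_destroy(port_id, flow, &error);
		if (ret)
			printf("flow destroy failed: %s\n",
			       error.message ? error.message : "(no message)");
		return ret;
	}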

* [PATCH v2 18/18] net/ixgbe: flush all the filter list
  2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
                   ` (16 preceding siblings ...)
  2016-12-30  7:53 ` [PATCH v2 17/18] net/ixgbe: destroy " Wei Zhao
@ 2016-12-30  7:53 ` Wei Zhao
  17 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2016-12-30  7:53 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to flush all the filter lists
on a port.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

---
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 59 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 8f2182c..e12eee0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -478,6 +478,7 @@ static int ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
+static void ixgbe_filterlist_flush(void);
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
  */
@@ -1526,6 +1527,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 		rte_free(l2_tn_filter);
 	}
 
+	/* clear all the filters list */
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
 
@@ -10318,6 +10322,8 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 		return ret;
 	}
 
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
 
@@ -10394,6 +10400,59 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	return ret;
 }
 
+static void ixgbe_filterlist_flush(void)
+{
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
+		TAILQ_REMOVE(&filter_ntuple_list,
+				 ntuple_filter_ptr,
+				 entries);
+		rte_free(ntuple_filter_ptr);
+	}
+
+	while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
+		TAILQ_REMOVE(&filter_ethertype_list,
+				 ethertype_filter_ptr,
+				 entries);
+		rte_free(ethertype_filter_ptr);
+	}
+
+	while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
+		TAILQ_REMOVE(&filter_syn_list,
+				 syn_filter_ptr,
+				 entries);
+		rte_free(syn_filter_ptr);
+	}
+
+	while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
+		TAILQ_REMOVE(&filter_l2_tunnel_list,
+				 l2_tn_filter_ptr,
+				 entries);
+		rte_free(l2_tn_filter_ptr);
+	}
+
+	while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
+		TAILQ_REMOVE(&filter_fdir_list,
+				 fdir_rule_ptr,
+				 entries);
+		rte_free(fdir_rule_ptr);
+	}
+
+	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
+		TAILQ_REMOVE(&ixgbe_flow_list,
+				 ixgbe_flow_mem_ptr,
+				 entries);
+		rte_free(ixgbe_flow_mem_ptr->flow);
+		rte_free(ixgbe_flow_mem_ptr);
+	}
+}
+
 /**
  * Create or destroy a flow rule.
  * Theoretically one rule can match more than one filter.
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
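
From the application side nothing changes: rte_flow_flush() is called as before, but with this patch the SW filter lists are emptied together with the HW tables, so the lists cannot go stale. A minimal sketch; example_flush_all is a hypothetical helper, and note that all rte_flow handles previously returned for the port become invalid after the call:

	#include <rte_flow.h>

	static int
	example_flush_all(uint8_t port_id)
	{
		struct rte_flow_error error;

		/* Dispatches to ixgbe_flow_flush(): clears the HW
		 * filters, then calls ixgbe_filterlist_flush() to free
		 * every SW list element and flow handle.
		 */
		return rte_flow_flush(port_id, &error);
	}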

* Re: [PATCH v2 02/18] net/ixgbe: store flow director filter
  2016-12-30  7:52 ` [PATCH v2 02/18] net/ixgbe: store flow director filter Wei Zhao
@ 2017-01-02  9:59   ` Xing, Beilei
  2017-01-03  3:14     ` Zhao1, Wei
  2017-01-03 14:28   ` Dai, Wei
  2017-01-06 16:31   ` Ferruh Yigit
  2 siblings, 1 reply; 149+ messages in thread
From: Xing, Beilei @ 2017-01-02  9:59 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Lu, Wenzhuo, Zhao1, Wei

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> Subject: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director filter
> 
> Add support for storing flow director filter in SW.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
> 
> v2:
> --add a fdir initialization function in device start process
> ---

> 
> +static inline struct ixgbe_fdir_filter *
> +ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
> +			 union ixgbe_atr_input *key)
> +{
> +	int ret = 0;

Needless initialization here since ret will be set below.

> +	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
> +	if (ret < 0)
> +		return NULL;
> +
> +	return fdir_info->hash_map[ret];
> +}
> +
> +static inline int
> +ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> +			 struct ixgbe_fdir_filter *fdir_filter) {
> +	int ret = 0;

Same as above.

> +
> +	ret = rte_hash_add_key(fdir_info->hash_handle,
> +			       &fdir_filter->ixgbe_fdir);
> +
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR,
> +			    "Failed to insert fdir filter to hash table %d!",
> +			    ret);
> +		return ret;
> +	}
> +
> +	fdir_info->hash_map[ret] = fdir_filter;
> +
> +	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
> +
> +	return 0;
> +}
> +
> +static inline int
> +ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> +			 union ixgbe_atr_input *key)
> +{
> +	int ret = 0;

Same as above.

> +	struct ixgbe_fdir_filter *fdir_filter;
> +
> +	ret = rte_hash_del_key(fdir_info->hash_handle, key);
> +
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
> +		return ret;
> +	}
> +
> +	fdir_filter = fdir_info->hash_map[ret];
> +	fdir_info->hash_map[ret] = NULL;
> +
> +	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
> +	rte_free(fdir_filter);
> +
> +	return 0;
> +}
> +
>  /*
>   * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
>   * @dev: pointer to the structure rte_eth_dev @@ -1098,6 +1158,8 @@
> ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
>  	struct ixgbe_hw_fdir_info *info =
>  		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
>  	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
> +	struct ixgbe_fdir_filter *node;
> +	bool add_node = FALSE;
> 
>  	if (fdir_mode == RTE_FDIR_MODE_NONE)
>  		return -ENOTSUP;
> @@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
>  						      dev->data->dev_conf.fdir_conf.pballoc);
> 
>  	if (del) {
> +		err = ixgbe_remove_fdir_filter(info, &input);
> +		if (err < 0)
> +			return err;
> +
>  		err = fdir_erase_filter_82599(hw, fdirhash);
>  		if (err < 0)
>  			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!"); @@ -1172,6
> +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
>  	else
>  		return -EINVAL;
> 
> +	node = ixgbe_fdir_filter_lookup(info, &input);
> +	if (node) {
> +		if (update) {
> +			node->fdirflags = fdircmd_flags;
> +			node->fdirhash = fdirhash;
> +			node->queue = queue;
> +		} else {
> +			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
> +			return -EINVAL;
> +		}
> +	} else {
> +		add_node = TRUE;
> +		node = rte_zmalloc("ixgbe_fdir",
> +				   sizeof(struct ixgbe_fdir_filter),
> +				   0);
> +		if (!node)
> +			return -ENOMEM;
> +		(void)rte_memcpy(&node->ixgbe_fdir,
> +				 &input,
> +				 sizeof(union ixgbe_atr_input));
> +		node->fdirflags = fdircmd_flags;
> +		node->fdirhash = fdirhash;
> +		node->queue = queue;
> +
> +		err = ixgbe_insert_fdir_filter(info, node);
> +		if (err < 0) {
> +			rte_free(node);
> +			return err;
> +		}
> +	}
> +
>  	if (is_perfect) {
>  		err = fdir_write_perfect_filter_82599(hw, &input, queue,
>  						      fdircmd_flags, fdirhash,
> @@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev
> *dev,
>  		err = fdir_add_signature_filter_82599(hw, &input, queue,
>  						      fdircmd_flags, fdirhash);
>  	}
> -	if (err < 0)
> +	if (err < 0) {
>  		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
> -	else
> +
> +		if (add_node)
> +			(void)ixgbe_remove_fdir_filter(info, &input);
> +	} else {
>  		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
> +	}
> 
>  	return err;
>  }
> --
> 2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread
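
As background to the lookup/insert/remove trio under review: the non-negative return value of rte_hash_add_key()/rte_hash_lookup()/rte_hash_del_key() is a slot index, which the driver reuses to maintain a parallel array of filter pointers. A compact sketch of that pattern with the reviewer's suggestion applied (no redundant initialization of the return variable); the key and element types are illustrative, and a table created with rte_hash_create() and key_len = sizeof(struct ex_key) is assumed:

	#include <rte_hash.h>
	#include <rte_malloc.h>

	struct ex_key { uint32_t a; uint32_t b; };
	struct ex_filter { struct ex_key key; int data; };

	#define EX_ENTRIES 1024
	static struct ex_filter *ex_map[EX_ENTRIES];

	static int
	ex_insert(struct rte_hash *h, struct ex_filter *f)
	{
		int pos = rte_hash_add_key(h, &f->key); /* slot index */

		if (pos < 0)
			return pos;
		ex_map[pos] = f;	/* parallel pointer array */
		return 0;
	}

	static struct ex_filter *
	ex_lookup(struct rte_hash *h, const struct ex_key *key)
	{
		int pos = rte_hash_lookup(h, key);

		return (pos < 0) ? NULL : ex_map[pos];
	}

	static int
	ex_remove(struct rte_hash *h, const struct ex_key *key)
	{
		int pos = rte_hash_del_key(h, key);

		if (pos < 0)
			return pos;
		rte_free(ex_map[pos]);
		ex_map[pos] = NULL;
		return 0;
	}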

* Re: [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter
  2016-12-30  7:52 ` [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
@ 2017-01-02 10:06   ` Xing, Beilei
  0 siblings, 0 replies; 149+ messages in thread
From: Xing, Beilei @ 2017-01-02 10:06 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Lu, Wenzhuo, Zhao1, Wei


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> Subject: [dpdk-dev] [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter
> 
> Add support for storing L2 tunnel filter in SW.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
> 
> 
> +static inline struct ixgbe_l2_tn_filter *
> +ixgbe_l2_tn_filter_lookup(struct ixgbe_l2_tn_info *l2_tn_info,
> +			  struct ixgbe_l2_tn_key *key)
> +{
> +	int ret = 0;

Needless initialization since ret will be set below; it can be removed.
The same initialization can be removed in the other filter lookup/insert/delete functions as well.

> +
> +	ret = rte_hash_lookup(l2_tn_info->hash_handle, (const void *)key);
> +	if (ret < 0)
> +		return NULL;
> +
> +	return l2_tn_info->hash_map[ret];
> +}
> +
> +static inline int
> +ixgbe_insert_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
> +			  struct ixgbe_l2_tn_filter *l2_tn_filter) {
> +	int ret = 0;
> +
> +	ret = rte_hash_add_key(l2_tn_info->hash_handle,
> +			       &l2_tn_filter->key);
> +
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR,
> +			    "Failed to insert L2 tunnel filter"
> +			    " to hash table %d!",
> +			    ret);
> +		return ret;
> +	}
> +
> +	l2_tn_info->hash_map[ret] = l2_tn_filter;
> +
> +	TAILQ_INSERT_TAIL(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
> +
> +	return 0;
> +}

>  /* Add l2 tunnel filter */
>  static int
>  ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
>  			       struct rte_eth_l2_tunnel_conf *l2_tunnel)  {
>  	int ret = 0;
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
> +	struct ixgbe_l2_tn_key key;
> +	struct ixgbe_l2_tn_filter *node;
> +
> +	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
> +	key.tn_id = l2_tunnel->tunnel_id;
> +
> +	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
> +
> +	if (node) {
> +		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
> +		return -EINVAL;
> +	}
> +
> +	node = rte_zmalloc("ixgbe_l2_tn",
> +			   sizeof(struct ixgbe_l2_tn_filter),
> +			   0);
> +	if (!node)
> +		return -ENOMEM;
> +
> +	(void)rte_memcpy(&node->key,
> +			 &key,
> +			 sizeof(struct ixgbe_l2_tn_key));
> +	node->pool = l2_tunnel->pool;
> +	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
> +	if (ret < 0) {
> +		rte_free(node);
> +		return ret;
> +	}
> 
>  	switch (l2_tunnel->l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> @@ -7195,6 +7340,9 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev
> *dev,
>  		break;
>  	}
> 
> +	if (ret < 0)
> +		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);

A bit confusing: why remove the l2_tn_filter here? Can this check be moved before ixgbe_insert_l2_tn_filter?

> +
>  	return ret;
>  }
> 
> @@ -7204,6 +7352,15 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev
> *dev,
>  			       struct rte_eth_l2_tunnel_conf *l2_tunnel)  {
>  	int ret = 0;
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
> +	struct ixgbe_l2_tn_key key;
> +
> +	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
> +	key.tn_id = l2_tunnel->tunnel_id;
> +	ret = ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
> +	if (ret < 0)
> +		return ret;
> 
>  	switch (l2_tunnel->l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h
> b/drivers/net/ixgbe/ixgbe_ethdev.h
> index 8310220..6663fc9 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -132,6 +132,7 @@
>  #define IXGBE_RX_VEC_START
> RTE_INTR_VEC_RXTX_OFFSET
> 
>  #define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
> +#define IXGBE_MAX_L2_TN_FILTER_NUM      128
> 
>  /*
>   * Information about the fdir mode.
> @@ -283,6 +284,25 @@ struct ixgbe_filter_info {
>  	uint32_t syn_info;
>  };
> 
> +struct ixgbe_l2_tn_key {
> +	enum rte_eth_tunnel_type          l2_tn_type;
> +	uint32_t                          tn_id;
> +};
> +
> +struct ixgbe_l2_tn_filter {
> +	TAILQ_ENTRY(ixgbe_l2_tn_filter)    entries;
> +	struct ixgbe_l2_tn_key             key;
> +	uint32_t                           pool;

More comments here would be helpful.

> +};
> +

^ permalink raw reply	[flat|nested] 149+ messages in thread
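
One reading of the quoted code, sketched below: the SW node is inserted before the HW programming because the failure being checked comes from the HW call itself, so the check cannot simply move ahead of ixgbe_insert_l2_tn_filter(); the rollback is what keeps SW and HW in sync. Written as if inside ixgbe_ethdev.c; example_program_hw() is a hypothetical stand-in for the switch on l2_tunnel_type:

	#include <rte_malloc.h>
	#include "ixgbe_ethdev.h"	/* driver-internal declarations assumed */

	static int
	example_program_hw(void)
	{
		return 0;	/* stand-in for the real HW programming */
	}

	static int
	example_add_l2_tn(struct ixgbe_l2_tn_info *info,
			  struct ixgbe_l2_tn_filter *node,
			  struct ixgbe_l2_tn_key *key)
	{
		int ret = ixgbe_insert_l2_tn_filter(info, node);

		if (ret < 0) {
			rte_free(node);
			return ret;
		}

		/* HW programming may fail only after the SW insert,
		 * so failure is handled by rolling the SW entry back.
		 */
		ret = example_program_hw();
		if (ret < 0)
			(void)ixgbe_remove_l2_tn_filter(info, key);

		return ret;
	}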

* Re: [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel configuration
  2016-12-30  7:53 ` [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
@ 2017-01-02 10:18   ` Xing, Beilei
  0 siblings, 0 replies; 149+ messages in thread
From: Xing, Beilei @ 2017-01-02 10:18 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Lu, Wenzhuo, Zhao1, Wei


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> Subject: [dpdk-dev] [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel
> configuration
> 
> Add support for store and restore L2 tunnel filter in SW.

The whole patch set is related to filters, so do you think it's better to move this patch out of the patch set, since it's about configuration rather than filtering? Please help to check.

> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 36
> ++++++++++++++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_ethdev.h |  3 +++
>  2 files changed, 39 insertions(+)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 5c39ffa..d68de65 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -385,6 +385,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct
> rte_eth_dev *dev,  static int ixgbe_dev_udp_tunnel_port_del(struct
> rte_eth_dev *dev,
>  					 struct rte_eth_udp_tunnel *udp_tunnel);  static int
> ixgbe_filter_restore(struct rte_eth_dev *dev);
> +static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
> 
>  /*
>   * Define VF Stats MACRO for Non "cleared on read" register @@ -1444,6
> +1445,9 @@ static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
>  			"Failed to allocate memory for L2 TN hash map!");
>  		return -ENOMEM;
>  	}
> +	l2_tn_info->e_tag_en = FALSE;
> +	l2_tn_info->e_tag_fwd_en = FALSE;
> +	l2_tn_info->e_tag_ether_type = DEFAULT_ETAG_ETYPE;
> 
>  	return 0;
>  }
> @@ -2502,6 +2506,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
> 
>  	/* resume enabled intr since hw reset */
>  	ixgbe_enable_intr(dev);
> +	ixgbe_l2_tunnel_conf(dev);
>  	ixgbe_filter_restore(dev);
> 
>  	return 0;
> @@ -7038,12 +7043,15 @@ ixgbe_dev_l2_tunnel_eth_type_conf(struct
> rte_eth_dev *dev,  {
>  	int ret = 0;
>  	struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
> 
>  	if (l2_tunnel == NULL)
>  		return -EINVAL;
> 
>  	switch (l2_tunnel->l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		l2_tn_info->e_tag_ether_type = l2_tunnel->ether_type;
>  		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
>  		break;
>  	default:
> @@ -7082,9 +7090,12 @@ ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev
> *dev,  {
>  	int ret = 0;
>  	struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
> 
>  	switch (l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		l2_tn_info->e_tag_en = TRUE;
>  		ret = ixgbe_e_tag_enable(hw);
>  		break;
>  	default:
> @@ -7123,9 +7134,12 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev
> *dev,  {
>  	int ret = 0;
>  	struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
> 
>  	switch (l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		l2_tn_info->e_tag_en = FALSE;
>  		ret = ixgbe_e_tag_disable(hw);
>  		break;
>  	default:
> @@ -7432,10 +7446,13 @@ ixgbe_dev_l2_tunnel_forwarding_enable
>  	(struct rte_eth_dev *dev,
>  	 enum rte_eth_tunnel_type l2_tunnel_type)  {
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
>  	int ret = 0;
> 
>  	switch (l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		l2_tn_info->e_tag_fwd_en = TRUE;
>  		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
>  		break;
>  	default:
> @@ -7453,10 +7470,13 @@ ixgbe_dev_l2_tunnel_forwarding_disable
>  	(struct rte_eth_dev *dev,
>  	 enum rte_eth_tunnel_type l2_tunnel_type)  {
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
>  	int ret = 0;
> 
>  	switch (l2_tunnel_type) {
>  	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		l2_tn_info->e_tag_fwd_en = FALSE;
>  		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
>  		break;
>  	default:
> @@ -7950,6 +7970,22 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
>  	return 0;
>  }
> 
> +static void
> +ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev) {
> +	struct ixgbe_l2_tn_info *l2_tn_info =
> +		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
> +	struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> +	if (l2_tn_info->e_tag_en)
> +		(void)ixgbe_e_tag_enable(hw);
> +
> +	if (l2_tn_info->e_tag_fwd_en)
> +		(void)ixgbe_e_tag_forwarding_en_dis(dev, 1);
> +
> +	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type); }
> +
>  RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
> RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic |
> vfio"); diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h
> b/drivers/net/ixgbe/ixgbe_ethdev.h
> index d6253ad..6327962 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -307,6 +307,9 @@ struct ixgbe_l2_tn_info {
>  	struct ixgbe_l2_tn_filter_list      l2_tn_list;
>  	struct ixgbe_l2_tn_filter         **hash_map;
>  	struct rte_hash                    *hash_handle;
> +	bool e_tag_en; /* e-tag enabled */
> +	bool e_tag_fwd_en; /* e-tag based forwarding enabled */
> +	bool e_tag_ether_type; /* ether type for e-tag */
>  };
> 
>  /*
> --
> 2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
  2016-12-30  7:53 ` [PATCH v2 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2017-01-02 10:41   ` Xing, Beilei
  2017-01-02 10:45   ` Xing, Beilei
  2017-01-06 16:55   ` Ferruh Yigit
  2 siblings, 0 replies; 149+ messages in thread
From: Xing, Beilei @ 2017-01-02 10:41 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Zhao1, Wei, Lu, Wenzhuo


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
> 
> Add rule validate function and check if the rule is a n-tuple rule, and get the
> n-tuple info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:add new error set function
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 414
> ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 409 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 0de1318..198cc4b 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -388,6 +388,24 @@ static int ixgbe_dev_udp_tunnel_port_del(struct
> rte_eth_dev *dev,
>  					 struct rte_eth_udp_tunnel *udp_tunnel);  static int
> ixgbe_filter_restore(struct rte_eth_dev *dev);  static void
> ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
> +static int
> +cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> +					const struct rte_flow_item pattern[],
> +					const struct rte_flow_action actions[],
> +					struct rte_eth_ntuple_filter *filter,
> +					struct rte_flow_error *error);

Why do you declare cons_parse_ntuple_filter here? It also seems not to follow the naming convention.

> +static int
> +ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
> +					const struct rte_flow_item pattern[],
> +					const struct rte_flow_action actions[],
> +					struct rte_eth_ntuple_filter *filter,
> +					struct rte_flow_error *error);
> +static int
> +ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
> +		const struct rte_flow_attr *attr,
> +		const struct rte_flow_item pattern[],
> +		const struct rte_flow_action actions[],
> +		struct rte_flow_error *error);
>  static int ixgbe_flow_flush(struct rte_eth_dev *dev,
>  		struct rte_flow_error *error);
>  /*
> @@ -769,7 +787,7 @@ static const struct rte_ixgbe_xstats_name_off
> rte_ixgbevf_stats_strings[] = {
>  #define IXGBEVF_NB_XSTATS (sizeof(rte_ixgbevf_stats_strings) /	\
>  		sizeof(rte_ixgbevf_stats_strings[0]))
>  static const struct rte_flow_ops ixgbe_flow_ops = {
> -	NULL,
> +	ixgbe_flow_validate,
>  	NULL,
>  	NULL,
>  	ixgbe_flow_flush,
> @@ -8072,6 +8090,390 @@ ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev
> *dev)
>  	return 0;
>  }
> 
> +static inline uint32_t
> +rte_be_to_cpu_24(uint32_t x)
> +{
> +	return  ((x & 0x000000ffUL) << 16) |
> +		(x & 0x0000ff00UL) |
> +		((x & 0x00ff0000UL) >> 16);
> +}

Why do you define the function in the PMD with an rte_ prefix? Do you intend to move it to the rte library?

> +
> +
> +/**
> + * Parse the rule to see if it is a n-tuple rule.
> + * And get the n-tuple filter info BTW.
> + */
> +static int
> +cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> +			 const struct rte_flow_item pattern[],
> +			 const struct rte_flow_action actions[],
> +			 struct rte_eth_ntuple_filter *filter,
> +			 struct rte_flow_error *error)

How about splitting the function into three functions: parse pattern, parse actions, and parse attr?

> +
> +/* a specific function for ixgbe because the flags is specific */
> +static int ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
> +			  const struct rte_flow_item pattern[],
> +			  const struct rte_flow_action actions[],
> +			  struct rte_eth_ntuple_filter *filter,
> +			  struct rte_flow_error *error)
> +{
> +	int ret;
> +
> +	ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* Ixgbe doesn't support tcp flags. */
> +	if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ITEM,
> +				   NULL, "Not supported by ntuple filter");
> +		return -rte_errno;
> +	}
> +
> +	/* Ixgbe doesn't support many priorities. */
> +	if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO ||
> +	    filter->priority > IXGBE_MAX_N_TUPLE_PRIO) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM,
> +			NULL, "Priority not supported by ntuple filter");
> +		return -rte_errno;
> +	}
> +
> +	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM ||
> +		filter->priority > IXGBE_5TUPLE_MAX_PRI ||
> +		filter->priority < IXGBE_5TUPLE_MIN_PRI)
> +		return -rte_errno;
> +
> +	/* fixed value for ixgbe */
> +	filter->flags = RTE_5TUPLE_FLAGS;
> +	return 0;
> +}
> +
> +/**
> + * Check if the flow rule is supported by ixgbe.
> + * It only checkes the format. Don't guarantee the rule can be

Typo: checkes -> checks

> +programmed into
> + * the HW. Because there can be no enough room for the rule.
> + */
> +static int
> +ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
> +		const struct rte_flow_attr *attr,
> +		const struct rte_flow_item pattern[],
> +		const struct rte_flow_action actions[],
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_ntuple_filter ntuple_filter;
> +	int ret;
> +
> +	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +	ret = ixgbe_parse_ntuple_filter(attr, pattern,
> +				actions, &ntuple_filter, error);
> +	if (!ret)
> +		return 0;
> +
> +	return ret;
> +}
> +
>  /*  Destroy all flow rules associated with a port on ixgbe. */  static int
> ixgbe_flow_flush(struct rte_eth_dev *dev, @@ -8085,15 +8487,17 @@
> ixgbe_flow_flush(struct rte_eth_dev *dev,
> 
>  	ret = ixgbe_clear_all_fdir_filter(dev);
>  	if (ret < 0) {
> -		rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_HANDLE,
> -					NULL, "Failed to flush rule");
> +		rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_HANDLE,
> +				NULL, "Failed to flush rule");
>  		return ret;
>  	}
> 
>  	ret = ixgbe_clear_all_l2_tn_filter(dev);
>  	if (ret < 0) {
> -		rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_HANDLE,
> -					NULL, "Failed to flush rule");
> +		rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_HANDLE,
> +				NULL, "Failed to flush rule");
>  		return ret;
>  	}
> 
> --
> 2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread
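
The three-way split suggested above would look roughly like the skeleton below. All example_* names are hypothetical and the bodies are stubs standing in for the real item, action and attribute checks:

	#include <rte_flow.h>
	#include <rte_ethdev.h>

	static int
	example_parse_pattern(const struct rte_flow_item pattern[],
			      struct rte_eth_ntuple_filter *filter,
			      struct rte_flow_error *error)
	{
		(void)pattern; (void)filter; (void)error;
		return 0;	/* real eth/ipv4/tcp item checks go here */
	}

	static int
	example_parse_actions(const struct rte_flow_action actions[],
			      struct rte_eth_ntuple_filter *filter,
			      struct rte_flow_error *error)
	{
		(void)actions; (void)filter; (void)error;
		return 0;	/* real queue action checks go here */
	}

	static int
	example_parse_attr(const struct rte_flow_attr *attr,
			   struct rte_eth_ntuple_filter *filter,
			   struct rte_flow_error *error)
	{
		(void)attr; (void)filter; (void)error;
		return 0;	/* real ingress/priority checks go here */
	}

	static int
	example_parse_ntuple_filter(const struct rte_flow_attr *attr,
				    const struct rte_flow_item pattern[],
				    const struct rte_flow_action actions[],
				    struct rte_eth_ntuple_filter *filter,
				    struct rte_flow_error *error)
	{
		int ret;

		ret = example_parse_pattern(pattern, filter, error);
		if (ret)
			return ret;
		ret = example_parse_actions(actions, filter, error);
		if (ret)
			return ret;
		return example_parse_attr(attr, filter, error);
	}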

* Re: [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
  2016-12-30  7:53 ` [PATCH v2 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
  2017-01-02 10:41   ` Xing, Beilei
@ 2017-01-02 10:45   ` Xing, Beilei
  2017-01-06 16:55   ` Ferruh Yigit
  2 siblings, 0 replies; 149+ messages in thread
From: Xing, Beilei @ 2017-01-02 10:45 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Zhao1, Wei, Lu, Wenzhuo

> +
> +		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
> +		filter->src_port_mask  = tcp_mask->hdr.src_port;
> +		if (tcp_mask->hdr.tcp_flags == 0xFF) {

It's better to use UINT8_MAX here.

> +			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
> +		} else if (!tcp_mask->hdr.tcp_flags) {
> +			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
> +		} else {
> +			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by ntuple filter");
> +			return -rte_errno;
> +		}
> +
> +	if (attr->priority > 0xFFFF) {

How about UINT16_MAX?

> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
> +				   attr, "Error priority.");
> +		return -rte_errno;
> +	}
> +	filter->priority = (uint16_t)attr->priority;
> +
> +	return 0;
> +}
> +

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 15/18] net/ixgbe: parse flow director filter
  2016-12-30  7:53 ` [PATCH v2 15/18] net/ixgbe: parse flow director filter Wei Zhao
@ 2017-01-02 15:24   ` Xing, Beilei
  2017-01-03  3:05     ` Zhao1, Wei
  2017-01-03  3:19     ` Zhao1, Wei
  2017-01-03 14:08   ` Adrien Mazarguil
  1 sibling, 2 replies; 149+ messages in thread
From: Xing, Beilei @ 2017-01-02 15:24 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Zhao1, Wei, Lu, Wenzhuo


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v2 15/18] net/ixgbe: parse flow director filter
> 
> check if the rule is a flow director rule, and get the flow director info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:add new error set function
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 1467
> +++++++++++++++++++++++++++++++++-----
>  drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
>  drivers/net/ixgbe/ixgbe_fdir.c   |  247 ++++---
>  lib/librte_ether/rte_flow.h      |   23 +
>  4 files changed, 1495 insertions(+), 258 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 0a5ac4f..c98aa0d 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -438,6 +438,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
>  			struct rte_eth_l2_tunnel_conf *rule,
>  			struct rte_flow_error *error);
>  static int
> +ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
> +			const struct rte_flow_attr *attr,
> +			const struct rte_flow_item pattern[],
> +			const struct rte_flow_action actions[],
> +			struct ixgbe_fdir_rule *rule,
> +			struct rte_flow_error *error);
> +static int
> +ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
> +		const struct rte_flow_item pattern[],
> +		const struct rte_flow_action actions[],
> +		struct ixgbe_fdir_rule *rule,
> +		struct rte_flow_error *error);
> +static int
> +ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> +		const struct rte_flow_item pattern[],
> +		const struct rte_flow_action actions[],
> +		struct ixgbe_fdir_rule *rule,
> +		struct rte_flow_error *error);
> +static int
> +ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
> +		const struct rte_flow_item pattern[],
> +		const struct rte_flow_action actions[],
> +		struct ixgbe_fdir_rule *rule,
> +		struct rte_flow_error *error);
> +static int
>  ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
>  		const struct rte_flow_attr *attr,
>  		const struct rte_flow_item pattern[],
> @@ -1475,6 +1500,8 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev
> *eth_dev)
>  			     "Failed to allocate memory for fdir hash map!");
>  		return -ENOMEM;
>  	}
> +	fdir_info->mask_added = FALSE;
> +
>  	return 0;
>  }
> 
> @@ -8951,117 +8978,22 @@ ixgbe_parse_syn_filter(const struct
> rte_flow_attr *attr,
>  	return 0;
>  }
> 
> -/**
> - * Parse the rule to see if it is a L2 tunnel rule.
> - * And get the L2 tunnel filter info BTW.
> - * Only support E-tag now.
> - */
> +/* Parse to get the attr and action info of flow director rule. */
>  static int
> -cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
> -			const struct rte_flow_item pattern[],
> -			const struct rte_flow_action actions[],
> -			struct rte_eth_l2_tunnel_conf *filter,
> -			struct rte_flow_error *error)

Why do you remove functions added in patch 14/18? If the functions aren't needed, how about changing patch 14/18 instead?

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 16/18] net/ixgbe: create consistent filter
  2016-12-30  7:53 ` [PATCH v2 16/18] net/ixgbe: create consistent filter Wei Zhao
@ 2017-01-03  2:04   ` Xing, Beilei
  2017-01-03  3:11     ` Zhao1, Wei
       [not found]   ` <94479800C636CB44BD422CB454846E013158D036@SHSMSX101.ccr.corp.intel.com>
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 149+ messages in thread
From: Xing, Beilei @ 2017-01-03  2:04 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Zhao1, Wei, Lu, Wenzhuo



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent filter
> 
> This patch adds a function to create a consistent filter (flow rule).
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:
> --add new error set function
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 240
> ++++++++++++++++++++++++++++++++++++++-
>  drivers/net/ixgbe/ixgbe_ethdev.h |   5 +
>  2 files changed, 244 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index c98aa0d..1c857fc 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -468,6 +468,11 @@ ixgbe_flow_validate(__rte_unused struct
> rte_eth_dev *dev,
>  		const struct rte_flow_item pattern[],
>  		const struct rte_flow_action actions[],
>  		struct rte_flow_error *error);
> +static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
> +		const struct rte_flow_attr *attr,
> +		const struct rte_flow_item pattern[],
> +		const struct rte_flow_action actions[],
> +		struct rte_flow_error *error);
>  static int ixgbe_flow_flush(struct rte_eth_dev *dev,
>  		struct rte_flow_error *error);
>  /*
> @@ -850,11 +855,55 @@ static const struct rte_ixgbe_xstats_name_off
> rte_ixgbevf_stats_strings[] = {
>  		sizeof(rte_ixgbevf_stats_strings[0]))
>  static const struct rte_flow_ops ixgbe_flow_ops = {
>  	ixgbe_flow_validate,
> -	NULL,
> +	ixgbe_flow_create,
>  	NULL,
>  	ixgbe_flow_flush,
>  	NULL,
>  };
> +/* ntuple filter list structure */
> +struct ixgbe_ntuple_filter_ele {
> +	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
> +	struct rte_eth_ntuple_filter filter_info; };
> +/* ethertype filter list structure */
> +struct ixgbe_ethertype_filter_ele {
> +	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
> +	struct rte_eth_ethertype_filter filter_info; };
> +/* syn filter list structure */
> +struct ixgbe_eth_syn_filter_ele {
> +	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
> +	struct rte_eth_syn_filter filter_info; };
> +/* fdir filter list structure */
> +struct ixgbe_fdir_rule_ele {
> +	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
> +	struct ixgbe_fdir_rule filter_info;
> +};
> +/* l2_tunnel filter list structure */
> +struct ixgbe_eth_l2_tunnel_conf_ele {
> +	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
> +	struct rte_eth_l2_tunnel_conf filter_info; };
> +/* ixgbe_flow memory list structure */

Why do we need to define these structures? There are already structures to store the filters, aren't there?

> +struct ixgbe_flow_mem {
> +	TAILQ_ENTRY(ixgbe_flow_mem) entries;
> +	struct ixgbe_flow *flow;
> +};
> +
> +TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele); struct
> +ixgbe_ntuple_filter_list filter_ntuple_list;
> +TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
> +struct ixgbe_ethertype_filter_list filter_ethertype_list;
> +TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele); struct
> +ixgbe_syn_filter_list filter_syn_list;
> +TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele); struct
> +ixgbe_fdir_rule_filter_list filter_fdir_list;
> +TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
> +struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
> +TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem); struct
> +ixgbe_flow_mem_list ixgbe_flow_list;
> +
>  /**
>   * Atomically reads the link status information from global
>   * structure rte_eth_dev.
> @@ -1380,6 +1429,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> 
>  	/* initialize l2 tunnel filter list & hash */
>  	ixgbe_l2_tn_filter_init(eth_dev);
> +
> +	TAILQ_INIT(&filter_ntuple_list);
> +	TAILQ_INIT(&filter_ethertype_list);
> +	TAILQ_INIT(&filter_syn_list);
> +	TAILQ_INIT(&filter_fdir_list);
> +	TAILQ_INIT(&filter_l2_tunnel_list);
> +	TAILQ_INIT(&ixgbe_flow_list);
> +
>  	return 0;
>  }
> 

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 15/18] net/ixgbe: parse flow director filter
  2017-01-02 15:24   ` Xing, Beilei
@ 2017-01-03  3:05     ` Zhao1, Wei
  2017-01-03  3:19     ` Zhao1, Wei
  1 sibling, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-03  3:05 UTC (permalink / raw)
  To: Xing, Beilei, dev; +Cc: Lu, Wenzhuo

Hi, Beilei

> -----Original Message-----
> From: Xing, Beilei
> Sent: Monday, January 2, 2017 11:24 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 15/18] net/ixgbe: parse flow director filter
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 15/18] net/ixgbe: parse flow director
> > filter
> >
> > check if the rule is a flow director rule, and get the flow director info.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:add new error set function
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 1467
> > +++++++++++++++++++++++++++++++++-----
> >  drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
> >  drivers/net/ixgbe/ixgbe_fdir.c   |  247 ++++---
> >  lib/librte_ether/rte_flow.h      |   23 +
> >  4 files changed, 1495 insertions(+), 258 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 0a5ac4f..c98aa0d 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -438,6 +438,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev
> *dev,
> >                                          struct rte_eth_l2_tunnel_conf *rule,
> >                                          struct rte_flow_error *error);
> >  static int
> > +ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
> > +                                      const struct rte_flow_attr *attr,
> > +                                      const struct rte_flow_item pattern[],
> > +                                      const struct rte_flow_action actions[],
> > +                                      struct ixgbe_fdir_rule *rule,
> > +                                      struct rte_flow_error *error);
> > +static int
> > +ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
> > +                      const struct rte_flow_item pattern[],
> > +                      const struct rte_flow_action actions[],
> > +                      struct ixgbe_fdir_rule *rule,
> > +                      struct rte_flow_error *error);
> > +static int
> > +ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> > +                      const struct rte_flow_item pattern[],
> > +                      const struct rte_flow_action actions[],
> > +                      struct ixgbe_fdir_rule *rule,
> > +                      struct rte_flow_error *error);
> > +static int
> > +ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
> > +                      const struct rte_flow_item pattern[],
> > +                      const struct rte_flow_action actions[],
> > +                      struct ixgbe_fdir_rule *rule,
> > +                      struct rte_flow_error *error);
> > +static int
> >  ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
> >                          const struct rte_flow_attr *attr,
> >                          const struct rte_flow_item pattern[], @@ -1475,6 +1500,8
> @@ static
> > int ixgbe_fdir_filter_init(struct rte_eth_dev
> > *eth_dev)
> >                                               "Failed to allocate memory for fdir hash map!");
> >                          return -ENOMEM;
> >          }
> > +      fdir_info->mask_added = FALSE;
> > +
> >          return 0;
> >  }
> >
> > @@ -8951,117 +8978,22 @@ ixgbe_parse_syn_filter(const struct
> > rte_flow_attr *attr,
> >          return 0;
> >  }
> >
> > -/**
> > - * Parse the rule to see if it is a L2 tunnel rule.
> > - * And get the L2 tunnel filter info BTW.
> > - * Only support E-tag now.
> > - */
> > +/* Parse to get the attr and action info of flow director rule. */
> >  static int
> > -cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
> > -                                       const struct rte_flow_item pattern[],
> > -                                       const struct rte_flow_action actions[],
> > -                                       struct rte_eth_l2_tunnel_conf *filter,
> > -                                       struct rte_flow_error *error)
>
> Why do you remove functions added in patch 14/18? If the functions are not
> needed, how about changing patch 14/18?

This patch is generated in a very complicated way.
If you go on reading this patch, you will find the following statement; the patch adds the functions back by itself later on.
So git seems to generate the patch in a peculiar way.

+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+                                             const struct rte_flow_item pattern[],
+                                             const struct rte_flow_action actions[],
+                                             struct rte_eth_l2_tunnel_conf *filter,
+                                             struct rte_flow_error *error)

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 16/18] net/ixgbe: create consistent filter
       [not found]   ` <94479800C636CB44BD422CB454846E013158D036@SHSMSX101.ccr.corp.intel.com>
@ 2017-01-03  3:09     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-03  3:09 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev


Hi, Beilei
> -----Original Message-----
> From: Xing, Beilei
> Sent: Tuesday, January 3, 2017 10:34 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent filter
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent
> > filter
> >
> > This patch adds a function to create the flow director filter.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:
> > --add new error set function
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 240
> > ++++++++++++++++++++++++++++++++++++++-
> >  drivers/net/ixgbe/ixgbe_ethdev.h |   5 +
> >  2 files changed, 244 insertions(+), 1 deletion(-)
> >
> > @@ -1380,6 +1429,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev
> *eth_dev)
> >
> >  	/* initialize l2 tunnel filter list & hash */
> >  	ixgbe_l2_tn_filter_init(eth_dev);
> > +
> > +	TAILQ_INIT(&filter_ntuple_list);
> > +	TAILQ_INIT(&filter_ethertype_list);
> > +	TAILQ_INIT(&filter_syn_list);
> > +	TAILQ_INIT(&filter_fdir_list);
> > +	TAILQ_INIT(&filter_l2_tunnel_list);
> > +	TAILQ_INIT(&ixgbe_flow_list);
> 
> You need to destroy these lists in the dev_uninit function.
> 
> > +
> >  	return 0;
> >  }
> >

Yes, I have added a function ixgbe_filterlist_flush() in patch 18/18 to flush these lists in dev_uninit.
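
Roughly, the flush walks each list and frees the entries, something like this
(a sketch using the names from this series, not the exact 18/18 code):

	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;

	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
		TAILQ_REMOVE(&filter_ntuple_list, ntuple_filter_ptr, entries);
		rte_free(ntuple_filter_ptr);
	}
	/* ... same pattern for the other filter lists and ixgbe_flow_list */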

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 16/18] net/ixgbe: create consistent filter
  2017-01-03  2:04   ` Xing, Beilei
@ 2017-01-03  3:11     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-03  3:11 UTC (permalink / raw)
  To: Xing, Beilei, dev; +Cc: Lu, Wenzhuo

Hi, Beilei

> -----Original Message-----
> From: Xing, Beilei
> Sent: Tuesday, January 3, 2017 10:04 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent filter
> 
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent
> > filter
> >
> > This patch adds a function to create the flow director filter.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:
> > --add new error set function
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 240
> > ++++++++++++++++++++++++++++++++++++++-
> >  drivers/net/ixgbe/ixgbe_ethdev.h |   5 +
> >  2 files changed, 244 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index c98aa0d..1c857fc 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -468,6 +468,11 @@ ixgbe_flow_validate(__rte_unused struct
> > rte_eth_dev *dev,
> >  		const struct rte_flow_item pattern[],
> >  		const struct rte_flow_action actions[],
> >  		struct rte_flow_error *error);
> > +static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
> > +		const struct rte_flow_attr *attr,
> > +		const struct rte_flow_item pattern[],
> > +		const struct rte_flow_action actions[],
> > +		struct rte_flow_error *error);
> >  static int ixgbe_flow_flush(struct rte_eth_dev *dev,
> >  		struct rte_flow_error *error);
> >  /*
> > @@ -850,11 +855,55 @@ static const struct rte_ixgbe_xstats_name_off
> > rte_ixgbevf_stats_strings[] = {
> >  		sizeof(rte_ixgbevf_stats_strings[0]))
> >  static const struct rte_flow_ops ixgbe_flow_ops = {
> >  	ixgbe_flow_validate,
> > -	NULL,
> > +	ixgbe_flow_create,
> >  	NULL,
> >  	ixgbe_flow_flush,
> >  	NULL,
> >  };
> > +/* ntuple filter list structure */
> > +struct ixgbe_ntuple_filter_ele {
> > +	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
> > +	struct rte_eth_ntuple_filter filter_info; };
> > +/* ethertype filter list structure */ struct
> > +ixgbe_ethertype_filter_ele {
> > +	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
> > +	struct rte_eth_ethertype_filter filter_info; };
> > +/* syn filter list structure */
> > +struct ixgbe_eth_syn_filter_ele {
> > +	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
> > +	struct rte_eth_syn_filter filter_info; };
> > +/* fdir filter list structure */
> > +struct ixgbe_fdir_rule_ele {
> > +	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
> > +	struct ixgbe_fdir_rule filter_info;
> > +};
> > +/* l2_tunnel filter list structure */ struct
> > +ixgbe_eth_l2_tunnel_conf_ele {
> > +	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
> > +	struct rte_eth_l2_tunnel_conf filter_info; };
> > +/* ixgbe_flow memory list structure */
> 
> Why do we need to define these structures? There are already structures to
> store the filters, aren't there?
> 

These lists are created to store the flow parameters, which is required by the rte layer.
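
For example, on rte_flow_destroy() the flow handle has to be mapped back to
the stored parameters before the HW entry can be removed; a rough sketch for
the ntuple case:

	ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)flow->rule;
	ret = ixgbe_add_del_ntuple_filter(dev,
			&ntuple_filter_ptr->filter_info, FALSE);
	if (!ret) {
		TAILQ_REMOVE(&filter_ntuple_list, ntuple_filter_ptr, entries);
		rte_free(ntuple_filter_ptr);
	}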
 
> > +struct ixgbe_flow_mem {
> > +	TAILQ_ENTRY(ixgbe_flow_mem) entries;
> > +	struct ixgbe_flow *flow;
> > +};
> > +
> > +TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele); struct
> > +ixgbe_ntuple_filter_list filter_ntuple_list;
> > +TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
> > +struct ixgbe_ethertype_filter_list filter_ethertype_list;
> > +TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele); struct
> > +ixgbe_syn_filter_list filter_syn_list;
> > +TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele); struct
> > +ixgbe_fdir_rule_filter_list filter_fdir_list;
> > +TAILQ_HEAD(ixgbe_l2_tunnel_filter_list,
> > +ixgbe_eth_l2_tunnel_conf_ele); struct ixgbe_l2_tunnel_filter_list
> > +filter_l2_tunnel_list; TAILQ_HEAD(ixgbe_flow_mem_list,
> > +ixgbe_flow_mem); struct ixgbe_flow_mem_list ixgbe_flow_list;
> > +
> >  /**
> >   * Atomically reads the link status information from global
> >   * structure rte_eth_dev.
> > @@ -1380,6 +1429,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev
> *eth_dev)
> >
> >  	/* initialize l2 tunnel filter list & hash */
> >  	ixgbe_l2_tn_filter_init(eth_dev);
> > +
> > +	TAILQ_INIT(&filter_ntuple_list);
> > +	TAILQ_INIT(&filter_ethertype_list);
> > +	TAILQ_INIT(&filter_syn_list);
> > +	TAILQ_INIT(&filter_fdir_list);
> > +	TAILQ_INIT(&filter_l2_tunnel_list);
> > +	TAILQ_INIT(&ixgbe_flow_list);
> > +
> >  	return 0;
> >  }
> >

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 02/18] net/ixgbe: store flow director filter
  2017-01-02  9:59   ` Xing, Beilei
@ 2017-01-03  3:14     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-03  3:14 UTC (permalink / raw)
  To: Xing, Beilei, dev; +Cc: Lu, Wenzhuo

Hi, Beilei

> -----Original Message-----
> From: Xing, Beilei
> Sent: Monday, January 2, 2017 5:59 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director filter
> 
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> > <wei.zhao1@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director
> > filter
> >
> > Add support for storing flow director filter in SW.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >
> > v2:
> > --add a fdir initialization function in device start process
> > ---
> 
> >
> > +static inline struct ixgbe_fdir_filter *
> > +ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
> > +			 union ixgbe_atr_input *key)
> > +{
> > +	int ret = 0;
> 
> Needless initialization here since ret will be set below.
> 

Ok, I will make this change in v3.
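
I.e. simply:

	int ret;

	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);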

> > +	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
> > +	if (ret < 0)
> > +		return NULL;
> > +
> > +	return fdir_info->hash_map[ret];
> > +}
> > +
> > +static inline int
> > +ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> > +			 struct ixgbe_fdir_filter *fdir_filter) {
> > +	int ret = 0;
> 
> Same above.
> 
> > +
> > +	ret = rte_hash_add_key(fdir_info->hash_handle,
> > +			       &fdir_filter->ixgbe_fdir);
> > +
> > +	if (ret < 0) {
> > +		PMD_DRV_LOG(ERR,
> > +			    "Failed to insert fdir filter to hash table %d!",
> > +			    ret);
> > +		return ret;
> > +	}
> > +
> > +	fdir_info->hash_map[ret] = fdir_filter;
> > +
> > +	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
> > +
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> > +			 union ixgbe_atr_input *key)
> > +{
> > +	int ret = 0;
> 
> Same above.
> 

Ok, I will make this change in v3.

> > +	struct ixgbe_fdir_filter *fdir_filter;
> > +
> > +	ret = rte_hash_del_key(fdir_info->hash_handle, key);
> > +
> > +	if (ret < 0) {
> > +		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
> > +		return ret;
> > +	}
> > +
> > +	fdir_filter = fdir_info->hash_map[ret];
> > +	fdir_info->hash_map[ret] = NULL;
> > +
> > +	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
> > +	rte_free(fdir_filter);
> > +
> > +	return 0;
> > +}
> > +
> >  /*
> >   * ixgbe_add_del_fdir_filter - add or remove a flow director filter.
> >   * @dev: pointer to the structure rte_eth_dev @@ -1098,6 +1158,8 @@
> > ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
> >  	struct ixgbe_hw_fdir_info *info =
> >  		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data-
> >dev_private);
> >  	enum rte_fdir_mode fdir_mode = dev->data-
> >dev_conf.fdir_conf.mode;
> > +	struct ixgbe_fdir_filter *node;
> > +	bool add_node = FALSE;
> >
> >  	if (fdir_mode == RTE_FDIR_MODE_NONE)
> >  		return -ENOTSUP;
> > @@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev
> *dev,
> >  						      dev->data-
> >dev_conf.fdir_conf.pballoc);
> >
> >  	if (del) {
> > +		err = ixgbe_remove_fdir_filter(info, &input);
> > +		if (err < 0)
> > +			return err;
> > +
> >  		err = fdir_erase_filter_82599(hw, fdirhash);
> >  		if (err < 0)
> >  			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
> @@ -1172,6
> > +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
> >  	else
> >  		return -EINVAL;
> >
> > +	node = ixgbe_fdir_filter_lookup(info, &input);
> > +	if (node) {
> > +		if (update) {
> > +			node->fdirflags = fdircmd_flags;
> > +			node->fdirhash = fdirhash;
> > +			node->queue = queue;
> > +		} else {
> > +			PMD_DRV_LOG(ERR, "Conflict with existing fdir
> filter!");
> > +			return -EINVAL;
> > +		}
> > +	} else {
> > +		add_node = TRUE;
> > +		node = rte_zmalloc("ixgbe_fdir",
> > +				   sizeof(struct ixgbe_fdir_filter),
> > +				   0);
> > +		if (!node)
> > +			return -ENOMEM;
> > +		(void)rte_memcpy(&node->ixgbe_fdir,
> > +				 &input,
> > +				 sizeof(union ixgbe_atr_input));
> > +		node->fdirflags = fdircmd_flags;
> > +		node->fdirhash = fdirhash;
> > +		node->queue = queue;
> > +
> > +		err = ixgbe_insert_fdir_filter(info, node);
> > +		if (err < 0) {
> > +			rte_free(node);
> > +			return err;
> > +		}
> > +	}
> > +
> >  	if (is_perfect) {
> >  		err = fdir_write_perfect_filter_82599(hw, &input, queue,
> >  						      fdircmd_flags, fdirhash,
> > @@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev
> > *dev,
> >  		err = fdir_add_signature_filter_82599(hw, &input, queue,
> >  						      fdircmd_flags, fdirhash);
> >  	}
> > -	if (err < 0)
> > +	if (err < 0) {
> >  		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
> > -	else
> > +
> > +		if (add_node)
> > +			(void)ixgbe_remove_fdir_filter(info, &input);
> > +	} else {
> >  		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
> > +	}
> >
> >  	return err;
> >  }
> > --
> > 2.5.5


^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 15/18] net/ixgbe: parse flow director filter
  2017-01-02 15:24   ` Xing, Beilei
  2017-01-03  3:05     ` Zhao1, Wei
@ 2017-01-03  3:19     ` Zhao1, Wei
  1 sibling, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-03  3:19 UTC (permalink / raw)
  To: Xing, Beilei, dev; +Cc: Lu, Wenzhuo

Hi, Beilei

> -----Original Message-----
> From: Xing, Beilei
> Sent: Monday, January 2, 2017 11:24 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 15/18] net/ixgbe: parse flow director filter
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 15/18] net/ixgbe: parse flow director
> > filter
> >
> > check if the rule is a flow director rule, and get the flow director info.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:add new error set function
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 1467
> > +++++++++++++++++++++++++++++++++-----
> >  drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
> >  drivers/net/ixgbe/ixgbe_fdir.c   |  247 ++++---
> >  lib/librte_ether/rte_flow.h      |   23 +
> >  4 files changed, 1495 insertions(+), 258 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 0a5ac4f..c98aa0d 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -438,6 +438,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev
> *dev,
> >  			struct rte_eth_l2_tunnel_conf *rule,
> >  			struct rte_flow_error *error);
> >  static int
> > +ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
> > +			const struct rte_flow_attr *attr,
> > +			const struct rte_flow_item pattern[],
> > +			const struct rte_flow_action actions[],
> > +			struct ixgbe_fdir_rule *rule,
> > +			struct rte_flow_error *error);
> > +static int
> > +ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
> > +		const struct rte_flow_item pattern[],
> > +		const struct rte_flow_action actions[],
> > +		struct ixgbe_fdir_rule *rule,
> > +		struct rte_flow_error *error);
> > +static int
> > +ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> > +		const struct rte_flow_item pattern[],
> > +		const struct rte_flow_action actions[],
> > +		struct ixgbe_fdir_rule *rule,
> > +		struct rte_flow_error *error);
> > +static int
> > +ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
> > +		const struct rte_flow_item pattern[],
> > +		const struct rte_flow_action actions[],
> > +		struct ixgbe_fdir_rule *rule,
> > +		struct rte_flow_error *error);
> > +static int
> >  ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
> >  		const struct rte_flow_attr *attr,
> >  		const struct rte_flow_item pattern[], @@ -1475,6 +1500,8
> @@ static
> > int ixgbe_fdir_filter_init(struct rte_eth_dev
> > *eth_dev)
> >  			     "Failed to allocate memory for fdir hash map!");
> >  		return -ENOMEM;
> >  	}
> > +	fdir_info->mask_added = FALSE;
> > +
> >  	return 0;
> >  }
> >
> > @@ -8951,117 +8978,22 @@ ixgbe_parse_syn_filter(const struct
> > rte_flow_attr *attr,
> >  	return 0;
> >  }
> >
> > -/**
> > - * Parse the rule to see if it is a L2 tunnel rule.
> > - * And get the L2 tunnel filter info BTW.
> > - * Only support E-tag now.
> > - */
> > +/* Parse to get the attr and action info of flow director rule. */
> >  static int
> > -cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
> > -			const struct rte_flow_item pattern[],
> > -			const struct rte_flow_action actions[],
> > -			struct rte_eth_l2_tunnel_conf *filter,
> > -			struct rte_flow_error *error)
> 
> Why do you remove functions added in patch 14/18? If the functions are not
> needed, how about changing patch 14/18?

This patch is generated in a very complicated way.
If you go on reading this patch, you will find the following statement; the patch adds the functions back by itself later on.
So git seems to generate the patch in a peculiar way.
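
By the way, regenerating the series with a different diff heuristic usually
keeps whole functions together and avoids this kind of confusing hunk, e.g.
(untested here):

	git format-patch --diff-algorithm=patience -18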

+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+                                             const struct rte_flow_item pattern[],
+                                             const struct rte_flow_action actions[],
+                                             struct rte_eth_l2_tunnel_conf *filter,
+                                             struct rte_flow_error *error)

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 16/18] net/ixgbe: create consistent filter
  2016-12-30  7:53 ` [PATCH v2 16/18] net/ixgbe: create consistent filter Wei Zhao
  2017-01-03  2:04   ` Xing, Beilei
       [not found]   ` <94479800C636CB44BD422CB454846E013158D036@SHSMSX101.ccr.corp.intel.com>
@ 2017-01-03  5:24   ` Xing, Beilei
  2017-01-06 17:26   ` Ferruh Yigit
  3 siblings, 0 replies; 149+ messages in thread
From: Xing, Beilei @ 2017-01-03  5:24 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Zhao1, Wei, Lu, Wenzhuo



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent filter
> 
> This patch adds a function to create the flow director filter.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> 
> +/**
> + * Create or destroy a flow rule.
> + * Theoretically one rule can match more than one filter.
> + * We will let it use the filter which it hits first.
> + * So, the sequence matters.
> + */
> +static struct rte_flow *
> +ixgbe_flow_create(struct rte_eth_dev *dev,
> +		  const struct rte_flow_attr *attr,
> +		  const struct rte_flow_item pattern[],
> +		  const struct rte_flow_action actions[],
> +		  struct rte_flow_error *error)
> +{
> +	int ret;
> +	struct rte_eth_ntuple_filter ntuple_filter;
> +	struct rte_eth_ethertype_filter ethertype_filter;
> +	struct rte_eth_syn_filter syn_filter;
> +	struct ixgbe_fdir_rule fdir_rule;
> +	struct rte_eth_l2_tunnel_conf l2_tn_filter;
> +	struct ixgbe_hw_fdir_info *fdir_info =
> +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data-
> >dev_private);
> +	struct ixgbe_flow *flow = NULL;
> +	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
> +	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
> +	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
> +	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
> +	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
> +	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
> +
> +	flow = rte_zmalloc("ixgbe_flow", sizeof(struct ixgbe_flow), 0);
> +	if (!flow) {
> +		PMD_DRV_LOG(ERR, "failed to allocate memory");
> +		return (struct rte_flow *)flow;
> +	}
> +	ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
> +			sizeof(struct ixgbe_flow_mem), 0);
> +	if (!ixgbe_flow_mem_ptr) {
> +		PMD_DRV_LOG(ERR, "failed to allocate memory");
> +		return NULL;
> +	}
> +	ixgbe_flow_mem_ptr->flow = flow;
> +	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
> +				ixgbe_flow_mem_ptr, entries);
> +
> +	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +	ret = ixgbe_parse_ntuple_filter(attr, pattern,
> +			actions, &ntuple_filter, error);
> +	if (!ret) {
> +		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
> +		if (!ret) {
> +			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
> +				sizeof(struct ixgbe_ntuple_filter_ele), 0);
> +			(void)rte_memcpy(&ntuple_filter_ptr->filter_info,
> +				&ntuple_filter,
> +				sizeof(struct rte_eth_ntuple_filter));
> +			TAILQ_INSERT_TAIL(&filter_ntuple_list,
> +				ntuple_filter_ptr, entries);
> +			flow->rule = ntuple_filter_ptr;
> +			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
> +		}
> +		return (struct rte_flow *)flow;

Should the return statement be inside the parentheses above?
And is there any handling if (ret < 0)?
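
I.e. roughly (a sketch, not the final code):

	if (!ret) {
		/* ... store the filter in filter_ntuple_list ... */
		flow->rule = ntuple_filter_ptr;
		flow->filter_type = RTE_ETH_FILTER_NTUPLE;
		return (struct rte_flow *)flow;
	}
	/* HW add failed: free flow and its list entry, then return NULL */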

> +	}
> +
> +	memset(&ethertype_filter, 0, sizeof(struct
> rte_eth_ethertype_filter));
> +	ret = ixgbe_parse_ethertype_filter(attr, pattern,
> +				actions, &ethertype_filter, error);
> +	if (!ret) {
> +		ret = ixgbe_add_del_ethertype_filter(dev,
> +				&ethertype_filter, TRUE);
> +		if (!ret) {
> +			ethertype_filter_ptr = rte_zmalloc(
> +				"ixgbe_ethertype_filter",
> +				sizeof(struct ixgbe_ethertype_filter_ele), 0);
> +			(void)rte_memcpy(&ethertype_filter_ptr-
> >filter_info,
> +				&ethertype_filter,
> +				sizeof(struct rte_eth_ethertype_filter));
> +			TAILQ_INSERT_TAIL(&filter_ethertype_list,
> +				ethertype_filter_ptr, entries);
> +			flow->rule = ethertype_filter_ptr;
> +			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
> +		}
> +		return (struct rte_flow *)flow;

Same above.

> +	}
> +
> +	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
> +	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter, error);
> +	if (!ret) {
> +		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
> +		if (!ret) {
> +			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
> +				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
> +			(void)rte_memcpy(&syn_filter_ptr->filter_info,
> +				&syn_filter,
> +				sizeof(struct rte_eth_syn_filter));
> +			TAILQ_INSERT_TAIL(&filter_syn_list,
> +				syn_filter_ptr,
> +				entries);
> +			flow->rule = syn_filter_ptr;
> +			flow->filter_type = RTE_ETH_FILTER_SYN;
> +		}
> +		return (struct rte_flow *)flow;

Same above.

> +	}
> +
> +	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
> +	ret = ixgbe_parse_fdir_filter(attr, pattern,
> +				actions, &fdir_rule, error);
> +	if (!ret) {
> +		/* A mask cannot be deleted. */
> +		if (fdir_rule.b_mask) {
> +			if (!fdir_info->mask_added) {
> +				/* It's the first time the mask is set. */
> +				rte_memcpy(&fdir_info->mask,
> +					&fdir_rule.mask,
> +					sizeof(struct ixgbe_hw_fdir_mask));
> +				ret = ixgbe_fdir_set_input_mask(dev);
> +				if (ret)
> +					return NULL;

Before returning NULL, ixgbe_flow_mem_ptr and flow should be freed, right?
And ixgbe_flow_mem_ptr should be removed from ixgbe_flow_list. Or you can move the TAILQ_INSERT_TAIL(&ixgbe_flow_list, ixgbe_flow_mem_ptr, entries) to after a flow is added successfully.
Same comment applies to the "return NULL" cases below.
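
Something along these lines (rough sketch, untested):

	if (ret) {
		TAILQ_REMOVE(&ixgbe_flow_list, ixgbe_flow_mem_ptr, entries);
		rte_free(ixgbe_flow_mem_ptr);
		rte_free(flow);
		return NULL;
	}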

> +
> +				fdir_info->mask_added = TRUE;
> +			} else {
> +				/**
> +				 * Only support one global mask,
> +				 * all the masks should be the same.
> +				 */
> +				ret = memcmp(&fdir_info->mask,
> +					&fdir_rule.mask,
> +					sizeof(struct ixgbe_hw_fdir_mask));
> +				if (ret)
> +					return NULL;
> +			}
> +		}
> +
> +		if (fdir_rule.b_spec) {
> +			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
> +					FALSE, FALSE);
> +			if (!ret) {
> +				fdir_rule_ptr =
> rte_zmalloc("ixgbe_fdir_filter",
> +					sizeof(struct ixgbe_fdir_rule_ele), 0);
> +				(void)rte_memcpy(&fdir_rule_ptr-
> >filter_info,
> +					&fdir_rule,
> +					sizeof(struct ixgbe_fdir_rule));
> +				TAILQ_INSERT_TAIL(&filter_fdir_list,
> +					fdir_rule_ptr, entries);
> +				flow->rule = fdir_rule_ptr;
> +				flow->filter_type = RTE_ETH_FILTER_FDIR;
> +
> +				return (struct rte_flow *)flow;
> +			}
> +
> +			if (ret)
> +				return NULL;
> +		}
> +
> +		return NULL;

Why return NULL here?

> +	}
> +
> +	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
> +	ret = cons_parse_l2_tn_filter(attr, pattern,
> +					actions, &l2_tn_filter, error);
> +	if (!ret) {
> +		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter,
> FALSE);
> +		if (!ret) {
> +			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
> +				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele),
> 0);
> +			(void)rte_memcpy(&l2_tn_filter_ptr->filter_info,
> +				&l2_tn_filter,
> +				sizeof(struct rte_eth_l2_tunnel_conf));
> +			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
> +				l2_tn_filter_ptr, entries);
> +			flow->rule = l2_tn_filter_ptr;
> +			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
> +
> +			return (struct rte_flow *)flow;
> +		}

Is there any handling if (ret < 0)?

> +	}
> +
> +	rte_free(ixgbe_flow_mem_ptr);
> +	rte_free(flow);
> +	return NULL;
> +}
> +

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter
  2016-12-30  7:53 ` [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2017-01-03 14:07   ` Adrien Mazarguil
  2017-01-05  3:12     ` Zhao1, Wei
  0 siblings, 1 reply; 149+ messages in thread
From: Adrien Mazarguil @ 2017-01-03 14:07 UTC (permalink / raw)
  To: Wei Zhao; +Cc: dev, Wenzhuo Lu

Hi Wei,

On Fri, Dec 30, 2016 at 03:53:06PM +0800, Wei Zhao wrote:
> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:
> --add new error set function
> --change return value type of parser function
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 269 +++++++++++++++++++++++++++++++++++----
>  lib/librte_ether/rte_flow.h      |  32 +++++
>  2 files changed, 273 insertions(+), 28 deletions(-)
[...]
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index 98084ac..e9e6220 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -268,6 +268,13 @@ enum rte_flow_item_type {
>  	 * See struct rte_flow_item_vxlan.
>  	 */
>  	RTE_FLOW_ITEM_TYPE_VXLAN,
> +
> +	/**
> +	  * Matches a E_TAG header.
> +	  *
> +	  * See struct rte_flow_item_e_tag.
> +	  */
> +	RTE_FLOW_ITEM_TYPE_E_TAG,
>  };
>  
>  /**
> @@ -454,6 +461,31 @@ struct rte_flow_item_vxlan {
>  };
>  
>  /**
> + * RTE_FLOW_ITEM_TYPE_E_TAG.
> + *
> + * Matches a E-tag header.
> + */
> +struct rte_flow_item_e_tag {
> +	struct ether_addr dst; /**< Destination MAC. */
> +	struct ether_addr src; /**< Source MAC. */
> +	uint16_t e_tag_ethertype; /**< E-tag EtherType, 0x893F. */
> +	uint16_t e_pcp:3; /**<  E-PCP */
> +	uint16_t dei:1; /**< DEI */
> +	uint16_t in_e_cid_base:12; /**< Ingress E-CID base */
> +	uint16_t rsv:2; /**< reserved */
> +	uint16_t grp:2; /**< GRP */
> +	uint16_t e_cid_base:12; /**< E-CID base */
> +	uint16_t in_e_cid_ext:8; /**< Ingress E-CID extend */
> +	uint16_t e_cid_ext:8; /**< E-CID extend */
> +	uint16_t type; /**< MAC type. */
> +	unsigned int tags; /**< Number of 802.1Q/ad tags defined. */
> +	struct {
> +		uint16_t tpid; /**< Tag protocol identifier. */
> +		uint16_t tci; /**< Tag control information. */
> +	} tag[]; /**< 802.1Q/ad tag definitions, outermost first. */
> +};
[...]

See my previous reply [1], this definition is not endian-safe and comprises
protocols defined as independent items (namely ETH and VLAN). Here is an
untested suggestion:

 struct rte_flow_item_e_tag {
     uint16_t tpid; /**< Tag protocol identifier (0x893F). */
     /** E-Tag control information (E-TCI). */
     uint16_t epcp_edei_in_ecid_b; /**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
     uint16_t rsvd_grp_ecid_b; /**< Reserved (2b), GRP (2b), E-CID base (12b). */
     uint8_t in_ecid_e; /**< Ingress E-CID ext. */
     uint8_t ecid_e; /**< E-CID ext. */
 };

Applications are responsible for breaking down and filling individual
fields properly. The Ethernet header would be provided as its own item, as shown
in this testpmd flow command example:

 flow create 0 ingress pattern eth / e_tag in_ecid_base is 42 / end actions drop / end

Note, all multibyte values are in network order like other protocol header
definitions.
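
For instance, matching ingress E-CID base 42 could fill the spec along these
lines (untested sketch, using rte_cpu_to_be_16() from rte_byteorder.h):

 struct rte_flow_item_e_tag spec = {
     .tpid = rte_cpu_to_be_16(0x893F),
     /* E-PCP = 0, E-DEI = 0, ingress E-CID base = 42 */
     .epcp_edei_in_ecid_b = rte_cpu_to_be_16(42),
 };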

[1] http://dpdk.org/ml/archives/dev/2016-December/053181.html
    Message ID: 20161223081310.GH10340@6wind.com

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 15/18] net/ixgbe: parse flow director filter
  2016-12-30  7:53 ` [PATCH v2 15/18] net/ixgbe: parse flow director filter Wei Zhao
  2017-01-02 15:24   ` Xing, Beilei
@ 2017-01-03 14:08   ` Adrien Mazarguil
  1 sibling, 0 replies; 149+ messages in thread
From: Adrien Mazarguil @ 2017-01-03 14:08 UTC (permalink / raw)
  To: Wei Zhao; +Cc: dev, Wenzhuo Lu

Hi Wei,

On Fri, Dec 30, 2016 at 03:53:07PM +0800, Wei Zhao wrote:
> check if the rule is a flow director rule, and get the flow director info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:add new error set function
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 1467 +++++++++++++++++++++++++++++++++-----
>  drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
>  drivers/net/ixgbe/ixgbe_fdir.c   |  247 ++++---
>  lib/librte_ether/rte_flow.h      |   23 +
>  4 files changed, 1495 insertions(+), 258 deletions(-)
[...]
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index e9e6220..e59f458 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -275,6 +275,13 @@ enum rte_flow_item_type {
>  	  * See struct rte_flow_item_e_tag.
>  	  */
>  	RTE_FLOW_ITEM_TYPE_E_TAG,
> +
> +	/**
> +	 * Matches a NVGRE header.
> +	 *
> +	 * See struct rte_flow_item_nvgre.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_NVGRE,
>  };
>  
>  /**
> @@ -486,6 +493,22 @@ struct rte_flow_item_e_tag {
>  };
>  
>  /**
> + * RTE_FLOW_ITEM_TYPE_NVGRE.
> + *
> + * Matches a NVGRE header.
> + */
> +struct rte_flow_item_nvgre {
> +	uint32_t flags0:1; /**< 0 */
> +	uint32_t rsvd1:1; /**< 1 bit not defined */
> +	uint32_t flags1:2; /**< 2 bits, 1 0 */
> +	uint32_t rsvd0:9; /**< Reserved0 */
> +	uint32_t ver:3; /**< version */
> +	uint32_t protocol:16; /**< protocol type, 0x6558 */
> +	uint8_t tni[3]; /**< tenant network ID or virtual subnet ID */
> +	uint8_t flow_id; /**< flow ID or Reserved */
> +};
[...]

See my previous reply [1], this definition is not endian-safe due to the use
of bit-fields and should look more like the VXLAN item. Here is an untested
suggestion (not sure about all values):

 struct rte_flow_item_nvgre {
     /**
      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
      * reserved 0 (9b), version (3b).
      *
      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
      */ 
     uint16_t c_k_s_rsvd0_ver;
     uint16_t proto; /**< Protocol type (0x6558). */
     uint8_t vsid[3]; /**< Virtual subnet ID. */
     uint8_t flow_id; /**< Flow ID. */     
 };

Like for E-Tag, applications are responsible for breaking down and filling
individual fields properly.
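
For instance (untested sketch, VSID 42):

 struct rte_flow_item_nvgre spec = {
     .c_k_s_rsvd0_ver = rte_cpu_to_be_16(0x2000),
     .proto = rte_cpu_to_be_16(0x6558),
     .vsid = { 0x00, 0x00, 0x2a },
 };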

[1] http://dpdk.org/ml/archives/dev/2016-December/053181.html
    Message ID: 20161223081310.GH10340@6wind.com

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 02/18] net/ixgbe: store flow director filter
  2016-12-30  7:52 ` [PATCH v2 02/18] net/ixgbe: store flow director filter Wei Zhao
  2017-01-02  9:59   ` Xing, Beilei
@ 2017-01-03 14:28   ` Dai, Wei
  2017-01-04  2:03     ` Zhao1, Wei
  2017-01-06 16:31   ` Ferruh Yigit
  2 siblings, 1 reply; 149+ messages in thread
From: Dai, Wei @ 2017-01-03 14:28 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Lu, Wenzhuo, Zhao1, Wei

Hi, Wei Zhao

Would you please rebase this patch set on master?
When I do git pull and then git am this patch set, the following errors are reported:
[root@dpdk4 dpdk-org]# git am ../patches/bundle-488-zhaowei-ixgbe-filter-api-v2.mbox

Applying: net/ixgbe: store SYN filter
Applying: net/ixgbe: store flow director filter
error: patch failed: drivers/net/ixgbe/ixgbe_ethdev.c:1284
error: drivers/net/ixgbe/ixgbe_ethdev.c: patch does not apply
Patch failed at 0002 net/ixgbe: store flow director filter
The copy of the patch that failed is found in: .git/rebase-apply/patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> Subject: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director filter
> 
> Add support for storing flow director filter in SW.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
> 
> v2:
> --add a fdir initialization function in device start process
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |  55 ++++++++++++++++++++
> drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
>  drivers/net/ixgbe/ixgbe_fdir.c   | 105
> ++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 176 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 316e560..de27a73 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -60,6 +60,7 @@
>  #include <rte_malloc.h>
>  #include <rte_random.h>
>  #include <rte_dev.h>
> +#include <rte_hash_crc.h>
> 
>  #include "ixgbe_logs.h"
>  #include "base/ixgbe_api.h"
> @@ -165,6 +166,7 @@ enum ixgbevf_xcast_modes {
> 
>  static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);  static int
> eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
> +static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
>  static int  ixgbe_dev_configure(struct rte_eth_dev *dev);  static int
> ixgbe_dev_start(struct rte_eth_dev *dev);  static void ixgbe_dev_stop(struct
> rte_eth_dev *dev); @@ -1276,6 +1278,9 @@ eth_ixgbe_dev_init(struct
> rte_eth_dev *eth_dev)
> 
>  	/* initialize SYN filter */
>  	filter_info->syn_info = 0;
> +	/* initialize flow director filter list & hash */
> +	ixgbe_fdir_filter_init(eth_dev);
> +
>  	return 0;
>  }
> 
> @@ -1284,6 +1289,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
> {
>  	struct rte_pci_device *pci_dev;
>  	struct ixgbe_hw *hw;
> +	struct ixgbe_hw_fdir_info *fdir_info =
> +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
> +	struct ixgbe_fdir_filter *fdir_filter;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> @@ -1317,9 +1325,56 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev
> *eth_dev)
>  	rte_free(eth_dev->data->hash_mac_addrs);
>  	eth_dev->data->hash_mac_addrs = NULL;
> 
> +	/* remove all the fdir filters & hash */
> +	if (fdir_info->hash_map)
> +		rte_free(fdir_info->hash_map);
> +	if (fdir_info->hash_handle)
> +		rte_hash_free(fdir_info->hash_handle);
> +
> +	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
> +		TAILQ_REMOVE(&fdir_info->fdir_list,
> +			     fdir_filter,
> +			     entries);
> +		rte_free(fdir_filter);
> +	}
> +
>  	return 0;
>  }
> 
> +static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev) {
> +	struct ixgbe_hw_fdir_info *fdir_info =
> +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
> +	char fdir_hash_name[RTE_HASH_NAMESIZE];
> +	struct rte_hash_parameters fdir_hash_params = {
> +		.name = fdir_hash_name,
> +		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
> +		.key_len = sizeof(union ixgbe_atr_input),
> +		.hash_func = rte_hash_crc,
> +		.hash_func_init_val = 0,
> +		.socket_id = rte_socket_id(),
> +	};
> +
> +	TAILQ_INIT(&fdir_info->fdir_list);
> +	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
> +		 "fdir_%s", eth_dev->data->name);
> +	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
> +	if (!fdir_info->hash_handle) {
> +		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
> +		return -EINVAL;
> +	}
> +	fdir_info->hash_map = rte_zmalloc("ixgbe",
> +					  sizeof(struct ixgbe_fdir_filter *) *
> +					  IXGBE_MAX_FDIR_FILTER_NUM,
> +					  0);
> +	if (!fdir_info->hash_map) {
> +		PMD_INIT_LOG(ERR,
> +			     "Failed to allocate memory for fdir hash map!");
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
>  /*
>   * Negotiate mailbox API version with the PF.
>   * After reset API version is always set to the basic one (ixgbe_mbox_api_10).
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h
> b/drivers/net/ixgbe/ixgbe_ethdev.h
> index 827026c..8310220 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -38,6 +38,7 @@
>  #include "base/ixgbe_dcb_82598.h"
>  #include "ixgbe_bypass.h"
>  #include <rte_time.h>
> +#include <rte_hash.h>
> 
>  /* need update link, bit flag */
>  #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0) @@ -130,10
> +131,11 @@
>  #define IXGBE_MISC_VEC_ID
> RTE_INTR_VEC_ZERO_OFFSET
>  #define IXGBE_RX_VEC_START
> RTE_INTR_VEC_RXTX_OFFSET
> 
> +#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
> +
>  /*
>   * Information about the fdir mode.
>   */
> -
>  struct ixgbe_hw_fdir_mask {
>  	uint16_t vlan_tci_mask;
>  	uint32_t src_ipv4_mask;
> @@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
>  	uint8_t  tunnel_type_mask;
>  };
> 
> +struct ixgbe_fdir_filter {
> +	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
> +	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
> +	uint32_t fdirflags; /* drop or forward */
> +	uint32_t fdirhash; /* hash value for fdir */
> +	uint8_t queue; /* assigned rx queue */ };
> +
> +/* list of fdir filters */
> +TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
> +
>  struct ixgbe_hw_fdir_info {
>  	struct ixgbe_hw_fdir_mask mask;
>  	uint8_t     flex_bytes_offset;
> @@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
>  	uint64_t    remove;
>  	uint64_t    f_add;
>  	uint64_t    f_remove;
> +	struct ixgbe_fdir_filter_list fdir_list; /* filter list*/
> +	/* store the pointers of the filters, index is the hash value. */
> +	struct ixgbe_fdir_filter **hash_map;
> +	struct rte_hash *hash_handle; /* cuckoo hash handler */
>  };
> 
>  /* structure for interrupt relative data */ diff --git
> a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c index
> 4b81ee3..bfcd294 100644
> --- a/drivers/net/ixgbe/ixgbe_fdir.c
> +++ b/drivers/net/ixgbe/ixgbe_fdir.c
> @@ -43,6 +43,7 @@
>  #include <rte_pci.h>
>  #include <rte_ether.h>
>  #include <rte_ethdev.h>
> +#include <rte_malloc.h>
> 
>  #include "ixgbe_logs.h"
>  #include "base/ixgbe_api.h"
> @@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw,
> uint32_t fdirhash)
> 
>  }
> 
> +static inline struct ixgbe_fdir_filter *
> +ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
> +			 union ixgbe_atr_input *key)
> +{
> +	int ret = 0;
> +
> +	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
> +	if (ret < 0)
> +		return NULL;
> +
> +	return fdir_info->hash_map[ret];
> +}
> +
> +static inline int
> +ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> +			 struct ixgbe_fdir_filter *fdir_filter) {
> +	int ret = 0;
> +
> +	ret = rte_hash_add_key(fdir_info->hash_handle,
> +			       &fdir_filter->ixgbe_fdir);
> +
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR,
> +			    "Failed to insert fdir filter to hash table %d!",
> +			    ret);
> +		return ret;
> +	}
> +
> +	fdir_info->hash_map[ret] = fdir_filter;
> +
> +	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
> +
> +	return 0;
> +}
> +
> +static inline int
> +ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> +			 union ixgbe_atr_input *key)
> +{
> +	int ret = 0;
> +	struct ixgbe_fdir_filter *fdir_filter;
> +
> +	ret = rte_hash_del_key(fdir_info->hash_handle, key);
> +
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
> +		return ret;
> +	}
> +
> +	fdir_filter = fdir_info->hash_map[ret];
> +	fdir_info->hash_map[ret] = NULL;
> +
> +	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
> +	rte_free(fdir_filter);
> +
> +	return 0;
> +}
> +
>  /*
> >   * ixgbe_add_del_fdir_filter - add or remove a flow director filter.
>   * @dev: pointer to the structure rte_eth_dev @@ -1098,6 +1158,8 @@
> ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
>  	struct ixgbe_hw_fdir_info *info =
>  		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
>  	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
> +	struct ixgbe_fdir_filter *node;
> +	bool add_node = FALSE;
> 
>  	if (fdir_mode == RTE_FDIR_MODE_NONE)
>  		return -ENOTSUP;
> @@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
>  						      dev->data->dev_conf.fdir_conf.pballoc);
> 
>  	if (del) {
> +		err = ixgbe_remove_fdir_filter(info, &input);
> +		if (err < 0)
> +			return err;
> +
>  		err = fdir_erase_filter_82599(hw, fdirhash);
>  		if (err < 0)
>  			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!"); @@ -1172,6
> +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
>  	else
>  		return -EINVAL;
> 
> +	node = ixgbe_fdir_filter_lookup(info, &input);
> +	if (node) {
> +		if (update) {
> +			node->fdirflags = fdircmd_flags;
> +			node->fdirhash = fdirhash;
> +			node->queue = queue;
> +		} else {
> +			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
> +			return -EINVAL;
> +		}
> +	} else {
> +		add_node = TRUE;
> +		node = rte_zmalloc("ixgbe_fdir",
> +				   sizeof(struct ixgbe_fdir_filter),
> +				   0);
> +		if (!node)
> +			return -ENOMEM;
> +		(void)rte_memcpy(&node->ixgbe_fdir,
> +				 &input,
> +				 sizeof(union ixgbe_atr_input));
> +		node->fdirflags = fdircmd_flags;
> +		node->fdirhash = fdirhash;
> +		node->queue = queue;
> +
> +		err = ixgbe_insert_fdir_filter(info, node);
> +		if (err < 0) {
> +			rte_free(node);
> +			return err;
> +		}
> +	}
> +
>  	if (is_perfect) {
>  		err = fdir_write_perfect_filter_82599(hw, &input, queue,
>  						      fdircmd_flags, fdirhash,
> @@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev
> *dev,
>  		err = fdir_add_signature_filter_82599(hw, &input, queue,
>  						      fdircmd_flags, fdirhash);
>  	}
> -	if (err < 0)
> +	if (err < 0) {
>  		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
> -	else
> +
> +		if (add_node)
> +			(void)ixgbe_remove_fdir_filter(info, &input);
> +	} else {
>  		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
> +	}
> 
>  	return err;
>  }
> --
> 2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 01/18] net/ixgbe: store SYN filter
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
@ 2017-01-03 14:33   ` Dai, Wei
  2017-01-04  1:46     ` Zhao1, Wei
  2017-01-06 16:28   ` Ferruh Yigit
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 149+ messages in thread
From: Dai, Wei @ 2017-01-03 14:33 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: Lu, Wenzhuo, Zhao1, Wei

Hi, Wei Zhao 

I think that you had better provide a cover letter for such a series of patches.
You can describe the changes between v1 and v2 in the cover letter,
so there may be no need to describe them in each patch.
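
For example (one possible invocation, assuming the series is the top 18
commits):

	git format-patch -18 --cover-letter --subject-prefix="PATCH v3"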

Thanks & Best Regards
-Wei

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Friday, December 30, 2016 3:53 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> Subject: [dpdk-dev] [PATCH v2 01/18] net/ixgbe: store SYN filter
> 
> Add support for storing SYN filter in SW.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
> 
> v2:
> --synqf assignment location change
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 14 +++++++++++---
> drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
>  2 files changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index a25bac8..316e560 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -1274,6 +1274,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>  	memset(filter_info->fivetuple_mask, 0,
>  	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
> 
> +	/* initialize SYN filter */
> +	filter_info->syn_info = 0;
>  	return 0;
>  }
> 
> @@ -5580,15 +5582,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
>  			bool add)
>  {
>  	struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ixgbe_filter_info *filter_info =
> +		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
> +	uint32_t syn_info;
>  	uint32_t synqf;
> 
>  	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
>  		return -EINVAL;
> 
> -	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
> +	syn_info = filter_info->syn_info;
> 
>  	if (add) {
> -		if (synqf & IXGBE_SYN_FILTER_ENABLE)
> +		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
>  			return -EINVAL;
>  		synqf = (uint32_t)(((filter->queue <<
> IXGBE_SYN_FILTER_QUEUE_SHIFT) &
>  			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
> @@ -5598,10 +5603,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
>  		else
>  			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
>  	} else {
> -		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
> +		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
> +		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
>  			return -ENOENT;
>  		synqf &= ~(IXGBE_SYN_FILTER_QUEUE |
> IXGBE_SYN_FILTER_ENABLE);
>  	}
> +
> +	filter_info->syn_info = synqf;
>  	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
>  	IXGBE_WRITE_FLUSH(hw);
>  	return 0;
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h
> b/drivers/net/ixgbe/ixgbe_ethdev.h
> index 4ff6338..827026c 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -262,6 +262,8 @@ struct ixgbe_filter_info {
>  	/* Bit mask for every used 5tuple filter */
>  	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
>  	struct ixgbe_5tuple_filter_list fivetuple_list;
> +	/* store the SYN filter info */
> +	uint32_t syn_info;
>  };
> 
>  /*
> --
> 2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 01/18] net/ixgbe: store SYN filter
  2017-01-03 14:33   ` Dai, Wei
@ 2017-01-04  1:46     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-04  1:46 UTC (permalink / raw)
  To: Dai, Wei, dev; +Cc: Lu, Wenzhuo

Hi, Wei Dai

> -----Original Message-----
> From: Dai, Wei
> Sent: Tuesday, January 3, 2017 10:33 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 01/18] net/ixgbe: store SYN filter
> 
> Hi, Wei Zhao
> 
> I think that you had better give a cover letter for such a series of patches.
> You can give the changes between v2 and v1 in cover letter and maybe no
> need describe it in each one.
> 

Ok, I will add a cover letter in v3.

> Thanks &Best Regards
> -Wei
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> > <wei.zhao1@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 01/18] net/ixgbe: store SYN filter
> >
> > Add support for storing SYN filter in SW.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >
> > v2:
> > --synqf assignment location change
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 14 +++++++++++---
> > drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
> >  2 files changed, 13 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index a25bac8..316e560 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -1274,6 +1274,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> >  	memset(filter_info->fivetuple_mask, 0,
> >  	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
> >
> > +	/* initialize SYN filter */
> > +	filter_info->syn_info = 0;
> >  	return 0;
> >  }
> >
> > @@ -5580,15 +5582,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
> >  			bool add)
> >  {
> >  	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > +	struct ixgbe_filter_info *filter_info =
> > +		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
> > +	uint32_t syn_info;
> >  	uint32_t synqf;
> >
> >  	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
> >  		return -EINVAL;
> >
> > -	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
> > +	syn_info = filter_info->syn_info;
> >
> >  	if (add) {
> > -		if (synqf & IXGBE_SYN_FILTER_ENABLE)
> > +		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
> >  			return -EINVAL;
> >  		synqf = (uint32_t)(((filter->queue << IXGBE_SYN_FILTER_QUEUE_SHIFT) &
> >  			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
> > @@ -5598,10 +5603,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
> >  		else
> >  			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
> >  	} else {
> > -		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
> > +		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
> > +		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
> >  			return -ENOENT;
> >  		synqf &= ~(IXGBE_SYN_FILTER_QUEUE | IXGBE_SYN_FILTER_ENABLE);
> >  	}
> > +
> > +	filter_info->syn_info = synqf;
> >  	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
> >  	IXGBE_WRITE_FLUSH(hw);
> >  	return 0;
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> > index 4ff6338..827026c 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> > @@ -262,6 +262,8 @@ struct ixgbe_filter_info {
> >  	/* Bit mask for every used 5tuple filter */
> >  	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
> >  	struct ixgbe_5tuple_filter_list fivetuple_list;
> > +	/* store the SYN filter info */
> > +	uint32_t syn_info;
> >  };
> >
> >  /*
> > --
> > 2.5.5


* Re: [PATCH v2 02/18] net/ixgbe: store flow director filter
  2017-01-03 14:28   ` Dai, Wei
@ 2017-01-04  2:03     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-04  2:03 UTC (permalink / raw)
  To: Dai, Wei, dev; +Cc: Lu, Wenzhuo

Hi, Wei Dai


> -----Original Message-----
> From: Dai, Wei
> Sent: Tuesday, January 3, 2017 10:28 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director filter
> 
> Hi, Wei Zhao
> 
> Would you please rebase this patch set on master?
> When I do git pull and then git apply this patch, the following errors are reported:
> [root@dpdk4 dpdk-org]# git am ../patches/bundle-488-zhaowei-ixgbe-filter-
> api-v2.mbox
> 

This patch set is based on the dpdk-next-net tree.

> Applying: net/ixgbe: store SYN filter
> Applying: net/ixgbe: store flow director filter
> error: patch failed: drivers/net/ixgbe/ixgbe_ethdev.c:1284
> error: drivers/net/ixgbe/ixgbe_ethdev.c: patch does not apply Patch failed at
> 0002 net/ixgbe: store flow director filter The copy of the patch that failed is
> found in: .git/rebase-apply/patch When you have resolved this problem, run
> "git am --continue".
> If you prefer to skip this patch, run "git am --skip" instead.
> To restore the original branch and stop patching, run "git am --abort".
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Friday, December 30, 2016 3:53 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> > <wei.zhao1@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director
> > filter
> >
> > Add support for storing flow director filter in SW.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >
> > v2:
> > --add a fdir initialization function in device start process
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c |  55 ++++++++++++++++++++
> > drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
> >  drivers/net/ixgbe/ixgbe_fdir.c   | 105
> > ++++++++++++++++++++++++++++++++++++++-
> >  3 files changed, 176 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 316e560..de27a73 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -60,6 +60,7 @@
> >  #include <rte_malloc.h>
> >  #include <rte_random.h>
> >  #include <rte_dev.h>
> > +#include <rte_hash_crc.h>
> >
> >  #include "ixgbe_logs.h"
> >  #include "base/ixgbe_api.h"
> > @@ -165,6 +166,7 @@ enum ixgbevf_xcast_modes {
> >
> >  static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
> >  static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
> > +static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
> >  static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
> >  static int  ixgbe_dev_start(struct rte_eth_dev *dev);
> >  static void ixgbe_dev_stop(struct rte_eth_dev *dev);
> > @@ -1276,6 +1278,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> >
> >  	/* initialize SYN filter */
> >  	filter_info->syn_info = 0;
> > +	/* initialize flow director filter list & hash */
> > +	ixgbe_fdir_filter_init(eth_dev);
> > +
> >  	return 0;
> >  }
> >
> > @@ -1284,6 +1289,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
> >  {
> >  	struct rte_pci_device *pci_dev;
> >  	struct ixgbe_hw *hw;
> > +	struct ixgbe_hw_fdir_info *fdir_info =
> > +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
> > +	struct ixgbe_fdir_filter *fdir_filter;
> >
> >  	PMD_INIT_FUNC_TRACE();
> >
> > @@ -1317,9 +1325,56 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
> >  	rte_free(eth_dev->data->hash_mac_addrs);
> >  	eth_dev->data->hash_mac_addrs = NULL;
> >
> > +	/* remove all the fdir filters & hash */
> > +	if (fdir_info->hash_map)
> > +		rte_free(fdir_info->hash_map);
> > +	if (fdir_info->hash_handle)
> > +		rte_hash_free(fdir_info->hash_handle);
> > +
> > +	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
> > +		TAILQ_REMOVE(&fdir_info->fdir_list,
> > +			     fdir_filter,
> > +			     entries);
> > +		rte_free(fdir_filter);
> > +	}
> > +
> >  	return 0;
> >  }
> >
> > +static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
> > +{
> > +	struct ixgbe_hw_fdir_info *fdir_info =
> > +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
> > +	char fdir_hash_name[RTE_HASH_NAMESIZE];
> > +	struct rte_hash_parameters fdir_hash_params = {
> > +		.name = fdir_hash_name,
> > +		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
> > +		.key_len = sizeof(union ixgbe_atr_input),
> > +		.hash_func = rte_hash_crc,
> > +		.hash_func_init_val = 0,
> > +		.socket_id = rte_socket_id(),
> > +	};
> > +
> > +	TAILQ_INIT(&fdir_info->fdir_list);
> > +	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
> > +		 "fdir_%s", eth_dev->data->name);
> > +	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
> > +	if (!fdir_info->hash_handle) {
> > +		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
> > +		return -EINVAL;
> > +	}
> > +	fdir_info->hash_map = rte_zmalloc("ixgbe",
> > +					  sizeof(struct ixgbe_fdir_filter *) *
> > +					  IXGBE_MAX_FDIR_FILTER_NUM,
> > +					  0);
> > +	if (!fdir_info->hash_map) {
> > +		PMD_INIT_LOG(ERR,
> > +			     "Failed to allocate memory for fdir hash map!");
> > +		return -ENOMEM;
> > +	}
> > +
> > +	return 0;
> > +}
> >  /*
> >   * Negotiate mailbox API version with the PF.
> >   * After reset API version is always set to the basic one
> (ixgbe_mbox_api_10).
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h
> > b/drivers/net/ixgbe/ixgbe_ethdev.h
> > index 827026c..8310220 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> > @@ -38,6 +38,7 @@
> >  #include "base/ixgbe_dcb_82598.h"
> >  #include "ixgbe_bypass.h"
> >  #include <rte_time.h>
> > +#include <rte_hash.h>
> >
> >  /* need update link, bit flag */
> >  #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
> > @@ -130,10 +131,11 @@
> >  #define IXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
> >  #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
> >
> > +#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
> > +
> >  /*
> >   * Information about the fdir mode.
> >   */
> > -
> >  struct ixgbe_hw_fdir_mask {
> >  	uint16_t vlan_tci_mask;
> >  	uint32_t src_ipv4_mask;
> > @@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
> >  	uint8_t  tunnel_type_mask;
> >  };
> >
> > +struct ixgbe_fdir_filter {
> > +	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
> > +	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
> > +	uint32_t fdirflags; /* drop or forward */
> > +	uint32_t fdirhash; /* hash value for fdir */
> > +	uint8_t queue; /* assigned rx queue */
> > +};
> > +
> > +/* list of fdir filters */
> > +TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
> > +
> >  struct ixgbe_hw_fdir_info {
> >  	struct ixgbe_hw_fdir_mask mask;
> >  	uint8_t     flex_bytes_offset;
> > @@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
> >  	uint64_t    remove;
> >  	uint64_t    f_add;
> >  	uint64_t    f_remove;
> > +	struct ixgbe_fdir_filter_list fdir_list; /* filter list*/
> > +	/* store the pointers of the filters, index is the hash value. */
> > +	struct ixgbe_fdir_filter **hash_map;
> > +	struct rte_hash *hash_handle; /* cuckoo hash handler */
> >  };
> >
> >  /* structure for interrupt relative data */
> > diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
> > index 4b81ee3..bfcd294 100644
> > --- a/drivers/net/ixgbe/ixgbe_fdir.c
> > +++ b/drivers/net/ixgbe/ixgbe_fdir.c
> > @@ -43,6 +43,7 @@
> >  #include <rte_pci.h>
> >  #include <rte_ether.h>
> >  #include <rte_ethdev.h>
> > +#include <rte_malloc.h>
> >
> >  #include "ixgbe_logs.h"
> >  #include "base/ixgbe_api.h"
> > @@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash)
> >
> >  }
> >
> > +static inline struct ixgbe_fdir_filter *
> > +ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
> > +			 union ixgbe_atr_input *key)
> > +{
> > +	int ret = 0;
> > +
> > +	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
> > +	if (ret < 0)
> > +		return NULL;
> > +
> > +	return fdir_info->hash_map[ret];
> > +}
> > +
> > +static inline int
> > +ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> > +			 struct ixgbe_fdir_filter *fdir_filter)
> > +{
> > +	int ret = 0;
> > +
> > +	ret = rte_hash_add_key(fdir_info->hash_handle,
> > +			       &fdir_filter->ixgbe_fdir);
> > +
> > +	if (ret < 0) {
> > +		PMD_DRV_LOG(ERR,
> > +			    "Failed to insert fdir filter to hash table %d!",
> > +			    ret);
> > +		return ret;
> > +	}
> > +
> > +	fdir_info->hash_map[ret] = fdir_filter;
> > +
> > +	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
> > +
> > +	return 0;
> > +}
> > +
> > +static inline int
> > +ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
> > +			 union ixgbe_atr_input *key)
> > +{
> > +	int ret = 0;
> > +	struct ixgbe_fdir_filter *fdir_filter;
> > +
> > +	ret = rte_hash_del_key(fdir_info->hash_handle, key);
> > +
> > +	if (ret < 0) {
> > +		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
> > +		return ret;
> > +	}
> > +
> > +	fdir_filter = fdir_info->hash_map[ret];
> > +	fdir_info->hash_map[ret] = NULL;
> > +
> > +	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
> > +	rte_free(fdir_filter);
> > +
> > +	return 0;
> > +}
> > +
> >  /*
> >   * ixgbe_add_del_fdir_filter - add or remove a flow director filter.
> >   * @dev: pointer to the structure rte_eth_dev
> > @@ -1098,6 +1158,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
> >  	struct ixgbe_hw_fdir_info *info =
> >  		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
> >  	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
> > +	struct ixgbe_fdir_filter *node;
> > +	bool add_node = FALSE;
> >
> >  	if (fdir_mode == RTE_FDIR_MODE_NONE)
> >  		return -ENOTSUP;
> > @@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
> >  						      dev->data->dev_conf.fdir_conf.pballoc);
> >
> >  	if (del) {
> > +		err = ixgbe_remove_fdir_filter(info, &input);
> > +		if (err < 0)
> > +			return err;
> > +
> >  		err = fdir_erase_filter_82599(hw, fdirhash);
> >  		if (err < 0)
> >  			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
> > @@ -1172,6 +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
> >  	else
> >  		return -EINVAL;
> >
> > +	node = ixgbe_fdir_filter_lookup(info, &input);
> > +	if (node) {
> > +		if (update) {
> > +			node->fdirflags = fdircmd_flags;
> > +			node->fdirhash = fdirhash;
> > +			node->queue = queue;
> > +		} else {
> > +			PMD_DRV_LOG(ERR, "Conflict with existing fdir
> filter!");
> > +			return -EINVAL;
> > +		}
> > +	} else {
> > +		add_node = TRUE;
> > +		node = rte_zmalloc("ixgbe_fdir",
> > +				   sizeof(struct ixgbe_fdir_filter),
> > +				   0);
> > +		if (!node)
> > +			return -ENOMEM;
> > +		(void)rte_memcpy(&node->ixgbe_fdir,
> > +				 &input,
> > +				 sizeof(union ixgbe_atr_input));
> > +		node->fdirflags = fdircmd_flags;
> > +		node->fdirhash = fdirhash;
> > +		node->queue = queue;
> > +
> > +		err = ixgbe_insert_fdir_filter(info, node);
> > +		if (err < 0) {
> > +			rte_free(node);
> > +			return err;
> > +		}
> > +	}
> > +
> >  	if (is_perfect) {
> >  		err = fdir_write_perfect_filter_82599(hw, &input, queue,
> >  						      fdircmd_flags, fdirhash,
> > @@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
> >  		err = fdir_add_signature_filter_82599(hw, &input, queue,
> >  						      fdircmd_flags, fdirhash);
> >  	}
> > -	if (err < 0)
> > +	if (err < 0) {
> >  		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
> > -	else
> > +
> > +		if (add_node)
> > +			(void)ixgbe_remove_fdir_filter(info, &input);
> > +	} else {
> >  		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
> > +	}
> >
> >  	return err;
> >  }
> > --
> > 2.5.5


* Re: [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-03 14:07   ` Adrien Mazarguil
@ 2017-01-05  3:12     ` Zhao1, Wei
  2017-01-05  8:52       ` Adrien Mazarguil
  0 siblings, 1 reply; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-05  3:12 UTC (permalink / raw)
  To: Adrien Mazarguil; +Cc: dev, Lu, Wenzhuo

Hi, adrien

> -----Original Message-----
> From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com]
> Sent: Tuesday, January 3, 2017 10:08 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>
> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter
> 
> Hi Wei,
> 
> On Fri, Dec 30, 2016 at 03:53:06PM +0800, Wei Zhao wrote:
> > Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:
> > --add new error set function
> > --change return value type of parser function
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 269 +++++++++++++++++++++++++++++++++++----
> >  lib/librte_ether/rte_flow.h      |  32 +++++
> >  2 files changed, 273 insertions(+), 28 deletions(-)
> [...]
> > diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> > index 98084ac..e9e6220 100644
> > --- a/lib/librte_ether/rte_flow.h
> > +++ b/lib/librte_ether/rte_flow.h
> > @@ -268,6 +268,13 @@ enum rte_flow_item_type {
> >  	 * See struct rte_flow_item_vxlan.
> >  	 */
> >  	RTE_FLOW_ITEM_TYPE_VXLAN,
> > +
> > +	/**
> > +	  * Matches a E_TAG header.
> > +	  *
> > +	  * See struct rte_flow_item_e_tag.
> > +	  */
> > +	RTE_FLOW_ITEM_TYPE_E_TAG,
> >  };
> >
> >  /**
> > @@ -454,6 +461,31 @@ struct rte_flow_item_vxlan {  };
> >
> >  /**
> > + * RTE_FLOW_ITEM_TYPE_E_TAG.
> > + *
> > + * Matches a E-tag header.
> > + */
> > +struct rte_flow_item_e_tag {
> > +	struct ether_addr dst; /**< Destination MAC. */
> > +	struct ether_addr src; /**< Source MAC. */
> > +	uint16_t e_tag_ethertype; /**< E-tag EtherType, 0x893F. */
> > +	uint16_t e_pcp:3; /**<  E-PCP */
> > +	uint16_t dei:1; /**< DEI */
> > +	uint16_t in_e_cid_base:12; /**< Ingress E-CID base */
> > +	uint16_t rsv:2; /**< reserved */
> > +	uint16_t grp:2; /**< GRP */
> > +	uint16_t e_cid_base:12; /**< E-CID base */
> > +	uint16_t in_e_cid_ext:8; /**< Ingress E-CID extend */
> > +	uint16_t e_cid_ext:8; /**< E-CID extend */
> > +	uint16_t type; /**< MAC type. */
> > +	unsigned int tags; /**< Number of 802.1Q/ad tags defined. */
> > +	struct {
> > +		uint16_t tpid; /**< Tag protocol identifier. */
> > +		uint16_t tci; /**< Tag control information. */
> > +	} tag[]; /**< 802.1Q/ad tag definitions, outermost first. */ };
> [...]
> 
> See my previous reply [1], this definition is not endian-safe and comprises
> protocols defined as independent items (namely ETH and VLAN). Here is an
> untested suggestion:
> 
>  struct rte_flow_item_e_tag {
>      uint16_t tpid; /**< Tag protocol identifier (0x893F). */
>      /** E-Tag control information (E-TCI). */
>      uint16_t epcp_edei_in_ecid_b; /**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
>      uint16_t rsvd_grp_ecid_b; /**< Reserved (2b), GRP (2b), E-CID base (12b). */
>      uint8_t in_ecid_e; /**< Ingress E-CID ext. */
>      uint8_t ecid_e; /**< E-CID ext. */
>  };
> 
> Applications are responsible for breaking down and filling individual fields
> properly. The Ethernet header would be provided as its own item, as shown in
> this testpmd flow command example:
> 
>  flow create 0 ingress pattern eth / e_tag in_ecid_base is 42 / end actions drop / end
> 

In this case, is eth optional or mandatory?
I think it is optional, because the user may not have any parameters in the ETH config.


> Note, all multibyte values are in network order like other protocol header
> definitions.
> 
> [1] http://dpdk.org/ml/archives/dev/2016-December/053181.html
>     Message ID: 20161223081310.GH10340@6wind.com
> 
> --
> Adrien Mazarguil
> 6WIND
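
As a minimal sketch (not from the patch set), assuming the struct layout
suggested above, an application would fill the item with every multibyte
field converted through rte_cpu_to_be_16():

	/* hypothetical example: match ingress E-CID base 42 */
	struct rte_flow_item_e_tag spec = {
		.tpid = rte_cpu_to_be_16(0x893F),
		/* E-PCP (3b) | E-DEI (1b) | ingress E-CID base (12b) */
		.epcp_edei_in_ecid_b = rte_cpu_to_be_16(42),
	};
	struct rte_flow_item_e_tag mask = {
		/* match only the 12 ingress E-CID base bits */
		.epcp_edei_in_ecid_b = rte_cpu_to_be_16(0x0FFF),
	};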


* Re: [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-05  3:12     ` Zhao1, Wei
@ 2017-01-05  8:52       ` Adrien Mazarguil
  0 siblings, 0 replies; 149+ messages in thread
From: Adrien Mazarguil @ 2017-01-05  8:52 UTC (permalink / raw)
  To: Zhao1, Wei; +Cc: dev, Lu, Wenzhuo

On Thu, Jan 05, 2017 at 03:12:01AM +0000, Zhao1, Wei wrote:
> Hi, adrien
> 
> > -----Original Message-----
> > From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com]
> > Sent: Tuesday, January 3, 2017 10:08 PM
> > To: Zhao1, Wei <wei.zhao1@intel.com>
> > Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter
> > 
> > Hi Wei,
> > 
> > On Fri, Dec 30, 2016 at 03:53:06PM +0800, Wei Zhao wrote:
> > > Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.
> > >
> > > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > >
> > > ---
> > >
> > > v2:
> > > --add new error set function
> > > --change return value type of parser function
> > > ---
> > >  drivers/net/ixgbe/ixgbe_ethdev.c | 269 +++++++++++++++++++++++++++++++++++----
> > >  lib/librte_ether/rte_flow.h      |  32 +++++
> > >  2 files changed, 273 insertions(+), 28 deletions(-)
> > [...]
> > > diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> > > index 98084ac..e9e6220 100644
> > > --- a/lib/librte_ether/rte_flow.h
> > > +++ b/lib/librte_ether/rte_flow.h
> > > @@ -268,6 +268,13 @@ enum rte_flow_item_type {
> > >  	 * See struct rte_flow_item_vxlan.
> > >  	 */
> > >  	RTE_FLOW_ITEM_TYPE_VXLAN,
> > > +
> > > +	/**
> > > +	  * Matches a E_TAG header.
> > > +	  *
> > > +	  * See struct rte_flow_item_e_tag.
> > > +	  */
> > > +	RTE_FLOW_ITEM_TYPE_E_TAG,
> > >  };
> > >
> > >  /**
> > > @@ -454,6 +461,31 @@ struct rte_flow_item_vxlan {  };
> > >
> > >  /**
> > > + * RTE_FLOW_ITEM_TYPE_E_TAG.
> > > + *
> > > + * Matches a E-tag header.
> > > + */
> > > +struct rte_flow_item_e_tag {
> > > +	struct ether_addr dst; /**< Destination MAC. */
> > > +	struct ether_addr src; /**< Source MAC. */
> > > +	uint16_t e_tag_ethertype; /**< E-tag EtherType, 0x893F. */
> > > +	uint16_t e_pcp:3; /**<  E-PCP */
> > > +	uint16_t dei:1; /**< DEI */
> > > +	uint16_t in_e_cid_base:12; /**< Ingress E-CID base */
> > > +	uint16_t rsv:2; /**< reserved */
> > > +	uint16_t grp:2; /**< GRP */
> > > +	uint16_t e_cid_base:12; /**< E-CID base */
> > > +	uint16_t in_e_cid_ext:8; /**< Ingress E-CID extend */
> > > +	uint16_t e_cid_ext:8; /**< E-CID extend */
> > > +	uint16_t type; /**< MAC type. */
> > > +	unsigned int tags; /**< Number of 802.1Q/ad tags defined. */
> > > +	struct {
> > > +		uint16_t tpid; /**< Tag protocol identifier. */
> > > +		uint16_t tci; /**< Tag control information. */
> > > +	} tag[]; /**< 802.1Q/ad tag definitions, outermost first. */ };
> > [...]
> > 
> > See my previous reply [1], this definition is not endian-safe and comprises
> > protocols defined as independent items (namely ETH and VLAN). Here is an
> > untested suggestion:
> > 
> >  struct rte_flow_item_e_tag {
> >      uint16_t tpid; /**< Tag protocol identifier (0x893F). */
> >      /** E-Tag control information (E-TCI). */
> >      uint16_t epcp_edei_in_ecid_b; /**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
> >      uint16_t rsvd_grp_ecid_b; /**< Reserved (2b), GRP (2b), E-CID base (12b). */
> >      uint8_t in_ecid_e; /**< Ingress E-CID ext. */
> >      uint8_t ecid_e; /**< E-CID ext. */
> >  };
> > 
> > Applications are responsible for breaking down and filling individual fields
> > properly. The Ethernet header would be provided as its own item, as shown in
> > this testpmd flow command example:
> > 
> >  flow create 0 ingress pattern eth / e_tag in_ecid_base is 42 / end actions drop / end
> > 
> 
> In this case, is eth optional or mandatory?
> I think it is optional, because the user may not have any parameters in the ETH config.

Normally, protocol items start from L2, so applications *should* provide
ETH; otherwise it is an error.

Now a PMD may also allow it to be implicit when it is unambiguous (e.g. an
imaginary ETH item provided without a mask) as described in the "UDPv6
anywhere" example [2]. It's up to you.

> > Note, all multibyte values are in network order like other protocol header
> > definitions.
> > 
> > [1] http://dpdk.org/ml/archives/dev/2016-December/053181.html
> >     Message ID: 20161223081310.GH10340@6wind.com

[2] http://dpdk.org/doc/guides/prog_guide/rte_flow.html#matching-pattern

-- 
Adrien Mazarguil
6WIND
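
As a minimal sketch of the point above (e_tag_spec and e_tag_mask are assumed
to be filled elsewhere), a pattern that starts from L2 with an unambiguous,
match-any ETH item would look like:

	struct rte_flow_item pattern[] = {
		/* L2 first; NULL spec/mask means "match any Ethernet header" */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
		  .spec = &e_tag_spec, .mask = &e_tag_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};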


* Re: [PATCH v2 01/18] net/ixgbe: store SYN filter
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
  2017-01-03 14:33   ` Dai, Wei
@ 2017-01-06 16:28   ` Ferruh Yigit
  2017-01-10  5:33     ` Zhao1, Wei
  2017-01-12  8:12   ` [PATCH v3 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  3 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 16:28 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:52 AM, Wei Zhao wrote:
> Add support for storing SYN filter in SW.

I think you mentioned in a previous review that you would update this
to "TCP SYN"; I would like to remind you in case it was missed.

> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
> 
> v2:
> --synqf assignment location change
> ---
<...>


* Re: [PATCH v2 02/18] net/ixgbe: store flow director filter
  2016-12-30  7:52 ` [PATCH v2 02/18] net/ixgbe: store flow director filter Wei Zhao
  2017-01-02  9:59   ` Xing, Beilei
  2017-01-03 14:28   ` Dai, Wei
@ 2017-01-06 16:31   ` Ferruh Yigit
  2017-01-10  5:30     ` Zhao1, Wei
  2 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 16:31 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:52 AM, Wei Zhao wrote:
> Add support for storing flow director filter in SW.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
> 
> v2:
> --add a fdir initialization function in device start process
> ---
<...>
> @@ -1276,6 +1278,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>  
>  	/* initialize SYN filter */
>  	filter_info->syn_info = 0;
> +	/* initialize flow director filter list & hash */
> +	ixgbe_fdir_filter_init(eth_dev);
> +
>  	return 0;
>  }
>  
> @@ -1284,6 +1289,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
>  {
>  	struct rte_pci_device *pci_dev;
>  	struct ixgbe_hw *hw;
> +	struct ixgbe_hw_fdir_info *fdir_info =
> +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
> +	struct ixgbe_fdir_filter *fdir_filter;
>  
>  	PMD_INIT_FUNC_TRACE();
>  
> @@ -1317,9 +1325,56 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
>  	rte_free(eth_dev->data->hash_mac_addrs);
>  	eth_dev->data->hash_mac_addrs = NULL;
>  
> +	/* remove all the fdir filters & hash */
> +	if (fdir_info->hash_map)
> +		rte_free(fdir_info->hash_map);
> +	if (fdir_info->hash_handle)
> +		rte_hash_free(fdir_info->hash_handle);
> +
> +	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
> +		TAILQ_REMOVE(&fdir_info->fdir_list,
> +			     fdir_filter,
> +			     entries);
> +		rte_free(fdir_filter);
> +	}
> +

What do you think about extracting these into a function, as done in init()?

<...>


* Re: [PATCH v2 10/18] net/ixgbe: flush all the filters
  2016-12-30  7:53 ` [PATCH v2 10/18] net/ixgbe: flush all the filters Wei Zhao
@ 2017-01-06 16:40   ` Ferruh Yigit
  2017-01-11  7:51     ` Zhao1, Wei
  0 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 16:40 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:53 AM, Wei Zhao wrote:
> Add support for flushing all the filters in SW.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> 
> ---
> 
> v2:
> --change flush filter function call relationship
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 118 ++++++++++++++++++++++++++++++++++++++-
>  drivers/net/ixgbe/ixgbe_ethdev.h |   9 +++
>  drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
>  drivers/net/ixgbe/ixgbe_pf.c     |   1 +
>  4 files changed, 150 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index d68de65..0de1318 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -61,6 +61,8 @@
>  #include <rte_random.h>
>  #include <rte_dev.h>
>  #include <rte_hash_crc.h>
> +#include <rte_flow.h>
> +#include <rte_flow_driver.h>
>  
>  #include "ixgbe_logs.h"
>  #include "base/ixgbe_api.h"
> @@ -386,7 +388,8 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
>  					 struct rte_eth_udp_tunnel *udp_tunnel);
>  static int ixgbe_filter_restore(struct rte_eth_dev *dev);
>  static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
> -
> +static int ixgbe_flow_flush(struct rte_eth_dev *dev,
> +		struct rte_flow_error *error);

I think it is a good idea to move all rte_flow related code into a new
file, as done in i40e (i40e_flow.c). What do you think, can it be done?

<...>


* Re: [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
  2016-12-30  7:53 ` [PATCH v2 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
  2017-01-02 10:41   ` Xing, Beilei
  2017-01-02 10:45   ` Xing, Beilei
@ 2017-01-06 16:55   ` Ferruh Yigit
  2017-01-11  8:27     ` Zhao1, Wei
  2 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 16:55 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:53 AM, Wei Zhao wrote:
> Add a rule validate function and check if the rule is an n-tuple rule,
> and get the n-tuple info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:add new error set function
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 414 ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 409 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 0de1318..198cc4b 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -388,6 +388,24 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
>  					 struct rte_eth_udp_tunnel *udp_tunnel);

<...>

> +
> +/**
> + * Parse the rule to see if it is a n-tuple rule.
> + * And get the n-tuple filter info BTW.
> + */

It would be nice to comment here on the valid/expected pattern values
(spec/mask/last). Otherwise it is hard to decode from the code; it is also
good to document the intention, which makes things easier if there is any defect.

The same applies to the valid actions.

> +static int
> +cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> +			 const struct rte_flow_item pattern[],
> +			 const struct rte_flow_action actions[],
> +			 struct rte_eth_ntuple_filter *filter,
> +			 struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item *item;
> +	const struct rte_flow_action *act;
> +	const struct rte_flow_item_ipv4 *ipv4_spec;
> +	const struct rte_flow_item_ipv4 *ipv4_mask;
> +	const struct rte_flow_item_tcp *tcp_spec;
> +	const struct rte_flow_item_tcp *tcp_mask;
> +	const struct rte_flow_item_udp *udp_spec;
> +	const struct rte_flow_item_udp *udp_mask;
> +	uint32_t index;
> +
> +	if (!pattern) {
> +		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> +				   NULL, "NULL pattern.");
> +		return -rte_errno;
> +	}
> +
> +	/* parse pattern */
> +	index = 0;
> +
> +	/* the first not void item can be MAC or IPv4 */
> +	NEXT_ITEM_OF_PATTERN(item, pattern, index);
> +
> +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Not supported by ntuple filter");
> +		return -rte_errno;
> +	}
> +	/* Skip Ethernet */
> +	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> +		/*Not supported last point for range*/
> +		if (item->last) {
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				item, "Not supported last point for range");
> +			return -rte_errno;
> +
> +		}
> +		/* if the first item is MAC, the content should be NULL */
> +		if (item->spec || item->mask) {
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by ntuple filter");
> +			return -rte_errno;
> +		}
> +		/* check if the next not void item is IPv4 */
> +		index++;
> +		NEXT_ITEM_OF_PATTERN(item, pattern, index);
> +		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
> +			rte_flow_error_set(error,
> +			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Not supported by ntuple filter");

Wrong indentation.

> +			return -rte_errno;
> +		}
> +	}
> +
> +	/* get the IPv4 info */
> +	if (!item->spec || !item->mask) {
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Invalid ntuple mask");
> +		return -rte_errno;
> +	}
> +	/*Not supported last point for range*/
> +	if (item->last) {
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			item, "Not supported last point for range");
> +		return -rte_errno;
> +
> +	}
> +
> +	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
> +	/**
> +	 * Only support src & dst addresses, protocol,
> +	 * others should be masked.
> +	 */
> +	if (ipv4_mask->hdr.version_ihl ||
> +	    ipv4_mask->hdr.type_of_service ||
> +	    ipv4_mask->hdr.total_length ||
> +	    ipv4_mask->hdr.packet_id ||
> +	    ipv4_mask->hdr.fragment_offset ||
> +	    ipv4_mask->hdr.time_to_live ||
> +	    ipv4_mask->hdr.hdr_checksum) {
> +			rte_flow_error_set(error,
> +			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Not supported by ntuple filter");
> +		return -rte_errno;
> +	}
> +
> +	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
> +	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
> +	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
> +
> +	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
> +	filter->dst_ip = ipv4_spec->hdr.dst_addr;
> +	filter->src_ip = ipv4_spec->hdr.src_addr;
> +	filter->proto  = ipv4_spec->hdr.next_proto_id;
> +
> +	/* check if the next not void item is TCP or UDP */
> +	index++;
> +	NEXT_ITEM_OF_PATTERN(item, pattern, index);
> +	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));

Sometimes the filter is memset before returning from an error, sometimes not.
Is the memset required at all?

> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Not supported by ntuple filter");
> +		return -rte_errno;
> +	}
> +
> +	/* get the TCP/UDP info */
> +	if (!item->spec || !item->mask) {

For example, there is no memset here for the filter...

> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Invalid ntuple mask");
> +		return -rte_errno;
> +	}
> +
> +	/*Not supported last point for range*/
> +	if (item->last) {
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			item, "Not supported last point for range");
> +		return -rte_errno;
> +
> +	}
> +
> +	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
> +		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
> +
> +		/**
> +		 * Only support src & dst ports, tcp flags,
> +		 * others should be masked.
> +		 */
> +		if (tcp_mask->hdr.sent_seq ||
> +		    tcp_mask->hdr.recv_ack ||
> +		    tcp_mask->hdr.data_off ||
> +		    tcp_mask->hdr.rx_win ||
> +		    tcp_mask->hdr.cksum ||
> +		    tcp_mask->hdr.tcp_urp) {
> +			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by ntuple filter");
> +			return -rte_errno;
> +		}
> +
> +		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
> +		filter->src_port_mask  = tcp_mask->hdr.src_port;
> +		if (tcp_mask->hdr.tcp_flags == 0xFF) {
> +			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
> +		} else if (!tcp_mask->hdr.tcp_flags) {
> +			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
> +		} else {
> +			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by ntuple filter");
> +			return -rte_errno;
> +		}
> +
> +		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
> +		filter->dst_port  = tcp_spec->hdr.dst_port;
> +		filter->src_port  = tcp_spec->hdr.src_port;
> +		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
> +	} else {
> +		udp_mask = (const struct rte_flow_item_udp *)item->mask;
> +
> +		/**
> +		 * Only support src & dst ports,
> +		 * others should be masked.
> +		 */
> +		if (udp_mask->hdr.dgram_len ||
> +		    udp_mask->hdr.dgram_cksum) {
> +			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by ntuple filter");
> +			return -rte_errno;
> +		}
> +
> +		filter->dst_port_mask = udp_mask->hdr.dst_port;
> +		filter->src_port_mask = udp_mask->hdr.src_port;
> +
> +		udp_spec = (const struct rte_flow_item_udp *)item->spec;
> +		filter->dst_port = udp_spec->hdr.dst_port;
> +		filter->src_port = udp_spec->hdr.src_port;
> +	}
> +
> +	/* check if the next not void item is END */
> +	index++;
> +	NEXT_ITEM_OF_PATTERN(item, pattern, index);
> +	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM,
> +			item, "Not supported by ntuple filter");
> +		return -rte_errno;
> +	}
> +
> +	/* parse action */
> +	index = 0;
> +
> +	if (!actions) {

Although there is no harm, I would do the input check at the beginning of
the function, to avoid doing extra work if we hit this case.

> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
> +				   NULL, "NULL action.");
> +		return -rte_errno;
> +	}
> +
> +	/**
> +	 * n-tuple only supports forwarding,
> +	 * check if the first not void action is QUEUE.
> +	 */
> +	NEXT_ITEM_OF_ACTION(act, actions, index);
> +	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ACTION,
> +			item, "Not supported action.");
> +		return -rte_errno;
> +	}
> +	filter->queue =
> +		((const struct rte_flow_action_queue *)act->conf)->index;
> +
> +	/* check if the next not void item is END */
> +	index++;
> +	NEXT_ITEM_OF_ACTION(act, actions, index);
> +	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ACTION,
> +			act, "Not supported action.");
> +		return -rte_errno;
> +	}
> +
> +	/* parse attr */
> +	/* must be input direction */

It may be a good idea to check if attr is NULL.

> +	if (!attr->ingress) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
> +				   attr, "Only support ingress.");
> +		return -rte_errno;
> +	}
> +
> +	/* not supported */
> +	if (attr->egress) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
> +				   attr, "Not support egress.");
> +		return -rte_errno;
> +	}
> +
> +	if (attr->priority > 0xFFFF) {
> +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
> +				   attr, "Error priority.");
> +		return -rte_errno;
> +	}
> +	filter->priority = (uint16_t)attr->priority;

Should we also check attr->group? Do we support groups?

> +
> +	return 0;
> +}
> +

<...>


* Re: [PATCH v2 12/18] net/ixgbe: parse ethertype filter
  2016-12-30  7:53 ` [PATCH v2 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2017-01-06 17:11   ` Ferruh Yigit
  2017-01-11  8:54     ` Zhao1, Wei
  0 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 17:11 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:53 AM, Wei Zhao wrote:
> Check if the rule is an ethertype rule, and get the ethertype info.
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:add new error set function
> ---

<...>

> +static int
> +ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
> +			     const struct rte_flow_item pattern[],
> +			     const struct rte_flow_action actions[],
> +			     struct rte_eth_ethertype_filter *filter,
> +			     struct rte_flow_error *error)
> +{
> +	int ret;
> +
> +	ret = cons_parse_ethertype_filter(attr, pattern,
> +					actions, filter, error);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* Ixgbe doesn't support MAC address. */
> +	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
> +		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));

Is memset required for the error cases? If so, do the other error cases also
require it?

> +		rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				NULL, "Not supported by ethertype filter");
> +		return -rte_errno;
> +	}
> +
> +	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
> +		return -rte_errno;
> +
> +	if (filter->ether_type == ETHER_TYPE_IPv4 ||
> +		filter->ether_type == ETHER_TYPE_IPv6) {
> +		PMD_DRV_LOG(ERR, "unsupported ether_type(0x%04x) in"
> +			" ethertype filter.", filter->ether_type);

Not sure about this log here, especially since it is at ERR level.
This function is returning an error, and the public API will return an error;
if we want to notify the user with a log, should the application do this, or
should the library (here) do it? More comments welcome.

btw, should rte_flow_error_set() be used here (and below)?

> +		return -rte_errno;
> +	}
> +
> +	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {

Isn't this a duplicate of the check above?

> +		PMD_DRV_LOG(ERR, "mac compare is unsupported.");
> +		return -rte_errno;
> +	}
> +
> +	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {

Just to double check, isn't the drop action supported by ethertype filters?

> +		PMD_DRV_LOG(ERR, "drop option is unsupported.");
> +		return -rte_errno;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
<...>


* Re: [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter
  2016-12-30  7:53 ` [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
@ 2017-01-06 17:19   ` Ferruh Yigit
  2017-01-10  5:46     ` Zhao1, Wei
  2017-01-11  9:11     ` Zhao1, Wei
  0 siblings, 2 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 17:19 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:53 AM, Wei Zhao wrote:
> check if the rule is a TCP SYN rule, and get the SYN info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:add new error set function
> ---

<...>

> +static int
> +ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
> +			     const struct rte_flow_item pattern[],
> +			     const struct rte_flow_action actions[],
> +			     struct rte_eth_syn_filter *filter,
> +			     struct rte_flow_error *error)
> +{
> +	int ret;
> +
> +	ret = cons_parse_syn_filter(attr, pattern,
> +					actions, filter, error);
> +

> +	if (ret)
> +		return ret;
> +
> +	return 0;

"return ret;" will do the same.
Or remove ret completely perhaps.

> +}
> +
<...>


* Re: [PATCH v2 16/18] net/ixgbe: create consistent filter
  2016-12-30  7:53 ` [PATCH v2 16/18] net/ixgbe: create consistent filter Wei Zhao
                     ` (2 preceding siblings ...)
  2017-01-03  5:24   ` Xing, Beilei
@ 2017-01-06 17:26   ` Ferruh Yigit
  2017-01-10  5:50     ` Zhao1, Wei
  3 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-06 17:26 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 12/30/2016 7:53 AM, Wei Zhao wrote:
> This patch adds a function to create the flow directory filter.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> ---
> 
> v2:
> --add new error set function
> ---

<...>

>  
> +struct ixgbe_flow {
> +	enum rte_filter_type filter_type;
> +	void *rule;
> +};

It is possible to rename this struct to rte_flow, to prevent casting,
although functionally there is no difference.

> +
>  /*
>   * Structure to store private data for each driver instance (for each port).
>   */
> 


* Re: [PATCH v2 02/18] net/ixgbe: store flow director filter
  2017-01-06 16:31   ` Ferruh Yigit
@ 2017-01-10  5:30     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-10  5:30 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 12:31 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 02/18] net/ixgbe: store flow director filter
> 
> On 12/30/2016 7:52 AM, Wei Zhao wrote:
> > Add support for storing flow director filter in SW.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >
> > v2:
> > --add a fdir initialization function in device start process
> > ---
> <...>
> > @@ -1276,6 +1278,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> >
> >  	/* initialize SYN filter */
> >  	filter_info->syn_info = 0;
> > +	/* initialize flow director filter list & hash */
> > +	ixgbe_fdir_filter_init(eth_dev);
> > +
> >  	return 0;
> >  }
> >
> > @@ -1284,6 +1289,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev
> > *eth_dev)  {
> >  	struct rte_pci_device *pci_dev;
> >  	struct ixgbe_hw *hw;
> > +	struct ixgbe_hw_fdir_info *fdir_info =
> > +		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data-
> >dev_private);
> > +	struct ixgbe_fdir_filter *fdir_filter;
> >
> >  	PMD_INIT_FUNC_TRACE();
> >
> > @@ -1317,9 +1325,56 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev
> *eth_dev)
> >  	rte_free(eth_dev->data->hash_mac_addrs);
> >  	eth_dev->data->hash_mac_addrs = NULL;
> >
> > +	/* remove all the fdir filters & hash */
> > +	if (fdir_info->hash_map)
> > +		rte_free(fdir_info->hash_map);
> > +	if (fdir_info->hash_handle)
> > +		rte_hash_free(fdir_info->hash_handle);
> > +
> > +	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
> > +		TAILQ_REMOVE(&fdir_info->fdir_list,
> > +			     fdir_filter,
> > +			     entries);
> > +		rte_free(fdir_filter);
> > +	}
> > +
> 
> What do you think about extracting these into a function, as done in init()?
> 
> <...>

OK, I will do as you suggest.
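
A minimal sketch of such a helper (hypothetical name, simply mirroring
ixgbe_fdir_filter_init() and moving the uninit code quoted above):

	static void
	ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
	{
		struct ixgbe_hw_fdir_info *fdir_info =
			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
		struct ixgbe_fdir_filter *fdir_filter;

		if (fdir_info->hash_map)
			rte_free(fdir_info->hash_map);
		if (fdir_info->hash_handle)
			rte_hash_free(fdir_info->hash_handle);

		/* drain the fdir filter list */
		while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
			TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
			rte_free(fdir_filter);
		}
	}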


* Re: [PATCH v2 01/18] net/ixgbe: store SYN filter
  2017-01-06 16:28   ` Ferruh Yigit
@ 2017-01-10  5:33     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-10  5:33 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 12:29 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 01/18] net/ixgbe: store SYN filter
> 
> On 12/30/2016 7:52 AM, Wei Zhao wrote:
> > Add support for storing SYN filter in SW.
> 
> I think you mentioned in a previous review that you would update this to
> "TCP SYN"; I would like to remind you in case it was missed.
> 
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >
> > v2:
> > --synqf assignment location change
> > ---
> <...>

Something went wrong when I updated the code in v2.
I PROMISE it will be changed in v3.


* Re: [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter
  2017-01-06 17:19   ` Ferruh Yigit
@ 2017-01-10  5:46     ` Zhao1, Wei
  2017-01-11  9:11     ` Zhao1, Wei
  1 sibling, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-10  5:46 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 1:20 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter
> 
> On 12/30/2016 7:53 AM, Wei Zhao wrote:
> > check if the rule is a TCP SYN rule, and get the SYN info.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:add new error set function
> > ---
> 
> <...>
> 
> > +static int
> > +ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
> > +			     const struct rte_flow_item pattern[],
> > +			     const struct rte_flow_action actions[],
> > +			     struct rte_eth_syn_filter *filter,
> > +			     struct rte_flow_error *error) {
> > +	int ret;
> > +
> > +	ret = cons_parse_syn_filter(attr, pattern,
> > +					actions, filter, error);
> > +
> 
> > +	if (ret)
> > +		return ret;
> > +
> > +	return 0;
> 
> "return ret;" will do the same.
> Or remove ret completely perhaps.
> 
> > +}
> > +
> <...>

Yes, "return ret;" may be more appropriate.


* Re: [PATCH v2 16/18] net/ixgbe: create consistent filter
  2017-01-06 17:26   ` Ferruh Yigit
@ 2017-01-10  5:50     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-10  5:50 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 1:27 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 16/18] net/ixgbe: create consistent filter
> 
> On 12/30/2016 7:53 AM, Wei Zhao wrote:
> > This patch adds a function to create the flow directory filter.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:
> > --add new error set function
> > ---
> 
> <...>
> 
> >
> > +struct ixgbe_flow {
> > +	enum rte_filter_type filter_type;
> > +	void *rule;
> > +};
> 
> It is possible to rename this struct to rte_flow, to prevent casting,
> although functionally there is no difference.
OK, I will do that in v3.
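
For reference, a sketch of the renamed struct; since rte_flow_create() returns
an opaque struct rte_flow *, giving the PMD's private type that name removes
the need for a cast:

	/* opaque flow handle returned by this PMD's rte_flow ops */
	struct rte_flow {
		enum rte_filter_type filter_type;
		void *rule;
	};
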
> 
> > +
> >  /*
> >   * Structure to store private data for each driver instance (for each port).
> >   */
> >



* Re: [PATCH v2 10/18] net/ixgbe: flush all the filters
  2017-01-06 16:40   ` Ferruh Yigit
@ 2017-01-11  7:51     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-11  7:51 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 12:40 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 10/18] net/ixgbe: flush all the filters
> 
> On 12/30/2016 7:53 AM, Wei Zhao wrote:
> > Add support for flushing all the filters in SW.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> >
> > ---
> >
> > v2:
> > --change flush filter function call relationship
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 118
> ++++++++++++++++++++++++++++++++++++++-
> >  drivers/net/ixgbe/ixgbe_ethdev.h |   9 +++
> >  drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
> >  drivers/net/ixgbe/ixgbe_pf.c     |   1 +
> >  4 files changed, 150 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index d68de65..0de1318 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -61,6 +61,8 @@
> >  #include <rte_random.h>
> >  #include <rte_dev.h>
> >  #include <rte_hash_crc.h>
> > +#include <rte_flow.h>
> > +#include <rte_flow_driver.h>
> >
> >  #include "ixgbe_logs.h"
> >  #include "base/ixgbe_api.h"
> > @@ -386,7 +388,8 @@ static int ixgbe_dev_udp_tunnel_port_del(struct
> rte_eth_dev *dev,
> >  					 struct rte_eth_udp_tunnel
> *udp_tunnel);  static int
> > ixgbe_filter_restore(struct rte_eth_dev *dev);  static void
> > ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
> > -
> > +static int ixgbe_flow_flush(struct rte_eth_dev *dev,
> > +		struct rte_flow_error *error);
> 
> I think it is a good idea to move all rte_flow related code into a new file, as
> done in i40e (i40e_flow.c). What do you think, can it be done?

I will do as you suggest in v3.
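
A minimal sketch of what the split might look like, assuming the i40e
convention of a separate ixgbe_flow.c that gathers the rte_flow callbacks
(the handler names other than ixgbe_flow_flush are hypothetical):

	/* ixgbe_flow.c: rte_flow ops in one place, as in i40e_flow.c */
	const struct rte_flow_ops ixgbe_flow_ops = {
		.validate = ixgbe_flow_validate,
		.create = ixgbe_flow_create,
		.destroy = ixgbe_flow_destroy,
		.flush = ixgbe_flow_flush,
	};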

> 
> <...>



* Re: [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
  2017-01-06 16:55   ` Ferruh Yigit
@ 2017-01-11  8:27     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-11  8:27 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 12:56 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 11/18] net/ixgbe: parse n-tuple filter
> 
> On 12/30/2016 7:53 AM, Wei Zhao wrote:
> > Add a rule validate function and check if the rule is an n-tuple rule,
> > and get the n-tuple info.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:add new error set function
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 414
> > ++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 409 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 0de1318..198cc4b 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -388,6 +388,24 @@ static int ixgbe_dev_udp_tunnel_port_del(struct
> rte_eth_dev *dev,
> >  					 struct rte_eth_udp_tunnel
> *udp_tunnel);
> 
> <...>
> 
> > +
> > +/**
> > + * Parse the rule to see if it is a n-tuple rule.
> > + * And get the n-tuple filter info BTW.
> > + */
> 
> It would be nice to comment here on the valid/expected pattern values
> (spec/mask/last). Otherwise it is hard to decode from the code; it is also
> good to document the intention, which makes things easier if there is any defect.
> 

I will do as you suggest in v3.
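
A sketch of the kind of comment header meant here (the exact v3 wording may
differ):

	/**
	 * Parse the rule to see if it is a n-tuple rule.
	 * And get the n-tuple filter info BTW.
	 * pattern:
	 * The first not void item can be ETH or IPV4.
	 * The second not void item must be IPV4 if the first one is ETH.
	 * The third not void item must be UDP or TCP.
	 * The next not void item must be END.
	 * action:
	 * The first not void action should be QUEUE.
	 * The next not void action should be END.
	 * Note: spec and mask of IPV4/TCP/UDP must be set; "last" is not supported.
	 */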

> The same applies to the valid actions.
> 
> > +static int
> > +cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> > +			 const struct rte_flow_item pattern[],
> > +			 const struct rte_flow_action actions[],
> > +			 struct rte_eth_ntuple_filter *filter,
> > +			 struct rte_flow_error *error)
> > +{
> > +	const struct rte_flow_item *item;
> > +	const struct rte_flow_action *act;
> > +	const struct rte_flow_item_ipv4 *ipv4_spec;
> > +	const struct rte_flow_item_ipv4 *ipv4_mask;
> > +	const struct rte_flow_item_tcp *tcp_spec;
> > +	const struct rte_flow_item_tcp *tcp_mask;
> > +	const struct rte_flow_item_udp *udp_spec;
> > +	const struct rte_flow_item_udp *udp_mask;
> > +	uint32_t index;
> > +
> > +	if (!pattern) {
> > +		rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> > +				   NULL, "NULL pattern.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/* parse pattern */
> > +	index = 0;
> > +
> > +	/* the first not void item can be MAC or IPv4 */
> > +	NEXT_ITEM_OF_PATTERN(item, pattern, index);
> > +
> > +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Not supported by ntuple filter");
> > +		return -rte_errno;
> > +	}
> > +	/* Skip Ethernet */
> > +	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> > +		/*Not supported last point for range*/
> > +		if (item->last) {
> > +			rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				item, "Not supported last point for range");
> > +			return -rte_errno;
> > +
> > +		}
> > +		/* if the first item is MAC, the content should be NULL */
> > +		if (item->spec || item->mask) {
> > +			rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_ITEM,
> > +				item, "Not supported by ntuple filter");
> > +			return -rte_errno;
> > +		}
> > +		/* check if the next not void item is IPv4 */
> > +		index++;
> > +		NEXT_ITEM_OF_PATTERN(item, pattern, index);
> > +		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
> > +			rte_flow_error_set(error,
> > +			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Not supported by ntuple filter");
> 
> Wrong indentation.

I will do  as your suggestion in v3.

> 
> > +			return -rte_errno;
> > +		}
> > +	}
> > +
> > +	/* get the IPv4 info */
> > +	if (!item->spec || !item->mask) {
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Invalid ntuple mask");
> > +		return -rte_errno;
> > +	}
> > +	/*Not supported last point for range*/
> > +	if (item->last) {
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			item, "Not supported last point for range");
> > +		return -rte_errno;
> > +
> > +	}
> > +
> > +	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
> > +	/**
> > +	 * Only support src & dst addresses, protocol,
> > +	 * others should be masked.
> > +	 */
> > +	if (ipv4_mask->hdr.version_ihl ||
> > +	    ipv4_mask->hdr.type_of_service ||
> > +	    ipv4_mask->hdr.total_length ||
> > +	    ipv4_mask->hdr.packet_id ||
> > +	    ipv4_mask->hdr.fragment_offset ||
> > +	    ipv4_mask->hdr.time_to_live ||
> > +	    ipv4_mask->hdr.hdr_checksum) {
> > +			rte_flow_error_set(error,
> > +			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Not supported by ntuple filter");
> > +		return -rte_errno;
> > +	}
> > +
> > +	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
> > +	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
> > +	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
> > +
> > +	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
> > +	filter->dst_ip = ipv4_spec->hdr.dst_addr;
> > +	filter->src_ip = ipv4_spec->hdr.src_addr;
> > +	filter->proto  = ipv4_spec->hdr.next_proto_id;
> > +
> > +	/* check if the next not void item is TCP or UDP */
> > +	index++;
> > +	NEXT_ITEM_OF_PATTERN(item, pattern, index);
> > +	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> 
> Sometimes the filter is memset before returning from an error, sometimes
> not. Is the memset required at all?

Not all of them are necessary; at the beginning the filter has not been configured with any value yet, so it does not need a memset.
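
If v3 ends up wanting every error path to behave the same way instead, one
option is a single error helper; this is purely a sketch (the helper name is
made up), not part of the actual patch:

static int
ntuple_parse_err(struct rte_eth_ntuple_filter *filter,
		 struct rte_flow_error *error,
		 const struct rte_flow_item *item,
		 const char *msg)
{
	/* wipe any partially filled fields so callers never see them */
	memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
			   item, msg);
	return -rte_errno;
}
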
> 
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Not supported by ntuple filter");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/* get the TCP/UDP info */
> > +	if (!item->spec || !item->mask) {
> 
> For example there is no memset here for filter ..
> 
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Invalid ntuple mask");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/*Not supported last point for range*/
> > +	if (item->last) {
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			item, "Not supported last point for range");
> > +		return -rte_errno;
> > +
> > +	}
> > +
> > +	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
> > +		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
> > +
> > +		/**
> > +		 * Only support src & dst ports, tcp flags,
> > +		 * others should be masked.
> > +		 */
> > +		if (tcp_mask->hdr.sent_seq ||
> > +		    tcp_mask->hdr.recv_ack ||
> > +		    tcp_mask->hdr.data_off ||
> > +		    tcp_mask->hdr.rx_win ||
> > +		    tcp_mask->hdr.cksum ||
> > +		    tcp_mask->hdr.tcp_urp) {
> > +			memset(filter, 0, sizeof(struct
> rte_eth_ntuple_filter));
> > +			rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_ITEM,
> > +				item, "Not supported by ntuple filter");
> > +			return -rte_errno;
> > +		}
> > +
> > +		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
> > +		filter->src_port_mask  = tcp_mask->hdr.src_port;
> > +		if (tcp_mask->hdr.tcp_flags == 0xFF) {
> > +			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
> > +		} else if (!tcp_mask->hdr.tcp_flags) {
> > +			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
> > +		} else {
> > +			memset(filter, 0, sizeof(struct
> rte_eth_ntuple_filter));
> > +			rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_ITEM,
> > +				item, "Not supported by ntuple filter");
> > +			return -rte_errno;
> > +		}
> > +
> > +		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
> > +		filter->dst_port  = tcp_spec->hdr.dst_port;
> > +		filter->src_port  = tcp_spec->hdr.src_port;
> > +		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
> > +	} else {
> > +		udp_mask = (const struct rte_flow_item_udp *)item->mask;
> > +
> > +		/**
> > +		 * Only support src & dst ports,
> > +		 * others should be masked.
> > +		 */
> > +		if (udp_mask->hdr.dgram_len ||
> > +		    udp_mask->hdr.dgram_cksum) {
> > +			memset(filter, 0, sizeof(struct
> rte_eth_ntuple_filter));
> > +			rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_ITEM,
> > +				item, "Not supported by ntuple filter");
> > +			return -rte_errno;
> > +		}
> > +
> > +		filter->dst_port_mask = udp_mask->hdr.dst_port;
> > +		filter->src_port_mask = udp_mask->hdr.src_port;
> > +
> > +		udp_spec = (const struct rte_flow_item_udp *)item->spec;
> > +		filter->dst_port = udp_spec->hdr.dst_port;
> > +		filter->src_port = udp_spec->hdr.src_port;
> > +	}
> > +
> > +	/* check if the next not void item is END */
> > +	index++;
> > +	NEXT_ITEM_OF_PATTERN(item, pattern, index);
> > +	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ITEM,
> > +			item, "Not supported by ntuple filter");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/* parse action */
> > +	index = 0;
> > +
> > +	if (!actions) {
> 
> Although there is no harm, I would do the input check at the beginning of
> the function, to avoid extra work if we hit this case.

I will do as you suggest in v3.
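
A sketch of what checking all three inputs up front could look like (an
assumption about v3, not its actual diff):

	if (!pattern) {
		rte_flow_error_set(error, EINVAL,
				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
				   NULL, "NULL pattern.");
		return -rte_errno;
	}
	if (!actions) {
		rte_flow_error_set(error, EINVAL,
				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
				   NULL, "NULL action.");
		return -rte_errno;
	}
	if (!attr) {
		rte_flow_error_set(error, EINVAL,
				   RTE_FLOW_ERROR_TYPE_ATTR,
				   NULL, "NULL attribute.");
		return -rte_errno;
	}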

> 
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
> > +				   NULL, "NULL action.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/**
> > +	 * n-tuple only supports forwarding,
> > +	 * check if the first not void action is QUEUE.
> > +	 */
> > +	NEXT_ITEM_OF_ACTION(act, actions, index);
> > +	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ACTION,
> > +			item, "Not supported action.");
> > +		return -rte_errno;
> > +	}
> > +	filter->queue =
> > +		((const struct rte_flow_action_queue *)act->conf)->index;
> > +
> > +	/* check if the next not void item is END */
> > +	index++;
> > +	NEXT_ITEM_OF_ACTION(act, actions, index);
> > +	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ACTION,
> > +			act, "Not supported action.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/* parse attr */
> > +	/* must be input direction */
> 
> It may be a good idea to check if attr is NULL.

I will do as you suggest in v3, and add it at the beginning of the function.
> 
> > +	if (!attr->ingress) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
> > +				   attr, "Only support ingress.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	/* not supported */
> > +	if (attr->egress) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
> > +				   attr, "Not support egress.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	if (attr->priority > 0xFFFF) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
> > +				   attr, "Error priority.");
> > +		return -rte_errno;
> > +	}
> > +	filter->priority = (uint16_t)attr->priority;
> 
> Should we check attr->group? Do we support groups?

No, we do not.
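
If a group check were ever added (nothing here commits to it; this is only a
sketch), it could mirror the other attr checks:

	/* ixgbe flow rules do not support groups, reject nonzero ones */
	if (attr->group) {
		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
		rte_flow_error_set(error, EINVAL,
				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
				   attr, "Not supported group.");
		return -rte_errno;
	}
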
> 
> > +
> > +	return 0;
> > +}
> > +
> 
> <...>

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 12/18] net/ixgbe: parse ethertype filter
  2017-01-06 17:11   ` Ferruh Yigit
@ 2017-01-11  8:54     ` Zhao1, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-11  8:54 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 1:11 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 12/18] net/ixgbe: parse ethertype filter
> 
> On 12/30/2016 7:53 AM, Wei Zhao wrote:
> > check if the rule is an ethertype rule, and get the ethertype info.
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:add new error set function
> > ---
> 
> <...>
> 
> > +static int
> > +ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
> > +			     const struct rte_flow_item pattern[],
> > +			     const struct rte_flow_action actions[],
> > +			     struct rte_eth_ethertype_filter *filter,
> > +			     struct rte_flow_error *error) {
> > +	int ret;
> > +
> > +	ret = cons_parse_ethertype_filter(attr, pattern,
> > +					actions, filter, error);
> > +
> > +	if (ret)
> > +		return ret;
> > +
> > +	/* Ixgbe doesn't support MAC address. */
> > +	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
> > +		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
> 
> Is memset required for error cases? If so, do the other error cases also
> require it?


Yes, OK, I will do as you suggest.

> 
> > +		rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_ITEM,
> > +				NULL, "Not supported by ethertype filter");
> > +		return -rte_errno;
> > +	}
> > +
> > +	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
> > +		return -rte_errno;
> > +
> > +	if (filter->ether_type == ETHER_TYPE_IPv4 ||
> > +		filter->ether_type == ETHER_TYPE_IPv6) {
> > +		PMD_DRV_LOG(ERR, "unsupported ether_type(0x%04x) in"
> > +			" ethertype filter.", filter->ether_type);
> 
> Not sure about this log here, especially at the ERR level.
> This function returns an error, and the public API will return an error; if
> we want to notify the user with a log, should the app do this, or should
> the library (here) do it? More comments welcome.
> 
> btw, should rte_flow_error_set() used here (and below) ?

Yes, I will change it to use rte_flow_error_set().
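
A sketch of that change for the ether_type check (assumed shape, not the
exact v3 hunk):

	if (filter->ether_type == ETHER_TYPE_IPv4 ||
	    filter->ether_type == ETHER_TYPE_IPv6) {
		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
		rte_flow_error_set(error, EINVAL,
				   RTE_FLOW_ERROR_TYPE_ITEM,
				   NULL, "Not supported by ethertype filter");
		return -rte_errno;
	}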

> 
> > +		return -rte_errno;
> > +	}
> > +
> > +	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
> 
> Isn't this a duplicate of the check above?
> 
> > +		PMD_DRV_LOG(ERR, "mac compare is unsupported.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
> 
> Just to double check, isn't the drop action supported by ether filters?

Yes, ixgbe does not support it, but i40e does.
> 
> > +		PMD_DRV_LOG(ERR, "drop option is unsupported.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> <...>

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter
  2017-01-06 17:19   ` Ferruh Yigit
  2017-01-10  5:46     ` Zhao1, Wei
@ 2017-01-11  9:11     ` Zhao1, Wei
  1 sibling, 0 replies; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-11  9:11 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo

Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Saturday, January 7, 2017 1:20 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter
> 
> On 12/30/2016 7:53 AM, Wei Zhao wrote:
> > check if the rule is a TCP SYN rule, and get the SYN info.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > ---
> >
> > v2:add new error set function
> > ---
> 
> <...>
> 
> > +static int
> > +ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
> > +			     const struct rte_flow_item pattern[],
> > +			     const struct rte_flow_action actions[],
> > +			     struct rte_eth_syn_filter *filter,
> > +			     struct rte_flow_error *error) {
> > +	int ret;
> > +
> > +	ret = cons_parse_syn_filter(attr, pattern,
> > +					actions, filter, error);
> > +
> 
> > +	if (ret)
> > +		return ret;
> > +
> > +	return 0;
> 
> "return ret;" will do the same.
> Or remove ret completely perhaps.

"return 0" is the same as "return ret;" here, so this may not need to change.
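
For what it is worth, the whole wrapper could collapse to one statement; a
sketch with unchanged behavior:

static int
ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
		       const struct rte_flow_item pattern[],
		       const struct rte_flow_action actions[],
		       struct rte_eth_syn_filter *filter,
		       struct rte_flow_error *error)
{
	return cons_parse_syn_filter(attr, pattern, actions, filter, error);
}
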
> 
> > +}
> > +
> <...>

^ permalink raw reply	[flat|nested] 149+ messages in thread

* [PATCH v3 00/18] net/ixgbe: Consistent filter API
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
  2017-01-03 14:33   ` Dai, Wei
  2017-01-06 16:28   ` Ferruh Yigit
@ 2017-01-12  8:12   ` Wei Zhao
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  3 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:12 UTC (permalink / raw)
  To: dev

The patches mainly finish following functions:
1) Store and restore all kinds of filters.
2) Parse all kinds of filters.
3) Add flow validate function.
4) Add flow create function.
5) Add flow destroy function.
6) Add flow flush function.

v2 changes:
 fix git log error
 Modify some function call relationship
 Change return value type of all parse flow functions
 Update error info for all flow ops
 Add ixgbe_filterlist_flush to flush flows and rules created

v3 changes:
 add new file ixgbe_flow.c to store generic API parser related functions
 add more comments about pattern and action rules
 add attr check in parser functions
 change struct name ixgbe_flow to rte_flow
 change SYN to TCP SYN
 change to use memset to initialize struct ixgbe_filter_info
 break down the filter uninit process into 3 independent functions in eth_ixgbe_dev_uninit()
 change struct rte_flow_item_nvgre definition
 change struct rte_flow_item_e_tag definition
 fix one bug in function ixgbe_dev_filter_ctrl
 add goto in function ixgbe_flow_create
 delete some useless initialization 
 eliminate some git log check warning

zhao wei (18):
  net/ixgbe: store TCP SYN filter
  net/ixgbe: store flow director filter
  net/ixgbe: store L2 tunnel filter
  net/ixgbe: restore n-tuple filter Add support for restoring n-tuple
    filter in SW.
  net/ixgbe: restore ether type filter
  net/ixgbe: restore TCP SYN filter
  net/ixgbe: restore flow director filter
  net/ixgbe: restore L2 tunnel filter
  net/ixgbe: store and restore L2 tunnel configuration
  net/ixgbe: flush all the filters
  net/ixgbe: parse n-tuple filter
  net/ixgbe: parse ethertype filter
  net/ixgbe: parse TCP SYN filter     check if the rule is a TCP SYN
    rule, and get the SYN info.
  net/ixgbe: parse L2 tunnel filter     check if the rule is a L2 tunnel
    rule, and get the L2 tunnel info.
  net/ixgbe: parse flow director filter
  net/ixgbe: create consistent filter
  net/ixgbe: destroy consistent filter
  net/ixgbe: flush all the filter list

 drivers/net/ixgbe/Makefile       |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  667 +++++++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  203 ++-
 drivers/net/ixgbe/ixgbe_fdir.c   |  407 ++++--
 drivers/net/ixgbe/ixgbe_flow.c   | 2811 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
 lib/librte_ether/rte_flow.h      |   48 +
 7 files changed, 3953 insertions(+), 211 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

-- 
2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread

* [PATCH v3 01/18] net/ixgbe: store TCP SYN filter
  2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
                     ` (2 preceding siblings ...)
  2017-01-12  8:12   ` [PATCH v3 00/18] net/ixgbe: Consistent filter API Wei Zhao
@ 2017-01-12  8:13   ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 02/18] net/ixgbe: store flow director filter Wei Zhao
                       ` (18 more replies)
  3 siblings, 19 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 +++++++++++++-----
 drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b7ddd4f..719eddd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1272,10 +1272,12 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* enable support intr */
 	ixgbe_enable_intr(eth_dev);
 
+	/* initialize filter info */
+	memset(filter_info, 0,
+	       sizeof(struct ixgbe_filter_info));
+
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
-	memset(filter_info->fivetuple_mask, 0,
-	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
 
 	return 0;
 }
@@ -5603,15 +5605,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			bool add)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t syn_info;
 	uint32_t synqf;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
 
-	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+	syn_info = filter_info->syn_info;
 
 	if (add) {
-		if (synqf & IXGBE_SYN_FILTER_ENABLE)
+		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
 			return -EINVAL;
 		synqf = (uint32_t)(((filter->queue << IXGBE_SYN_FILTER_QUEUE_SHIFT) &
 			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
@@ -5621,10 +5626,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 		else
 			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
 	} else {
-		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
+		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
 			return -ENOENT;
 		synqf &= ~(IXGBE_SYN_FILTER_QUEUE | IXGBE_SYN_FILTER_ENABLE);
 	}
+
+	filter_info->syn_info = synqf;
 	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
 	IXGBE_WRITE_FLUSH(hw);
 	return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 69b276f..90a89ec 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -262,6 +262,8 @@ struct ixgbe_filter_info {
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
+	/* store the SYN filter info */
+	uint32_t syn_info;
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
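
The point of caching syn_info in SW is that it can be replayed after a HW
reset. A minimal sketch of that restore step, reusing the macros from the
diff above (the later "restore TCP SYN filter" patch in this series is the
real implementation):

/* re-program SYNQF from the SW copy after a device reset */
static inline void
ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
{
	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	struct ixgbe_filter_info *filter_info =
		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
	uint32_t synqf = filter_info->syn_info;

	if (synqf & IXGBE_SYN_FILTER_ENABLE) {
		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
		IXGBE_WRITE_FLUSH(hw);
	}
}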

* [PATCH v3 02/18] net/ixgbe: store flow director filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
                       ` (17 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  64 ++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
 drivers/net/ixgbe/ixgbe_fdir.c   | 105 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 185 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 719eddd..9796c4f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -60,6 +60,7 @@
 #include <rte_malloc.h>
 #include <rte_random.h>
 #include <rte_dev.h>
+#include <rte_hash_crc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -165,6 +166,8 @@ enum ixgbevf_xcast_modes {
 
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1279,6 +1282,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
 
+	/* initialize flow director filter list & hash */
+	ixgbe_fdir_filter_init(eth_dev);
+
 	return 0;
 }
 
@@ -1320,9 +1326,67 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->hash_mac_addrs);
 	eth_dev->data->hash_mac_addrs = NULL;
 
+	/* remove all the fdir filters & hash */
+	ixgbe_fdir_filter_uninit(eth_dev);
+
 	return 0;
 }
 
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	if (fdir_info->hash_map)
+		rte_free(fdir_info->hash_map);
+	if (fdir_info->hash_handle)
+		rte_hash_free(fdir_info->hash_handle);
+
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+
+	return 0;
+}
+
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	char fdir_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters fdir_hash_params = {
+		.name = fdir_hash_name,
+		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
+		.key_len = sizeof(union ixgbe_atr_input),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&fdir_info->fdir_list);
+	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
+		 "fdir_%s", eth_dev->data->name);
+	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
+	if (!fdir_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
+		return -EINVAL;
+	}
+	fdir_info->hash_map = rte_zmalloc("ixgbe",
+					  sizeof(struct ixgbe_fdir_filter *) *
+					  IXGBE_MAX_FDIR_FILTER_NUM,
+					  0);
+	if (!fdir_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for fdir hash map!");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
 /*
  * Negotiate mailbox API version with the PF.
  * After reset API version is always set to the basic one (ixgbe_mbox_api_10).
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 90a89ec..300542e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
 #include "base/ixgbe_dcb_82598.h"
 #include "ixgbe_bypass.h"
 #include <rte_time.h>
+#include <rte_hash.h>
 
 /* need update link, bit flag */
 #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -130,10 +131,11 @@
 #define IXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
+#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+
 /*
  * Information about the fdir mode.
  */
-
 struct ixgbe_hw_fdir_mask {
 	uint16_t vlan_tci_mask;
 	uint32_t src_ipv4_mask;
@@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
 	uint8_t  tunnel_type_mask;
 };
 
+struct ixgbe_fdir_filter {
+	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t fdirhash; /* hash value for fdir */
+	uint8_t queue; /* assigned rx queue */
+};
+
+/* list of fdir filters */
+TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
 	uint64_t    remove;
 	uint64_t    f_add;
 	uint64_t    f_remove;
+	struct ixgbe_fdir_filter_list fdir_list; /* filter list*/
+	/* store the pointers of the filters, index is the hash value. */
+	struct ixgbe_fdir_filter **hash_map;
+	struct rte_hash *hash_handle; /* cuckoo hash handler */
 };
 
 /* structure for interrupt relative data */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 4b81ee3..8bf5705 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -43,6 +43,7 @@
 #include <rte_pci.h>
 #include <rte_ether.h>
 #include <rte_ethdev.h>
+#include <rte_malloc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash)
 
 }
 
+static inline struct ixgbe_fdir_filter *
+ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return fdir_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 struct ixgbe_fdir_filter *fdir_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(fdir_info->hash_handle,
+			       &fdir_filter->ixgbe_fdir);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert fdir filter to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	fdir_info->hash_map[ret] = fdir_filter;
+
+	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	ret = rte_hash_del_key(fdir_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
+		return ret;
+	}
+
+	fdir_filter = fdir_info->hash_map[ret];
+	fdir_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_free(fdir_filter);
+
+	return 0;
+}
+
 /*
  * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
  * @dev: pointer to the structure rte_eth_dev
@@ -1098,6 +1158,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	struct ixgbe_fdir_filter *node;
+	bool add_node = FALSE;
 
 	if (fdir_mode == RTE_FDIR_MODE_NONE)
 		return -ENOTSUP;
@@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
+		err = ixgbe_remove_fdir_filter(info, &input);
+		if (err < 0)
+			return err;
+
 		err = fdir_erase_filter_82599(hw, fdirhash);
 		if (err < 0)
 			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
@@ -1172,6 +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	else
 		return -EINVAL;
 
+	node = ixgbe_fdir_filter_lookup(info, &input);
+	if (node) {
+		if (update) {
+			node->fdirflags = fdircmd_flags;
+			node->fdirhash = fdirhash;
+			node->queue = queue;
+		} else {
+			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
+			return -EINVAL;
+		}
+	} else {
+		add_node = TRUE;
+		node = rte_zmalloc("ixgbe_fdir",
+				   sizeof(struct ixgbe_fdir_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
+		(void)rte_memcpy(&node->ixgbe_fdir,
+				 &input,
+				 sizeof(union ixgbe_atr_input));
+		node->fdirflags = fdircmd_flags;
+		node->fdirhash = fdirhash;
+		node->queue = queue;
+
+		err = ixgbe_insert_fdir_filter(info, node);
+		if (err < 0) {
+			rte_free(node);
+			return err;
+		}
+	}
+
 	if (is_perfect) {
 		err = fdir_write_perfect_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash,
@@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		err = fdir_add_signature_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash);
 	}
-	if (err < 0)
+	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
-	else
+
+		if (add_node)
+			(void)ixgbe_remove_fdir_filter(info, &input);
+	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
+	}
 
 	return err;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
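
The stored list is what makes a later fdir restore possible; conceptually it
is just a walk over fdir_list (a sketch only, the "restore flow director
filter" patch later in the series is the real implementation):

	struct ixgbe_hw_fdir_info *fdir_info =
		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
	struct ixgbe_fdir_filter *node;

	TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
		/* node->ixgbe_fdir (the key) plus node->queue,
		 * node->fdirflags and node->fdirhash carry everything
		 * needed to re-write this entry's filter registers
		 */
	}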

* [PATCH v3 03/18] net/ixgbe: store L2 tunnel filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 02/18] net/ixgbe: store flow director filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 04/18] net/ixgbe: restore n-tuple filter Add support for restoring n-tuple filter in SW Wei Zhao
                       ` (16 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  24 ++++++
 2 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 9796c4f..e63b635 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -168,6 +168,8 @@ static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1285,6 +1287,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow director filter list & hash */
 	ixgbe_fdir_filter_init(eth_dev);
 
+	/* initialize l2 tunnel filter list & hash */
+	ixgbe_l2_tn_filter_init(eth_dev);
 	return 0;
 }
 
@@ -1329,6 +1333,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the fdir filters & hash */
 	ixgbe_fdir_filter_uninit(eth_dev);
 
+	/* remove all the L2 tunnel filters & hash */
+	ixgbe_l2_tn_filter_uninit(eth_dev);
+
 	return 0;
 }
 
@@ -1353,6 +1360,27 @@ static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	if (l2_tn_info->hash_map)
+		rte_free(l2_tn_info->hash_map);
+	if (l2_tn_info->hash_handle)
+		rte_hash_free(l2_tn_info->hash_handle);
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		TAILQ_REMOVE(&l2_tn_info->l2_tn_list,
+			     l2_tn_filter,
+			     entries);
+		rte_free(l2_tn_filter);
+	}
+
+	return 0;
+}
+
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 {
 	struct ixgbe_hw_fdir_info *fdir_info =
@@ -1384,6 +1412,40 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	return 0;
+}
+
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	char l2_tn_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters l2_tn_hash_params = {
+		.name = l2_tn_hash_name,
+		.entries = IXGBE_MAX_L2_TN_FILTER_NUM,
+		.key_len = sizeof(struct ixgbe_l2_tn_key),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&l2_tn_info->l2_tn_list);
+	snprintf(l2_tn_hash_name, RTE_HASH_NAMESIZE,
+		 "l2_tn_%s", eth_dev->data->name);
+	l2_tn_info->hash_handle = rte_hash_create(&l2_tn_hash_params);
+	if (!l2_tn_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create L2 TN hash table!");
+		return -EINVAL;
+	}
+	l2_tn_info->hash_map = rte_zmalloc("ixgbe",
+				   sizeof(struct ixgbe_l2_tn_filter *) *
+				   IXGBE_MAX_L2_TN_FILTER_NUM,
+				   0);
+	if (!l2_tn_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			"Failed to allocate memory for L2 TN hash map!");
+		return -ENOMEM;
+	}
 
 	return 0;
 }
@@ -7210,12 +7272,104 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+static inline struct ixgbe_l2_tn_filter *
+ixgbe_l2_tn_filter_lookup(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(l2_tn_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return l2_tn_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_filter *l2_tn_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(l2_tn_info->hash_handle,
+			       &l2_tn_filter->key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert L2 tunnel filter"
+			    " to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_info->hash_map[ret] = l2_tn_filter;
+
+	TAILQ_INSERT_TAIL(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	ret = rte_hash_del_key(l2_tn_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "No such L2 tunnel filter to delete %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_filter = l2_tn_info->hash_map[ret];
+	l2_tn_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+	rte_free(l2_tn_filter);
+
+	return 0;
+}
+
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+	struct ixgbe_l2_tn_filter *node;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+
+	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+
+	if (node) {
+		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
+		return -EINVAL;
+	}
+
+	node = rte_zmalloc("ixgbe_l2_tn",
+			   sizeof(struct ixgbe_l2_tn_filter),
+			   0);
+	if (!node)
+		return -ENOMEM;
+
+	(void)rte_memcpy(&node->key,
+			 &key,
+			 sizeof(struct ixgbe_l2_tn_key));
+	node->pool = l2_tunnel->pool;
+	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+	if (ret < 0) {
+		rte_free(node);
+		return ret;
+	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7227,6 +7381,9 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
+	if (ret < 0)
+		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+
 	return ret;
 }
 
@@ -7235,7 +7392,16 @@ static int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+	ret = ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+	if (ret < 0)
+		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7261,7 +7427,7 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 				  enum rte_filter_op filter_op,
 				  void *arg)
 {
-	int ret = 0;
+	int ret;
 
 	if (filter_op == RTE_ETH_FILTER_NOP)
 		return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 300542e..b2cf789 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -132,6 +132,7 @@
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
 #define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+#define IXGBE_MAX_L2_TN_FILTER_NUM      128
 
 /*
  * Information about the fdir mode.
@@ -283,6 +284,25 @@ struct ixgbe_filter_info {
 	uint32_t syn_info;
 };
 
+struct ixgbe_l2_tn_key {
+	enum rte_eth_tunnel_type          l2_tn_type;
+	uint32_t                          tn_id;
+};
+
+struct ixgbe_l2_tn_filter {
+	TAILQ_ENTRY(ixgbe_l2_tn_filter)    entries;
+	struct ixgbe_l2_tn_key             key;
+	uint32_t                           pool;
+};
+
+TAILQ_HEAD(ixgbe_l2_tn_filter_list, ixgbe_l2_tn_filter);
+
+struct ixgbe_l2_tn_info {
+	struct ixgbe_l2_tn_filter_list      l2_tn_list;
+	struct ixgbe_l2_tn_filter         **hash_map;
+	struct rte_hash                    *hash_handle;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -302,6 +322,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_NIC_BYPASS */
 	struct ixgbe_filter_info    filter;
+	struct ixgbe_l2_tn_info     l2_tn;
 
 	bool rx_bulk_alloc_allowed;
 	bool rx_vec_allowed;
@@ -349,6 +370,9 @@ struct ixgbe_adapter {
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
+#define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
+	(&((struct ixgbe_adapter *)adapter)->l2_tn)
+
 /*
  * RX/TX function prototypes
  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
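
As with the other filters, the SW list enables a restore pass. A rough
sketch (assumed shape; the real "restore L2 tunnel filter" patch must
re-program the HW without re-inserting entries into the SW list):

	struct ixgbe_l2_tn_info *l2_tn_info =
		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
	struct ixgbe_l2_tn_filter *node;
	struct rte_eth_l2_tunnel_conf l2_tn_conf;

	TAILQ_FOREACH(node, &l2_tn_info->l2_tn_list, entries) {
		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
		l2_tn_conf.tunnel_id = node->key.tn_id;
		l2_tn_conf.pool = node->pool;
		/* re-apply l2_tn_conf to HW, e.g. via the E-tag add path */
	}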

* [PATCH v3 04/18] net/ixgbe: restore n-tuple filter Add support for restoring n-tuple filter in SW.
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 02/18] net/ixgbe: store flow director filter Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 4/9] " Wei Zhao
                       ` (15 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 140 +++++++++++++++++++++++++--------------
 1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index e63b635..1630e65 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -170,6 +170,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -387,6 +388,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
+static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1336,6 +1338,27 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the L2 tunnel filters & hash */
 	ixgbe_l2_tn_filter_uninit(eth_dev);
 
+	/* Remove all ntuple filters of the device */
+	ixgbe_ntuple_filter_uninit(eth_dev);
+
+	return 0;
+}
+
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list))) {
+		TAILQ_REMOVE(&filter_info->fivetuple_list,
+			     p_5tuple,
+			     entries);
+		rte_free(p_5tuple);
+	}
+	memset(filter_info->fivetuple_mask, 0,
+	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
+
 	return 0;
 }
 
@@ -2504,6 +2527,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_filter_restore(dev);
 
 	return 0;
 
@@ -2524,9 +2548,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
-	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
-	struct ixgbe_5tuple_filter *p_5tuple, *p_5tuple_next;
 	struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	int vf;
@@ -2564,17 +2585,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_ixgbe_dev_atomic_write_link_status(dev, &link);
 
-	/* Remove all ntuple filters of the device */
-	for (p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list);
-	     p_5tuple != NULL; p_5tuple = p_5tuple_next) {
-		p_5tuple_next = TAILQ_NEXT(p_5tuple, entries);
-		TAILQ_REMOVE(&filter_info->fivetuple_list,
-			     p_5tuple, entries);
-		rte_free(p_5tuple);
-	}
-	memset(filter_info->fivetuple_mask, 0,
-		sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-
 	if (!rte_intr_allow_others(intr_handle))
 		/* resume to the default handler */
 		rte_intr_callback_register(intr_handle,
@@ -5836,6 +5846,52 @@ convert_protocol_type(uint8_t protocol_value)
 		return IXGBE_FILTER_PROTOCOL_NONE;
 }
 
+/* inject a 5-tuple filter to HW */
+static inline void
+ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+			   struct ixgbe_5tuple_filter *filter)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+	uint32_t ftqf, sdpqf;
+	uint32_t l34timir = 0;
+	uint8_t mask = 0xff;
+
+	i = filter->index;
+
+	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
+				IXGBE_SDPQF_DSTPORT_SHIFT);
+	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
+
+	ftqf = (uint32_t)(filter->filter_info.proto &
+		IXGBE_FTQF_PROTOCOL_MASK);
+	ftqf |= (uint32_t)((filter->filter_info.priority &
+		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
+	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
+		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= IXGBE_FTQF_DEST_PORT_MASK;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
+	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
+	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
+	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
+
+	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
+	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+
+	l34timir |= IXGBE_L34T_IMIR_RESERVE;
+	l34timir |= (uint32_t)(filter->queue <<
+				IXGBE_L34T_IMIR_QUEUE_SHIFT);
+	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
+}
+
 /*
  * add a 5tuple filter
  *
@@ -5853,13 +5909,9 @@ static int
 ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	int i, idx, shift;
-	uint32_t ftqf, sdpqf;
-	uint32_t l34timir = 0;
-	uint8_t mask = 0xff;
 
 	/*
 	 * look for an unused 5tuple filter index,
@@ -5882,37 +5934,8 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
-				IXGBE_SDPQF_DSTPORT_SHIFT);
-	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
-
-	ftqf = (uint32_t)(filter->filter_info.proto &
-		IXGBE_FTQF_PROTOCOL_MASK);
-	ftqf |= (uint32_t)((filter->filter_info.priority &
-		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
-	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
-		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
-	if (filter->filter_info.dst_ip_mask == 0)
-		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
-	if (filter->filter_info.src_port_mask == 0)
-		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
-	if (filter->filter_info.dst_port_mask == 0)
-		mask &= IXGBE_FTQF_DEST_PORT_MASK;
-	if (filter->filter_info.proto_mask == 0)
-		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
-	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
-	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
-	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
-
-	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
-	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+	ixgbe_inject_5tuple_filter(dev, filter);
 
-	l34timir |= IXGBE_L34T_IMIR_RESERVE;
-	l34timir |= (uint32_t)(filter->queue <<
-				IXGBE_L34T_IMIR_QUEUE_SHIFT);
-	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
 	return 0;
 }
 
@@ -7925,6 +7948,27 @@ ixgbevf_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle,
 	ixgbevf_dev_interrupt_action(dev);
 }
 
+/* restore n-tuple filter */
+static inline void
+ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *node;
+
+	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
+		ixgbe_inject_5tuple_filter(dev, node);
+	}
+}
+
+static int
+ixgbe_filter_restore(struct rte_eth_dev *dev)
+{
+	ixgbe_ntuple_filter_restore(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 4/9] net/ixgbe: restore n-tuple filter Add support for restoring n-tuple filter in SW.
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (2 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 04/18] net/ixgbe: restore n-tuple filter Add support for restoring n-tuple filter in SW Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 05/18] net/ixgbe: restore ether type filter Wei Zhao
                       ` (14 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 140 +++++++++++++++++++++++++--------------
 1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a1c9335..bdb5314 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -170,6 +170,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -387,6 +388,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
+static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1335,6 +1337,27 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the L2 tunnel filters & hash */
 	ixgbe_l2_tn_filter_uninit(eth_dev);
 
+	/* Remove all ntuple filters of the device */
+	ixgbe_ntuple_filter_uninit(eth_dev);
+
+	return 0;
+}
+
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list))) {
+		TAILQ_REMOVE(&filter_info->fivetuple_list,
+			     p_5tuple,
+			     entries);
+		rte_free(p_5tuple);
+	}
+	memset(filter_info->fivetuple_mask, 0,
+	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
+
 	return 0;
 }
 
@@ -2503,6 +2526,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_filter_restore(dev);
 
 	return 0;
 
@@ -2523,9 +2547,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
-	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
-	struct ixgbe_5tuple_filter *p_5tuple, *p_5tuple_next;
 	struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	int vf;
@@ -2563,17 +2584,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_ixgbe_dev_atomic_write_link_status(dev, &link);
 
-	/* Remove all ntuple filters of the device */
-	for (p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list);
-	     p_5tuple != NULL; p_5tuple = p_5tuple_next) {
-		p_5tuple_next = TAILQ_NEXT(p_5tuple, entries);
-		TAILQ_REMOVE(&filter_info->fivetuple_list,
-			     p_5tuple, entries);
-		rte_free(p_5tuple);
-	}
-	memset(filter_info->fivetuple_mask, 0,
-		sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-
 	if (!rte_intr_allow_others(intr_handle))
 		/* resume to the default handler */
 		rte_intr_callback_register(intr_handle,
@@ -5835,6 +5845,52 @@ convert_protocol_type(uint8_t protocol_value)
 		return IXGBE_FILTER_PROTOCOL_NONE;
 }
 
+/* inject a 5-tuple filter to HW */
+static inline void
+ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+			   struct ixgbe_5tuple_filter *filter)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+	uint32_t ftqf, sdpqf;
+	uint32_t l34timir = 0;
+	uint8_t mask = 0xff;
+
+	i = filter->index;
+
+	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
+				IXGBE_SDPQF_DSTPORT_SHIFT);
+	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
+
+	ftqf = (uint32_t)(filter->filter_info.proto &
+		IXGBE_FTQF_PROTOCOL_MASK);
+	ftqf |= (uint32_t)((filter->filter_info.priority &
+		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
+	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
+		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= IXGBE_FTQF_DEST_PORT_MASK;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
+	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
+	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
+	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
+
+	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
+	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+
+	l34timir |= IXGBE_L34T_IMIR_RESERVE;
+	l34timir |= (uint32_t)(filter->queue <<
+				IXGBE_L34T_IMIR_QUEUE_SHIFT);
+	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
+}
+
 /*
  * add a 5tuple filter
  *
@@ -5852,13 +5908,9 @@ static int
 ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	int i, idx, shift;
-	uint32_t ftqf, sdpqf;
-	uint32_t l34timir = 0;
-	uint8_t mask = 0xff;
 
 	/*
 	 * look for an unused 5tuple filter index,
@@ -5881,37 +5933,8 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
-				IXGBE_SDPQF_DSTPORT_SHIFT);
-	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
-
-	ftqf = (uint32_t)(filter->filter_info.proto &
-		IXGBE_FTQF_PROTOCOL_MASK);
-	ftqf |= (uint32_t)((filter->filter_info.priority &
-		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
-	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
-		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
-	if (filter->filter_info.dst_ip_mask == 0)
-		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
-	if (filter->filter_info.src_port_mask == 0)
-		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
-	if (filter->filter_info.dst_port_mask == 0)
-		mask &= IXGBE_FTQF_DEST_PORT_MASK;
-	if (filter->filter_info.proto_mask == 0)
-		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
-	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
-	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
-	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
-
-	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
-	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+	ixgbe_inject_5tuple_filter(dev, filter);
 
-	l34timir |= IXGBE_L34T_IMIR_RESERVE;
-	l34timir |= (uint32_t)(filter->queue <<
-				IXGBE_L34T_IMIR_QUEUE_SHIFT);
-	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
 	return 0;
 }
 
@@ -7924,6 +7947,27 @@ ixgbevf_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle,
 	ixgbevf_dev_interrupt_action(dev);
 }
 
+/* restore n-tuple filter */
+static inline void
+ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *node;
+
+	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
+		ixgbe_inject_5tuple_filter(dev, node);
+	}
+}
+
+static int
+ixgbe_filter_restore(struct rte_eth_dev *dev)
+{
+	ixgbe_ntuple_filter_restore(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 05/18] net/ixgbe: restore ether type filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (3 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 4/9] " Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
                       ` (13 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring ether type filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 79 ++++++++++++++++------------------------
 drivers/net/ixgbe/ixgbe_ethdev.h | 57 ++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_pf.c     | 25 ++++++++-----
 3 files changed, 104 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1630e65..6c46354 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6259,47 +6259,6 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static inline int
-ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (filter_info->ethertype_filters[i] == ethertype &&
-		    (filter_info->ethertype_mask & (1 << i)))
-			return i;
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] = ethertype;
-			return i;
-		}
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
-			uint8_t idx)
-{
-	if (idx >= IXGBE_MAX_ETQF_FILTERS)
-		return -1;
-	filter_info->ethertype_mask &= ~(1 << idx);
-	filter_info->ethertype_filters[idx] = 0;
-	return idx;
-}
-
 static int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
@@ -6311,6 +6270,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
@@ -6344,18 +6304,22 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	}
 
 	if (add) {
-		ret = ixgbe_ethertype_filter_insert(filter_info,
-			filter->ether_type);
-		if (ret < 0) {
-			PMD_DRV_LOG(ERR, "ethertype filters are full.");
-			return -ENOSYS;
-		}
 		etqf = IXGBE_ETQF_FILTER_EN;
 		etqf |= (uint32_t)filter->ether_type;
 		etqs |= (uint32_t)((filter->queue <<
 				    IXGBE_ETQS_RX_QUEUE_SHIFT) &
 				    IXGBE_ETQS_RX_QUEUE);
 		etqs |= IXGBE_ETQS_QUEUE_EN;
+
+		ethertype_filter.ethertype = filter->ether_type;
+		ethertype_filter.etqf = etqf;
+		ethertype_filter.etqs = etqs;
+		ret = ixgbe_ethertype_filter_insert(filter_info,
+						    &ethertype_filter);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "ethertype filters are full.");
+			return -ENOSPC;
+		}
 	} else {
 		ret = ixgbe_ethertype_filter_remove(filter_info, (uint8_t)ret);
 		if (ret < 0)
@@ -7961,10 +7925,31 @@ ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore ethernet type filter */
+static inline void
+ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i)) {
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i),
+					filter_info->ethertype_filters[i].etqf);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i),
+					filter_info->ethertype_filters[i].etqs);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
+	ixgbe_ethertype_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b2cf789..36aae01 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -270,13 +270,19 @@ struct ixgbe_5tuple_filter {
 	(RTE_ALIGN(IXGBE_MAX_FTQF_FILTERS, (sizeof(uint32_t) * NBBY)) / \
 	 (sizeof(uint32_t) * NBBY))
 
+struct ixgbe_ethertype_filter {
+	uint16_t ethertype;
+	uint32_t etqf;
+	uint32_t etqs;
+};
+
 /*
  * Structure to store filters' info.
  */
 struct ixgbe_filter_info {
 	uint8_t ethertype_mask;  /* Bit mask for every used ethertype filter */
 	/* store used ethertype filters*/
-	uint16_t ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
+	struct ixgbe_ethertype_filter ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
@@ -491,4 +497,53 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+
+static inline int
+ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
+			      uint16_t ethertype)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_filters[i].ethertype == ethertype &&
+		    (filter_info->ethertype_mask & (1 << i)))
+			return i;
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
+			      struct ixgbe_ethertype_filter *ethertype_filter)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (!(filter_info->ethertype_mask & (1 << i))) {
+			filter_info->ethertype_mask |= 1 << i;
+			filter_info->ethertype_filters[i].ethertype =
+				ethertype_filter->ethertype;
+			filter_info->ethertype_filters[i].etqf =
+				ethertype_filter->etqf;
+			filter_info->ethertype_filters[i].etqs =
+				ethertype_filter->etqs;
+			return i;
+		}
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
+			      uint8_t idx)
+{
+	if (idx >= IXGBE_MAX_ETQF_FILTERS)
+		return -1;
+	filter_info->ethertype_mask &= ~(1 << idx);
+	filter_info->ethertype_filters[idx].ethertype = 0;
+	filter_info->ethertype_filters[idx].etqf = 0;
+	filter_info->ethertype_filters[idx].etqs = 0;
+	return idx;
+}
+
 #endif /* _IXGBE_ETHDEV_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index cb10265..4b33130 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -178,6 +178,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
 	uint16_t vf_num;
 	int i;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (!hw->mac.ops.set_ethertype_anti_spoofing) {
 		RTE_LOG(INFO, PMD, "ether type anti-spoofing is not"
@@ -185,16 +186,22 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		return;
 	}
 
-	/* occupy an entity of ether type filter */
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] =
-				IXGBE_ETHERTYPE_FLOW_CTRL;
-			break;
-		}
+	i = ixgbe_ethertype_filter_lookup(filter_info,
+					  IXGBE_ETHERTYPE_FLOW_CTRL);
+	if (i >= 0) {
+		RTE_LOG(ERR, PMD, "An ether type filter"
+			" entity for flow control already exists!\n");
+		return;
 	}
-	if (i == IXGBE_MAX_ETQF_FILTERS) {
+
+	ethertype_filter.ethertype = IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqf = IXGBE_ETQF_FILTER_EN |
+				IXGBE_ETQF_TX_ANTISPOOF |
+				IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqs = 0;
+	i = ixgbe_ethertype_filter_insert(filter_info,
+					  &ethertype_filter);
+	if (i < 0) {
 		RTE_LOG(ERR, PMD, "Cannot find an unused ether type filter"
 			" entity for flow control.\n");
 		return;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 06/18] net/ixgbe: restore TCP SYN filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (4 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 05/18] net/ixgbe: restore ether type filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 07/18] net/ixgbe: restore flow director filter Wei Zhao
                       ` (12 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring TCP SYN filter in SW.
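
Since an earlier patch in this series caches the SYNQF value in
filter_info->syn_info, restore reduces to a conditional register
write. A compilable sketch of the shadow-register idiom (stand-in
names, not the driver's symbols):

#include <stdint.h>

#define SYN_FILTER_ENABLE 0x00000001 /* stands in for IXGBE_SYN_FILTER_ENABLE */

static uint32_t syn_info;  /* SW shadow of the SYNQF register */
static uint32_t synqf_reg; /* stands in for the real SYNQF register */

/* Replay the cached value only when a SYN filter was actually set. */
static void syn_filter_restore(void)
{
	if (syn_info & SYN_FILTER_ENABLE)
		synqf_reg = syn_info;
}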

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6c46354..6cd5975 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7945,11 +7945,29 @@ ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore SYN filter */
+static inline void
+ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t synqf;
+
+	synqf = filter_info->syn_info;
+
+	if (synqf & IXGBE_SYN_FILTER_ENABLE) {
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
+	ixgbe_syn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 07/18] net/ixgbe: restore flow director filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (5 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
                       ` (11 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring flow director filter in SW.
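
The restore path simply walks the SW list kept by the fdir code and
replays each entry into HW. A compilable sketch of that replay idiom,
where write_hw() is an illustrative stand-in for the two 82599 write
helpers used below:

#include <stdint.h>
#include <sys/queue.h>

struct fdir_rule {
	TAILQ_ENTRY(fdir_rule) entries;
	uint16_t queue;
	/* match fields elided for brevity */
};

TAILQ_HEAD(fdir_list, fdir_rule);

static void fdir_restore(struct fdir_list *list,
			 void (*write_hw)(const struct fdir_rule *))
{
	struct fdir_rule *r;

	TAILQ_FOREACH(r, list, entries)
		write_hw(r);
}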

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_fdir.c   | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6cd5975..f48e30c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7968,6 +7968,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
+	ixgbe_fdir_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 36aae01..4f281e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -497,6 +497,7 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 8bf5705..627c51a 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1479,3 +1479,38 @@ ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 	}
 	return ret;
 }
+
+/* restore flow director filter */
+void
+ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *node;
+	bool is_perfect = FALSE;
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
+	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		is_perfect = TRUE;
+
+	if (is_perfect) {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_write_perfect_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash,
+							      fdir_mode);
+		}
+	} else {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_add_signature_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash);
+		}
+	}
+}
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 08/18] net/ixgbe: restore L2 tunnel filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (6 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 07/18] net/ixgbe: restore flow director filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
                       ` (10 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring L2 tunnel filter in SW.
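
The add function gains a "restore" flag: on restore the entry is
already in the SW list, so only the HW programming step runs. A
compilable sketch of that split, with illustrative stubs in place of
the driver's SW-list and register helpers:

#include <stdbool.h>

struct rule { int id; };

static int sw_insert(const struct rule *r) { (void)r; return 0; }
static void sw_remove(const struct rule *r) { (void)r; }
static int hw_program(const struct rule *r) { (void)r; return 0; }

static int filter_add(const struct rule *r, bool restore)
{
	int ret;

	if (!restore) {
		ret = sw_insert(r); /* track new rules in SW */
		if (ret < 0)
			return ret;
	}

	ret = hw_program(r); /* always (re)program the HW */
	if (!restore && ret < 0)
		sw_remove(r); /* roll back SW state on HW failure */

	return ret;
}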

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 69 ++++++++++++++++++++++++++--------------
 1 file changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index f48e30c..0ad613b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7324,7 +7324,8 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
-			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
@@ -7332,30 +7333,33 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
-	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
-	key.tn_id = l2_tunnel->tunnel_id;
+	if (!restore) {
+		key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+		key.tn_id = l2_tunnel->tunnel_id;
 
-	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+		node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
 
-	if (node) {
-		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
-		return -EINVAL;
-	}
+		if (node) {
+			PMD_DRV_LOG(ERR,
+				    "The L2 tunnel filter already exists!");
+			return -EINVAL;
+		}
 
-	node = rte_zmalloc("ixgbe_l2_tn",
-			   sizeof(struct ixgbe_l2_tn_filter),
-			   0);
-	if (!node)
-		return -ENOMEM;
+		node = rte_zmalloc("ixgbe_l2_tn",
+				   sizeof(struct ixgbe_l2_tn_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
 
-	(void)rte_memcpy(&node->key,
-			 &key,
-			 sizeof(struct ixgbe_l2_tn_key));
-	node->pool = l2_tunnel->pool;
-	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
-	if (ret < 0) {
-		rte_free(node);
-		return ret;
+		(void)rte_memcpy(&node->key,
+				 &key,
+				 sizeof(struct ixgbe_l2_tn_key));
+		node->pool = l2_tunnel->pool;
+		ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+		if (ret < 0) {
+			rte_free(node);
+			return ret;
+		}
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
@@ -7368,7 +7372,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
-	if (ret < 0)
+	if ((!restore) && (ret < 0))
 		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
 
 	return ret;
@@ -7429,7 +7433,8 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_ADD:
 		ret = ixgbe_dev_l2_tunnel_filter_add
 			(dev,
-			 (struct rte_eth_l2_tunnel_conf *)arg);
+			 (struct rte_eth_l2_tunnel_conf *)arg,
+			 FALSE);
 		break;
 	case RTE_ETH_FILTER_DELETE:
 		ret = ixgbe_dev_l2_tunnel_filter_del
@@ -7962,6 +7967,23 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore L2 tunnel filter */
+static inline void
+ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *node;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+
+	TAILQ_FOREACH(node, &l2_tn_info->l2_tn_list, entries) {
+		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = node->key.tn_id;
+		l2_tn_conf.pool           = node->pool;
+		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
@@ -7969,6 +7991,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
 	ixgbe_fdir_filter_restore(dev);
+	ixgbe_l2_tn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 09/18] net/ixgbe: store and restore L2 tunnel configuration
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (7 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 10/18] net/ixgbe: flush all the filters Wei Zhao
                       ` (9 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing and restoring the L2 tunnel configuration in SW.
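
Unlike the filters above, this is plain configuration, so three fields
shadow the E-tag state and dev_start replays them before restoring the
filters. A compilable sketch (stub names are illustrative; note the
ether type needs a 16-bit field, not a bool):

#include <stdbool.h>
#include <stdint.h>

struct l2_tn_cfg {
	bool     e_tag_en;         /* E-tag enabled */
	bool     e_tag_fwd_en;     /* E-tag based forwarding enabled */
	uint16_t e_tag_ether_type; /* ether type for E-tag */
};

static void etag_enable(void) { }
static void etag_fwd_enable(void) { }
static void etag_set_ethertype(uint16_t type) { (void)type; }

static void l2_tunnel_conf_replay(const struct l2_tn_cfg *c)
{
	if (c->e_tag_en)
		etag_enable();     /* stand-in for ixgbe_e_tag_enable() */
	if (c->e_tag_fwd_en)
		etag_fwd_enable(); /* ..._e_tag_forwarding_en_dis(dev, 1) */
	etag_set_ethertype(c->e_tag_ether_type);
}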

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  3 +++
 2 files changed, 39 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0ad613b..3059c64 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -389,6 +389,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
+static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1469,6 +1470,9 @@ static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
 			"Failed to allocate memory for L2 TN hash map!");
 		return -ENOMEM;
 	}
+	l2_tn_info->e_tag_en = FALSE;
+	l2_tn_info->e_tag_fwd_en = FALSE;
+	l2_tn_info->e_tag_ether_type = DEFAULT_ETAG_ETYPE;
 
 	return 0;
 }
@@ -2527,6 +2531,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_l2_tunnel_conf(dev);
 	ixgbe_filter_restore(dev);
 
 	return 0;
@@ -7083,12 +7088,15 @@ ixgbe_dev_l2_tunnel_eth_type_conf(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	if (l2_tunnel == NULL)
 		return -EINVAL;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_ether_type = l2_tunnel->ether_type;
 		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
 		break;
 	default:
@@ -7127,9 +7135,12 @@ ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = TRUE;
 		ret = ixgbe_e_tag_enable(hw);
 		break;
 	default:
@@ -7168,9 +7179,12 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = FALSE;
 		ret = ixgbe_e_tag_disable(hw);
 		break;
 	default:
@@ -7477,10 +7491,13 @@ ixgbe_dev_l2_tunnel_forwarding_enable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = TRUE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
 		break;
 	default:
@@ -7498,10 +7515,13 @@ ixgbe_dev_l2_tunnel_forwarding_disable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = FALSE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
 		break;
 	default:
@@ -7996,6 +8016,22 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tn_info->e_tag_en)
+		(void)ixgbe_e_tag_enable(hw);
+
+	if (l2_tn_info->e_tag_fwd_en)
+		(void)ixgbe_e_tag_forwarding_en_dis(dev, 1);
+
+	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4f281e8..4e0264c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -307,6 +307,9 @@ struct ixgbe_l2_tn_info {
 	struct ixgbe_l2_tn_filter_list      l2_tn_list;
 	struct ixgbe_l2_tn_filter         **hash_map;
 	struct rte_hash                    *hash_handle;
+	bool e_tag_en; /* e-tag enabled */
+	bool e_tag_fwd_en; /* e-tag based forwarding enabled */
+	uint16_t e_tag_ether_type; /* ether type for e-tag */
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 10/18] net/ixgbe: flush all the filters
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (8 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
                       ` (8 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for flushing all the filters in SW.
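
With the RTE_ETH_FILTER_GENERIC hook in place, an application can flush
everything through the generic flow API. A hedged usage sketch (the
port id and error handling are illustrative):

#include <stdio.h>
#include <rte_flow.h>

/* Drop every flow rule on a port; this reaches ixgbe_flow_flush(). */
static int drop_all_rules(uint8_t port_id)
{
	struct rte_flow_error err;
	int ret;

	ret = rte_flow_flush(port_id, &err);
	if (ret)
		printf("flush failed: %s\n",
		       err.message ? err.message : "(no error message)");

	return ret;
}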

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/Makefile       |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  79 ++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  16 ++++++
 drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 115 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   1 +
 6 files changed, 236 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index a3e6a52..38b9fbd 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_fdir.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_pf.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_flow.c
 ifeq ($(CONFIG_RTE_ARCH_ARM64),y)
 SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
 else
@@ -127,5 +128,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_eal lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_mempool lib/librte_mbuf
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_hash
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3059c64..2a67462 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6319,6 +6319,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 		ethertype_filter.ethertype = filter->ether_type;
 		ethertype_filter.etqf = etqf;
 		ethertype_filter.etqs = etqs;
+		ethertype_filter.conf = FALSE;
 		ret = ixgbe_ethertype_filter_insert(filter_info,
 						    &ethertype_filter);
 		if (ret < 0) {
@@ -6420,7 +6421,7 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_op filter_op,
 		     void *arg)
 {
-	int ret = -EINVAL;
+	int ret = 0;
 
 	switch (filter_type) {
 	case RTE_ETH_FILTER_NTUPLE:
@@ -6438,9 +6439,15 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_L2_TUNNEL:
 		ret = ixgbe_dev_l2_tunnel_filter_handle(dev, filter_op, arg);
 		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &ixgbe_flow_ops;
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 							filter_type);
+		ret = -EINVAL;
 		break;
 	}
 
@@ -8032,6 +8039,76 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
 }
 
+/* remove all the n-tuple filters */
+void
+ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
+		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+}
+
+/* remove all the ether type filters */
+void
+ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i) &&
+		    !filter_info->ethertype_filters[i].conf) {
+			(void)ixgbe_ethertype_filter_remove(filter_info,
+							    (uint8_t)i);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i), 0);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
+/* remove the SYN filter */
+void
+ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+
+	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
+		filter_info->syn_info = 0;
+
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, 0);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
+/* remove all the L2 tunnel filters */
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+	int ret = 0;
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = l2_tn_filter->key.tn_id;
+		l2_tn_conf.pool           = l2_tn_filter->pool;
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4e0264c..5efa650 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -274,6 +274,11 @@ struct ixgbe_ethertype_filter {
 	uint16_t ethertype;
 	uint32_t etqf;
 	uint32_t etqs;
+	/**
+	 * If this filter is added by configuration,
+	 * it should not be removed.
+	 */
+	bool     conf;
 };
 
 /*
@@ -501,6 +506,14 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
 void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
+int ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev);
+
+extern const struct rte_flow_ops ixgbe_flow_ops;
+
+void ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_syn_filter(struct rte_eth_dev *dev);
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
@@ -531,6 +544,8 @@ ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
 				ethertype_filter->etqf;
 			filter_info->ethertype_filters[i].etqs =
 				ethertype_filter->etqs;
+			filter_info->ethertype_filters[i].conf =
+				ethertype_filter->conf;
 			return i;
 		}
 	}
@@ -547,6 +562,7 @@ ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
 	filter_info->ethertype_filters[idx].ethertype = 0;
 	filter_info->ethertype_filters[idx].etqf = 0;
 	filter_info->ethertype_filters[idx].etqs = 0;
+	filter_info->ethertype_filters[idx].conf = FALSE;
 	return idx;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 627c51a..e928ad7 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1514,3 +1514,27 @@ ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 		}
 	}
 }
+
+/* remove all the flow director filters */
+int
+ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+	int ret = 0;
+
+	/* flush flow director */
+	rte_hash_reset(fdir_info->hash_handle);
+	memset(fdir_info->hash_map, 0,
+	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+	ret = ixgbe_fdir_flush(dev);
+
+	return ret;
+}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
new file mode 100644
index 0000000..1499391
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+#include <rte_hash_crc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+
+#include "ixgbe_logs.h"
+#include "base/ixgbe_api.h"
+#include "base/ixgbe_vf.h"
+#include "base/ixgbe_common.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_bypass.h"
+#include "ixgbe_rxtx.h"
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_phy.h"
+#include "rte_pmd_ixgbe.h"
+
+static int ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error);
+
+const struct rte_flow_ops ixgbe_flow_ops = {
+	NULL,
+	NULL,
+	NULL,
+	ixgbe_flow_flush,
+	NULL,
+};
+
+/*  Destroy all flow rules associated with a port on ixgbe. */
+static int
+ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	ixgbe_clear_all_ntuple_filter(dev);
+	ixgbe_clear_all_ethertype_filter(dev);
+	ixgbe_clear_syn_filter(dev);
+
+	ret = ixgbe_clear_all_fdir_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	ret = ixgbe_clear_all_l2_tn_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 4b33130..4715045 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -199,6 +199,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 				IXGBE_ETQF_TX_ANTISPOOF |
 				IXGBE_ETHERTYPE_FLOW_CTRL;
 	ethertype_filter.etqs = 0;
+	ethertype_filter.conf = TRUE;
 	i = ixgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 11/18] net/ixgbe: parse n-tuple filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (9 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 10/18] net/ixgbe: flush all the filters Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 12/18] net/ixgbe: parse ethertype filter Wei Zhao
                       ` (7 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Add the rule validate function: check if the rule is an n-tuple rule,
and get the n-tuple info.
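
A hedged usage sketch of a rule this parser accepts: eth / ipv4 (src,
dst, proto fully masked) / tcp (ports fully masked) steering to an RX
queue. Addresses, ports and the queue index are example values only:

#include <stdint.h>
#include <netinet/in.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

static int validate_ntuple_rule(uint8_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1, .priority = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = {
			.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
			.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 2)),
			.next_proto_id = IPPROTO_TCP,
		},
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = {
			.src_addr = UINT32_MAX,
			.dst_addr = UINT32_MAX,
			.next_proto_id = UINT8_MAX,
		},
	};
	struct rte_flow_item_tcp tcp_spec = {
		.hdr = {
			.src_port = rte_cpu_to_be_16(1024),
			.dst_port = rte_cpu_to_be_16(80),
		},
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr = {
			.src_port = UINT16_MAX,
			.dst_port = UINT16_MAX,
		},
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH }, /* spec/mask must be NULL */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_validate(port_id, &attr, pattern, actions, &err);
}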

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_flow.c | 429 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 424 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 1499391..ac9f663 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -77,15 +77,432 @@
 
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
-	NULL,
+	ixgbe_flow_validate,
 	NULL,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
 };
 
+#define IXGBE_MIN_N_TUPLE_PRIO 1
+#define IXGBE_MAX_N_TUPLE_PRIO 7
+#define NEXT_ITEM_OF_PATTERN(item, pattern, index)		\
+	do {							\
+		item = pattern + index;				\
+		while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {	\
+			index++;				\
+			item = pattern + index;			\
+		}						\
+	} while (0)
+
+#define NEXT_ITEM_OF_ACTION(act, actions, index)		\
+	do {							\
+		act = actions + index;				\
+		while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\
+			index++;				\
+			act = actions + index;			\
+		}						\
+	} while (0)
+
+/**
+ * Please be aware there's an assumption for all the parsers:
+ * rte_flow_item uses big endian, while rte_flow_attr and
+ * rte_flow_action use CPU order.
+ * This is because the pattern describes packets, which
+ * normally use network (big endian) order.
+ */
+
+/**
+ * Parse the rule to see if it is an n-tuple rule.
+ * Also get the n-tuple filter info.
+ * pattern:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[],
+			 struct rte_eth_ntuple_filter *filter,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item can be MAC or IPv4 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error,
+			  EINVAL,
+			  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			  item, "Not supported last point for range");
+			return -rte_errno;
+
+		}
+		/* if the first item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is IPv4 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+			rte_flow_error_set(error,
+			  EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			  item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+	}
+
+	/* get the IPv4 info */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
+	/**
+	 * Only support src & dst addresses, protocol,
+	 * others should be masked.
+	 */
+	if (ipv4_mask->hdr.version_ihl ||
+	    ipv4_mask->hdr.type_of_service ||
+	    ipv4_mask->hdr.total_length ||
+	    ipv4_mask->hdr.packet_id ||
+	    ipv4_mask->hdr.fragment_offset ||
+	    ipv4_mask->hdr.time_to_live ||
+	    ipv4_mask->hdr.hdr_checksum) {
+			rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
+	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
+	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
+
+	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
+	filter->dst_ip = ipv4_spec->hdr.dst_addr;
+	filter->src_ip = ipv4_spec->hdr.src_addr;
+	filter->proto  = ipv4_spec->hdr.next_proto_id;
+
+	/* check if the next not void item is TCP or UDP */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* get the TCP/UDP info */
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+
+		/**
+		 * Only support src & dst ports, tcp flags,
+		 * others should be masked.
+		 */
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
+		filter->src_port_mask  = tcp_mask->hdr.src_port;
+		if (tcp_mask->hdr.tcp_flags == 0xFF) {
+			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else if (!tcp_mask->hdr.tcp_flags) {
+			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+		filter->dst_port  = tcp_spec->hdr.dst_port;
+		filter->src_port  = tcp_spec->hdr.src_port;
+		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
+	} else {
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+
+		/**
+		 * Only support src & dst ports,
+		 * others should be masked.
+		 */
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask = udp_mask->hdr.dst_port;
+		filter->src_port_mask = udp_mask->hdr.src_port;
+
+		udp_spec = (const struct rte_flow_item_udp *)item->spec;
+		filter->dst_port = udp_spec->hdr.dst_port;
+		filter->src_port = udp_spec->hdr.src_port;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/**
+	 * n-tuple only supports forwarding,
+	 * check if the first not void action is QUEUE.
+	 */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			item, "Not supported action.");
+		return -rte_errno;
+	}
+	filter->queue =
+		((const struct rte_flow_action_queue *)act->conf)->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	if (attr->priority > 0xFFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr, "Error priority.");
+		return -rte_errno;
+	}
+	filter->priority = (uint16_t)attr->priority;
+	if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    attr->priority > IXGBE_MAX_N_TUPLE_PRIO)
+		filter->priority = 1;
+
+	return 0;
+}
+
+/* a specific function for ixgbe because the flags is specific */
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			  const struct rte_flow_item pattern[],
+			  const struct rte_flow_action actions[],
+			  struct rte_eth_ntuple_filter *filter,
+			  struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support tcp flags. */
+	if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* Ixgbe only supports priorities 1 to 7 for n-tuple filters. */
+	if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    filter->priority > IXGBE_MAX_N_TUPLE_PRIO) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Priority not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM ||
+		filter->priority > IXGBE_5TUPLE_MAX_PRI ||
+		filter->priority < IXGBE_5TUPLE_MIN_PRI)
+		return -rte_errno;
+
+	/* fixed value for ixgbe */
+	filter->flags = RTE_5TUPLE_FLAGS;
+	return 0;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
@@ -99,15 +516,17 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 
 	ret = ixgbe_clear_all_fdir_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
 	ret = ixgbe_clear_all_l2_tn_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 12/18] net/ixgbe: parse ethertype filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (10 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
                       ` (6 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an ethertype rule, and get the ethertype info.
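
A hedged usage sketch of a rule this parser accepts: a bare EtherType
match steered to a queue. The EtherType (0x88F7) and queue index are
example values; the MAC masks are left zero as ixgbe requires:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

static int validate_ethertype_rule(uint8_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		.type = rte_cpu_to_be_16(0x88F7),
	};
	struct rte_flow_item_eth eth_mask = {
		.type = UINT16_MAX, /* a full EtherType mask is mandatory */
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_validate(port_id, &attr, pattern, actions, &err);
}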

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_flow.c | 278 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 278 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index ac9f663..2f97129 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -90,6 +90,18 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 					struct rte_eth_ntuple_filter *filter,
 					struct rte_flow_error *error);
 static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error);
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_ethertype_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -480,6 +492,265 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an ethertype rule.
+ * Also get the ethertype filter info.
+ * pattern:
+ * The first not void item can be ETH.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* Parse pattern */
+	index = 0;
+
+	/* The first non-void item should be MAC. */
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	eth_spec = (const struct rte_flow_item_eth *)item->spec;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Mask bits of source MAC address must be full of 0.
+	 * Mask bits of destination MAC address must be full
+	 * of 1 or full of 0.
+	 */
+	if (!is_zero_ether_addr(&eth_mask->src) ||
+	    (!is_zero_ether_addr(&eth_mask->dst) &&
+	     !is_broadcast_ether_addr(&eth_mask->dst))) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ether address mask");
+		return -rte_errno;
+	}
+
+	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ethertype mask");
+		return -rte_errno;
+	}
+
+	/* If mask bits of destination MAC address
+	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
+	 */
+	if (is_broadcast_ether_addr(&eth_mask->dst)) {
+		filter->mac_addr = eth_spec->dst;
+		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
+	} else {
+		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
+	}
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+
+	/* Check if the next non-void item is END. */
+	index++;
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter.");
+		return -rte_errno;
+	}
+
+	/* Parse action */
+
+	index = 0;
+	/* Check if the first non-void action is QUEUE or DROP. */
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->queue = act_q->index;
+	} else {
+		filter->flags |= RTE_ETHTYPE_FLAGS_DROP;
+	}
+
+	/* Check if the next non-void item is END */
+	index++;
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* Parse attr */
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				attr, "Not support group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_ethertype_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ethertype_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support matching on the MAC address. */
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "queue index much too big");
+		return -rte_errno;
+	}
+
+	if (filter->ether_type == ETHER_TYPE_IPv4 ||
+		filter->ether_type == ETHER_TYPE_IPv6) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "IPv4/IPv6 not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "mac compare is unsupported");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "drop option is unsupported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -492,6 +763,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -500,6 +772,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 13/18] net/ixgbe: parse TCP SYN filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (11 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
                       ` (5 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a TCP SYN rule, and get the SYN info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
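
As a usage note (not part of the patch): a SYN rule accepted by this
parser can be built roughly as in the minimal sketch below. It assumes
rte_flow.h is included; the queue index and the raw 0x02 SYN bit
(TCP_SYN_FLAG in the driver) are illustrative values.

	struct rte_flow_item_tcp tcp_spec = {
		.hdr = { .tcp_flags = 0x02 }, /* SYN set in the spec */
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr = { .tcp_flags = 0x02 }, /* compare only the SYN bit */
	};
	struct rte_flow_item pattern[] = {
		/* ETH and IPv4 items must carry no spec/last/mask */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 3 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* priority 0 requests the low priority, UINT32_MAX the high one */
	struct rte_flow_attr attr = { .ingress = 1, .priority = 0 };
---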
 drivers/net/ixgbe/ixgbe_flow.c | 266 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 266 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 2f97129..28445fa 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -102,6 +102,18 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_ethertype_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_syn_filter *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -751,6 +763,253 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is a TCP SYN rule.
+ * And get the TCP SYN filter info as well.
+ * pattern:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4 or IPV6.
+ * The third not void item must be TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item should be MAC or IPv4 or IPv6 or TCP */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* if the item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN address mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is IPv4 or IPv6 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* if the item is IP, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is TCP */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. Only support SYN. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+	tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+	if (!(tcp_spec->hdr.tcp_flags & TCP_SYN_FLAG) ||
+	    tcp_mask->hdr.src_port ||
+	    tcp_mask->hdr.dst_port ||
+	    tcp_mask->hdr.sent_seq ||
+	    tcp_mask->hdr.recv_ack ||
+	    tcp_mask->hdr.data_off ||
+	    tcp_mask->hdr.tcp_flags != TCP_SYN_FLAG ||
+	    tcp_mask->hdr.rx_win ||
+	    tcp_mask->hdr.cksum ||
+	    tcp_mask->hdr.tcp_urp) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->queue = act_q->index;
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* check if the next not void action is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Support 2 priorities, the lowest or highest. */
+	if (!attr->priority) {
+		filter->hig_pri = 0;
+	} else if (attr->priority == (uint32_t)~0U) {
+		filter->hig_pri = 1;
+	} else {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_syn_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_syn_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -764,6 +1023,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -778,6 +1038,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (12 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 15/18] net/ixgbe: parse flow director filter Wei Zhao
                       ` (4 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
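
As a usage note (not part of the patch): an E-tag rule accepted by this
parser can look roughly like the minimal sketch below. The GRP/E-CID
value 0x1001 and the pool index are illustrative; the parser requires
the 0x3FFF mask, which covers exactly the GRP and E-CID base bits.

	struct rte_flow_item_e_tag etag_spec = {
		.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x1001),
	};
	struct rte_flow_item_e_tag etag_mask = {
		.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x3FFF),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
		  .spec = &etag_spec, .mask = &etag_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* for an E-tag rule the queue index carries the destination pool */
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_attr attr = { .ingress = 1 };
---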
 drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
 drivers/net/ixgbe/ixgbe_flow.c   | 203 +++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.h      |  48 +++++++++
 3 files changed, 253 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2a67462..14a88ee 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -8089,7 +8089,8 @@ ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
 }
 
 /* remove all the L2 tunnel filters */
-int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+int
+ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 {
 	struct ixgbe_l2_tn_info *l2_tn_info =
 		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 28445fa..fc4ceb8 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -114,6 +114,19 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_syn_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_l2_tunnel_conf *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *rule,
+			struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1010,6 +1023,191 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an L2 tunnel rule.
+ * And get the L2 tunnel filter info as well.
+ * Only support E-tag now.
+ */
+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *filter,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_e_tag *e_tag_spec;
+	const struct rte_flow_item_e_tag *e_tag_mask;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+	/* parse pattern */
+	index = 0;
+
+	/* The first not void item should be e-tag. */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
+	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
+
+	/* Only care about GRP and E-CID base. */
+	if (e_tag_mask->epcp_edei_in_ecid_b ||
+	    e_tag_mask->in_ecid_e ||
+	    e_tag_mask->ecid_e ||
+	    e_tag_mask->rsvd_grp_ecid_b != rte_cpu_to_be_16(0x3FFF)) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	/**
+	 * grp and e_cid_base are bit fields and only use 14 bits.
+	 * e-tag id is taken as little endian by HW.
+	 */
+	filter->tunnel_id = rte_be_to_cpu_16(e_tag_spec->rsvd_grp_ecid_b);
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->pool = act_q->index;
+
+	/* check if the next not void action is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+				actions, l2_tn_filter, error);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+		hw->mac.type != ixgbe_mac_X550EM_x &&
+		hw->mac.type != ixgbe_mac_X550EM_a) {
+		memset(l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	return ret;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -1024,6 +1222,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
 	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -1044,6 +1243,10 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = ixgbe_validate_l2_tn_filter(dev, attr, pattern,
+				actions, &l2_tn_filter, error);
+
 	return ret;
 }
 
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 98084ac..7142479 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -268,6 +268,20 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_vxlan.
 	 */
 	RTE_FLOW_ITEM_TYPE_VXLAN,
+
+	/**
+	 * Matches an E-TAG header.
+	 *
+	 * See struct rte_flow_item_e_tag.
+	 */
+	RTE_FLOW_ITEM_TYPE_E_TAG,
+
+	/**
+	 * Matches an NVGRE header.
+	 *
+	 * See struct rte_flow_item_nvgre.
+	 */
+	RTE_FLOW_ITEM_TYPE_NVGRE,
 };
 
 /**
@@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
 };
 
 /**
+ * RTE_FLOW_ITEM_TYPE_E_TAG.
+ *
+ * Matches an E-tag header.
+ */
+struct rte_flow_item_e_tag {
+	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
+	/**
+	 * E-Tag control information (E-TCI).
+	 * E-PCP (3b), E-DEI (1b), ingress E-CID base (12b).
+	 */
+	uint16_t epcp_edei_in_ecid_b;
+	/** Reserved (2b), GRP (2b), E-CID base (12b). */
+	uint16_t rsvd_grp_ecid_b;
+	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
+	uint8_t ecid_e; /**< E-CID ext. */
+};
+
+/**
+ * RTE_FLOW_ITEM_TYPE_NVGRE.
+ *
+ * Matches an NVGRE header.
+ */
+struct rte_flow_item_nvgre {
+	/**
+	 * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
+	 * reserved 0 (9b), version (3b).
+	 *
+	 * c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
+	 */
+	uint16_t c_k_s_rsvd0_ver;
+	uint16_t protocol; /**< Protocol type (0x6558). */
+	uint8_t tni[3]; /**< Virtual subnet ID. */
+	uint8_t flow_id; /**< Flow ID. */
+};
+
+/**
  * Matching pattern item definition.
  *
  * A pattern is formed by stacking items starting from the lowest protocol
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 15/18] net/ixgbe: parse flow director filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (13 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 16/18] net/ixgbe: create consistent filter Wei Zhao
                       ` (3 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a flow director rule, and get the flow director
info. Storing the input mask (ixgbe_fdir_store_input_mask) is split
from programming it to the HW (ixgbe_fdir_set_input_mask), so a mask
taken from a consistent filter can be remembered first and written to
the HW later.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
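
As a usage note (not part of the patch): a perfect-mode IPv4/UDP rule
accepted by this parser can be built roughly as in the minimal sketch
below. The address, port, queue index and MARK id (which becomes the
rule's soft_id) are illustrative; rte_flow.h and rte_byteorder.h are
assumed to be included.

	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = { .dst_addr = rte_cpu_to_be_32(0xC0A80101) },
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = { .dst_addr = rte_cpu_to_be_32(0xFFFFFFFF) },
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr = { .dst_port = rte_cpu_to_be_16(4789) },
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr = { .dst_port = rte_cpu_to_be_16(0xFFFF) },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH }, /* no spec/mask: any ETH */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_attr attr = { .ingress = 1 };
---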
 drivers/net/ixgbe/ixgbe_ethdev.c |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
 drivers/net/ixgbe/ixgbe_fdir.c   |  253 +++++---
 drivers/net/ixgbe/ixgbe_flow.c   | 1219 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 1381 insertions(+), 109 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14a88ee..4b9f4aa 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1436,6 +1436,8 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	fdir_info->mask_added = FALSE;
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 5efa650..dee159f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -162,6 +162,17 @@ struct ixgbe_fdir_filter {
 /* list of fdir filters */
 TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
 
+struct ixgbe_fdir_rule {
+	struct ixgbe_hw_fdir_mask mask;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
+	bool b_spec; /* If TRUE, ixgbe_fdir, fdirflags, queue have meaning. */
+	bool b_mask; /* If TRUE, mask has meaning. */
+	enum rte_fdir_mode mode; /* IP, MAC VLAN, Tunnel */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t soft_id; /* a unique value for this rule */
+	uint8_t queue; /* assigned rx queue */
+};
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -177,6 +188,7 @@ struct ixgbe_hw_fdir_info {
 	/* store the pointers of the filters, index is the hash value. */
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
+	bool mask_added; /* If already got mask from consistent filter */
 };
 
 /* structure for interrupt relative data */
@@ -479,6 +491,10 @@ bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
  * Flow director function prototypes
  */
 int ixgbe_fdir_configure(struct rte_eth_dev *dev);
+int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			      struct ixgbe_fdir_rule *rule,
+			      bool del, bool update);
 
 void ixgbe_configure_dcb(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index e928ad7..3b9d60c 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -112,10 +112,8 @@
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
 static int fdir_set_input_mask(struct rte_eth_dev *dev,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-				    const struct rte_eth_fdir_masks *input_mask);
+static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
+static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -295,8 +293,7 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -308,8 +305,6 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 | IXGBE_FDIRM_FLEX;
 	uint32_t fdirtcpm;  /* TCP source and destination port masks. */
 	uint32_t fdiripv6m; /* IPv6 source and destination masks. */
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
 	volatile uint32_t *reg;
 
 	PMD_INIT_FUNC_TRACE();
@@ -320,31 +315,30 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
 	 * cannot be masked out in this implementation.
 	 */
-	if (input_mask->dst_port_mask == 0 && input_mask->src_port_mask == 0)
+	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
 		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(input_mask->dst_port_mask),
-			rte_be_to_cpu_16(input_mask->src_port_mask));
+			rte_be_to_cpu_16(info->mask.dst_port_mask),
+			rte_be_to_cpu_16(info->mask.src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -352,30 +346,23 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(input_mask->ipv4_mask.src_ip);
+	*reg = ~(info->mask.src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(input_mask->ipv4_mask.dst_ip);
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	*reg = ~(info->mask.dst_ipv4_mask);
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-		fdiripv6m = (dst_ipv6m << 16) | src_ipv6m;
+		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
+			    info->mask.src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
-		info->mask.src_ipv6_mask = src_ipv6m;
-		info->mask.dst_ipv6_mask = dst_ipv6m;
 	}
 
 	return IXGBE_SUCCESS;
@@ -386,8 +373,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-			 const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -410,20 +396,19 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -434,12 +419,11 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 				IXGBE_FDIRIP6M_TNI_VNI;
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
-		mac_mask = input_mask->mac_addr_byte_mask;
+		mac_mask = info->mask.mac_addr_byte_mask;
 		fdiripv6m |= (mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT)
 				& IXGBE_FDIRIP6M_INNER_MAC;
-		info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
 
-		switch (input_mask->tunnel_type_mask) {
+		switch (info->mask.tunnel_type_mask) {
 		case 0:
 			/* Mask tunnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -450,10 +434,8 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_type_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_type_mask =
-			input_mask->tunnel_type_mask;
 
-		switch (rte_be_to_cpu_32(input_mask->tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -467,8 +449,6 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_id_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_id_mask =
-			input_mask->tunnel_id_mask;
 	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, fdiripv6m);
@@ -482,22 +462,90 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 }
 
 static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
+ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
+				  const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	uint16_t dst_ipv6m = 0;
+	uint16_t src_ipv6m = 0;
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.src_port_mask = input_mask->src_port_mask;
+	info->mask.dst_port_mask = input_mask->dst_port_mask;
+	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
+	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
+	info->mask.src_ipv6_mask = src_ipv6m;
+	info->mask.dst_ipv6_mask = dst_ipv6m;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
+				 const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
+	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
+	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_masks *input_mask)
 {
 	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
+int
+ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
+{
+	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
+	    mode <= RTE_FDIR_MODE_PERFECT)
+		return fdir_set_input_mask_82599(dev);
+	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
+		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		return fdir_set_input_mask_x550(dev);
+
+	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
+	return -ENOTSUP;
+}
+
+static int
+fdir_set_input_mask(struct rte_eth_dev *dev,
+		    const struct rte_eth_fdir_masks *input_mask)
+{
+	int ret;
+
+	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
+	if (ret)
+		return ret;
+
+	return ixgbe_fdir_set_input_mask(dev);
+}
+
 /*
  * ixgbe_check_fdir_flex_conf -check if the flex payload and mask configuration
  * arguments are valid
@@ -1135,23 +1183,40 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 	return 0;
 }
 
-/*
- * ixgbe_add_del_fdir_filter - add or remove a flow director filter.
- * @dev: pointer to the structure rte_eth_dev
- * @fdir_filter: fdir filter entry
- * @del: 1 - delete, 0 - add
- * @update: 1 - update
- */
 static int
-ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
-			  const struct rte_eth_fdir_filter *fdir_filter,
+ixgbe_interpret_fdir_filter(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_filter *fdir_filter,
+			    struct ixgbe_fdir_rule *rule)
+{
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	int err;
+
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+
+	err = ixgbe_fdir_filter_to_atr_input(fdir_filter,
+					     &rule->ixgbe_fdir,
+					     fdir_mode);
+	if (err)
+		return err;
+
+	rule->mode = fdir_mode;
+	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT)
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	rule->queue = fdir_filter->action.rx_queue;
+	rule->soft_id = fdir_filter->soft_id;
+
+	return 0;
+}
+
+int
+ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
-	union ixgbe_atr_input input;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
@@ -1161,7 +1226,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
-	if (fdir_mode == RTE_FDIR_MODE_NONE)
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
 		return -ENOTSUP;
 
 	/*
@@ -1174,7 +1240,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    (hw->mac.type == ixgbe_mac_X550 ||
 	     hw->mac.type == ixgbe_mac_X550EM_x ||
 	     hw->mac.type == ixgbe_mac_X550EM_a) &&
-	    (fdir_filter->input.flow_type ==
+	    (rule->ixgbe_fdir.formatted.flow_type ==
 	     RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) &&
 	    (info->mask.src_port_mask != 0 ||
 	     info->mask.dst_port_mask != 0)) {
@@ -1188,29 +1254,23 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
 		is_perfect = TRUE;
 
-	memset(&input, 0, sizeof(input));
-
-	err = ixgbe_fdir_filter_to_atr_input(fdir_filter, &input,
-					     fdir_mode);
-	if (err)
-		return err;
-
 	if (is_perfect) {
-		if (input.formatted.flow_type & IXGBE_ATR_L4TYPE_IPV6_MASK) {
+		if (rule->ixgbe_fdir.formatted.flow_type &
+		    IXGBE_ATR_L4TYPE_IPV6_MASK) {
 			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
 				    " perfect mode!");
 			return -ENOTSUP;
 		}
-		fdirhash = atr_compute_perfect_hash_82599(&input,
+		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
 							  dev->data->dev_conf.fdir_conf.pballoc);
-		fdirhash |= fdir_filter->soft_id <<
+		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
-		fdirhash = atr_compute_sig_hash_82599(&input,
+		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
-		err = ixgbe_remove_fdir_filter(info, &input);
+		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 		if (err < 0)
 			return err;
 
@@ -1223,7 +1283,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 	/* add or update an fdir filter*/
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
-	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT) {
+	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
 			queue = dev->data->dev_conf.fdir_conf.drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
@@ -1232,13 +1292,12 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 				    " signature mode.");
 			return -EINVAL;
 		}
-	} else if (fdir_filter->action.behavior == RTE_ETH_FDIR_ACCEPT &&
-		   fdir_filter->action.rx_queue < IXGBE_MAX_RX_QUEUE_NUM)
-		queue = (uint8_t)fdir_filter->action.rx_queue;
+	} else if (rule->queue < IXGBE_MAX_RX_QUEUE_NUM)
+		queue = (uint8_t)rule->queue;
 	else
 		return -EINVAL;
 
-	node = ixgbe_fdir_filter_lookup(info, &input);
+	node = ixgbe_fdir_filter_lookup(info, &rule->ixgbe_fdir);
 	if (node) {
 		if (update) {
 			node->fdirflags = fdircmd_flags;
@@ -1256,7 +1315,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		if (!node)
 			return -ENOMEM;
 		(void)rte_memcpy(&node->ixgbe_fdir,
-				 &input,
+				 &rule->ixgbe_fdir,
 				 sizeof(union ixgbe_atr_input));
 		node->fdirflags = fdircmd_flags;
 		node->fdirhash = fdirhash;
@@ -1270,18 +1329,19 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 
 	if (is_perfect) {
-		err = fdir_write_perfect_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash,
-						      fdir_mode);
+		err = fdir_write_perfect_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash, fdir_mode);
 	} else {
-		err = fdir_add_signature_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash);
+		err = fdir_add_signature_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash);
 	}
 	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
 
 		if (add_node)
-			(void)ixgbe_remove_fdir_filter(info, &input);
+			(void)ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
 	}
@@ -1289,6 +1349,29 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return err;
 }
 
+/* ixgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @fdir_filter: fdir filter entry
+ * @del: 1 - delete, 0 - add
+ * @update: 1 - update
+ */
+static int
+ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+			  const struct rte_eth_fdir_filter *fdir_filter,
+			  bool del,
+			  bool update)
+{
+	struct ixgbe_fdir_rule rule;
+	int err;
+
+	err = ixgbe_interpret_fdir_filter(dev, fdir_filter, &rule);
+
+	if (err)
+		return err;
+
+	return ixgbe_fdir_filter_program(dev, &rule, del, update);
+}
+
 static int
 ixgbe_fdir_flush(struct rte_eth_dev *dev)
 {
@@ -1522,19 +1605,23 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
+	struct ixgbe_fdir_filter *filter_flag;
 	int ret = 0;
 
 	/* flush flow director */
 	rte_hash_reset(fdir_info->hash_handle);
 	memset(fdir_info->hash_map, 0,
 	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	filter_flag = TAILQ_FIRST(&fdir_info->fdir_list);
 	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
 		TAILQ_REMOVE(&fdir_info->fdir_list,
 			     fdir_filter,
 			     entries);
 		rte_free(fdir_filter);
 	}
-	ret = ixgbe_fdir_flush(dev);
+
+	if (filter_flag != NULL)
+		ret = ixgbe_fdir_flush(dev);
 
 	return ret;
 }
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index fc4ceb8..ee8d5ca 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -127,6 +127,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 			struct rte_eth_l2_tunnel_conf *rule,
 			struct rte_flow_error *error);
 static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1207,39 +1232,1181 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Parse to get the attr and action info of a flow director rule. */
+static int
+ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  struct ixgbe_fdir_rule *rule,
+			  struct rte_flow_error *error)
+{
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_mark *mark;
+	uint32_t index;
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE or DROP. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		rule->queue = act_q->index;
+	} else { /* drop */
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	}
+
+	/* check if the next not void item is MARK */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) &&
+		(act->type != RTE_FLOW_ACTION_TYPE_END)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	rule->soft_id = 0;
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_MARK) {
+		mark = (const struct rte_flow_action_mark *)act->conf;
+		rule->soft_id = mark->id;
+		index++;
+		NEXT_ITEM_OF_ACTION(act, actions, index);
+	}
+
+	/* check if the next not void action is END */
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
 /**
- * Check if the flow rule is supported by ixgbe.
- * It only checks the format. It doesn't guarantee that the rule can be
- * programmed into the HW, because there may not be enough room for it.
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule.
+ * And get the flow director filter info as well.
+ * UDP/TCP/SCTP PATTERN:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP or SCTP.
+ * The next not void item must be END.
+ * MAC VLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be MAC VLAN.
+ * The next not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
  */
 static int
-ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
 {
-	struct rte_eth_ntuple_filter ntuple_filter;
-	struct rte_eth_ethertype_filter ethertype_filter;
-	struct rte_eth_syn_filter syn_filter;
-	struct rte_eth_l2_tunnel_conf l2_tn_filter;
-	int ret;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
 
-	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
-	ret = ixgbe_parse_ntuple_filter(attr, pattern,
-				actions, &ntuple_filter, error);
-	if (!ret)
-		return 0;
+	uint32_t index, j;
 
-	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
-	ret = ixgbe_parse_ethertype_filter(attr, pattern,
-				actions, &ethertype_filter, error);
-	if (!ret)
-		return 0;
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
 
-	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
-	ret = ixgbe_parse_syn_filter(attr, pattern,
-				actions, &syn_filter, error);
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so nothing more needs to be done later for the fields that
+	 * are not provided.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or TCP or UDP or SCTP.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT;
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only support vlan and dst MAC address,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+			/* Get the dst MAC. */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				rule->ixgbe_fdir.formatted.inner_mac[j] =
+					eth_spec->dst.addr_bytes[j];
+			}
+		}
+
+
+		if (item->mask) {
+			/* If ethernet has meaning, it means MAC VLAN mode. */
+			rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+
+			rule->b_mask = TRUE;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Ether type should be masked. */
+			if (eth_mask->type) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/**
+			 * src MAC address must be masked,
+			 * and don't support dst MAC address mask.
+			 */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				if (eth_mask->src.addr_bytes[j] ||
+					eth_mask->dst.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					sizeof(struct ixgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* When no VLAN, considered as full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+		/**
+		 * If both spec and mask are NULL,
+		 * it means don't care about ETH.
+		 * Do nothing.
+		 */
+
+		/**
+		 * Check if the next not void item is vlan or ipv4.
+		 * IPv6 is not supported.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		}
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the IP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_IPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		ipv4_mask =
+			(const struct rte_flow_item_ipv4 *)item->mask;
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.next_proto_id ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			rule->ixgbe_fdir.formatted.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->ixgbe_fdir.formatted.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_TCPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				tcp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_UDPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				udp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				udp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_SCTPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask =
+			(const struct rte_flow_item_sctp *)item->mask;
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec =
+				(const struct rte_flow_item_sctp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				sctp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+	}
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
+#define NVGRE_PROTOCOL 0x6558
+
+/**
+ * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
+ * And get the flow director filter info as well.
+ * VxLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4/ IPV6.
+ * The third not void item must be UDP.
+ * The fourth not void item must be VxLAN.
+ * The next not void item must be ETH (inner MAC), optionally
+ * followed by VLAN, then END.
+ * NVGRE PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4/ IPV6.
+ * The third not void item must be NVGRE.
+ * The next not void item must be ETH (inner MAC), optionally
+ * followed by VLAN, then END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
+ */
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+	uint32_t index, j;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so we don't need to do anything later for the fields that
+	 * are not provided.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or IPv6 or UDP or VxLAN or NVGRE.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+	/* Skip MAC. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is IPv4 or IPv6. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is UDP or NVGRE. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip UDP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is VxLAN. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the VxLAN info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_VXLAN;
+
+		/* Only care about VNI, others should be masked. */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		vxlan_mask =
+			(const struct rte_flow_item_vxlan *)item->mask;
+		if (vxlan_mask->flags) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* VNI must be totally masked or not. */
+		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
+			vxlan_mask->vni[2]) &&
+			((vxlan_mask->vni[0] != 0xFF) ||
+			(vxlan_mask->vni[1] != 0xFF) ||
+				(vxlan_mask->vni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
+			RTE_DIM(vxlan_mask->vni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			vxlan_spec = (const struct rte_flow_item_vxlan *)
+					item->spec;
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* Get the NVGRE info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_NVGRE;
+
+		/**
+		 * Only care about c_k_s_rsvd0_ver, protocol and TNI,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		nvgre_mask =
+			(const struct rte_flow_item_nvgre *)item->mask;
+		if (nvgre_mask->flow_id) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (nvgre_mask->c_k_s_rsvd0_ver !=
+			rte_cpu_to_be_16(0x3000) ||
+		    nvgre_mask->protocol != 0xFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI must be totally masked or not. */
+		if ((nvgre_mask->tni[0] || nvgre_mask->tni[1] ||
+		    nvgre_mask->tni[2]) &&
+		    ((nvgre_mask->tni[0] != 0xFF) ||
+		    (nvgre_mask->tni[1] != 0xFF) ||
+		    (nvgre_mask->tni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* tni is a 24-bit field */
+		rte_memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni,
+			RTE_DIM(nvgre_mask->tni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			if (nvgre_spec->c_k_s_rsvd0_ver !=
+			    rte_cpu_to_be_16(0x2000) ||
+			    nvgre_spec->protocol !=
+			    rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			/* tni is a 24-bit field */
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+			nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* check if the next not void item is MAC */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/**
+	 * Only support vlan and dst MAC address,
+	 * others should be masked.
+	 */
+
+	if (!item->mask) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+	rule->b_mask = TRUE;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Ether type should be masked. */
+	if (eth_mask->type) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/* src MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (eth_mask->src.addr_bytes[j]) {
+			memset(rule, 0,
+			       sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+	rule->mask.mac_addr_byte_mask = 0;
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		/* It's a per byte mask. */
+		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+			rule->mask.mac_addr_byte_mask |= 0x1 << j;
+		} else if (eth_mask->dst.addr_bytes[j]) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* When there is no VLAN item, consider the TCI fully masked. */
+	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+
+	if (item->spec) {
+		rule->b_spec = TRUE;
+		eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+		/* Get the dst MAC. */
+		for (j = 0; j < ETHER_ADDR_LEN; j++) {
+			rule->ixgbe_fdir.formatted.inner_mac[j] =
+				eth_spec->dst.addr_bytes[j];
+		}
+	}
+
+	/**
+	 * Check if the next not void item is VLAN or IPv4.
+	 * IPv6 is not supported.
+	 */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
+		(item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one VLAN tag is not supported. */
+
+		/**
+		 * Check that the next not void item is not a VLAN.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/**
+	 * If the mask is 0, it means we don't care about the VLAN.
+	 * Do nothing.
+	 */
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
+static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	ret = ixgbe_parse_fdir_filter(attr, pattern, actions,
+				      rule, error);
+	if (ret)
+		return ret;
+
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
+		return -ENOTSUP;
+
+	return ret;
+}
+
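+/**
+ * Try the normal (non-tunnel) flow director parser first; if the
+ * pattern does not match it, fall back to the VxLAN/NVGRE tunnel
+ * parser.
+ */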
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = ixgbe_parse_fdir_filter_normal(attr, pattern,
+					actions, rule, error);
+
+	if (!ret)
+		return 0;
+
+	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
+					actions, rule, error);
+
+	return ret;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_validate_fdir_filter(dev, attr, pattern,
+				actions, &fdir_rule, error);
 	if (!ret)
 		return 0;
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 16/18] net/ixgbe: create consistent filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (14 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 15/18] net/ixgbe: parse flow director filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 17/18] net/ixgbe: destroy " Wei Zhao
                       ` (2 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to create flow rules for all the
supported filter types.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
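Note: for context, below is a minimal application-side sketch of how
this create op is reached through the generic rte_flow API. It is an
illustration, not part of the patch: the helper name is made up, and
the addresses/ports are assumed to already be in network byte order.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
steer_udp4_to_queue(uint8_t port_id, uint32_t src_ip, uint32_t dst_ip,
		    uint16_t src_port, uint16_t dst_port, uint16_t queue,
		    struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = { .src_addr = src_ip, .dst_addr = dst_ip },
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = { .src_addr = UINT32_MAX, .dst_addr = UINT32_MAX },
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr = { .src_port = src_port, .dst_port = dst_port },
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr = { .src_port = UINT16_MAX, .dst_port = UINT16_MAX },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue_conf = { .index = queue };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Validate first; create only if the PMD accepts the format. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, error))
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}
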
 drivers/net/ixgbe/ixgbe_ethdev.c |  25 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  61 ++++++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 194 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 266 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4b9f4aa..3648e12 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -305,9 +305,6 @@ static void ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
 static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static void ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 					     struct ether_addr *mac_addr);
-static int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
-			struct rte_eth_syn_filter *filter,
-			bool add);
 static int ixgbe_syn_filter_get(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter);
 static int ixgbe_syn_filter_handle(struct rte_eth_dev *dev,
@@ -317,17 +314,11 @@ static int ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
 static void ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
-static int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ntuple_filter *filter,
-			bool add);
 static int ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
 static int ixgbe_get_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *filter);
-static int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ethertype_filter *filter,
-			bool add);
 static int ixgbe_ethertype_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
@@ -1292,6 +1283,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* initialize l2 tunnel filter list & hash */
 	ixgbe_l2_tn_filter_init(eth_dev);
+
+	TAILQ_INIT(&filter_ntuple_list);
+	TAILQ_INIT(&filter_ethertype_list);
+	TAILQ_INIT(&filter_syn_list);
+	TAILQ_INIT(&filter_fdir_list);
+	TAILQ_INIT(&filter_l2_tunnel_list);
+	TAILQ_INIT(&ixgbe_flow_list);
+
 	return 0;
 }
 
@@ -5742,7 +5741,7 @@ ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr)
 		return -ENOTSUP;\
 } while (0)
 
-static int
+int
 ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter,
 			bool add)
@@ -6121,7 +6120,7 @@ ntuple_filter_to_5tuple(struct rte_eth_ntuple_filter *filter,
  *    - On success, zero.
  *    - On failure, a negative value.
  */
-static int
+int
 ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *ntuple_filter,
 			bool add)
@@ -6266,7 +6265,7 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static int
+int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
 			bool add)
@@ -7345,7 +7344,7 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 }
 
 /* Add l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index dee159f..6908c3d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -329,6 +329,54 @@ struct ixgbe_l2_tn_info {
 	bool e_tag_ether_type; /* ether type for e-tag */
 };
 
+struct rte_flow {
+	enum rte_filter_type filter_type;
+	void *rule;
+};
+/* ntuple filter list structure */
+struct ixgbe_ntuple_filter_ele {
+	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
+	struct rte_eth_ntuple_filter filter_info;
+};
+/* ethertype filter list structure */
+struct ixgbe_ethertype_filter_ele {
+	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
+	struct rte_eth_ethertype_filter filter_info;
+};
+/* syn filter list structure */
+struct ixgbe_eth_syn_filter_ele {
+	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
+	struct rte_eth_syn_filter filter_info;
+};
+/* fdir filter list structure */
+struct ixgbe_fdir_rule_ele {
+	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
+	struct ixgbe_fdir_rule filter_info;
+};
+/* l2_tunnel filter list structure */
+struct ixgbe_eth_l2_tunnel_conf_ele {
+	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
+	struct rte_eth_l2_tunnel_conf filter_info;
+};
+/* ixgbe_flow memory list structure */
+struct ixgbe_flow_mem {
+	TAILQ_ENTRY(ixgbe_flow_mem) entries;
+	struct rte_flow *flow;
+};
+
+TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
+struct ixgbe_ntuple_filter_list filter_ntuple_list;
+TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
+struct ixgbe_ethertype_filter_list filter_ethertype_list;
+TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
+struct ixgbe_syn_filter_list filter_syn_list;
+TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
+struct ixgbe_fdir_rule_filter_list filter_fdir_list;
+TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
+struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
+TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
+struct ixgbe_flow_mem_list ixgbe_flow_list;
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -487,6 +535,19 @@ uint32_t ixgbe_rssrk_reg_get(enum ixgbe_mac_type mac_type, uint8_t i);
 
 bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
 
+int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ntuple_filter *filter,
+			bool add);
+int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ethertype_filter *filter,
+			bool add);
+int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+			struct rte_eth_syn_filter *filter,
+			bool add);
+int
+ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index ee8d5ca..f6fb17c 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -157,10 +157,15 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
-	NULL,
+	ixgbe_flow_create,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
@@ -2368,6 +2373,193 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Create a flow rule.
+ * Theoretically one rule can match more than one kind of filter.
+ * We will let it use the filter which it hits first.
+ * So, the order of the parsers matters.
+ */
+static struct rte_flow *
+ixgbe_flow_create(struct rte_eth_dev *dev,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct rte_flow *flow = NULL;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
+			sizeof(struct ixgbe_flow_mem), 0);
+	if (!ixgbe_flow_mem_ptr) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		rte_free(flow);
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr->flow = flow;
+	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+			actions, &ntuple_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
+		if (!ret) {
+			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
+				sizeof(struct ixgbe_ntuple_filter_ele), 0);
+			(void)rte_memcpy(&ntuple_filter_ptr->filter_info,
+				&ntuple_filter,
+				sizeof(struct rte_eth_ntuple_filter));
+			TAILQ_INSERT_TAIL(&filter_ntuple_list,
+				ntuple_filter_ptr, entries);
+			flow->rule = ntuple_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, TRUE);
+		if (!ret) {
+			ethertype_filter_ptr = rte_zmalloc(
+				"ixgbe_ethertype_filter",
+				sizeof(struct ixgbe_ethertype_filter_ele), 0);
+			(void)rte_memcpy(&ethertype_filter_ptr->filter_info,
+				&ethertype_filter,
+				sizeof(struct rte_eth_ethertype_filter));
+			TAILQ_INSERT_TAIL(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			flow->rule = ethertype_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter, error);
+	if (!ret) {
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
+		if (!ret) {
+			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
+				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
+			(void)rte_memcpy(&syn_filter_ptr->filter_info,
+				&syn_filter,
+				sizeof(struct rte_eth_syn_filter));
+			TAILQ_INSERT_TAIL(&filter_syn_list,
+				syn_filter_ptr,
+				entries);
+			flow->rule = syn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_SYN;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_parse_fdir_filter(attr, pattern,
+				actions, &fdir_rule, error);
+	if (!ret) {
+		/* A mask cannot be deleted. */
+		if (fdir_rule.b_mask) {
+			if (!fdir_info->mask_added) {
+				/* It's the first time the mask is set. */
+				rte_memcpy(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				ret = ixgbe_fdir_set_input_mask(dev);
+				if (ret)
+					goto out;
+
+				fdir_info->mask_added = TRUE;
+			} else {
+				/**
+				 * Only support one global mask,
+				 * all the masks should be the same.
+				 */
+				ret = memcmp(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				if (ret)
+					goto out;
+			}
+		}
+
+		if (fdir_rule.b_spec) {
+			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
+					FALSE, FALSE);
+			if (!ret) {
+				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
+					sizeof(struct ixgbe_fdir_rule_ele), 0);
+				(void)rte_memcpy(&fdir_rule_ptr->filter_info,
+					&fdir_rule,
+					sizeof(struct ixgbe_fdir_rule));
+				TAILQ_INSERT_TAIL(&filter_fdir_list,
+					fdir_rule_ptr, entries);
+				flow->rule = fdir_rule_ptr;
+				flow->filter_type = RTE_ETH_FILTER_FDIR;
+
+				return flow;
+			}
+
+			if (ret)
+				goto out;
+		}
+
+		goto out;
+	}
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+					actions, &l2_tn_filter, error);
+	if (!ret) {
+		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
+		if (!ret) {
+			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
+				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
+			(void)rte_memcpy(&l2_tn_filter_ptr->filter_info,
+				&l2_tn_filter,
+				sizeof(struct rte_eth_l2_tunnel_conf));
+			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			flow->rule = l2_tn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
+			return flow;
+		}
+	}
+
+out:
+	TAILQ_REMOVE(&ixgbe_flow_list,
+		ixgbe_flow_mem_ptr, entries);
+	rte_free(ixgbe_flow_mem_ptr);
+	rte_free(flow);
+	return NULL;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 17/18] net/ixgbe: destroy consistent filter
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (15 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 16/18] net/ixgbe: create consistent filter Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:13     ` [PATCH v3 18/18] net/ixgbe: flush all the filter list Wei Zhao
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to destroy a flow rule.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
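Note: illustrative caller-side usage (not part of the patch; the
helper name is made up). A rule handle returned by rte_flow_create()
is torn down through rte_flow_destroy(), which ends up in the
function added below:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static int
remove_rule(uint8_t port_id, struct rte_flow *flow)
{
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_destroy(port_id, flow, &error);
	if (ret)
		printf("flow destroy failed: %s\n",
		       error.message ? error.message : "(no message)");
	return ret;
}
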
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 drivers/net/ixgbe/ixgbe_flow.c   | 117 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3648e12..1112a3e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7401,7 +7401,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 }
 
 /* Delete l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 6908c3d..01c18cd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -548,6 +548,9 @@ int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore);
+int
+ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index f6fb17c..78c0654 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -162,11 +162,14 @@ static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static int ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
 	ixgbe_flow_create,
-	NULL,
+	ixgbe_flow_destroy,
 	ixgbe_flow_flush,
 	NULL,
 };
@@ -2609,6 +2612,118 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Destroy a flow rule on ixgbe. */
+static int
+ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_flow *pmd_flow = flow;
+	enum rte_filter_type filter_type = pmd_flow->filter_type;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_NTUPLE:
+		ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ntuple_filter,
+			&ntuple_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ntuple_filter));
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ntuple_list,
+			ntuple_filter_ptr, entries);
+			rte_free(ntuple_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ethertype_filter,
+			&ethertype_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ethertype_filter));
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			rte_free(ethertype_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_SYN:
+		syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&syn_filter,
+			&syn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_syn_filter));
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_syn_list,
+				syn_filter_ptr, entries);
+			rte_free(syn_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_FDIR:
+		fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
+		(void)rte_memcpy(&fdir_rule,
+			&fdir_rule_ptr->filter_info,
+			sizeof(struct ixgbe_fdir_rule));
+		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_fdir_list,
+				fdir_rule_ptr, entries);
+			rte_free(fdir_rule_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_L2_TUNNEL:
+		l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_l2_tunnel_conf));
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			rte_free(l2_tn_filter_ptr);
+		}
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to destroy flow");
+		return ret;
+	}
+
+	TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
+		if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
+			TAILQ_REMOVE(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+			rte_free(ixgbe_flow_mem_ptr);
+			/* Stop here: the freed node must not be used
+			 * to advance the iteration. */
+			break;
+		}
+	}
+	rte_free(flow);
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v3 18/18] net/ixgbe: flush all the filter list
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (16 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 17/18] net/ixgbe: destroy " Wei Zhao
@ 2017-01-12  8:13     ` Wei Zhao
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to flush all the filter lists
on a port.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
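Note: illustrative caller-side usage (not part of the patch; the
helper name is made up):

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Drop every rte_flow rule on a port in one call, e.g. before
 * closing the device. */
static int
remove_all_rules(uint8_t port_id)
{
	struct rte_flow_error error;

	return rte_flow_flush(port_id, &error);
}
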
 drivers/net/ixgbe/ixgbe_ethdev.c |  3 +++
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_flow.c   | 56 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1112a3e..f46507b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1341,6 +1341,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* Remove all ntuple filters of the device */
 	ixgbe_ntuple_filter_uninit(eth_dev);
 
+	/* clear all the filters list */
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 01c18cd..5b31149 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -551,6 +551,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
+void ixgbe_filterlist_flush(void);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 78c0654..9de23d0 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2375,6 +2375,60 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 	return ret;
 }
 
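+/* Release every element on the SW filter lists and the flow list. */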
+void
+ixgbe_filterlist_flush(void)
+{
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
+		TAILQ_REMOVE(&filter_ntuple_list,
+				 ntuple_filter_ptr,
+				 entries);
+		rte_free(ntuple_filter_ptr);
+	}
+
+	while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
+		TAILQ_REMOVE(&filter_ethertype_list,
+				 ethertype_filter_ptr,
+				 entries);
+		rte_free(ethertype_filter_ptr);
+	}
+
+	while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
+		TAILQ_REMOVE(&filter_syn_list,
+				 syn_filter_ptr,
+				 entries);
+		rte_free(syn_filter_ptr);
+	}
+
+	while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
+		TAILQ_REMOVE(&filter_l2_tunnel_list,
+				 l2_tn_filter_ptr,
+				 entries);
+		rte_free(l2_tn_filter_ptr);
+	}
+
+	while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
+		TAILQ_REMOVE(&filter_fdir_list,
+				 fdir_rule_ptr,
+				 entries);
+		rte_free(fdir_rule_ptr);
+	}
+
+	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
+		TAILQ_REMOVE(&ixgbe_flow_list,
+				 ixgbe_flow_mem_ptr,
+				 entries);
+		rte_free(ixgbe_flow_mem_ptr->flow);
+		rte_free(ixgbe_flow_mem_ptr);
+	}
+}
+
 /**
  * Create or destroy a flow rule.
  * Theorically one rule can match more than one filters.
@@ -2751,5 +2805,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 		return ret;
 	}
 
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 00/18] net/ixgbe: Consistent filter API
  2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                       ` (17 preceding siblings ...)
  2017-01-12  8:13     ` [PATCH v3 18/18] net/ixgbe: flush all the filter list Wei Zhao
@ 2017-01-12  8:40     ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                         ` (18 more replies)
  18 siblings, 19 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: zhao wei

From: zhao wei <wei.zhao1@gmail.com>

The patches mainly finish following functions:
1) Store and restore all kinds of filters.
2) Parse all kinds of filters.
3) Add flow validate function.
4) Add flow create function.
5) Add flow destroy function.
6) Add flow flush function.

v2 changes:
 fix git log error
 Modify some function call relationship
 Change return value type of all parse flow functions
 Update error info for all flow ops
 Add ixgbe_filterlist_flush to flush flows and rules created

v3 changes:
 add new file ixgbe_flow.c to store generic API parser related functions
 add more comments about pattern and action rules
 add attr check in parser functions
 change struct name ixgbe_flow to rte_flow
 change SYN to TCP SYN
 change to use memset to initialize struct ixgbe_filter_info
 break down filter uninit process into 3 independent functions in eth_ixgbe_dev_uninit()
 change struct rte_flow_item_nvgre definition
 change struct rte_flow_item_e_tag definition
 fix one bug in function ixgbe_dev_filter_ctrl
 add goto in function ixgbe_flow_create
 delete some useless initialization
 eliminate some git log check warnings

v4 changes:
 fix some checkpatch warnings

zhao wei (18):
  net/ixgbe: store TCP SYN filter
  net/ixgbe: store flow director filter
  net/ixgbe: store L2 tunnel filter
  net/ixgbe: restore n-tuple filter
  net/ixgbe: restore ether type filter
  net/ixgbe: restore TCP SYN filter
  net/ixgbe: restore flow director filter
  net/ixgbe: restore L2 tunnel filter
  net/ixgbe: store and restore L2 tunnel configuration
  net/ixgbe: flush all the filters
  net/ixgbe: parse n-tuple filter
  net/ixgbe: parse ethertype filter
  net/ixgbe: parse TCP SYN filter
  net/ixgbe: parse L2 tunnel filter
  net/ixgbe: parse flow director filter
  net/ixgbe: create consistent filter
  net/ixgbe: destroy consistent filter
  net/ixgbe: flush all the filter list

 drivers/net/ixgbe/Makefile       |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  667 +++++++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  203 ++-
 drivers/net/ixgbe/ixgbe_fdir.c   |  407 ++++--
 drivers/net/ixgbe/ixgbe_flow.c   | 2808 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
 lib/librte_ether/rte_flow.h      |   48 +
 7 files changed, 3950 insertions(+), 211 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

-- 
2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread

* [PATCH v4 01/18] net/ixgbe: store TCP SYN filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 02/18] net/ixgbe: store flow director filter Wei Zhao
                         ` (17 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
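Note: a sketch of why the SW shadow copy matters. After a device reset
the SYNQF register is cleared, so a later patch in this series ("restore
TCP SYN filter") replays it from the stored value, roughly along these
lines (illustrative fragment, not this patch's diff):

	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, filter_info->syn_info);
		IXGBE_WRITE_FLUSH(hw);
	}
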
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 +++++++++++++-----
 drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b7ddd4f..719eddd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1272,10 +1272,12 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* enable support intr */
 	ixgbe_enable_intr(eth_dev);
 
+	/* initialize filter info */
+	memset(filter_info, 0,
+	       sizeof(struct ixgbe_filter_info));
+
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
-	memset(filter_info->fivetuple_mask, 0,
-	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
 
 	return 0;
 }
@@ -5603,15 +5605,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			bool add)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t syn_info;
 	uint32_t synqf;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
 
-	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+	syn_info = filter_info->syn_info;
 
 	if (add) {
-		if (synqf & IXGBE_SYN_FILTER_ENABLE)
+		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
 			return -EINVAL;
 		synqf = (uint32_t)(((filter->queue << IXGBE_SYN_FILTER_QUEUE_SHIFT) &
 			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
@@ -5621,10 +5626,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 		else
 			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
 	} else {
-		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
+		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
 			return -ENOENT;
 		synqf &= ~(IXGBE_SYN_FILTER_QUEUE | IXGBE_SYN_FILTER_ENABLE);
 	}
+
+	filter_info->syn_info = synqf;
 	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
 	IXGBE_WRITE_FLUSH(hw);
 	return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 69b276f..90a89ec 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -262,6 +262,8 @@ struct ixgbe_filter_info {
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
+	/* store the SYN filter info */
+	uint32_t syn_info;
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 02/18] net/ixgbe: store flow director filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
                         ` (16 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
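Note: a sketch of how the SW shadow added here is meant to be used by
the filter-program path. The whole union ixgbe_atr_input serves as the
hash key, so an exact duplicate can be detected before touching the HW.
This is illustrative only: "input" stands for the key built from the
user filter and "update" for a caller-side replace-allowed flag.

	union ixgbe_atr_input input;
	struct ixgbe_fdir_filter *node;

	node = ixgbe_fdir_filter_lookup(fdir_info, &input);
	if (node != NULL && !update)
		return -EINVAL; /* an identical rule already exists */
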
 drivers/net/ixgbe/ixgbe_ethdev.c |  64 ++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
 drivers/net/ixgbe/ixgbe_fdir.c   | 105 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 185 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 719eddd..9796c4f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -60,6 +60,7 @@
 #include <rte_malloc.h>
 #include <rte_random.h>
 #include <rte_dev.h>
+#include <rte_hash_crc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -165,6 +166,8 @@ enum ixgbevf_xcast_modes {
 
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1279,6 +1282,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
 
+	/* initialize flow director filter list & hash */
+	ixgbe_fdir_filter_init(eth_dev);
+
 	return 0;
 }
 
@@ -1320,9 +1326,67 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->hash_mac_addrs);
 	eth_dev->data->hash_mac_addrs = NULL;
 
+	/* remove all the fdir filters & hash */
+	ixgbe_fdir_filter_uninit(eth_dev);
+
 	return 0;
 }
 
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	if (fdir_info->hash_map)
+		rte_free(fdir_info->hash_map);
+	if (fdir_info->hash_handle)
+		rte_hash_free(fdir_info->hash_handle);
+
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+
+	return 0;
+}
+
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	char fdir_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters fdir_hash_params = {
+		.name = fdir_hash_name,
+		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
+		.key_len = sizeof(union ixgbe_atr_input),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&fdir_info->fdir_list);
+	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
+		 "fdir_%s", eth_dev->data->name);
+	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
+	if (!fdir_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
+		return -EINVAL;
+	}
+	fdir_info->hash_map = rte_zmalloc("ixgbe",
+					  sizeof(struct ixgbe_fdir_filter *) *
+					  IXGBE_MAX_FDIR_FILTER_NUM,
+					  0);
+	if (!fdir_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for fdir hash map!");
+		return -ENOMEM;
+	}
+
+	return 0;
+	return 0;
+}
+
 /*
  * Negotiate mailbox API version with the PF.
  * After reset API version is always set to the basic one (ixgbe_mbox_api_10).
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 90a89ec..300542e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
 #include "base/ixgbe_dcb_82598.h"
 #include "ixgbe_bypass.h"
 #include <rte_time.h>
+#include <rte_hash.h>
 
 /* need update link, bit flag */
 #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -130,10 +131,11 @@
 #define IXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
+#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+
 /*
  * Information about the fdir mode.
  */
-
 struct ixgbe_hw_fdir_mask {
 	uint16_t vlan_tci_mask;
 	uint32_t src_ipv4_mask;
@@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
 	uint8_t  tunnel_type_mask;
 };
 
+struct ixgbe_fdir_filter {
+	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t fdirhash; /* hash value for fdir */
+	uint8_t queue; /* assigned rx queue */
+};
+
+/* list of fdir filters */
+TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
 	uint64_t    remove;
 	uint64_t    f_add;
 	uint64_t    f_remove;
+	struct ixgbe_fdir_filter_list fdir_list; /* filter list*/
+	/* store the pointers of the filters, index is the hash value. */
+	struct ixgbe_fdir_filter **hash_map;
+	struct rte_hash *hash_handle; /* cuckoo hash handler */
 };
 
 /* structure for interrupt relative data */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 4b81ee3..8bf5705 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -43,6 +43,7 @@
 #include <rte_pci.h>
 #include <rte_ether.h>
 #include <rte_ethdev.h>
+#include <rte_malloc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash)
 
 }
 
+static inline struct ixgbe_fdir_filter *
+ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return fdir_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 struct ixgbe_fdir_filter *fdir_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(fdir_info->hash_handle,
+			       &fdir_filter->ixgbe_fdir);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert fdir filter to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	fdir_info->hash_map[ret] = fdir_filter;
+
+	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	ret = rte_hash_del_key(fdir_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
+		return ret;
+	}
+
+	fdir_filter = fdir_info->hash_map[ret];
+	fdir_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_free(fdir_filter);
+
+	return 0;
+}
+
 /*
  * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
  * @dev: pointer to the structure rte_eth_dev
@@ -1098,6 +1158,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	struct ixgbe_fdir_filter *node;
+	bool add_node = FALSE;
 
 	if (fdir_mode == RTE_FDIR_MODE_NONE)
 		return -ENOTSUP;
@@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
+		err = ixgbe_remove_fdir_filter(info, &input);
+		if (err < 0)
+			return err;
+
 		err = fdir_erase_filter_82599(hw, fdirhash);
 		if (err < 0)
 			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
@@ -1172,6 +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	else
 		return -EINVAL;
 
+	node = ixgbe_fdir_filter_lookup(info, &input);
+	if (node) {
+		if (update) {
+			node->fdirflags = fdircmd_flags;
+			node->fdirhash = fdirhash;
+			node->queue = queue;
+		} else {
+			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
+			return -EINVAL;
+		}
+	} else {
+		add_node = TRUE;
+		node = rte_zmalloc("ixgbe_fdir",
+				   sizeof(struct ixgbe_fdir_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
+		(void)rte_memcpy(&node->ixgbe_fdir,
+				 &input,
+				 sizeof(union ixgbe_atr_input));
+		node->fdirflags = fdircmd_flags;
+		node->fdirhash = fdirhash;
+		node->queue = queue;
+
+		err = ixgbe_insert_fdir_filter(info, node);
+		if (err < 0) {
+			rte_free(node);
+			return err;
+		}
+	}
+
 	if (is_perfect) {
 		err = fdir_write_perfect_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash,
@@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		err = fdir_add_signature_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash);
 	}
-	if (err < 0)
+	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
-	else
+
+		if (add_node)
+			(void)ixgbe_remove_fdir_filter(info, &input);
+	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
+	}
 
 	return err;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 03/18] net/ixgbe: store L2 tunnel filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 02/18] net/ixgbe: store flow director filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
                         ` (15 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
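
The hash key here is the whole struct ixgbe_l2_tn_key, so it is worth
zeroing the struct before setting the fields: rte_hash compares keys byte
for byte, and uninitialized padding would make identical keys miss. A
small standalone sketch of the keying, with a portable bitwise CRC-32C in
place of rte_hash_crc() (which computes the same CRC, hardware-assisted
where available); names are illustrative only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_l2_tn_key {
	uint32_t l2_tn_type;	/* stands in for enum rte_eth_tunnel_type */
	uint32_t tn_id;
};

/* reflected CRC-32C (Castagnoli), polynomial 0x82F63B78 */
static uint32_t crc32c(const void *data, size_t len, uint32_t init)
{
	const uint8_t *p = data;
	uint32_t crc = ~init;
	size_t i;
	int b;

	for (i = 0; i < len; i++) {
		crc ^= p[i];
		for (b = 0; b < 8; b++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
	}
	return ~crc;
}

int main(void)
{
	struct toy_l2_tn_key key;

	memset(&key, 0, sizeof(key));	/* zero padding first */
	key.l2_tn_type = 4;		/* e.g. an E-tag tunnel type */
	key.tn_id = 1000;

	printf("hash seed for key: 0x%08x\n",
	       (unsigned)crc32c(&key, sizeof(key), 0));
	return 0;
}
---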
 drivers/net/ixgbe/ixgbe_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  24 ++++++
 2 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 9796c4f..e63b635 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -168,6 +168,8 @@ static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1285,6 +1287,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow director filter list & hash */
 	ixgbe_fdir_filter_init(eth_dev);
 
+	/* initialize l2 tunnel filter list & hash */
+	ixgbe_l2_tn_filter_init(eth_dev);
+
 	return 0;
 }
 
@@ -1329,6 +1333,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the fdir filters & hash */
 	ixgbe_fdir_filter_uninit(eth_dev);
 
+	/* remove all the L2 tunnel filters & hash */
+	ixgbe_l2_tn_filter_uninit(eth_dev);
+
 	return 0;
 }
 
@@ -1353,6 +1360,27 @@ static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	if (l2_tn_info->hash_map)
+		rte_free(l2_tn_info->hash_map);
+	if (l2_tn_info->hash_handle)
+		rte_hash_free(l2_tn_info->hash_handle);
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		TAILQ_REMOVE(&l2_tn_info->l2_tn_list,
+			     l2_tn_filter,
+			     entries);
+		rte_free(l2_tn_filter);
+	}
+
+	return 0;
+}
+
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 {
 	struct ixgbe_hw_fdir_info *fdir_info =
@@ -1384,6 +1412,40 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	return 0;
+}
+
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	char l2_tn_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters l2_tn_hash_params = {
+		.name = l2_tn_hash_name,
+		.entries = IXGBE_MAX_L2_TN_FILTER_NUM,
+		.key_len = sizeof(struct ixgbe_l2_tn_key),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&l2_tn_info->l2_tn_list);
+	snprintf(l2_tn_hash_name, RTE_HASH_NAMESIZE,
+		 "l2_tn_%s", eth_dev->data->name);
+	l2_tn_info->hash_handle = rte_hash_create(&l2_tn_hash_params);
+	if (!l2_tn_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create L2 TN hash table!");
+		return -EINVAL;
+	}
+	l2_tn_info->hash_map = rte_zmalloc("ixgbe",
+				   sizeof(struct ixgbe_l2_tn_filter *) *
+				   IXGBE_MAX_L2_TN_FILTER_NUM,
+				   0);
+	if (!l2_tn_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			"Failed to allocate memory for L2 TN hash map!");
+		return -ENOMEM;
+	}
 
 	return 0;
 }
@@ -7210,12 +7272,104 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+static inline struct ixgbe_l2_tn_filter *
+ixgbe_l2_tn_filter_lookup(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(l2_tn_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return l2_tn_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_filter *l2_tn_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(l2_tn_info->hash_handle,
+			       &l2_tn_filter->key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert L2 tunnel filter"
+			    " to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_info->hash_map[ret] = l2_tn_filter;
+
+	TAILQ_INSERT_TAIL(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	ret = rte_hash_del_key(l2_tn_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "No such L2 tunnel filter to delete %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_filter = l2_tn_info->hash_map[ret];
+	l2_tn_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+	rte_free(l2_tn_filter);
+
+	return 0;
+}
+
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+	struct ixgbe_l2_tn_filter *node;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+
+	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+
+	if (node) {
+		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
+		return -EINVAL;
+	}
+
+	node = rte_zmalloc("ixgbe_l2_tn",
+			   sizeof(struct ixgbe_l2_tn_filter),
+			   0);
+	if (!node)
+		return -ENOMEM;
+
+	(void)rte_memcpy(&node->key,
+			 &key,
+			 sizeof(struct ixgbe_l2_tn_key));
+	node->pool = l2_tunnel->pool;
+	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+	if (ret < 0) {
+		rte_free(node);
+		return ret;
+	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7227,6 +7381,9 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
+	if (ret < 0)
+		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+
 	return ret;
 }
 
@@ -7235,7 +7392,16 @@ static int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+	ret = ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+	if (ret < 0)
+		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7261,7 +7427,7 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 				  enum rte_filter_op filter_op,
 				  void *arg)
 {
-	int ret = 0;
+	int ret;
 
 	if (filter_op == RTE_ETH_FILTER_NOP)
 		return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 300542e..b2cf789 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -132,6 +132,7 @@
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
 #define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+#define IXGBE_MAX_L2_TN_FILTER_NUM      128
 
 /*
  * Information about the fdir mode.
@@ -283,6 +284,25 @@ struct ixgbe_filter_info {
 	uint32_t syn_info;
 };
 
+struct ixgbe_l2_tn_key {
+	enum rte_eth_tunnel_type          l2_tn_type;
+	uint32_t                          tn_id;
+};
+
+struct ixgbe_l2_tn_filter {
+	TAILQ_ENTRY(ixgbe_l2_tn_filter)    entries;
+	struct ixgbe_l2_tn_key             key;
+	uint32_t                           pool;
+};
+
+TAILQ_HEAD(ixgbe_l2_tn_filter_list, ixgbe_l2_tn_filter);
+
+struct ixgbe_l2_tn_info {
+	struct ixgbe_l2_tn_filter_list      l2_tn_list;
+	struct ixgbe_l2_tn_filter         **hash_map;
+	struct rte_hash                    *hash_handle;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -302,6 +322,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_NIC_BYPASS */
 	struct ixgbe_filter_info    filter;
+	struct ixgbe_l2_tn_info     l2_tn;
 
 	bool rx_bulk_alloc_allowed;
 	bool rx_vec_allowed;
@@ -349,6 +370,9 @@ struct ixgbe_adapter {
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
+#define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
+	(&((struct ixgbe_adapter *)adapter)->l2_tn)
+
 /*
  * RX/TX function prototypes
  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 04/18] net/ixgbe: restore n-tuple filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (2 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 05/18] net/ixgbe: restore ether type filter Wei Zhao
                         ` (14 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring n-tuple filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
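
The point of keeping the list: a port stop/start resets the HW and wipes
the 5-tuple registers, so ixgbe_filter_restore() now replays every stored
node through the new inject helper at dev_start. A toy sketch of that
replay loop, with a printf standing in for the IXGBE_WRITE_REG() sequence;
names are illustrative only:

#include <stdint.h>
#include <stdio.h>
#include <sys/queue.h>

struct toy_5tuple {
	TAILQ_ENTRY(toy_5tuple) entries;
	int index;	/* HW filter slot */
	uint8_t queue;
};

TAILQ_HEAD(toy_5tuple_list, toy_5tuple);

/* stands in for ixgbe_inject_5tuple_filter() */
static void toy_inject(const struct toy_5tuple *f)
{
	printf("FTQF[%d] <- queue %u\n", f->index, (unsigned)f->queue);
}

static void toy_restore(struct toy_5tuple_list *list)
{
	struct toy_5tuple *node;

	TAILQ_FOREACH(node, list, entries)
		toy_inject(node);
}

int main(void)
{
	struct toy_5tuple_list list = TAILQ_HEAD_INITIALIZER(list);
	struct toy_5tuple a = { .index = 0, .queue = 2 };
	struct toy_5tuple b = { .index = 1, .queue = 5 };

	TAILQ_INSERT_TAIL(&list, &a, entries);
	TAILQ_INSERT_TAIL(&list, &b, entries);

	toy_restore(&list);	/* what dev_start does after the HW reset */
	return 0;
}
---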
 drivers/net/ixgbe/ixgbe_ethdev.c | 140 +++++++++++++++++++++++++--------------
 1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index e63b635..1630e65 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -170,6 +170,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -387,6 +388,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
+static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1336,6 +1338,27 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the L2 tunnel filters & hash */
 	ixgbe_l2_tn_filter_uninit(eth_dev);
 
+	/* Remove all ntuple filters of the device */
+	ixgbe_ntuple_filter_uninit(eth_dev);
+
+	return 0;
+}
+
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list))) {
+		TAILQ_REMOVE(&filter_info->fivetuple_list,
+			     p_5tuple,
+			     entries);
+		rte_free(p_5tuple);
+	}
+	memset(filter_info->fivetuple_mask, 0,
+	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
+
 	return 0;
 }
 
@@ -2504,6 +2527,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_filter_restore(dev);
 
 	return 0;
 
@@ -2524,9 +2548,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
-	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
-	struct ixgbe_5tuple_filter *p_5tuple, *p_5tuple_next;
 	struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	int vf;
@@ -2564,17 +2585,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_ixgbe_dev_atomic_write_link_status(dev, &link);
 
-	/* Remove all ntuple filters of the device */
-	for (p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list);
-	     p_5tuple != NULL; p_5tuple = p_5tuple_next) {
-		p_5tuple_next = TAILQ_NEXT(p_5tuple, entries);
-		TAILQ_REMOVE(&filter_info->fivetuple_list,
-			     p_5tuple, entries);
-		rte_free(p_5tuple);
-	}
-	memset(filter_info->fivetuple_mask, 0,
-		sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-
 	if (!rte_intr_allow_others(intr_handle))
 		/* resume to the default handler */
 		rte_intr_callback_register(intr_handle,
@@ -5836,6 +5846,52 @@ convert_protocol_type(uint8_t protocol_value)
 		return IXGBE_FILTER_PROTOCOL_NONE;
 }
 
+/* inject a 5-tuple filter to HW */
+static inline void
+ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+			   struct ixgbe_5tuple_filter *filter)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+	uint32_t ftqf, sdpqf;
+	uint32_t l34timir = 0;
+	uint8_t mask = 0xff;
+
+	i = filter->index;
+
+	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
+				IXGBE_SDPQF_DSTPORT_SHIFT);
+	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
+
+	ftqf = (uint32_t)(filter->filter_info.proto &
+		IXGBE_FTQF_PROTOCOL_MASK);
+	ftqf |= (uint32_t)((filter->filter_info.priority &
+		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
+	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
+		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= IXGBE_FTQF_DEST_PORT_MASK;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
+	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
+	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
+	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
+
+	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
+	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+
+	l34timir |= IXGBE_L34T_IMIR_RESERVE;
+	l34timir |= (uint32_t)(filter->queue <<
+				IXGBE_L34T_IMIR_QUEUE_SHIFT);
+	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
+}
+
 /*
  * add a 5tuple filter
  *
@@ -5853,13 +5909,9 @@ static int
 ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	int i, idx, shift;
-	uint32_t ftqf, sdpqf;
-	uint32_t l34timir = 0;
-	uint8_t mask = 0xff;
 
 	/*
 	 * look for an unused 5tuple filter index,
@@ -5882,37 +5934,8 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
-				IXGBE_SDPQF_DSTPORT_SHIFT);
-	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
-
-	ftqf = (uint32_t)(filter->filter_info.proto &
-		IXGBE_FTQF_PROTOCOL_MASK);
-	ftqf |= (uint32_t)((filter->filter_info.priority &
-		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
-	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
-		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
-	if (filter->filter_info.dst_ip_mask == 0)
-		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
-	if (filter->filter_info.src_port_mask == 0)
-		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
-	if (filter->filter_info.dst_port_mask == 0)
-		mask &= IXGBE_FTQF_DEST_PORT_MASK;
-	if (filter->filter_info.proto_mask == 0)
-		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
-	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
-	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
-	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
-
-	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
-	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+	ixgbe_inject_5tuple_filter(dev, filter);
 
-	l34timir |= IXGBE_L34T_IMIR_RESERVE;
-	l34timir |= (uint32_t)(filter->queue <<
-				IXGBE_L34T_IMIR_QUEUE_SHIFT);
-	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
 	return 0;
 }
 
@@ -7925,6 +7948,27 @@ ixgbevf_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle,
 	ixgbevf_dev_interrupt_action(dev);
 }
 
+/* restore n-tuple filter */
+static inline void
+ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *node;
+
+	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
+		ixgbe_inject_5tuple_filter(dev, node);
+	}
+}
+
+static int
+ixgbe_filter_restore(struct rte_eth_dev *dev)
+{
+	ixgbe_ntuple_filter_restore(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 05/18] net/ixgbe: restore ether type filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (3 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
                         ` (13 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring ether type filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
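
The ethertype table entry grows from a bare ethertype to the full ETQF and
ETQS register images, so restore can rewrite the registers verbatim instead
of recomputing them. A toy sketch of the bitmask slot allocator that backs
ixgbe_ethertype_filter_insert()/_remove(); names are illustrative only:

#include <stdint.h>
#include <stdio.h>

#define MAX_SLOTS 8

struct toy_etype_filter {
	uint16_t ethertype;
	uint32_t etqf;	/* register images, stored as written */
	uint32_t etqs;
};

static uint8_t slot_mask;	/* one bit per used slot */
static struct toy_etype_filter slots[MAX_SLOTS];

static int toy_insert(const struct toy_etype_filter *f)
{
	int i;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (!(slot_mask & (1 << i))) {
			slot_mask |= 1 << i;
			slots[i] = *f;
			return i;
		}
	}
	return -1;	/* full: the driver maps this to -ENOSPC */
}

static int toy_remove(uint8_t idx)
{
	if (idx >= MAX_SLOTS)
		return -1;
	slot_mask &= ~(1 << idx);
	slots[idx] = (struct toy_etype_filter){ 0 };
	return idx;
}

int main(void)
{
	struct toy_etype_filter f = {
		.ethertype = 0x88F7, .etqf = 1, .etqs = 2,
	};
	int i = toy_insert(&f);

	printf("slot %d, mask 0x%02x\n", i, (unsigned)slot_mask);
	toy_remove((uint8_t)i);
	printf("after remove, mask 0x%02x\n", (unsigned)slot_mask);
	return 0;
}
---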
 drivers/net/ixgbe/ixgbe_ethdev.c | 79 ++++++++++++++++------------------------
 drivers/net/ixgbe/ixgbe_ethdev.h | 57 ++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_pf.c     | 25 ++++++++-----
 3 files changed, 104 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1630e65..6c46354 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6259,47 +6259,6 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static inline int
-ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (filter_info->ethertype_filters[i] == ethertype &&
-		    (filter_info->ethertype_mask & (1 << i)))
-			return i;
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] = ethertype;
-			return i;
-		}
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
-			uint8_t idx)
-{
-	if (idx >= IXGBE_MAX_ETQF_FILTERS)
-		return -1;
-	filter_info->ethertype_mask &= ~(1 << idx);
-	filter_info->ethertype_filters[idx] = 0;
-	return idx;
-}
-
 static int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
@@ -6311,6 +6270,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
@@ -6344,18 +6304,22 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	}
 
 	if (add) {
-		ret = ixgbe_ethertype_filter_insert(filter_info,
-			filter->ether_type);
-		if (ret < 0) {
-			PMD_DRV_LOG(ERR, "ethertype filters are full.");
-			return -ENOSYS;
-		}
 		etqf = IXGBE_ETQF_FILTER_EN;
 		etqf |= (uint32_t)filter->ether_type;
 		etqs |= (uint32_t)((filter->queue <<
 				    IXGBE_ETQS_RX_QUEUE_SHIFT) &
 				    IXGBE_ETQS_RX_QUEUE);
 		etqs |= IXGBE_ETQS_QUEUE_EN;
+
+		ethertype_filter.ethertype = filter->ether_type;
+		ethertype_filter.etqf = etqf;
+		ethertype_filter.etqs = etqs;
+		ret = ixgbe_ethertype_filter_insert(filter_info,
+						    &ethertype_filter);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "ethertype filters are full.");
+			return -ENOSPC;
+		}
 	} else {
 		ret = ixgbe_ethertype_filter_remove(filter_info, (uint8_t)ret);
 		if (ret < 0)
@@ -7961,10 +7925,31 @@ ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore ethernet type filter */
+static inline void
+ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i)) {
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i),
+					filter_info->ethertype_filters[i].etqf);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i),
+					filter_info->ethertype_filters[i].etqs);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
+	ixgbe_ethertype_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b2cf789..36aae01 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -270,13 +270,19 @@ struct ixgbe_5tuple_filter {
 	(RTE_ALIGN(IXGBE_MAX_FTQF_FILTERS, (sizeof(uint32_t) * NBBY)) / \
 	 (sizeof(uint32_t) * NBBY))
 
+struct ixgbe_ethertype_filter {
+	uint16_t ethertype;
+	uint32_t etqf;
+	uint32_t etqs;
+};
+
 /*
  * Structure to store filters' info.
  */
 struct ixgbe_filter_info {
 	uint8_t ethertype_mask;  /* Bit mask for every used ethertype filter */
 	/* store used ethertype filters*/
-	uint16_t ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
+	struct ixgbe_ethertype_filter ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
@@ -491,4 +497,53 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+
+static inline int
+ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
+			      uint16_t ethertype)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_filters[i].ethertype == ethertype &&
+		    (filter_info->ethertype_mask & (1 << i)))
+			return i;
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
+			      struct ixgbe_ethertype_filter *ethertype_filter)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (!(filter_info->ethertype_mask & (1 << i))) {
+			filter_info->ethertype_mask |= 1 << i;
+			filter_info->ethertype_filters[i].ethertype =
+				ethertype_filter->ethertype;
+			filter_info->ethertype_filters[i].etqf =
+				ethertype_filter->etqf;
+			filter_info->ethertype_filters[i].etqs =
+				ethertype_filter->etqs;
+			return i;
+		}
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
+			      uint8_t idx)
+{
+	if (idx >= IXGBE_MAX_ETQF_FILTERS)
+		return -1;
+	filter_info->ethertype_mask &= ~(1 << idx);
+	filter_info->ethertype_filters[idx].ethertype = 0;
+	filter_info->ethertype_filters[idx].etqf = 0;
+	filter_info->ethertype_filters[idx].etqs = 0;
+	return idx;
+}
+
 #endif /* _IXGBE_ETHDEV_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index cb10265..4b33130 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -178,6 +178,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
 	uint16_t vf_num;
 	int i;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (!hw->mac.ops.set_ethertype_anti_spoofing) {
 		RTE_LOG(INFO, PMD, "ether type anti-spoofing is not"
@@ -185,16 +186,22 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		return;
 	}
 
-	/* occupy an entity of ether type filter */
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] =
-				IXGBE_ETHERTYPE_FLOW_CTRL;
-			break;
-		}
+	i = ixgbe_ethertype_filter_lookup(filter_info,
+					  IXGBE_ETHERTYPE_FLOW_CTRL);
+	if (i >= 0) {
+		RTE_LOG(ERR, PMD, "An ether type filter"
+			" entity for flow control already exists!\n");
+		return;
 	}
-	if (i == IXGBE_MAX_ETQF_FILTERS) {
+
+	ethertype_filter.ethertype = IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqf = IXGBE_ETQF_FILTER_EN |
+				IXGBE_ETQF_TX_ANTISPOOF |
+				IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqs = 0;
+	i = ixgbe_ethertype_filter_insert(filter_info,
+					  &ethertype_filter);
+	if (i < 0) {
 		RTE_LOG(ERR, PMD, "Cannot find an unused ether type filter"
 			" entity for flow control.\n");
 		return;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 06/18] net/ixgbe: restore TCP SYN filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (4 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 05/18] net/ixgbe: restore ether type filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 07/18] net/ixgbe: restore flow director filter Wei Zhao
                         ` (12 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
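
The SYN filter is a single register, so the SW copy is just the SYNQF image
saved at set time; restore rewrites it only when the enable bit is set. A
toy sketch of that check-and-rewrite, with a variable standing in for the
register; names are illustrative only:

#include <stdint.h>
#include <stdio.h>

#define TOY_SYN_FILTER_ENABLE 0x00000001u

static uint32_t hw_synqf;	/* stands in for the SYNQF register */

static void toy_syn_restore(uint32_t syn_info)
{
	if (syn_info & TOY_SYN_FILTER_ENABLE) {
		hw_synqf = syn_info;	/* write + flush in the driver */
		printf("SYNQF restored: 0x%08x\n", (unsigned)hw_synqf);
	}
}

int main(void)
{
	toy_syn_restore(0);				/* disabled: no-op */
	toy_syn_restore(TOY_SYN_FILTER_ENABLE | 0x20);	/* enabled */
	return 0;
}
---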
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6c46354..6cd5975 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7945,11 +7945,29 @@ ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore SYN filter */
+static inline void
+ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t synqf;
+
+	synqf = filter_info->syn_info;
+
+	if (synqf & IXGBE_SYN_FILTER_ENABLE) {
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
+	ixgbe_syn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 07/18] net/ixgbe: restore flow director filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (5 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
                         ` (11 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
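
Restore has to honor the configured FDIR mode: perfect-mode filters go back
through the perfect-filter write path, signature-mode ones through the
signature path. A toy sketch of picking the write routine once and then
replaying every stored node with it; names are illustrative only:

#include <stdio.h>

enum toy_fdir_mode { TOY_NONE, TOY_SIGNATURE, TOY_PERFECT };

struct toy_fdir_node { int hash; int queue; };

static void toy_write_perfect(const struct toy_fdir_node *n)
{
	printf("perfect: hash %d -> queue %d\n", n->hash, n->queue);
}

static void toy_write_signature(const struct toy_fdir_node *n)
{
	printf("signature: hash %d -> queue %d\n", n->hash, n->queue);
}

static void toy_fdir_restore(enum toy_fdir_mode mode,
			     const struct toy_fdir_node *nodes, int n)
{
	void (*write_filter)(const struct toy_fdir_node *) =
		(mode == TOY_PERFECT) ? toy_write_perfect : toy_write_signature;
	int i;

	for (i = 0; i < n; i++)
		write_filter(&nodes[i]);
}

int main(void)
{
	struct toy_fdir_node nodes[] = { { 7, 1 }, { 9, 4 } };

	toy_fdir_restore(TOY_PERFECT, nodes, 2);
	return 0;
}
---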
 drivers/net/ixgbe/ixgbe_ethdev.c |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_fdir.c   | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6cd5975..f48e30c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7968,6 +7968,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
+	ixgbe_fdir_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 36aae01..4f281e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -497,6 +497,7 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 8bf5705..627c51a 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1479,3 +1479,38 @@ ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 	}
 	return ret;
 }
+
+/* restore flow director filter */
+void
+ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *node;
+	bool is_perfect = FALSE;
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
+	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		is_perfect = TRUE;
+
+	if (is_perfect) {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_write_perfect_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash,
+							      fdir_mode);
+		}
+	} else {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_add_signature_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash);
+		}
+	}
+}
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 08/18] net/ixgbe: restore L2 tunnel filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (6 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 07/18] net/ixgbe: restore flow director filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
                         ` (10 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
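
The new 'restore' flag lets ixgbe_dev_l2_tunnel_filter_add() serve both
callers: a fresh add does the duplicate check plus SW-list insert, while a
replay after reset skips straight to the HW programming since the node is
already listed. A toy sketch of that split; names are illustrative only:

#include <stdbool.h>
#include <stdio.h>

static int sw_list_has(int key) { (void)key; return 0; }	/* toy lookup */
static void sw_list_add(int key) { printf("SW add %d\n", key); }
static int hw_program(int key) { printf("HW add %d\n", key); return 0; }

static int toy_filter_add(int key, bool restore)
{
	if (!restore) {
		if (sw_list_has(key))
			return -1;	/* -EINVAL in the driver */
		sw_list_add(key);
	}
	return hw_program(key);		/* runs in both paths */
}

int main(void)
{
	toy_filter_add(5, false);	/* user add: SW bookkeeping + HW */
	toy_filter_add(5, true);	/* restore after reset: HW only */
	return 0;
}
---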
 drivers/net/ixgbe/ixgbe_ethdev.c | 69 ++++++++++++++++++++++++++--------------
 1 file changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index f48e30c..0ad613b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7324,7 +7324,8 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
-			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
@@ -7332,30 +7333,33 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
-	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
-	key.tn_id = l2_tunnel->tunnel_id;
+	if (!restore) {
+		key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+		key.tn_id = l2_tunnel->tunnel_id;
 
-	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+		node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
 
-	if (node) {
-		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
-		return -EINVAL;
-	}
+		if (node) {
+			PMD_DRV_LOG(ERR,
+				    "The L2 tunnel filter already exists!");
+			return -EINVAL;
+		}
 
-	node = rte_zmalloc("ixgbe_l2_tn",
-			   sizeof(struct ixgbe_l2_tn_filter),
-			   0);
-	if (!node)
-		return -ENOMEM;
+		node = rte_zmalloc("ixgbe_l2_tn",
+				   sizeof(struct ixgbe_l2_tn_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
 
-	(void)rte_memcpy(&node->key,
-			 &key,
-			 sizeof(struct ixgbe_l2_tn_key));
-	node->pool = l2_tunnel->pool;
-	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
-	if (ret < 0) {
-		rte_free(node);
-		return ret;
+		(void)rte_memcpy(&node->key,
+				 &key,
+				 sizeof(struct ixgbe_l2_tn_key));
+		node->pool = l2_tunnel->pool;
+		ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+		if (ret < 0) {
+			rte_free(node);
+			return ret;
+		}
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
@@ -7368,7 +7372,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
-	if (ret < 0)
+	if ((!restore) && (ret < 0))
 		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
 
 	return ret;
@@ -7429,7 +7433,8 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_ADD:
 		ret = ixgbe_dev_l2_tunnel_filter_add
 			(dev,
-			 (struct rte_eth_l2_tunnel_conf *)arg);
+			 (struct rte_eth_l2_tunnel_conf *)arg,
+			 FALSE);
 		break;
 	case RTE_ETH_FILTER_DELETE:
 		ret = ixgbe_dev_l2_tunnel_filter_del
@@ -7962,6 +7967,23 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore L2 tunnel filter */
+static inline void
+ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *node;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+
+	TAILQ_FOREACH(node, &l2_tn_info->l2_tn_list, entries) {
+		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = node->key.tn_id;
+		l2_tn_conf.pool           = node->pool;
+		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
@@ -7969,6 +7991,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
 	ixgbe_fdir_filter_restore(dev);
+	ixgbe_l2_tn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 09/18] net/ixgbe: store and restore L2 tunnel configuration
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (7 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 10/18] net/ixgbe: flush all the filters Wei Zhao
                         ` (9 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing and restoring L2 tunnel configuration in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
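
Unlike the per-filter lists, this is global state: three knobs are mirrored
in SW whenever they change and replayed as a block by ixgbe_l2_tunnel_conf()
right before the filters are restored. A toy sketch of the replay; the
0x893F default is the 802.1BR E-tag ether type, all other names are
illustrative only:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_DEFAULT_ETAG_ETYPE 0x893F

struct toy_l2_tn_conf {
	bool e_tag_en;
	bool e_tag_fwd_en;
	uint16_t e_tag_ether_type;
};

static void toy_l2_tunnel_conf(const struct toy_l2_tn_conf *c)
{
	if (c->e_tag_en)
		printf("HW: enable E-tag\n");
	if (c->e_tag_fwd_en)
		printf("HW: enable E-tag forwarding\n");
	printf("HW: E-tag ether type 0x%04x\n", (unsigned)c->e_tag_ether_type);
}

int main(void)
{
	struct toy_l2_tn_conf c = {
		.e_tag_en = true,
		.e_tag_fwd_en = false,
		.e_tag_ether_type = TOY_DEFAULT_ETAG_ETYPE,
	};

	toy_l2_tunnel_conf(&c);		/* replayed at dev_start */
	return 0;
}
---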
 drivers/net/ixgbe/ixgbe_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  3 +++
 2 files changed, 39 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0ad613b..3059c64 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -389,6 +389,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
+static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1469,6 +1470,9 @@ static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
 			"Failed to allocate memory for L2 TN hash map!");
 		return -ENOMEM;
 	}
+	l2_tn_info->e_tag_en = FALSE;
+	l2_tn_info->e_tag_fwd_en = FALSE;
+	l2_tn_info->e_tag_ether_type = DEFAULT_ETAG_ETYPE;
 
 	return 0;
 }
@@ -2527,6 +2531,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_l2_tunnel_conf(dev);
 	ixgbe_filter_restore(dev);
 
 	return 0;
@@ -7083,12 +7088,15 @@ ixgbe_dev_l2_tunnel_eth_type_conf(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	if (l2_tunnel == NULL)
 		return -EINVAL;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_ether_type = l2_tunnel->ether_type;
 		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
 		break;
 	default:
@@ -7127,9 +7135,12 @@ ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = TRUE;
 		ret = ixgbe_e_tag_enable(hw);
 		break;
 	default:
@@ -7168,9 +7179,12 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = FALSE;
 		ret = ixgbe_e_tag_disable(hw);
 		break;
 	default:
@@ -7477,10 +7491,13 @@ ixgbe_dev_l2_tunnel_forwarding_enable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = TRUE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
 		break;
 	default:
@@ -7498,10 +7515,13 @@ ixgbe_dev_l2_tunnel_forwarding_disable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = FALSE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
 		break;
 	default:
@@ -7996,6 +8016,22 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tn_info->e_tag_en)
+		(void)ixgbe_e_tag_enable(hw);
+
+	if (l2_tn_info->e_tag_fwd_en)
+		(void)ixgbe_e_tag_forwarding_en_dis(dev, 1);
+
+	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4f281e8..4e0264c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -307,6 +307,9 @@ struct ixgbe_l2_tn_info {
 	struct ixgbe_l2_tn_filter_list      l2_tn_list;
 	struct ixgbe_l2_tn_filter         **hash_map;
 	struct rte_hash                    *hash_handle;
+	bool e_tag_en; /* e-tag enabled */
+	bool e_tag_fwd_en; /* e-tag based forwarding enabled */
+	uint16_t e_tag_ether_type; /* ether type for e-tag */
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 10/18] net/ixgbe: flush all the filters
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (8 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
                         ` (8 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for flushing all the filters in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
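
Only .flush is wired up at this point; the other rte_flow callbacks arrive
in the later patches. A toy sketch of the ops-table plus error-mapping
pattern, using designated initializers (arguably clearer than the
positional NULLs in the patch) and toy stand-ins for the rte_flow types;
names are illustrative only:

#include <errno.h>
#include <stdio.h>

struct toy_flow_error { int code; const char *message; };

struct toy_flow_ops {
	int (*validate)(void);
	int (*create)(void);
	int (*destroy)(void);
	int (*flush)(struct toy_flow_error *error);
	int (*query)(void);
};

static int toy_clear_all(void) { return -1; }	/* pretend a HW op failed */

static int toy_flow_flush(struct toy_flow_error *error)
{
	if (toy_clear_all() < 0) {
		error->code = EINVAL;
		error->message = "Failed to flush rule";
		return -1;
	}
	return 0;
}

static const struct toy_flow_ops toy_ops = {
	.flush = toy_flow_flush,	/* everything else stays NULL */
};

int main(void)
{
	struct toy_flow_error err = { 0, NULL };

	if (toy_ops.flush(&err) < 0)
		printf("flush failed: %s (%d)\n", err.message, err.code);
	return 0;
}
---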
 drivers/net/ixgbe/Makefile       |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  79 ++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  16 ++++++
 drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 115 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   1 +
 6 files changed, 236 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index a3e6a52..38b9fbd 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_fdir.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_pf.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_flow.c
 ifeq ($(CONFIG_RTE_ARCH_ARM64),y)
 SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
 else
@@ -127,5 +128,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_eal lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_mempool lib/librte_mbuf
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_hash
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3059c64..2a67462 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6319,6 +6319,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 		ethertype_filter.ethertype = filter->ether_type;
 		ethertype_filter.etqf = etqf;
 		ethertype_filter.etqs = etqs;
+		ethertype_filter.conf = FALSE;
 		ret = ixgbe_ethertype_filter_insert(filter_info,
 						    &ethertype_filter);
 		if (ret < 0) {
@@ -6420,7 +6421,7 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_op filter_op,
 		     void *arg)
 {
-	int ret = -EINVAL;
+	int ret = 0;
 
 	switch (filter_type) {
 	case RTE_ETH_FILTER_NTUPLE:
@@ -6438,9 +6439,15 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_L2_TUNNEL:
 		ret = ixgbe_dev_l2_tunnel_filter_handle(dev, filter_op, arg);
 		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &ixgbe_flow_ops;
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 							filter_type);
+		ret = -EINVAL;
 		break;
 	}
 
@@ -8032,6 +8039,76 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
 }
 
+/* remove all the n-tuple filters */
+void
+ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
+		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+}
+
+/* remove all the ether type filters */
+void
+ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i) &&
+		    !filter_info->ethertype_filters[i].conf) {
+			(void)ixgbe_ethertype_filter_remove(filter_info,
+							    (uint8_t)i);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i), 0);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
+/* remove the SYN filter */
+void
+ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+
+	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
+		filter_info->syn_info = 0;
+
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, 0);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
+/* remove all the L2 tunnel filters */
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+	int ret = 0;
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = l2_tn_filter->key.tn_id;
+		l2_tn_conf.pool           = l2_tn_filter->pool;
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4e0264c..5efa650 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -274,6 +274,11 @@ struct ixgbe_ethertype_filter {
 	uint16_t ethertype;
 	uint32_t etqf;
 	uint32_t etqs;
+	/**
+	 * If this filter is added by configuration,
+	 * it should not be removed.
+	 */
+	bool     conf;
 };
 
 /*
@@ -501,6 +506,14 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
 void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
+int ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev);
+
+extern const struct rte_flow_ops ixgbe_flow_ops;
+
+void ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_syn_filter(struct rte_eth_dev *dev);
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
@@ -531,6 +544,8 @@ ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
 				ethertype_filter->etqf;
 			filter_info->ethertype_filters[i].etqs =
 				ethertype_filter->etqs;
+			filter_info->ethertype_filters[i].conf =
+				ethertype_filter->conf;
 			return i;
 		}
 	}
@@ -547,6 +562,7 @@ ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
 	filter_info->ethertype_filters[idx].ethertype = 0;
 	filter_info->ethertype_filters[idx].etqf = 0;
 	filter_info->ethertype_filters[idx].etqs = 0;
+	filter_info->ethertype_filters[idx].conf = FALSE;
 	return idx;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 627c51a..e928ad7 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1514,3 +1514,27 @@ ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 		}
 	}
 }
+
+/* remove all the flow director filters */
+int
+ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+	int ret = 0;
+
+	/* flush flow director */
+	rte_hash_reset(fdir_info->hash_handle);
+	memset(fdir_info->hash_map, 0,
+	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+	ret = ixgbe_fdir_flush(dev);
+
+	return ret;
+}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
new file mode 100644
index 0000000..1499391
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+#include <rte_hash_crc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+
+#include "ixgbe_logs.h"
+#include "base/ixgbe_api.h"
+#include "base/ixgbe_vf.h"
+#include "base/ixgbe_common.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_bypass.h"
+#include "ixgbe_rxtx.h"
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_phy.h"
+#include "rte_pmd_ixgbe.h"
+
+static int ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error);
+
+const struct rte_flow_ops ixgbe_flow_ops = {
+	NULL,
+	NULL,
+	NULL,
+	ixgbe_flow_flush,
+	NULL,
+};
+
+/*  Destroy all flow rules associated with a port on ixgbe. */
+static int
+ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	ixgbe_clear_all_ntuple_filter(dev);
+	ixgbe_clear_all_ethertype_filter(dev);
+	ixgbe_clear_syn_filter(dev);
+
+	ret = ixgbe_clear_all_fdir_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	ret = ixgbe_clear_all_l2_tn_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 4b33130..4715045 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -199,6 +199,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 				IXGBE_ETQF_TX_ANTISPOOF |
 				IXGBE_ETHERTYPE_FLOW_CTRL;
 	ethertype_filter.etqs = 0;
+	ethertype_filter.conf = TRUE;
 	i = ixgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 11/18] net/ixgbe: parse n-tuple filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (9 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 10/18] net/ixgbe: flush all the filters Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 12/18] net/ixgbe: parse ethertype filter Wei Zhao
                         ` (7 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Add the rule validation function: check if the rule is an n-tuple
rule, and get the n-tuple info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
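
For reference, a rule accepted by this parser could be built on the
application side roughly as below. This is a minimal sketch under the
generic rte_flow API, not part of the patch itself; the address, the
port, the queue index and the port_id variable are made-up
illustration values.

/* Sketch (assumed application code): validate the n-tuple rule
 * "IPv4 src 192.168.0.1 / TCP dst port 80 -> queue 1" on port_id.
 * The ETH item carries no spec/mask, as this parser requires. */
struct rte_flow_attr attr = { .group = 0, .priority = 1, .ingress = 1 };
struct rte_flow_item_ipv4 ip_spec = {
	.hdr.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
};
struct rte_flow_item_ipv4 ip_mask = {
	.hdr.src_addr = rte_cpu_to_be_32(UINT32_MAX),
};
struct rte_flow_item_tcp tcp_spec = {
	.hdr.dst_port = rte_cpu_to_be_16(80),
};
struct rte_flow_item_tcp tcp_mask = {
	.hdr.dst_port = rte_cpu_to_be_16(UINT16_MAX),
};
struct rte_flow_action_queue queue = { .index = 1 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ip_spec, .mask = &ip_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_TCP,
	  .spec = &tcp_spec, .mask = &tcp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_error err;
int ret = rte_flow_validate(port_id, &attr, pattern, actions, &err);

A zero return means the rule passed the format check; note the attr
priority must stay within [1, 7] for an n-tuple rule on ixgbe.
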
 drivers/net/ixgbe/ixgbe_flow.c | 429 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 424 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 1499391..ac9f663 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -77,15 +77,432 @@
 
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
-	NULL,
+	ixgbe_flow_validate,
 	NULL,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
 };
 
+#define IXGBE_MIN_N_TUPLE_PRIO 1
+#define IXGBE_MAX_N_TUPLE_PRIO 7
+#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\
+	do {		\
+		item = pattern + index;\
+		while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\
+			index++;			\
+			item = pattern + index;		\
+		}						\
+	} while (0)
+
+#define NEXT_ITEM_OF_ACTION(act, actions, index)\
+	do {								\
+		act = actions + index;					\
+		while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\
+			index++;				\
+			act = actions + index;			\
+		}							\
+	} while (0)
+
+/**
+ * Please be aware there's an assumption for all the parsers:
+ * rte_flow_item uses big endian, while rte_flow_attr and
+ * rte_flow_action use CPU order.
+ * Because the pattern is used to describe the packets,
+ * the packets normally use network order.
+ */
+
+/**
+ * Parse the rule to see if it is an n-tuple rule,
+ * and get the n-tuple filter info if it is.
+ * pattern:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[],
+			 struct rte_eth_ntuple_filter *filter,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item can be MAC or IPv4 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error,
+			  EINVAL,
+			  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			  item, "Not supported last point for range");
+			return -rte_errno;
+
+		}
+		/* if the first item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is IPv4 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+	}
+
+	/* get the IPv4 info */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
+	/**
+	 * Only support src & dst addresses, protocol,
+	 * others should be masked.
+	 */
+	if (ipv4_mask->hdr.version_ihl ||
+	    ipv4_mask->hdr.type_of_service ||
+	    ipv4_mask->hdr.total_length ||
+	    ipv4_mask->hdr.packet_id ||
+	    ipv4_mask->hdr.fragment_offset ||
+	    ipv4_mask->hdr.time_to_live ||
+	    ipv4_mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
+	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
+	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
+
+	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
+	filter->dst_ip = ipv4_spec->hdr.dst_addr;
+	filter->src_ip = ipv4_spec->hdr.src_addr;
+	filter->proto  = ipv4_spec->hdr.next_proto_id;
+
+	/* check if the next not void item is TCP or UDP */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* get the TCP/UDP info */
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+
+		/**
+		 * Only support src & dst ports, tcp flags,
+		 * others should be masked.
+		 */
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
+		filter->src_port_mask  = tcp_mask->hdr.src_port;
+		if (tcp_mask->hdr.tcp_flags == 0xFF) {
+			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else if (!tcp_mask->hdr.tcp_flags) {
+			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+		filter->dst_port  = tcp_spec->hdr.dst_port;
+		filter->src_port  = tcp_spec->hdr.src_port;
+		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
+	} else {
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+
+		/**
+		 * Only support src & dst ports,
+		 * others should be masked.
+		 */
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask = udp_mask->hdr.dst_port;
+		filter->src_port_mask = udp_mask->hdr.src_port;
+
+		udp_spec = (const struct rte_flow_item_udp *)item->spec;
+		filter->dst_port = udp_spec->hdr.dst_port;
+		filter->src_port = udp_spec->hdr.src_port;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/**
+	 * n-tuple only supports forwarding,
+	 * check if the first not void action is QUEUE.
+	 */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+	filter->queue =
+		((const struct rte_flow_action_queue *)act->conf)->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	if (attr->priority > 0xFFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr, "Error priority.");
+		return -rte_errno;
+	}
+	filter->priority = (uint16_t)attr->priority;
+	if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    attr->priority > IXGBE_MAX_N_TUPLE_PRIO)
+		filter->priority = 1;
+
+	return 0;
+}
+
+/* a specific function for ixgbe because the flags are specific */
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			  const struct rte_flow_item pattern[],
+			  const struct rte_flow_action actions[],
+			  struct rte_eth_ntuple_filter *filter,
+			  struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support tcp flags. */
+	if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* Ixgbe doesn't support many priorities. */
+	if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    filter->priority > IXGBE_MAX_N_TUPLE_PRIO) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Priority not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM ||
+		filter->priority > IXGBE_5TUPLE_MAX_PRI ||
+		filter->priority < IXGBE_5TUPLE_MIN_PRI)
+		return -rte_errno;
+
+	/* fixed value for ixgbe */
+	filter->flags = RTE_5TUPLE_FLAGS;
+	return 0;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format; it doesn't guarantee the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
@@ -99,15 +516,17 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 
 	ret = ixgbe_clear_all_fdir_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
 	ret = ixgbe_clear_all_l2_tn_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 12/18] net/ixgbe: parse ethertype filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (10 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
                         ` (6 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an ethertype rule, and get the ethertype info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
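
For reference, an ethertype rule accepted by this parser might look
as below on the application side. A minimal sketch, not part of the
patch; the EtherType (ARP), the queue index and the surrounding
pattern/actions/rte_flow_validate() boilerplate are illustration
values, following the n-tuple sketch under 11/18 but with a pattern
of just ETH / END.

/* Sketch (assumed application code): steer ARP frames (EtherType
 * 0x0806) to queue 2. The source MAC mask must stay all-zero and the
 * EtherType mask all-ones; attr priority and group must be 0, and
 * ixgbe additionally refuses MAC compare, DROP and the IPv4/IPv6
 * EtherTypes. */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item_eth eth_spec = { .type = rte_cpu_to_be_16(0x0806) };
struct rte_flow_item_eth eth_mask = { .type = rte_cpu_to_be_16(0xFFFF) };
struct rte_flow_action_queue queue = { .index = 2 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH,
	  .spec = &eth_spec, .mask = &eth_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
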
 drivers/net/ixgbe/ixgbe_flow.c | 278 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 278 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index ac9f663..2f97129 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -90,6 +90,18 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 					struct rte_eth_ntuple_filter *filter,
 					struct rte_flow_error *error);
 static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error);
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_ethertype_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -480,6 +492,265 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an ethertype rule,
+ * and get the ethertype filter info if it is.
+ * pattern:
+ * The first not void item can be ETH.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* Parse pattern */
+	index = 0;
+
+	/* The first non-void item should be MAC. */
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	eth_spec = (const struct rte_flow_item_eth *)item->spec;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Mask bits of source MAC address must be full of 0.
+	 * Mask bits of destination MAC address must be full
+	 * of 1 or full of 0.
+	 */
+	if (!is_zero_ether_addr(&eth_mask->src) ||
+	    (!is_zero_ether_addr(&eth_mask->dst) &&
+	     !is_broadcast_ether_addr(&eth_mask->dst))) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ether address mask");
+		return -rte_errno;
+	}
+
+	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ethertype mask");
+		return -rte_errno;
+	}
+
+	/* If mask bits of destination MAC address
+	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
+	 */
+	if (is_broadcast_ether_addr(&eth_mask->dst)) {
+		filter->mac_addr = eth_spec->dst;
+		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
+	} else {
+		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
+	}
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+
+	/* Check if the next non-void item is END. */
+	index++;
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter.");
+		return -rte_errno;
+	}
+
+	/* Parse action */
+
+	index = 0;
+	/* Check if the first non-void action is QUEUE or DROP. */
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->queue = act_q->index;
+	} else {
+		filter->flags |= RTE_ETHTYPE_FLAGS_DROP;
+	}
+
+	/* Check if the next non-void item is END */
+	index++;
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* Parse attr */
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				attr, "Not support group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_ethertype_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ethertype_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support MAC address. */
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "queue index much too big");
+		return -rte_errno;
+	}
+
+	if (filter->ether_type == ETHER_TYPE_IPv4 ||
+		filter->ether_type == ETHER_TYPE_IPv6) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "IPv4/IPv6 not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "mac compare is unsupported");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "drop option is unsupported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format; it doesn't guarantee the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -492,6 +763,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -500,6 +772,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 13/18] net/ixgbe: parse TCP SYN filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (11 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
                         ` (5 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a TCP SYN rule, and get the SYN info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
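
For reference, a SYN rule accepted by this parser might be built as
below. A minimal sketch, not part of the patch; the queue index is a
made-up value and TCP_SYN_FLAG is the same flag macro the parser
itself checks against. Validation wraps these in the usual
pattern/actions arrays and rte_flow_validate() call, as in the
n-tuple sketch under 11/18.

/* Sketch (assumed application code): direct TCP SYN packets to
 * queue 3. Only the SYN bit may be set and masked in the TCP item;
 * the ETH and IPv4 items must carry no spec/mask, and attr priority
 * may only be 0 (lowest) or UINT32_MAX (highest). */
struct rte_flow_attr attr = { .ingress = 1 }; /* priority 0: lowest */
struct rte_flow_item_tcp tcp_spec = { .hdr.tcp_flags = TCP_SYN_FLAG };
struct rte_flow_item_tcp tcp_mask = { .hdr.tcp_flags = TCP_SYN_FLAG };
struct rte_flow_action_queue queue = { .index = 3 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_TCP,
	  .spec = &tcp_spec, .mask = &tcp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
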
 drivers/net/ixgbe/ixgbe_flow.c | 264 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 264 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 2f97129..317deed 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -102,6 +102,18 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_ethertype_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_syn_filter *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -751,6 +763,251 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is a TCP SYN rule,
+ * and get the TCP SYN filter info if it is.
+ * pattern:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4 or IPV6.
+ * The third not void item must be TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item should be MAC or IPv4 or IPv6 or TCP */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* if the item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN address mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is IPv4 or IPv6 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* if the item is IP, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is TCP */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. Only support SYN. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+	tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+	if (!(tcp_spec->hdr.tcp_flags & TCP_SYN_FLAG) ||
+	    tcp_mask->hdr.src_port ||
+	    tcp_mask->hdr.dst_port ||
+	    tcp_mask->hdr.sent_seq ||
+	    tcp_mask->hdr.recv_ack ||
+	    tcp_mask->hdr.data_off ||
+	    tcp_mask->hdr.tcp_flags != TCP_SYN_FLAG ||
+	    tcp_mask->hdr.rx_win ||
+	    tcp_mask->hdr.cksum ||
+	    tcp_mask->hdr.tcp_urp) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->queue = act_q->index;
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Support 2 priorities, the lowest or highest. */
+	if (!attr->priority) {
+		filter->hig_pri = 0;
+	} else if (attr->priority == (uint32_t)~0U) {
+		filter->hig_pri = 1;
+	} else {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_syn_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_syn_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format; it doesn't guarantee the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -764,6 +1021,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -778,6 +1036,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (12 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 15/18] net/ixgbe: parse flow director filter Wei Zhao
                         ` (4 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
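
For reference, an E-tag rule accepted by this parser might look as
below. A minimal sketch, not part of the patch; the E-CID value and
pool index are made-up, and the rule only validates on X550-family
NICs. The pattern/actions boilerplate matches the earlier sketches.

/* Sketch (assumed application code): match GRP + E-CID base 0x309
 * and deliver to pool 0. Only rsvd_grp_ecid_b may be masked, and its
 * mask must cover the full 14 bits (0x3FFF) as the parser requires;
 * the QUEUE action index is interpreted as the destination pool. */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item_e_tag etag_spec = {
	.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x309),
};
struct rte_flow_item_e_tag etag_mask = {
	.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x3FFF),
};
struct rte_flow_action_queue pool = { .index = 0 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
	  .spec = &etag_spec, .mask = &etag_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &pool },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
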
 drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
 drivers/net/ixgbe/ixgbe_flow.c   | 203 +++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.h      |  48 +++++++++
 3 files changed, 253 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2a67462..14a88ee 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -8089,7 +8089,8 @@ ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
 }
 
 /* remove all the L2 tunnel filters */
-int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+int
+ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 {
 	struct ixgbe_l2_tn_info *l2_tn_info =
 		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 317deed..6e1c0bc 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -114,6 +114,19 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_syn_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_l2_tunnel_conf *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *rule,
+			struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1008,6 +1021,191 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an L2 tunnel rule,
+ * and get the L2 tunnel filter info if it is.
+ * Only E-tag is supported now.
+ */
+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *filter,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_e_tag *e_tag_spec;
+	const struct rte_flow_item_e_tag *e_tag_mask;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+	/* parse pattern */
+	index = 0;
+
+	/* The first not void item should be e-tag. */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
+	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
+
+	/* Only care about GRP and E-CID base. */
+	if (e_tag_mask->epcp_edei_in_ecid_b ||
+	    e_tag_mask->in_ecid_e ||
+	    e_tag_mask->ecid_e ||
+	    e_tag_mask->rsvd_grp_ecid_b != rte_cpu_to_be_16(0x3FFF)) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	/**
+	 * grp and e_cid_base are bit fields and only use 14 bits.
+	 * e-tag id is taken as little endian by HW.
+	 */
+	filter->tunnel_id = rte_be_to_cpu_16(e_tag_spec->rsvd_grp_ecid_b);
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->pool = act_q->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+				actions, l2_tn_filter, error);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+		hw->mac.type != ixgbe_mac_X550EM_x &&
+		hw->mac.type != ixgbe_mac_X550EM_a) {
+		memset(l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	return ret;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format; it doesn't guarantee the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -1022,6 +1220,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
 	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -1042,6 +1241,10 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = ixgbe_validate_l2_tn_filter(dev, attr, pattern,
+				actions, &l2_tn_filter, error);
+
 	return ret;
 }
 
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 98084ac..7142479 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -268,6 +268,20 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_vxlan.
 	 */
 	RTE_FLOW_ITEM_TYPE_VXLAN,
+
+	/**
+	 * Matches a E_TAG header.
+	 *
+	 * See struct rte_flow_item_e_tag.
+	 */
+	RTE_FLOW_ITEM_TYPE_E_TAG,
+
+	/**
+	 * Matches a NVGRE header.
+	 *
+	 * See struct rte_flow_item_nvgre.
+	 */
+	RTE_FLOW_ITEM_TYPE_NVGRE,
 };
 
 /**
@@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
 };
 
 /**
+ * RTE_FLOW_ITEM_TYPE_E_TAG.
+ *
+ * Matches a E-tag header.
+ */
+struct rte_flow_item_e_tag {
+	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
+	/** E-Tag control information (E-TCI). */
+	/** E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
+	uint16_t epcp_edei_in_ecid_b;
+	/** Reserved (2b), GRP (2b), E-CID base (12b). */
+	uint16_t rsvd_grp_ecid_b;
+	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
+	uint8_t ecid_e; /**< E-CID ext. */
+};
+
+/**
+ * RTE_FLOW_ITEM_TYPE_NVGRE.
+ *
+ * Matches a NVGRE header.
+ */
+struct rte_flow_item_nvgre {
+	/**
+	 * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
+	 * reserved 0 (9b), version (3b).
+	 *
+	 * c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
+	 */
+	uint16_t c_k_s_rsvd0_ver;
+	uint16_t protocol; /**< Protocol type (0x6558). */
+	uint8_t tni[3]; /**< Virtual subnet ID. */
+	uint8_t flow_id; /**< Flow ID. */
+};
+
+/**
  * Matching pattern item definition.
  *
  * A pattern is formed by stacking items starting from the lowest protocol
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v4 15/18] net/ixgbe: parse flow director filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (13 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 16/18] net/ixgbe: create consistent filter Wei Zhao
                         ` (3 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a flow director rule, and get the flow director info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
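
For reference, a perfect-mode flow director rule might be expressed
as below. A minimal sketch under assumptions, not part of the patch;
the addresses, port and queue index are made-up values, and it
presumes the port was configured with fdir_conf.mode set to perfect
mode. As the new mask_added field suggests, the masks of the first
rule are stored as the global FDIR input mask, so subsequent rules
are expected to carry the same masks.

/* Sketch (assumed application code): match IPv4 dst 2.2.2.5 +
 * UDP dst port 1024 and forward to queue 4, wrapped in the same
 * pattern/actions/rte_flow_validate() boilerplate as the n-tuple
 * sketch under 11/18, with attr priority left at 0. */
struct rte_flow_item_ipv4 ip_spec = {
	.hdr.dst_addr = rte_cpu_to_be_32(IPv4(2, 2, 2, 5)),
};
struct rte_flow_item_ipv4 ip_mask = {
	.hdr.dst_addr = rte_cpu_to_be_32(UINT32_MAX),
};
struct rte_flow_item_udp udp_spec = {
	.hdr.dst_port = rte_cpu_to_be_16(1024),
};
struct rte_flow_item_udp udp_mask = {
	.hdr.dst_port = rte_cpu_to_be_16(UINT16_MAX),
};
struct rte_flow_action_queue queue = { .index = 4 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ip_spec, .mask = &ip_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP,
	  .spec = &udp_spec, .mask = &udp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
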
 drivers/net/ixgbe/ixgbe_ethdev.c |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
 drivers/net/ixgbe/ixgbe_fdir.c   |  253 +++++---
 drivers/net/ixgbe/ixgbe_flow.c   | 1218 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 1380 insertions(+), 109 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14a88ee..4b9f4aa 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1436,6 +1436,8 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	fdir_info->mask_added = FALSE;
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 5efa650..dee159f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -162,6 +162,17 @@ struct ixgbe_fdir_filter {
 /* list of fdir filters */
 TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
 
+struct ixgbe_fdir_rule {
+	struct ixgbe_hw_fdir_mask mask;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter */
+	bool b_spec; /* If TRUE, ixgbe_fdir, fdirflags, queue have meaning. */
+	bool b_mask; /* If TRUE, mask has meaning. */
+	enum rte_fdir_mode mode; /* IP, MAC VLAN, Tunnel */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t soft_id; /* a unique value for this rule */
+	uint8_t queue; /* assigned rx queue */
+};
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -177,6 +188,7 @@ struct ixgbe_hw_fdir_info {
 	/* store the pointers of the filters, index is the hash value. */
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
+	bool mask_added; /* If already got mask from consistent filter */
 };
 
 /* structure for interrupt relative data */
@@ -479,6 +491,10 @@ bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
  * Flow director function prototypes
  */
 int ixgbe_fdir_configure(struct rte_eth_dev *dev);
+int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			      struct ixgbe_fdir_rule *rule,
+			      bool del, bool update);
 
 void ixgbe_configure_dcb(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index e928ad7..3b9d60c 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -112,10 +112,8 @@
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
 static int fdir_set_input_mask(struct rte_eth_dev *dev,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-				    const struct rte_eth_fdir_masks *input_mask);
+static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
+static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -295,8 +293,7 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -308,8 +305,6 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 | IXGBE_FDIRM_FLEX;
 	uint32_t fdirtcpm;  /* TCP source and destination port masks. */
 	uint32_t fdiripv6m; /* IPv6 source and destination masks. */
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
 	volatile uint32_t *reg;
 
 	PMD_INIT_FUNC_TRACE();
@@ -320,31 +315,30 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
 	 * cannot be masked out in this implementation.
 	 */
-	if (input_mask->dst_port_mask == 0 && input_mask->src_port_mask == 0)
+	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
 		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(input_mask->dst_port_mask),
-			rte_be_to_cpu_16(input_mask->src_port_mask));
+			rte_be_to_cpu_16(info->mask.dst_port_mask),
+			rte_be_to_cpu_16(info->mask.src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -352,30 +346,23 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(input_mask->ipv4_mask.src_ip);
+	*reg = ~(info->mask.src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(input_mask->ipv4_mask.dst_ip);
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	*reg = ~(info->mask.dst_ipv4_mask);
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-		fdiripv6m = (dst_ipv6m << 16) | src_ipv6m;
+		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
+			    info->mask.src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
-		info->mask.src_ipv6_mask = src_ipv6m;
-		info->mask.dst_ipv6_mask = dst_ipv6m;
 	}
 
 	return IXGBE_SUCCESS;
@@ -386,8 +373,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-			 const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -410,20 +396,19 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -434,12 +419,11 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 				IXGBE_FDIRIP6M_TNI_VNI;
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
-		mac_mask = input_mask->mac_addr_byte_mask;
+		mac_mask = info->mask.mac_addr_byte_mask;
 		fdiripv6m |= (mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT)
 				& IXGBE_FDIRIP6M_INNER_MAC;
-		info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
 
-		switch (input_mask->tunnel_type_mask) {
+		switch (info->mask.tunnel_type_mask) {
 		case 0:
 			/* Mask tunnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -450,10 +434,8 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_type_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_type_mask =
-			input_mask->tunnel_type_mask;
 
-		switch (rte_be_to_cpu_32(input_mask->tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -467,8 +449,6 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_id_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_id_mask =
-			input_mask->tunnel_id_mask;
 	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, fdiripv6m);
@@ -482,22 +462,90 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 }
 
 static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
+ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
+				  const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	uint16_t dst_ipv6m = 0;
+	uint16_t src_ipv6m = 0;
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.src_port_mask = input_mask->src_port_mask;
+	info->mask.dst_port_mask = input_mask->dst_port_mask;
+	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
+	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
+	info->mask.src_ipv6_mask = src_ipv6m;
+	info->mask.dst_ipv6_mask = dst_ipv6m;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
+				 const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
+	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
+	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_masks *input_mask)
 {
 	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
+int
+ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
+{
+	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
+	    mode <= RTE_FDIR_MODE_PERFECT)
+		return fdir_set_input_mask_82599(dev);
+	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
+		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		return fdir_set_input_mask_x550(dev);
+
+	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
+	return -ENOTSUP;
+}
+
+static int
+fdir_set_input_mask(struct rte_eth_dev *dev,
+		    const struct rte_eth_fdir_masks *input_mask)
+{
+	int ret;
+
+	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
+	if (ret)
+		return ret;
+
+	return ixgbe_fdir_set_input_mask(dev);
+}
+
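/*
 * Illustrative sketch, not part of this patch: now that the mask is
 * cached in SW by ixgbe_fdir_store_input_mask(), a restore path (e.g.
 * after a device reset) only needs the register write-back half.
 * ixgbe_fdir_restore_mask is a hypothetical helper name used here for
 * illustration only.
 */
static int
ixgbe_fdir_restore_mask(struct rte_eth_dev *dev)
{
	/* Reprogram the FDIR mask registers from the saved SW state. */
	return ixgbe_fdir_set_input_mask(dev);
}
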
 /*
  * ixgbe_check_fdir_flex_conf -check if the flex payload and mask configuration
  * arguments are valid
@@ -1135,23 +1183,40 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 	return 0;
 }
 
-/*
- * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
- * @dev: pointer to the structure rte_eth_dev
- * @fdir_filter: fdir filter entry
- * @del: 1 - delete, 0 - add
- * @update: 1 - update
- */
 static int
-ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
-			  const struct rte_eth_fdir_filter *fdir_filter,
+ixgbe_interpret_fdir_filter(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_filter *fdir_filter,
+			    struct ixgbe_fdir_rule *rule)
+{
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	int err;
+
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+
+	err = ixgbe_fdir_filter_to_atr_input(fdir_filter,
+					     &rule->ixgbe_fdir,
+					     fdir_mode);
+	if (err)
+		return err;
+
+	rule->mode = fdir_mode;
+	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT)
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	rule->queue = fdir_filter->action.rx_queue;
+	rule->soft_id = fdir_filter->soft_id;
+
+	return 0;
+}
+
+int
+ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
-	union ixgbe_atr_input input;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
@@ -1161,7 +1226,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
-	if (fdir_mode == RTE_FDIR_MODE_NONE)
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
 		return -ENOTSUP;
 
 	/*
@@ -1174,7 +1240,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    (hw->mac.type == ixgbe_mac_X550 ||
 	     hw->mac.type == ixgbe_mac_X550EM_x ||
 	     hw->mac.type == ixgbe_mac_X550EM_a) &&
-	    (fdir_filter->input.flow_type ==
+	    (rule->ixgbe_fdir.formatted.flow_type ==
 	     RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) &&
 	    (info->mask.src_port_mask != 0 ||
 	     info->mask.dst_port_mask != 0)) {
@@ -1188,29 +1254,23 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
 		is_perfect = TRUE;
 
-	memset(&input, 0, sizeof(input));
-
-	err = ixgbe_fdir_filter_to_atr_input(fdir_filter, &input,
-					     fdir_mode);
-	if (err)
-		return err;
-
 	if (is_perfect) {
-		if (input.formatted.flow_type & IXGBE_ATR_L4TYPE_IPV6_MASK) {
+		if (rule->ixgbe_fdir.formatted.flow_type &
+		    IXGBE_ATR_L4TYPE_IPV6_MASK) {
 			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
 				    " perfect mode!");
 			return -ENOTSUP;
 		}
-		fdirhash = atr_compute_perfect_hash_82599(&input,
+		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
 							  dev->data->dev_conf.fdir_conf.pballoc);
-		fdirhash |= fdir_filter->soft_id <<
+		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
-		fdirhash = atr_compute_sig_hash_82599(&input,
+		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
-		err = ixgbe_remove_fdir_filter(info, &input);
+		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 		if (err < 0)
 			return err;
 
@@ -1223,7 +1283,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 	/* add or update an fdir filter*/
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
-	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT) {
+	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
 			queue = dev->data->dev_conf.fdir_conf.drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
@@ -1232,13 +1292,12 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 				    " signature mode.");
 			return -EINVAL;
 		}
-	} else if (fdir_filter->action.behavior == RTE_ETH_FDIR_ACCEPT &&
-		   fdir_filter->action.rx_queue < IXGBE_MAX_RX_QUEUE_NUM)
-		queue = (uint8_t)fdir_filter->action.rx_queue;
+	} else if (rule->queue < IXGBE_MAX_RX_QUEUE_NUM)
+		queue = (uint8_t)rule->queue;
 	else
 		return -EINVAL;
 
-	node = ixgbe_fdir_filter_lookup(info, &input);
+	node = ixgbe_fdir_filter_lookup(info, &rule->ixgbe_fdir);
 	if (node) {
 		if (update) {
 			node->fdirflags = fdircmd_flags;
@@ -1256,7 +1315,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		if (!node)
 			return -ENOMEM;
 		(void)rte_memcpy(&node->ixgbe_fdir,
-				 &input,
+				 &rule->ixgbe_fdir,
 				 sizeof(union ixgbe_atr_input));
 		node->fdirflags = fdircmd_flags;
 		node->fdirhash = fdirhash;
@@ -1270,18 +1329,19 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 
 	if (is_perfect) {
-		err = fdir_write_perfect_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash,
-						      fdir_mode);
+		err = fdir_write_perfect_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash, fdir_mode);
 	} else {
-		err = fdir_add_signature_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash);
+		err = fdir_add_signature_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash);
 	}
 	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
 
 		if (add_node)
-			(void)ixgbe_remove_fdir_filter(info, &input);
+			(void)ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
 	}
@@ -1289,6 +1349,29 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return err;
 }
 
+/* ixgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @fdir_filter: fdir filter entry
+ * @del: 1 - delete, 0 - add
+ * @update: 1 - update
+ */
+static int
+ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+			  const struct rte_eth_fdir_filter *fdir_filter,
+			  bool del,
+			  bool update)
+{
+	struct ixgbe_fdir_rule rule;
+	int err;
+
+	err = ixgbe_interpret_fdir_filter(dev, fdir_filter, &rule);
+
+	if (err)
+		return err;
+
+	return ixgbe_fdir_filter_program(dev, &rule, del, update);
+}
+
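/*
 * Illustrative sketch, not part of this patch: the interpret/program
 * split above lets a later rte_flow create path skip
 * ixgbe_interpret_fdir_filter() and program a rule parsed straight from
 * a flow pattern. Hypothetical outline only; the function name is
 * invented for illustration.
 */
static int
ixgbe_flow_program_fdir(struct rte_eth_dev *dev, struct ixgbe_fdir_rule *rule)
{
	/* rule was filled in by an rte_flow fdir parser, not by
	 * ixgbe_interpret_fdir_filter(). */
	return ixgbe_fdir_filter_program(dev, rule, FALSE, FALSE);
}
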
 static int
 ixgbe_fdir_flush(struct rte_eth_dev *dev)
 {
@@ -1522,19 +1605,23 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
+	struct ixgbe_fdir_filter *filter_flag;
 	int ret = 0;
 
 	/* flush flow director */
 	rte_hash_reset(fdir_info->hash_handle);
 	memset(fdir_info->hash_map, 0,
 	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	filter_flag = TAILQ_FIRST(&fdir_info->fdir_list);
 	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
 		TAILQ_REMOVE(&fdir_info->fdir_list,
 			     fdir_filter,
 			     entries);
 		rte_free(fdir_filter);
 	}
-	ret = ixgbe_fdir_flush(dev);
+
+	if (filter_flag != NULL)
+		ret = ixgbe_fdir_flush(dev);
 
 	return ret;
 }
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 6e1c0bc..61f4cff 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -127,6 +127,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 			struct rte_eth_l2_tunnel_conf *rule,
 			struct rte_flow_error *error);
 static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1205,39 +1230,1180 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Parse to get the attr and action info of flow director rule. */
+static int
+ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  struct ixgbe_fdir_rule *rule,
+			  struct rte_flow_error *error)
+{
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_mark *mark;
+	uint32_t index;
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE or DROP. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		rule->queue = act_q->index;
+	} else { /* drop */
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	}
+
+	/* check if the next not void item is MARK */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) &&
+		(act->type != RTE_FLOW_ACTION_TYPE_END)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	rule->soft_id = 0;
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_MARK) {
+		mark = (const struct rte_flow_action_mark *)act->conf;
+		rule->soft_id = mark->id;
+		index++;
+		NEXT_ITEM_OF_ACTION(act, actions, index);
+	}
+
+	/* check if the next not void item is END */
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
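/*
 * Illustrative sketch, not part of this patch: an action list accepted
 * by ixgbe_parse_fdir_act_attr() -- QUEUE (or DROP), an optional MARK,
 * then END, declared inside an application function. Queue index 3 and
 * mark id 0x1234 are arbitrary example values.
 */
struct rte_flow_action_queue queue = { .index = 3 };
struct rte_flow_action_mark mark = { .id = 0x1234 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &mark },
	{ .type = RTE_FLOW_ACTION_TYPE_END,   .conf = NULL },
};
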
 /**
- * Check if the flow rule is supported by ixgbe.
- * It only checkes the format. Don't guarantee the rule can be programmed into
- * the HW. Because there can be no enough room for the rule.
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule.
+ * It also fills in the flow director filter info.
+ * UDP/TCP/SCTP PATTERN:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP or SCTP.
+ * The next not void item must be END.
+ * MAC VLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be MAC VLAN.
+ * The next not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
  */
 static int
-ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
 {
-	struct rte_eth_ntuple_filter ntuple_filter;
-	struct rte_eth_ethertype_filter ethertype_filter;
-	struct rte_eth_syn_filter syn_filter;
-	struct rte_eth_l2_tunnel_conf l2_tn_filter;
-	int ret;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
 
-	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
-	ret = ixgbe_parse_ntuple_filter(attr, pattern,
-				actions, &ntuple_filter, error);
-	if (!ret)
-		return 0;
+	uint32_t index, j;
 
-	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
-	ret = ixgbe_parse_ethertype_filter(attr, pattern,
-				actions, &ethertype_filter, error);
-	if (!ret)
-		return 0;
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
 
-	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
-	ret = ixgbe_parse_syn_filter(attr, pattern,
-				actions, &syn_filter, error);
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so nothing more needs to be done later for the fields that
+	 * are not provided.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or TCP or UDP or SCTP.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT;
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only support vlan and dst MAC address,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+			/* Get the dst MAC. */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				rule->ixgbe_fdir.formatted.inner_mac[j] =
+					eth_spec->dst.addr_bytes[j];
+			}
+		}
+
+		if (item->mask) {
+			/* If ethernet has meaning, it means MAC VLAN mode. */
+			rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+
+			rule->b_mask = TRUE;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Ether type should be masked. */
+			if (eth_mask->type) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/**
+			 * src MAC address must be masked,
+			 * and don't support dst MAC address mask.
+			 */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				if (eth_mask->src.addr_bytes[j] ||
+					eth_mask->dst.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					sizeof(struct ixgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* When no VLAN, considered as full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+		/**
+		 * If both spec and mask are NULL, it means don't care
+		 * about ETH. Do nothing.
+		 */
+
+		/**
+		 * Check if the next not void item is vlan or ipv4.
+		 * IPv6 is not supported.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		}
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one VLAN tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the IP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_IPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		ipv4_mask =
+			(const struct rte_flow_item_ipv4 *)item->mask;
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.next_proto_id ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			rule->ixgbe_fdir.formatted.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->ixgbe_fdir.formatted.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_TCPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				tcp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_UDPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				udp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				udp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_SCTPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask =
+			(const struct rte_flow_item_sctp *)item->mask;
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec =
+				(const struct rte_flow_item_sctp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				sctp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+	}
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
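/*
 * Illustrative sketch, not part of this patch: an ETH / IPV4 / TCP / END
 * pattern matching the "UDP/TCP/SCTP PATTERN" shape parsed above,
 * declared inside an application function. Addresses and ports are
 * arbitrary example values; header fields left out of the masks stay
 * zero, as the parser requires.
 */
struct rte_flow_item_ipv4 ipv4_spec = {
	.hdr = {
		.src_addr = rte_cpu_to_be_32(0xC0A80001), /* 192.168.0.1 */
		.dst_addr = rte_cpu_to_be_32(0xC0A80002), /* 192.168.0.2 */
	},
};
struct rte_flow_item_ipv4 ipv4_mask = {
	.hdr = { .src_addr = UINT32_MAX, .dst_addr = UINT32_MAX },
};
struct rte_flow_item_tcp tcp_spec = {
	.hdr = {
		.src_port = rte_cpu_to_be_16(1024),
		.dst_port = rte_cpu_to_be_16(80),
	},
};
struct rte_flow_item_tcp tcp_mask = {
	.hdr = { .src_port = UINT16_MAX, .dst_port = UINT16_MAX },
};
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ipv4_spec, .mask = &ipv4_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_TCP,
	  .spec = &tcp_spec, .mask = &tcp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
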
+#define NVGRE_PROTOCOL 0x6558
+
+/**
+ * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
+ * It also fills in the flow director filter info.
+ * VxLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4 or IPV6.
+ * The third not void item must be UDP.
+ * The fourth not void item must be VXLAN.
+ * The next not void item must be the inner ETH (dst MAC), optionally
+ * followed by VLAN, and the last not void item must be END.
+ * NVGRE PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4 or IPV6.
+ * The third not void item must be NVGRE.
+ * The next not void item must be the inner ETH (dst MAC), optionally
+ * followed by VLAN, and the last not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
+ */
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+	uint32_t index, j;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so nothing more needs to be done later for the fields that
+	 * are not provided.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or IPv6 or UDP or VxLAN.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+	/* Skip MAC. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is IPv4 or IPv6. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is UDP or NVGRE. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip UDP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is VxLAN. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the VxLAN info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_VXLAN;
+
+		/* Only care about VNI, others should be masked. */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		vxlan_mask =
+			(const struct rte_flow_item_vxlan *)item->mask;
+		if (vxlan_mask->flags) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* VNI must be totally masked or not. */
+		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
+			vxlan_mask->vni[2]) &&
+			((vxlan_mask->vni[0] != 0xFF) ||
+			(vxlan_mask->vni[1] != 0xFF) ||
+				(vxlan_mask->vni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
+			RTE_DIM(vxlan_mask->vni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			vxlan_spec = (const struct rte_flow_item_vxlan *)
+					item->spec;
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* Get the NVGRE info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_NVGRE;
+
+		/**
+		 * Only care about flags0, flags1, protocol and TNI,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		nvgre_mask =
+			(const struct rte_flow_item_nvgre *)item->mask;
+		if (nvgre_mask->flow_id) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (nvgre_mask->c_k_s_rsvd0_ver !=
+			rte_cpu_to_be_16(0x3000) ||
+		    nvgre_mask->protocol != 0xFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI must be totally masked or not. */
+		if (nvgre_mask->tni[0] &&
+		    ((nvgre_mask->tni[0] != 0xFF) ||
+		    (nvgre_mask->tni[1] != 0xFF) ||
+		    (nvgre_mask->tni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI is a 24-bit field */
+		rte_memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni,
+			RTE_DIM(nvgre_mask->tni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			if (nvgre_spec->c_k_s_rsvd0_ver !=
+			    rte_cpu_to_be_16(0x2000) ||
+			    nvgre_spec->protocol !=
+			    rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			/* TNI is a 24-bit field */
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+			nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* check if the next not void item is MAC */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/**
+	 * Only support vlan and dst MAC address,
+	 * others should be masked.
+	 */
+
+	if (!item->mask) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+	rule->b_mask = TRUE;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Ether type should be masked. */
+	if (eth_mask->type) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/* src MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (eth_mask->src.addr_bytes[j]) {
+			memset(rule, 0,
+			       sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+	rule->mask.mac_addr_byte_mask = 0;
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		/* It's a per byte mask. */
+		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+			rule->mask.mac_addr_byte_mask |= 0x1 << j;
+		} else if (eth_mask->dst.addr_bytes[j]) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* When no vlan, considered as full mask. */
+	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+
+	if (item->spec) {
+		rule->b_spec = TRUE;
+		eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+		/* Get the dst MAC. */
+		for (j = 0; j < ETHER_ADDR_LEN; j++) {
+			rule->ixgbe_fdir.formatted.inner_mac[j] =
+				eth_spec->dst.addr_bytes[j];
+		}
+	}
+
+	/**
+	 * Check if the next not void item is vlan or ipv4.
+	 * IPv6 is not supported.
+	 */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
+		(item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one VLAN tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/**
+	 * If no VLAN item is provided, it means don't care about the VLAN.
+	 * Do nothing.
+	 */
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
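/*
 * Illustrative sketch, not part of this patch: a tunnel-mode pattern
 * accepted by ixgbe_parse_fdir_filter_tunnel(). The outer ETH / IPV4 /
 * UDP items only describe the protocol stack; then come VXLAN (VNI
 * fully masked), the inner dst MAC, a pass-through inner IPV4 (which
 * satisfies the vlan-or-ipv4 check) and END. The VNI is an arbitrary
 * example value.
 */
struct rte_flow_item_vxlan vxlan_spec = {
	.vni = { 0x12, 0x34, 0x56 },
};
struct rte_flow_item_vxlan vxlan_mask = {
	.vni = { 0xFF, 0xFF, 0xFF },
};
struct rte_flow_item_eth inner_mac_mask = {
	.dst.addr_bytes = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF },
};
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP },
	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN,
	  .spec = &vxlan_spec, .mask = &vxlan_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &inner_mac_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
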
+static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	ret = ixgbe_parse_fdir_filter(attr, pattern, actions,
+				      rule, error);
+	if (ret)
+		return ret;
+
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
+		return -ENOTSUP;
+
+	return ret;
+}
+
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = ixgbe_parse_fdir_filter_normal(attr, pattern,
+					actions, rule, error);
+
+	if (!ret)
+		return 0;
+
+	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
+					actions, rule, error);
+
+	return ret;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_validate_fdir_filter(dev, attr, pattern,
+				actions, &fdir_rule, error);
 	if (!ret)
 		return 0;
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
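
For illustration only (not part of the series): the validate entry point
above is reached through the generic rte_flow API. Assuming port_id,
pattern and actions are defined as in the earlier sketches, an
application would call:

	struct rte_flow_error err;
	struct rte_flow_attr attr = { .ingress = 1 };

	if (rte_flow_validate(port_id, &attr, pattern, actions, &err) == 0)
		printf("rule format is acceptable\n");
	else
		printf("rule rejected: %s\n",
		       err.message ? err.message : "(no error message)");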

* [PATCH v4 16/18] net/ixgbe: create consistent filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (14 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 15/18] net/ixgbe: parse flow director filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 17/18] net/ixgbe: destroy " Wei Zhao
                         ` (2 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to create the flow director filter.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  25 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  61 ++++++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 194 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 266 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4b9f4aa..3648e12 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -305,9 +305,6 @@ static void ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
 static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static void ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 					     struct ether_addr *mac_addr);
-static int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
-			struct rte_eth_syn_filter *filter,
-			bool add);
 static int ixgbe_syn_filter_get(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter);
 static int ixgbe_syn_filter_handle(struct rte_eth_dev *dev,
@@ -317,17 +314,11 @@ static int ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
 static void ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
-static int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ntuple_filter *filter,
-			bool add);
 static int ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
 static int ixgbe_get_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *filter);
-static int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ethertype_filter *filter,
-			bool add);
 static int ixgbe_ethertype_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
@@ -1292,6 +1283,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* initialize l2 tunnel filter list & hash */
 	ixgbe_l2_tn_filter_init(eth_dev);
+
+	TAILQ_INIT(&filter_ntuple_list);
+	TAILQ_INIT(&filter_ethertype_list);
+	TAILQ_INIT(&filter_syn_list);
+	TAILQ_INIT(&filter_fdir_list);
+	TAILQ_INIT(&filter_l2_tunnel_list);
+	TAILQ_INIT(&ixgbe_flow_list);
+
 	return 0;
 }
 
@@ -5742,7 +5741,7 @@ ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr)
 		return -ENOTSUP;\
 } while (0)
 
-static int
+int
 ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter,
 			bool add)
@@ -6121,7 +6120,7 @@ ntuple_filter_to_5tuple(struct rte_eth_ntuple_filter *filter,
  *    - On success, zero.
  *    - On failure, a negative value.
  */
-static int
+int
 ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *ntuple_filter,
 			bool add)
@@ -6266,7 +6265,7 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static int
+int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
 			bool add)
@@ -7345,7 +7344,7 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 }
 
 /* Add l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index dee159f..6908c3d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -329,6 +329,54 @@ struct ixgbe_l2_tn_info {
 	bool e_tag_ether_type; /* ether type for e-tag */
 };
 
+struct rte_flow {
+	enum rte_filter_type filter_type;
+	void *rule;
+};
+/* ntuple filter list structure */
+struct ixgbe_ntuple_filter_ele {
+	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
+	struct rte_eth_ntuple_filter filter_info;
+};
+/* ethertype filter list structure */
+struct ixgbe_ethertype_filter_ele {
+	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
+	struct rte_eth_ethertype_filter filter_info;
+};
+/* syn filter list structure */
+struct ixgbe_eth_syn_filter_ele {
+	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
+	struct rte_eth_syn_filter filter_info;
+};
+/* fdir filter list structure */
+struct ixgbe_fdir_rule_ele {
+	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
+	struct ixgbe_fdir_rule filter_info;
+};
+/* l2_tunnel filter list structure */
+struct ixgbe_eth_l2_tunnel_conf_ele {
+	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
+	struct rte_eth_l2_tunnel_conf filter_info;
+};
+/* ixgbe_flow memory list structure */
+struct ixgbe_flow_mem {
+	TAILQ_ENTRY(ixgbe_flow_mem) entries;
+	struct rte_flow *flow;
+};
+
+TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
+struct ixgbe_ntuple_filter_list filter_ntuple_list;
+TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
+struct ixgbe_ethertype_filter_list filter_ethertype_list;
+TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
+struct ixgbe_syn_filter_list filter_syn_list;
+TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
+struct ixgbe_fdir_rule_filter_list filter_fdir_list;
+TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
+struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
+TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
+struct ixgbe_flow_mem_list ixgbe_flow_list;
+
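/*
 * Illustrative sketch, not part of this patch: how a flush path could
 * walk the global flow list declared above and release every tracked
 * flow. The series implements this in "net/ixgbe: flush all the filter
 * list"; this is only a simplified outline with a hypothetical name.
 */
static void
ixgbe_flowlist_release_all(void)
{
	struct ixgbe_flow_mem *mem_ptr;

	while ((mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
		TAILQ_REMOVE(&ixgbe_flow_list, mem_ptr, entries);
		rte_free(mem_ptr->flow);
		rte_free(mem_ptr);
	}
}
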
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -487,6 +535,19 @@ uint32_t ixgbe_rssrk_reg_get(enum ixgbe_mac_type mac_type, uint8_t i);
 
 bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
 
+int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ntuple_filter *filter,
+			bool add);
+int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ethertype_filter *filter,
+			bool add);
+int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+			struct rte_eth_syn_filter *filter,
+			bool add);
+int
+ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 61f4cff..908293a 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -157,10 +157,15 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
-	NULL,
+	ixgbe_flow_create,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
@@ -2365,6 +2370,193 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Create a flow rule.
+ * Theoretically one rule can match more than one filter.
+ * We will let it use the first filter it hits.
+ * So, the sequence matters.
+ */
+static struct rte_flow *
+ixgbe_flow_create(struct rte_eth_dev *dev,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct rte_flow *flow = NULL;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
+			sizeof(struct ixgbe_flow_mem), 0);
+	if (!ixgbe_flow_mem_ptr) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		rte_free(flow);
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr->flow = flow;
+	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+			actions, &ntuple_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
+		if (!ret) {
+			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
+				sizeof(struct ixgbe_ntuple_filter_ele), 0);
+			(void)rte_memcpy(&ntuple_filter_ptr->filter_info,
+				&ntuple_filter,
+				sizeof(struct rte_eth_ntuple_filter));
+			TAILQ_INSERT_TAIL(&filter_ntuple_list,
+				ntuple_filter_ptr, entries);
+			flow->rule = ntuple_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, TRUE);
+		if (!ret) {
+			ethertype_filter_ptr = rte_zmalloc(
+				"ixgbe_ethertype_filter",
+				sizeof(struct ixgbe_ethertype_filter_ele), 0);
+			(void)rte_memcpy(&ethertype_filter_ptr->filter_info,
+				&ethertype_filter,
+				sizeof(struct rte_eth_ethertype_filter));
+			TAILQ_INSERT_TAIL(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			flow->rule = ethertype_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter, error);
+	if (!ret) {
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
+		if (!ret) {
+			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
+				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
+			(void)rte_memcpy(&syn_filter_ptr->filter_info,
+				&syn_filter,
+				sizeof(struct rte_eth_syn_filter));
+			TAILQ_INSERT_TAIL(&filter_syn_list,
+				syn_filter_ptr,
+				entries);
+			flow->rule = syn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_SYN;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_parse_fdir_filter(attr, pattern,
+				actions, &fdir_rule, error);
+	if (!ret) {
+		/* A mask cannot be deleted. */
+		if (fdir_rule.b_mask) {
+			if (!fdir_info->mask_added) {
+				/* It's the first time the mask is set. */
+				rte_memcpy(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				ret = ixgbe_fdir_set_input_mask(dev);
+				if (ret)
+					goto out;
+
+				fdir_info->mask_added = TRUE;
+			} else {
+				/**
+				 * Only one global mask is supported,
+				 * so all the masks should be the same.
+				 */
+				ret = memcmp(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				if (ret)
+					goto out;
+			}
+		}
+
+		if (fdir_rule.b_spec) {
+			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
+					FALSE, FALSE);
+			if (!ret) {
+				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
+					sizeof(struct ixgbe_fdir_rule_ele), 0);
+				(void)rte_memcpy(&fdir_rule_ptr->filter_info,
+					&fdir_rule,
+					sizeof(struct ixgbe_fdir_rule));
+				TAILQ_INSERT_TAIL(&filter_fdir_list,
+					fdir_rule_ptr, entries);
+				flow->rule = fdir_rule_ptr;
+				flow->filter_type = RTE_ETH_FILTER_FDIR;
+
+				return flow;
+			}
+
+			if (ret)
+				goto out;
+		}
+
+		goto out;
+	}
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+					actions, &l2_tn_filter, error);
+	if (!ret) {
+		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
+		if (!ret) {
+			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
+				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
+			(void)rte_memcpy(&l2_tn_filter_ptr->filter_info,
+				&l2_tn_filter,
+				sizeof(struct rte_eth_l2_tunnel_conf));
+			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			flow->rule = l2_tn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
+			return flow;
+		}
+	}
+
+out:
+	TAILQ_REMOVE(&ixgbe_flow_list,
+		ixgbe_flow_mem_ptr, entries);
+	rte_free(ixgbe_flow_mem_ptr);
+	rte_free(flow);
+	return NULL;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for the rule.
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
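
For reference, here is how an application would exercise this create path
through the generic flow API. A minimal sketch, assuming the 17.02-era
rte_flow signatures (port_id is a uint8_t there); the address, port and
queue numbers are made up, and none of this is taken from the patch set:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

/* Steer TCP dst port 80, IPv4 dst 192.168.0.1 to RX queue 1. The PMD's
 * ixgbe_flow_create() above tries each parser in sequence and programs
 * the first filter type that accepts this pattern (here, n-tuple). */
static struct rte_flow *
create_ntuple_flow(uint8_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),
	};
	struct rte_flow_item_tcp tcp_spec = {
		.hdr.dst_port = rte_cpu_to_be_16(80),
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr.dst_port = rte_cpu_to_be_16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Validate first; both calls are dispatched via ixgbe_flow_ops. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, error))
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}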

* [PATCH v4 17/18] net/ixgbe: destroy consistent filter
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (15 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 16/18] net/ixgbe: create consistent filter Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  8:40       ` [PATCH v4 18/18] net/ixgbe: flush all the filter list Wei Zhao
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to destroy the flow filter.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 drivers/net/ixgbe/ixgbe_flow.c   | 117 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3648e12..1112a3e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7401,7 +7401,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 }
 
 /* Delete l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 6908c3d..01c18cd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -548,6 +548,9 @@ int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore);
+int
+ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 908293a..175543c 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -162,11 +162,14 @@ static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static int ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
 	ixgbe_flow_create,
-	NULL,
+	ixgbe_flow_destroy,
 	ixgbe_flow_flush,
 	NULL,
 };
@@ -2606,6 +2609,118 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Destroy a flow rule on ixgbe. */
+static int
+ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_flow *pmd_flow = flow;
+	enum rte_filter_type filter_type = pmd_flow->filter_type;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_NTUPLE:
+		ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ntuple_filter,
+			&ntuple_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ntuple_filter));
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ntuple_list,
+			ntuple_filter_ptr, entries);
+			rte_free(ntuple_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ethertype_filter,
+			&ethertype_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ethertype_filter));
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			rte_free(ethertype_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_SYN:
+		syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&syn_filter,
+			&syn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_syn_filter));
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_syn_list,
+				syn_filter_ptr, entries);
+			rte_free(syn_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_FDIR:
+		fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
+		(void)rte_memcpy(&fdir_rule,
+			&fdir_rule_ptr->filter_info,
+			sizeof(struct ixgbe_fdir_rule));
+		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_fdir_list,
+				fdir_rule_ptr, entries);
+			rte_free(fdir_rule_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_L2_TUNNEL:
+		l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_l2_tunnel_conf));
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			rte_free(l2_tn_filter_ptr);
+		}
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to destroy flow");
+		return ret;
+	}
+
+	TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
+		if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
+			TAILQ_REMOVE(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+			rte_free(ixgbe_flow_mem_ptr);
+		}
+	}
+	rte_free(flow);
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
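
On the application side the teardown is a single call; a minimal sketch,
again assuming the 17.02-era signatures (the flow handle comes from a
prior rte_flow_create(); nothing here is taken from the patch itself):

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Destroy one rule. On failure the PMD reports the reason through
 * rte_flow_error, filled in by ixgbe_flow_destroy() above via
 * rte_flow_error_set(). */
static int
destroy_one_flow(uint8_t port_id, struct rte_flow *flow)
{
	struct rte_flow_error error = { 0 };
	int ret;

	ret = rte_flow_destroy(port_id, flow, &error);
	if (ret)
		printf("flow destroy failed: %s\n",
		       error.message ? error.message : "(no stated reason)");
	return ret;
}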

* [PATCH v4 18/18] net/ixgbe: flush all the filter list
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (16 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 17/18] net/ixgbe: destroy " Wei Zhao
@ 2017-01-12  8:40       ` Wei Zhao
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  8:40 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to flush all the filter lists
on a port.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  3 +++
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_flow.c   | 56 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1112a3e..f46507b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1341,6 +1341,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* Remove all ntuple filters of the device */
 	ixgbe_ntuple_filter_uninit(eth_dev);
 
+	/* clear all the filter lists */
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 01c18cd..5b31149 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -551,6 +551,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
+void ixgbe_filterlist_flush(void);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 175543c..8823f75 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2372,6 +2372,60 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 	return ret;
 }
 
+void
+ixgbe_filterlist_flush(void)
+{
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
+		TAILQ_REMOVE(&filter_ntuple_list,
+				 ntuple_filter_ptr,
+				 entries);
+		rte_free(ntuple_filter_ptr);
+	}
+
+	while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
+		TAILQ_REMOVE(&filter_ethertype_list,
+				 ethertype_filter_ptr,
+				 entries);
+		rte_free(ethertype_filter_ptr);
+	}
+
+	while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
+		TAILQ_REMOVE(&filter_syn_list,
+				 syn_filter_ptr,
+				 entries);
+		rte_free(syn_filter_ptr);
+	}
+
+	while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
+		TAILQ_REMOVE(&filter_l2_tunnel_list,
+				 l2_tn_filter_ptr,
+				 entries);
+		rte_free(l2_tn_filter_ptr);
+	}
+
+	while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
+		TAILQ_REMOVE(&filter_fdir_list,
+				 fdir_rule_ptr,
+				 entries);
+		rte_free(fdir_rule_ptr);
+	}
+
+	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
+		TAILQ_REMOVE(&ixgbe_flow_list,
+				 ixgbe_flow_mem_ptr,
+				 entries);
+		rte_free(ixgbe_flow_mem_ptr->flow);
+		rte_free(ixgbe_flow_mem_ptr);
+	}
+}
+
 /**
  * Create or destroy a flow rule.
  * Theorically one rule can match more than one filters.
@@ -2748,5 +2802,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 		return ret;
 	}
 
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
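
And the matching application-side call for this patch; a minimal sketch
under the same 17.02-era API assumption:

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Remove every rule on the port in one call. After it returns, the PMD
 * has cleared the HW and, with this patch, emptied all the SW lists via
 * ixgbe_filterlist_flush(). */
static void
flush_all_flows(uint8_t port_id)
{
	struct rte_flow_error error = { 0 };

	if (rte_flow_flush(port_id, &error))
		printf("flow flush failed: %s\n",
		       error.message ? error.message : "(no stated reason)");
}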

* [PATCH v5 00/18] net/ixgbe: Consistent filter API
  2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
                         ` (17 preceding siblings ...)
  2017-01-12  8:40       ` [PATCH v4 18/18] net/ixgbe: flush all the filter list Wei Zhao
@ 2017-01-12  9:17       ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                           ` (20 more replies)
  18 siblings, 21 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev

The patches mainly finish following functions:
1) Store and restore all kinds of filters.
2) Parse all kinds of filters.
3) Add flow validate function.
4) Add flow create function.
5) Add flow destroy function.
6) Add flow flush function.

v2 changes:
 fix git log error
 Modify some function call relationship
 Change return value type of all parse flow functions
 Update error info for all flow ops
 Add ixgbe_filterlist_flush to flush flows and rules created

v3 changes:
 add new file ixgbe_flow.c to store generic API parser related functions
 add more comments about pattern and action rules
 add attr check in parser functions
 change struct name ixgbe_flow to rte_flow
 change SYN to TCP SYN
 change to use memset to initialize struct ixgbe_filter_info
 break down filter uninit process into 3 independent functions in eth_ixgbe_dev_uninit()
 change struct rte_flow_item_nvgre definition
 change struct rte_flow_item_e_tag definition
 fix one bug in function ixgbe_dev_filter_ctrl
 add goto in function ixgbe_flow_create
 delete some useless initialization
 eliminate some git log check warnings

v4 changes:
 fix some checkpatch warnings

v5 changes:
 fix some git log warnings

zhao wei (18):
  net/ixgbe: store TCP SYN filter
  net/ixgbe: store flow director filter
  net/ixgbe: store L2 tunnel filter
  net/ixgbe: restore n-tuple filter
  net/ixgbe: restore ether type filter
  net/ixgbe: restore TCP SYN filter
  net/ixgbe: restore flow director filter
  net/ixgbe: restore L2 tunnel filter
  net/ixgbe: store and restore L2 tunnel configuration
  net/ixgbe: flush all the filters
  net/ixgbe: parse n-tuple filter
  net/ixgbe: parse ethertype filter
  net/ixgbe: parse TCP SYN filter
  net/ixgbe: parse L2 tunnel filter
  net/ixgbe: parse flow director filter
  net/ixgbe: create consistent filter
  net/ixgbe: destroy consistent filter
  net/ixgbe: flush all the filter list

 drivers/net/ixgbe/Makefile       |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  667 +++++++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  203 ++-
 drivers/net/ixgbe/ixgbe_fdir.c   |  407 ++++--
 drivers/net/ixgbe/ixgbe_flow.c   | 2808 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
 lib/librte_ether/rte_flow.h      |   48 +
 7 files changed, 3950 insertions(+), 211 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

-- 
2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread

* [PATCH v5 01/18] net/ixgbe: store TCP SYN filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 02/18] net/ixgbe: store flow director filter Wei Zhao
                           ` (19 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 +++++++++++++-----
 drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b7ddd4f..719eddd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1272,10 +1272,12 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* enable support intr */
 	ixgbe_enable_intr(eth_dev);
 
+	/* initialize filter info */
+	memset(filter_info, 0,
+	       sizeof(struct ixgbe_filter_info));
+
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
-	memset(filter_info->fivetuple_mask, 0,
-	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
 
 	return 0;
 }
@@ -5603,15 +5605,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			bool add)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t syn_info;
 	uint32_t synqf;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
 
-	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+	syn_info = filter_info->syn_info;
 
 	if (add) {
-		if (synqf & IXGBE_SYN_FILTER_ENABLE)
+		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
 			return -EINVAL;
 		synqf = (uint32_t)(((filter->queue << IXGBE_SYN_FILTER_QUEUE_SHIFT) &
 			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
@@ -5621,10 +5626,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 		else
 			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
 	} else {
-		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
+		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
 			return -ENOENT;
 		synqf &= ~(IXGBE_SYN_FILTER_QUEUE | IXGBE_SYN_FILTER_ENABLE);
 	}
+
+	filter_info->syn_info = synqf;
 	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
 	IXGBE_WRITE_FLUSH(hw);
 	return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 69b276f..90a89ec 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -262,6 +262,8 @@ struct ixgbe_filter_info {
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
+	/* store the SYN filter info */
+	uint32_t syn_info;
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
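
The pattern this patch introduces, and that the later restore patches
build on, is a software shadow of a register: compute the value once,
remember it, replay it after a reset. A standalone sketch with invented
names (hw_write32() and REG_SYNQF stand in for IXGBE_WRITE_REG() and
IXGBE_SYNQF; none of this is driver code):

#include <stdint.h>

static uint32_t fake_regs[64];		/* stand-in for device registers */

static void
hw_write32(unsigned int reg, uint32_t val)
{
	fake_regs[reg] = val;		/* a real driver would MMIO here */
}

struct filter_shadow {
	uint32_t syn_info;		/* SW copy of the SYN filter reg */
};

#define REG_SYNQF 7			/* made-up register index */

/* Program the filter and remember what was written. */
static void
syn_filter_set(struct filter_shadow *sh, uint32_t synqf)
{
	sh->syn_info = synqf;
	hw_write32(REG_SYNQF, synqf);
}

/* After a HW reset wiped the register, replay the SW copy. */
static void
syn_filter_restore(const struct filter_shadow *sh)
{
	hw_write32(REG_SYNQF, sh->syn_info);
}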

* [PATCH v5 02/18] net/ixgbe: store flow director filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
                           ` (18 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  64 ++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
 drivers/net/ixgbe/ixgbe_fdir.c   | 105 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 185 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 719eddd..9796c4f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -60,6 +60,7 @@
 #include <rte_malloc.h>
 #include <rte_random.h>
 #include <rte_dev.h>
+#include <rte_hash_crc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -165,6 +166,8 @@ enum ixgbevf_xcast_modes {
 
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1279,6 +1282,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
 
+	/* initialize flow director filter list & hash */
+	ixgbe_fdir_filter_init(eth_dev);
+
 	return 0;
 }
 
@@ -1320,9 +1326,67 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->hash_mac_addrs);
 	eth_dev->data->hash_mac_addrs = NULL;
 
+	/* remove all the fdir filters & hash */
+	ixgbe_fdir_filter_uninit(eth_dev);
+
 	return 0;
 }
 
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	if (fdir_info->hash_map)
+		rte_free(fdir_info->hash_map);
+	if (fdir_info->hash_handle)
+		rte_hash_free(fdir_info->hash_handle);
+
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+
+	return 0;
+}
+
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	char fdir_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters fdir_hash_params = {
+		.name = fdir_hash_name,
+		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
+		.key_len = sizeof(union ixgbe_atr_input),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&fdir_info->fdir_list);
+	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
+		 "fdir_%s", eth_dev->data->name);
+	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
+	if (!fdir_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
+		return -EINVAL;
+	}
+	fdir_info->hash_map = rte_zmalloc("ixgbe",
+					  sizeof(struct ixgbe_fdir_filter *) *
+					  IXGBE_MAX_FDIR_FILTER_NUM,
+					  0);
+	if (!fdir_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for fdir hash map!");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
 /*
  * Negotiate mailbox API version with the PF.
  * After reset API version is always set to the basic one (ixgbe_mbox_api_10).
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 90a89ec..300542e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
 #include "base/ixgbe_dcb_82598.h"
 #include "ixgbe_bypass.h"
 #include <rte_time.h>
+#include <rte_hash.h>
 
 /* need update link, bit flag */
 #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -130,10 +131,11 @@
 #define IXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
+#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+
 /*
  * Information about the fdir mode.
  */
-
 struct ixgbe_hw_fdir_mask {
 	uint16_t vlan_tci_mask;
 	uint32_t src_ipv4_mask;
@@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
 	uint8_t  tunnel_type_mask;
 };
 
+struct ixgbe_fdir_filter {
+	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t fdirhash; /* hash value for fdir */
+	uint8_t queue; /* assigned rx queue */
+};
+
+/* list of fdir filters */
+TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
 	uint64_t    remove;
 	uint64_t    f_add;
 	uint64_t    f_remove;
+	struct ixgbe_fdir_filter_list fdir_list; /* filter list */
+	/* store the pointers of the filters, index is the hash value. */
+	struct ixgbe_fdir_filter **hash_map;
+	struct rte_hash *hash_handle; /* cuckoo hash handler */
 };
 
 /* structure for interrupt relative data */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 4b81ee3..8bf5705 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -43,6 +43,7 @@
 #include <rte_pci.h>
 #include <rte_ether.h>
 #include <rte_ethdev.h>
+#include <rte_malloc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash)
 
 }
 
+static inline struct ixgbe_fdir_filter *
+ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return fdir_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 struct ixgbe_fdir_filter *fdir_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(fdir_info->hash_handle,
+			       &fdir_filter->ixgbe_fdir);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert fdir filter to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	fdir_info->hash_map[ret] = fdir_filter;
+
+	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	ret = rte_hash_del_key(fdir_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
+		return ret;
+	}
+
+	fdir_filter = fdir_info->hash_map[ret];
+	fdir_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_free(fdir_filter);
+
+	return 0;
+}
+
 /*
  * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
  * @dev: pointer to the structure rte_eth_dev
@@ -1098,6 +1158,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	struct ixgbe_fdir_filter *node;
+	bool add_node = FALSE;
 
 	if (fdir_mode == RTE_FDIR_MODE_NONE)
 		return -ENOTSUP;
@@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
+		err = ixgbe_remove_fdir_filter(info, &input);
+		if (err < 0)
+			return err;
+
 		err = fdir_erase_filter_82599(hw, fdirhash);
 		if (err < 0)
 			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
@@ -1172,6 +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	else
 		return -EINVAL;
 
+	node = ixgbe_fdir_filter_lookup(info, &input);
+	if (node) {
+		if (update) {
+			node->fdirflags = fdircmd_flags;
+			node->fdirhash = fdirhash;
+			node->queue = queue;
+		} else {
+			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
+			return -EINVAL;
+		}
+	} else {
+		add_node = TRUE;
+		node = rte_zmalloc("ixgbe_fdir",
+				   sizeof(struct ixgbe_fdir_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
+		(void)rte_memcpy(&node->ixgbe_fdir,
+				 &input,
+				 sizeof(union ixgbe_atr_input));
+		node->fdirflags = fdircmd_flags;
+		node->fdirhash = fdirhash;
+		node->queue = queue;
+
+		err = ixgbe_insert_fdir_filter(info, node);
+		if (err < 0) {
+			rte_free(node);
+			return err;
+		}
+	}
+
 	if (is_perfect) {
 		err = fdir_write_perfect_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash,
@@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		err = fdir_add_signature_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash);
 	}
-	if (err < 0)
+	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
-	else
+
+		if (add_node)
+			(void)ixgbe_remove_fdir_filter(info, &input);
+	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
+	}
 
 	return err;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
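
The storage scheme here leans on one property of DPDK's cuckoo hash:
rte_hash_add_key() and rte_hash_lookup() return a stable slot index, so
a plain pointer array indexed by it maps key to object in O(1). A
minimal sketch of that convention with an invented key type (the entry
count and names are illustrative, not taken from the driver):

#include <stdint.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
#include <rte_lcore.h>

struct demo_key {
	uint32_t a;
	uint32_t b;
};

#define DEMO_ENTRIES 1024

static struct rte_hash *
demo_hash_create(void)
{
	struct rte_hash_parameters params = {
		.name = "demo_hash",
		.entries = DEMO_ENTRIES,
		.key_len = sizeof(struct demo_key),
		.hash_func = rte_hash_crc,
		.hash_func_init_val = 0,
		.socket_id = rte_socket_id(),
	};

	return rte_hash_create(&params);
}

/* Insert: the returned position indexes map[] until the key is deleted,
 * mirroring fdir_info->hash_map[] above. */
static int
demo_insert(struct rte_hash *h, void **map, const struct demo_key *key,
	    void *obj)
{
	int pos = rte_hash_add_key(h, key);

	if (pos < 0)
		return pos;		/* e.g. -ENOSPC when full */
	map[pos] = obj;
	return 0;
}

/* Lookup: the same index gives the stored object back. */
static void *
demo_lookup(const struct rte_hash *h, void **map, const struct demo_key *key)
{
	int pos = rte_hash_lookup(h, key);

	return (pos < 0) ? NULL : map[pos];
}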

* [PATCH v5 03/18] net/ixgbe: store L2 tunnel filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 02/18] net/ixgbe: store flow director filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
                           ` (17 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  24 ++++++
 2 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 9796c4f..e63b635 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -168,6 +168,8 @@ static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1285,6 +1287,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow director filter list & hash */
 	ixgbe_fdir_filter_init(eth_dev);
 
+	/* initialize l2 tunnel filter list & hash */
+	ixgbe_l2_tn_filter_init(eth_dev);
 	return 0;
 }
 
@@ -1329,6 +1333,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the fdir filters & hash */
 	ixgbe_fdir_filter_uninit(eth_dev);
 
+	/* remove all the L2 tunnel filters & hash */
+	ixgbe_l2_tn_filter_uninit(eth_dev);
+
 	return 0;
 }
 
@@ -1353,6 +1360,27 @@ static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	if (l2_tn_info->hash_map)
+		rte_free(l2_tn_info->hash_map);
+	if (l2_tn_info->hash_handle)
+		rte_hash_free(l2_tn_info->hash_handle);
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		TAILQ_REMOVE(&l2_tn_info->l2_tn_list,
+			     l2_tn_filter,
+			     entries);
+		rte_free(l2_tn_filter);
+	}
+
+	return 0;
+}
+
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 {
 	struct ixgbe_hw_fdir_info *fdir_info =
@@ -1384,6 +1412,40 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	return 0;
+}
+
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	char l2_tn_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters l2_tn_hash_params = {
+		.name = l2_tn_hash_name,
+		.entries = IXGBE_MAX_L2_TN_FILTER_NUM,
+		.key_len = sizeof(struct ixgbe_l2_tn_key),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&l2_tn_info->l2_tn_list);
+	snprintf(l2_tn_hash_name, RTE_HASH_NAMESIZE,
+		 "l2_tn_%s", eth_dev->data->name);
+	l2_tn_info->hash_handle = rte_hash_create(&l2_tn_hash_params);
+	if (!l2_tn_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create L2 TN hash table!");
+		return -EINVAL;
+	}
+	l2_tn_info->hash_map = rte_zmalloc("ixgbe",
+				   sizeof(struct ixgbe_l2_tn_filter *) *
+				   IXGBE_MAX_L2_TN_FILTER_NUM,
+				   0);
+	if (!l2_tn_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			"Failed to allocate memory for L2 TN hash map!");
+		return -ENOMEM;
+	}
 
 	return 0;
 }
@@ -7210,12 +7272,104 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+static inline struct ixgbe_l2_tn_filter *
+ixgbe_l2_tn_filter_lookup(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(l2_tn_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return l2_tn_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_filter *l2_tn_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(l2_tn_info->hash_handle,
+			       &l2_tn_filter->key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert L2 tunnel filter"
+			    " to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_info->hash_map[ret] = l2_tn_filter;
+
+	TAILQ_INSERT_TAIL(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	ret = rte_hash_del_key(l2_tn_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "No such L2 tunnel filter to delete %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_filter = l2_tn_info->hash_map[ret];
+	l2_tn_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+	rte_free(l2_tn_filter);
+
+	return 0;
+}
+
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+	struct ixgbe_l2_tn_filter *node;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+
+	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+
+	if (node) {
+		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
+		return -EINVAL;
+	}
+
+	node = rte_zmalloc("ixgbe_l2_tn",
+			   sizeof(struct ixgbe_l2_tn_filter),
+			   0);
+	if (!node)
+		return -ENOMEM;
+
+	(void)rte_memcpy(&node->key,
+			 &key,
+			 sizeof(struct ixgbe_l2_tn_key));
+	node->pool = l2_tunnel->pool;
+	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+	if (ret < 0) {
+		rte_free(node);
+		return ret;
+	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7227,6 +7381,9 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
+	if (ret < 0)
+		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+
 	return ret;
 }
 
@@ -7235,7 +7392,16 @@ static int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+	ret = ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+	if (ret < 0)
+		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7261,7 +7427,7 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 				  enum rte_filter_op filter_op,
 				  void *arg)
 {
-	int ret = 0;
+	int ret;
 
 	if (filter_op == RTE_ETH_FILTER_NOP)
 		return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 300542e..b2cf789 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -132,6 +132,7 @@
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
 #define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+#define IXGBE_MAX_L2_TN_FILTER_NUM      128
 
 /*
  * Information about the fdir mode.
@@ -283,6 +284,25 @@ struct ixgbe_filter_info {
 	uint32_t syn_info;
 };
 
+struct ixgbe_l2_tn_key {
+	enum rte_eth_tunnel_type          l2_tn_type;
+	uint32_t                          tn_id;
+};
+
+struct ixgbe_l2_tn_filter {
+	TAILQ_ENTRY(ixgbe_l2_tn_filter)    entries;
+	struct ixgbe_l2_tn_key             key;
+	uint32_t                           pool;
+};
+
+TAILQ_HEAD(ixgbe_l2_tn_filter_list, ixgbe_l2_tn_filter);
+
+struct ixgbe_l2_tn_info {
+	struct ixgbe_l2_tn_filter_list      l2_tn_list;
+	struct ixgbe_l2_tn_filter         **hash_map;
+	struct rte_hash                    *hash_handle;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -302,6 +322,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_NIC_BYPASS */
 	struct ixgbe_filter_info    filter;
+	struct ixgbe_l2_tn_info     l2_tn;
 
 	bool rx_bulk_alloc_allowed;
 	bool rx_vec_allowed;
@@ -349,6 +370,9 @@ struct ixgbe_adapter {
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
+#define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
+	(&((struct ixgbe_adapter *)adapter)->l2_tn)
+
 /*
  * RX/TX function prototypes
  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
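
One point worth keeping in mind with this scheme, since ixgbe_l2_tn_key
is a struct used directly as the hash key: every byte of key_len is
hashed, so if a key struct ever grows padding, the pad bytes must be
cleared the same way at every call site or identical logical keys can
hash differently. A defensive sketch of that (the struct is invented;
its two fields mirror the tunnel type and ID):

#include <stdint.h>
#include <string.h>
#include <rte_hash.h>

struct demo_tn_key {
	int      tn_type;	/* tunnel type, like l2_tn_type above */
	uint32_t tn_id;		/* tunnel ID */
};

/* Zero the whole key before filling it so any padding bytes hash
 * identically wherever the key is built. */
static int
demo_tn_lookup(const struct rte_hash *h, int tn_type, uint32_t tn_id)
{
	struct demo_tn_key key;

	memset(&key, 0, sizeof(key));
	key.tn_type = tn_type;
	key.tn_id = tn_id;
	return rte_hash_lookup(h, &key);
}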

* [PATCH v5 04/18] net/ixgbe: restore n-tuple filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (2 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 05/18] net/ixgbe: restore ether type filter Wei Zhao
                           ` (16 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring n-tuple filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 140 +++++++++++++++++++++++++--------------
 1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index e63b635..1630e65 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -170,6 +170,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -387,6 +388,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
+static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1336,6 +1338,27 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the L2 tunnel filters & hash */
 	ixgbe_l2_tn_filter_uninit(eth_dev);
 
+	/* Remove all ntuple filters of the device */
+	ixgbe_ntuple_filter_uninit(eth_dev);
+
+	return 0;
+}
+
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list))) {
+		TAILQ_REMOVE(&filter_info->fivetuple_list,
+			     p_5tuple,
+			     entries);
+		rte_free(p_5tuple);
+	}
+	memset(filter_info->fivetuple_mask, 0,
+	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
+
 	return 0;
 }
 
@@ -2504,6 +2527,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_filter_restore(dev);
 
 	return 0;
 
@@ -2524,9 +2548,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
-	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
-	struct ixgbe_5tuple_filter *p_5tuple, *p_5tuple_next;
 	struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	int vf;
@@ -2564,17 +2585,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_ixgbe_dev_atomic_write_link_status(dev, &link);
 
-	/* Remove all ntuple filters of the device */
-	for (p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list);
-	     p_5tuple != NULL; p_5tuple = p_5tuple_next) {
-		p_5tuple_next = TAILQ_NEXT(p_5tuple, entries);
-		TAILQ_REMOVE(&filter_info->fivetuple_list,
-			     p_5tuple, entries);
-		rte_free(p_5tuple);
-	}
-	memset(filter_info->fivetuple_mask, 0,
-		sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-
 	if (!rte_intr_allow_others(intr_handle))
 		/* resume to the default handler */
 		rte_intr_callback_register(intr_handle,
@@ -5836,6 +5846,52 @@ convert_protocol_type(uint8_t protocol_value)
 		return IXGBE_FILTER_PROTOCOL_NONE;
 }
 
+/* inject a 5-tuple filter to HW */
+static inline void
+ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+			   struct ixgbe_5tuple_filter *filter)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+	uint32_t ftqf, sdpqf;
+	uint32_t l34timir = 0;
+	uint8_t mask = 0xff;
+
+	i = filter->index;
+
+	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
+				IXGBE_SDPQF_DSTPORT_SHIFT);
+	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
+
+	ftqf = (uint32_t)(filter->filter_info.proto &
+		IXGBE_FTQF_PROTOCOL_MASK);
+	ftqf |= (uint32_t)((filter->filter_info.priority &
+		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
+	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
+		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= IXGBE_FTQF_DEST_PORT_MASK;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
+	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
+	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
+	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
+
+	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
+	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+
+	l34timir |= IXGBE_L34T_IMIR_RESERVE;
+	l34timir |= (uint32_t)(filter->queue <<
+				IXGBE_L34T_IMIR_QUEUE_SHIFT);
+	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
+}
+
 /*
  * add a 5tuple filter
  *
@@ -5853,13 +5909,9 @@ static int
 ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	int i, idx, shift;
-	uint32_t ftqf, sdpqf;
-	uint32_t l34timir = 0;
-	uint8_t mask = 0xff;
 
 	/*
 	 * look for an unused 5tuple filter index,
@@ -5882,37 +5934,8 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
-				IXGBE_SDPQF_DSTPORT_SHIFT);
-	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
-
-	ftqf = (uint32_t)(filter->filter_info.proto &
-		IXGBE_FTQF_PROTOCOL_MASK);
-	ftqf |= (uint32_t)((filter->filter_info.priority &
-		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
-	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
-		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
-	if (filter->filter_info.dst_ip_mask == 0)
-		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
-	if (filter->filter_info.src_port_mask == 0)
-		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
-	if (filter->filter_info.dst_port_mask == 0)
-		mask &= IXGBE_FTQF_DEST_PORT_MASK;
-	if (filter->filter_info.proto_mask == 0)
-		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
-	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
-	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
-	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
-
-	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
-	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+	ixgbe_inject_5tuple_filter(dev, filter);
 
-	l34timir |= IXGBE_L34T_IMIR_RESERVE;
-	l34timir |= (uint32_t)(filter->queue <<
-				IXGBE_L34T_IMIR_QUEUE_SHIFT);
-	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
 	return 0;
 }
 
@@ -7925,6 +7948,27 @@ ixgbevf_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle,
 	ixgbevf_dev_interrupt_action(dev);
 }
 
+/* restore n-tuple filter */
+static inline void
+ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *node;
+
+	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
+		ixgbe_inject_5tuple_filter(dev, node);
+	}
+}
+
+static int
+ixgbe_filter_restore(struct rte_eth_dev *dev)
+{
+	ixgbe_ntuple_filter_restore(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
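
All the restore patches in this series follow the shape introduced
here: the SW list is the source of truth, and the start path replays it
into HW. A standalone sketch of that loop with invented types
(demo_inject() stands in for the real register writes):

#include <sys/queue.h>

struct demo_filter {
	TAILQ_ENTRY(demo_filter) entries;
	int rule;			/* whatever HW needs reprogrammed */
};

TAILQ_HEAD(demo_filter_list, demo_filter);

static void
demo_inject(const struct demo_filter *f)
{
	(void)f;			/* a real driver writes registers */
}

/* Called from the start path, after a reset cleared the filters in HW:
 * walk the SW list and re-inject every stored rule, like
 * ixgbe_ntuple_filter_restore() above. */
static void
demo_filter_restore(struct demo_filter_list *list)
{
	struct demo_filter *f;

	TAILQ_FOREACH(f, list, entries)
		demo_inject(f);
}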

* [PATCH v5 05/18] net/ixgbe: restore ether type filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (3 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
                           ` (15 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring ether type filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 79 ++++++++++++++++------------------------
 drivers/net/ixgbe/ixgbe_ethdev.h | 57 ++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_pf.c     | 25 ++++++++-----
 3 files changed, 104 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1630e65..6c46354 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6259,47 +6259,6 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static inline int
-ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (filter_info->ethertype_filters[i] == ethertype &&
-		    (filter_info->ethertype_mask & (1 << i)))
-			return i;
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] = ethertype;
-			return i;
-		}
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
-			uint8_t idx)
-{
-	if (idx >= IXGBE_MAX_ETQF_FILTERS)
-		return -1;
-	filter_info->ethertype_mask &= ~(1 << idx);
-	filter_info->ethertype_filters[idx] = 0;
-	return idx;
-}
-
 static int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
@@ -6311,6 +6270,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
@@ -6344,18 +6304,22 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	}
 
 	if (add) {
-		ret = ixgbe_ethertype_filter_insert(filter_info,
-			filter->ether_type);
-		if (ret < 0) {
-			PMD_DRV_LOG(ERR, "ethertype filters are full.");
-			return -ENOSYS;
-		}
 		etqf = IXGBE_ETQF_FILTER_EN;
 		etqf |= (uint32_t)filter->ether_type;
 		etqs |= (uint32_t)((filter->queue <<
 				    IXGBE_ETQS_RX_QUEUE_SHIFT) &
 				    IXGBE_ETQS_RX_QUEUE);
 		etqs |= IXGBE_ETQS_QUEUE_EN;
+
+		ethertype_filter.ethertype = filter->ether_type;
+		ethertype_filter.etqf = etqf;
+		ethertype_filter.etqs = etqs;
+		ret = ixgbe_ethertype_filter_insert(filter_info,
+						    &ethertype_filter);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "ethertype filters are full.");
+			return -ENOSPC;
+		}
 	} else {
 		ret = ixgbe_ethertype_filter_remove(filter_info, (uint8_t)ret);
 		if (ret < 0)
@@ -7961,10 +7925,31 @@ ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore ethernet type filter */
+static inline void
+ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i)) {
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i),
+					filter_info->ethertype_filters[i].etqf);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i),
+					filter_info->ethertype_filters[i].etqs);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
+	ixgbe_ethertype_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b2cf789..36aae01 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -270,13 +270,19 @@ struct ixgbe_5tuple_filter {
 	(RTE_ALIGN(IXGBE_MAX_FTQF_FILTERS, (sizeof(uint32_t) * NBBY)) / \
 	 (sizeof(uint32_t) * NBBY))
 
+struct ixgbe_ethertype_filter {
+	uint16_t ethertype;
+	uint32_t etqf;
+	uint32_t etqs;
+};
+
 /*
  * Structure to store filters' info.
  */
 struct ixgbe_filter_info {
 	uint8_t ethertype_mask;  /* Bit mask for every used ethertype filter */
 	/* store used ethertype filters*/
-	uint16_t ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
+	struct ixgbe_ethertype_filter ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
@@ -491,4 +497,53 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+
+static inline int
+ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
+			      uint16_t ethertype)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_filters[i].ethertype == ethertype &&
+		    (filter_info->ethertype_mask & (1 << i)))
+			return i;
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
+			      struct ixgbe_ethertype_filter *ethertype_filter)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (!(filter_info->ethertype_mask & (1 << i))) {
+			filter_info->ethertype_mask |= 1 << i;
+			filter_info->ethertype_filters[i].ethertype =
+				ethertype_filter->ethertype;
+			filter_info->ethertype_filters[i].etqf =
+				ethertype_filter->etqf;
+			filter_info->ethertype_filters[i].etqs =
+				ethertype_filter->etqs;
+			return i;
+		}
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
+			      uint8_t idx)
+{
+	if (idx >= IXGBE_MAX_ETQF_FILTERS)
+		return -1;
+	filter_info->ethertype_mask &= ~(1 << idx);
+	filter_info->ethertype_filters[idx].ethertype = 0;
+	filter_info->ethertype_filters[idx].etqf = 0;
+	filter_info->ethertype_filters[idx].etqs = 0;
+	return idx;
+}
+
 #endif /* _IXGBE_ETHDEV_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index cb10265..4b33130 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -178,6 +178,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
 	uint16_t vf_num;
 	int i;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (!hw->mac.ops.set_ethertype_anti_spoofing) {
 		RTE_LOG(INFO, PMD, "ether type anti-spoofing is not"
@@ -185,16 +186,22 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		return;
 	}
 
-	/* occupy an entity of ether type filter */
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] =
-				IXGBE_ETHERTYPE_FLOW_CTRL;
-			break;
-		}
+	i = ixgbe_ethertype_filter_lookup(filter_info,
+					  IXGBE_ETHERTYPE_FLOW_CTRL);
+	if (i >= 0) {
+		RTE_LOG(ERR, PMD, "A ether type filter"
+			" entity for flow control already exists!\n");
+		return;
 	}
-	if (i == IXGBE_MAX_ETQF_FILTERS) {
+
+	ethertype_filter.ethertype = IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqf = IXGBE_ETQF_FILTER_EN |
+				IXGBE_ETQF_TX_ANTISPOOF |
+				IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqs = 0;
+	i = ixgbe_ethertype_filter_insert(filter_info,
+					  &ethertype_filter);
+	if (i < 0) {
 		RTE_LOG(ERR, PMD, "Cannot find an unused ether type filter"
 			" entity for flow control.\n");
 		return;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 06/18] net/ixgbe: restore TCP SYN filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (4 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 05/18] net/ixgbe: restore ether type filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 07/18] net/ixgbe: restore flow director filter Wei Zhao
                           ` (14 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring TCP SYN filter in SW.
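
As context, a minimal sketch of how such a SYN filter is installed in
the first place through the legacy filter API (the priority and queue
values are arbitrary, and port_id is assumed to be an initialized
ixgbe port):

  #include <rte_ethdev.h>

  static int
  add_syn_filter(uint8_t port_id)
  {
  	struct rte_eth_syn_filter filter = {
  		.hig_pri = 1,	/* match SYN ahead of other filters */
  		.queue = 3,
  	};

  	/* ixgbe caches the resulting SYNQF value in
  	 * filter_info->syn_info; ixgbe_syn_filter_restore()
  	 * replays it after a reset. */
  	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_SYN,
  				       RTE_ETH_FILTER_ADD, &filter);
  }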

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6c46354..6cd5975 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7945,11 +7945,29 @@ ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore SYN filter */
+static inline void
+ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t synqf;
+
+	synqf = filter_info->syn_info;
+
+	if (synqf & IXGBE_SYN_FILTER_ENABLE) {
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
+	ixgbe_syn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 07/18] net/ixgbe: restore flow director filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (5 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
                           ` (13 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring the flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_fdir.c   | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6cd5975..f48e30c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7968,6 +7968,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
+	ixgbe_fdir_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 36aae01..4f281e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -497,6 +497,7 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 8bf5705..627c51a 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1479,3 +1479,38 @@ ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 	}
 	return ret;
 }
+
+/* restore flow director filter */
+void
+ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *node;
+	bool is_perfect = FALSE;
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
+	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		is_perfect = TRUE;
+
+	if (is_perfect) {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_write_perfect_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash,
+							      fdir_mode);
+		}
+	} else {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_add_signature_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash);
+		}
+	}
+}
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 08/18] net/ixgbe: restore L2 tunnel filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (6 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 07/18] net/ixgbe: restore flow director filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
                           ` (12 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring L2 tunnel filter in SW.
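
As context, a minimal sketch of adding an E-tag filter through the
legacy filter API (tunnel ID and pool are arbitrary, and port_id is
assumed to be an initialized ixgbe port); each filter added this way
is linked into l2_tn_list, which the restore function below walks:

  #include <rte_ethdev.h>

  static int
  add_e_tag_filter(uint8_t port_id)
  {
  	struct rte_eth_l2_tunnel_conf conf = {
  		.l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG,
  		.tunnel_id = 0x123,	/* arbitrary GRP + E-CID */
  		.pool = 1,		/* destination pool */
  	};

  	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_L2_TUNNEL,
  				       RTE_ETH_FILTER_ADD, &conf);
  }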

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 69 ++++++++++++++++++++++++++--------------
 1 file changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index f48e30c..0ad613b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7324,7 +7324,8 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
-			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
@@ -7332,30 +7333,33 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
-	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
-	key.tn_id = l2_tunnel->tunnel_id;
+	if (!restore) {
+		key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+		key.tn_id = l2_tunnel->tunnel_id;
 
-	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+		node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
 
-	if (node) {
-		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
-		return -EINVAL;
-	}
+		if (node) {
+			PMD_DRV_LOG(ERR,
+				    "The L2 tunnel filter already exists!");
+			return -EINVAL;
+		}
 
-	node = rte_zmalloc("ixgbe_l2_tn",
-			   sizeof(struct ixgbe_l2_tn_filter),
-			   0);
-	if (!node)
-		return -ENOMEM;
+		node = rte_zmalloc("ixgbe_l2_tn",
+				   sizeof(struct ixgbe_l2_tn_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
 
-	(void)rte_memcpy(&node->key,
-			 &key,
-			 sizeof(struct ixgbe_l2_tn_key));
-	node->pool = l2_tunnel->pool;
-	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
-	if (ret < 0) {
-		rte_free(node);
-		return ret;
+		(void)rte_memcpy(&node->key,
+				 &key,
+				 sizeof(struct ixgbe_l2_tn_key));
+		node->pool = l2_tunnel->pool;
+		ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+		if (ret < 0) {
+			rte_free(node);
+			return ret;
+		}
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
@@ -7368,7 +7372,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
-	if (ret < 0)
+	if ((!restore) && (ret < 0))
 		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
 
 	return ret;
@@ -7429,7 +7433,8 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_ADD:
 		ret = ixgbe_dev_l2_tunnel_filter_add
 			(dev,
-			 (struct rte_eth_l2_tunnel_conf *)arg);
+			 (struct rte_eth_l2_tunnel_conf *)arg,
+			 FALSE);
 		break;
 	case RTE_ETH_FILTER_DELETE:
 		ret = ixgbe_dev_l2_tunnel_filter_del
@@ -7962,6 +7967,23 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore L2 tunnel filter */
+static inline void
+ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *node;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+
+	TAILQ_FOREACH(node, &l2_tn_info->l2_tn_list, entries) {
+		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = node->key.tn_id;
+		l2_tn_conf.pool           = node->pool;
+		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
@@ -7969,6 +7991,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
 	ixgbe_fdir_filter_restore(dev);
+	ixgbe_l2_tn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 09/18] net/ixgbe: store and restore L2 tunnel configuration
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (7 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 10/18] net/ixgbe: flush all the filters Wei Zhao
                           ` (11 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing and restoring the L2 tunnel configuration in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  3 +++
 2 files changed, 39 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0ad613b..3059c64 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -389,6 +389,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
+static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1469,6 +1470,9 @@ static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
 			"Failed to allocate memory for L2 TN hash map!");
 		return -ENOMEM;
 	}
+	l2_tn_info->e_tag_en = FALSE;
+	l2_tn_info->e_tag_fwd_en = FALSE;
+	l2_tn_info->e_tag_ether_type = DEFAULT_ETAG_ETYPE;
 
 	return 0;
 }
@@ -2527,6 +2531,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_l2_tunnel_conf(dev);
 	ixgbe_filter_restore(dev);
 
 	return 0;
@@ -7083,12 +7088,15 @@ ixgbe_dev_l2_tunnel_eth_type_conf(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	if (l2_tunnel == NULL)
 		return -EINVAL;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_ether_type = l2_tunnel->ether_type;
 		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
 		break;
 	default:
@@ -7127,9 +7135,12 @@ ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = TRUE;
 		ret = ixgbe_e_tag_enable(hw);
 		break;
 	default:
@@ -7168,9 +7179,12 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = FALSE;
 		ret = ixgbe_e_tag_disable(hw);
 		break;
 	default:
@@ -7477,10 +7491,13 @@ ixgbe_dev_l2_tunnel_forwarding_enable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = TRUE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
 		break;
 	default:
@@ -7498,10 +7515,13 @@ ixgbe_dev_l2_tunnel_forwarding_disable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = FALSE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
 		break;
 	default:
@@ -7996,6 +8016,22 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tn_info->e_tag_en)
+		(void)ixgbe_e_tag_enable(hw);
+
+	if (l2_tn_info->e_tag_fwd_en)
+		(void)ixgbe_e_tag_forwarding_en_dis(dev, 1);
+
+	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4f281e8..4e0264c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -307,6 +307,9 @@ struct ixgbe_l2_tn_info {
 	struct ixgbe_l2_tn_filter_list      l2_tn_list;
 	struct ixgbe_l2_tn_filter         **hash_map;
 	struct rte_hash                    *hash_handle;
+	bool e_tag_en; /* e-tag enabled */
+	bool e_tag_fwd_en; /* e-tag based forwarding enabled */
+	uint16_t e_tag_ether_type; /* ether type for e-tag */
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 10/18] net/ixgbe: flush all the filters
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (8 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
                           ` (10 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for flushing all the filters in SW.
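
A minimal sketch of how an application reaches this path through the
generic flow API (port_id is assumed to be an already-configured
ixgbe port):

  #include <stdio.h>
  #include <rte_flow.h>

  static int
  flush_port_flows(uint8_t port_id)
  {
  	struct rte_flow_error err;
  	/* dispatches to ixgbe_flow_ops.flush, i.e. ixgbe_flow_flush() */
  	int ret = rte_flow_flush(port_id, &err);

  	if (ret)
  		printf("flow flush failed: %s\n",
  		       err.message ? err.message : "(no message)");
  	return ret;
  }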

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/Makefile       |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  79 ++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  16 ++++++
 drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 115 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   1 +
 6 files changed, 236 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index a3e6a52..38b9fbd 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_fdir.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_pf.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_flow.c
 ifeq ($(CONFIG_RTE_ARCH_ARM64),y)
 SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
 else
@@ -127,5 +128,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_eal lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_mempool lib/librte_mbuf
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_hash
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3059c64..2a67462 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6319,6 +6319,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 		ethertype_filter.ethertype = filter->ether_type;
 		ethertype_filter.etqf = etqf;
 		ethertype_filter.etqs = etqs;
+		ethertype_filter.conf = FALSE;
 		ret = ixgbe_ethertype_filter_insert(filter_info,
 						    &ethertype_filter);
 		if (ret < 0) {
@@ -6420,7 +6421,7 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_op filter_op,
 		     void *arg)
 {
-	int ret = -EINVAL;
+	int ret = 0;
 
 	switch (filter_type) {
 	case RTE_ETH_FILTER_NTUPLE:
@@ -6438,9 +6439,15 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_L2_TUNNEL:
 		ret = ixgbe_dev_l2_tunnel_filter_handle(dev, filter_op, arg);
 		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &ixgbe_flow_ops;
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 							filter_type);
+		ret = -EINVAL;
 		break;
 	}
 
@@ -8032,6 +8039,76 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
 }
 
+/* remove all the n-tuple filters */
+void
+ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
+		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+}
+
+/* remove all the ether type filters */
+void
+ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i) &&
+		    !filter_info->ethertype_filters[i].conf) {
+			(void)ixgbe_ethertype_filter_remove(filter_info,
+							    (uint8_t)i);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i), 0);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
+/* remove the SYN filter */
+void
+ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+
+	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
+		filter_info->syn_info = 0;
+
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, 0);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
+/* remove all the L2 tunnel filters */
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+	int ret = 0;
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = l2_tn_filter->key.tn_id;
+		l2_tn_conf.pool           = l2_tn_filter->pool;
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4e0264c..5efa650 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -274,6 +274,11 @@ struct ixgbe_ethertype_filter {
 	uint16_t ethertype;
 	uint32_t etqf;
 	uint32_t etqs;
+	/**
+	 * If this filter is added by configuration,
+	 * it should not be removed.
+	 */
+	bool     conf;
 };
 
 /*
@@ -501,6 +506,14 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
 void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
+int ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev);
+
+extern const struct rte_flow_ops ixgbe_flow_ops;
+
+void ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_syn_filter(struct rte_eth_dev *dev);
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
@@ -531,6 +544,8 @@ ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
 				ethertype_filter->etqf;
 			filter_info->ethertype_filters[i].etqs =
 				ethertype_filter->etqs;
+			filter_info->ethertype_filters[i].conf =
+				ethertype_filter->conf;
 			return i;
 		}
 	}
@@ -547,6 +562,7 @@ ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
 	filter_info->ethertype_filters[idx].ethertype = 0;
 	filter_info->ethertype_filters[idx].etqf = 0;
 	filter_info->ethertype_filters[idx].etqs = 0;
+	filter_info->ethertype_filters[idx].conf = FALSE;
 	return idx;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 627c51a..e928ad7 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1514,3 +1514,27 @@ ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 		}
 	}
 }
+
+/* remove all the flow director filters */
+int
+ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+	int ret = 0;
+
+	/* flush flow director */
+	rte_hash_reset(fdir_info->hash_handle);
+	memset(fdir_info->hash_map, 0,
+	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+	ret = ixgbe_fdir_flush(dev);
+
+	return ret;
+}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
new file mode 100644
index 0000000..1499391
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+#include <rte_hash_crc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+
+#include "ixgbe_logs.h"
+#include "base/ixgbe_api.h"
+#include "base/ixgbe_vf.h"
+#include "base/ixgbe_common.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_bypass.h"
+#include "ixgbe_rxtx.h"
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_phy.h"
+#include "rte_pmd_ixgbe.h"
+
+static int ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error);
+
+const struct rte_flow_ops ixgbe_flow_ops = {
+	NULL,
+	NULL,
+	NULL,
+	ixgbe_flow_flush,
+	NULL,
+};
+
+/*  Destroy all flow rules associated with a port on ixgbe. */
+static int
+ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	ixgbe_clear_all_ntuple_filter(dev);
+	ixgbe_clear_all_ethertype_filter(dev);
+	ixgbe_clear_syn_filter(dev);
+
+	ret = ixgbe_clear_all_fdir_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	ret = ixgbe_clear_all_l2_tn_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 4b33130..4715045 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -199,6 +199,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 				IXGBE_ETQF_TX_ANTISPOOF |
 				IXGBE_ETHERTYPE_FLOW_CTRL;
 	ethertype_filter.etqs = 0;
+	ethertype_filter.conf = TRUE;
 	i = ixgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 11/18] net/ixgbe: parse n-tuple filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (9 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 10/18] net/ixgbe: flush all the filters Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12 15:40           ` Ferruh Yigit
  2017-01-12  9:17         ` [PATCH v5 12/18] net/ixgbe: parse ethertype filter Wei Zhao
                           ` (9 subsequent siblings)
  20 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Add the rule validate function, check whether a rule is an n-tuple
rule, and get the n-tuple info.
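
As an illustration, a rule of the shape this parser accepts can be
built with the generic flow API as follows (addresses, ports and the
queue index are arbitrary; IPv4() comes from rte_ip.h and port_id is
assumed to be a valid ixgbe port):

  #include <stdint.h>
  #include <netinet/in.h>
  #include <rte_byteorder.h>
  #include <rte_ip.h>
  #include <rte_flow.h>

  static int
  validate_ntuple_rule(uint8_t port_id, struct rte_flow_error *err)
  {
  	struct rte_flow_attr attr = { .ingress = 1 };
  	struct rte_flow_item_ipv4 ip_spec = {
  		.hdr = {
  			.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
  			.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 2)),
  			.next_proto_id = IPPROTO_UDP,
  		},
  	};
  	/* only src/dst address and protocol may be masked */
  	struct rte_flow_item_ipv4 ip_mask = {
  		.hdr = {
  			.src_addr = UINT32_MAX,
  			.dst_addr = UINT32_MAX,
  			.next_proto_id = UINT8_MAX,
  		},
  	};
  	struct rte_flow_item_udp udp_spec = {
  		.hdr = {
  			.src_port = rte_cpu_to_be_16(1024),
  			.dst_port = rte_cpu_to_be_16(2048),
  		},
  	};
  	struct rte_flow_item_udp udp_mask = {
  		.hdr = {
  			.src_port = UINT16_MAX,
  			.dst_port = UINT16_MAX,
  		},
  	};
  	struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* no spec/mask */
  		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
  		  .spec = &ip_spec, .mask = &ip_mask },
  		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
  		  .spec = &udp_spec, .mask = &udp_mask },
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_queue queue = { .index = 3 };
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};

  	return rte_flow_validate(port_id, &attr, pattern, actions, err);
  }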

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_flow.c | 429 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 424 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 1499391..ac9f663 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -77,15 +77,432 @@
 
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
-	NULL,
+	ixgbe_flow_validate,
 	NULL,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
 };
 
+#define IXGBE_MIN_N_TUPLE_PRIO 1
+#define IXGBE_MAX_N_TUPLE_PRIO 7
+#define NEXT_ITEM_OF_PATTERN(item, pattern, index)		\
+	do {							\
+		item = pattern + index;				\
+		while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {	\
+			index++;				\
+			item = pattern + index;			\
+		}						\
+	} while (0)
+
+#define NEXT_ITEM_OF_ACTION(act, actions, index)		\
+	do {							\
+		act = actions + index;				\
+		while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\
+			index++;				\
+			act = actions + index;			\
+		}						\
+	} while (0)
+
+/**
+ * Please be aware there's an assumption for all the parsers:
+ * rte_flow_item uses big endian, while rte_flow_attr and
+ * rte_flow_action use CPU order.
+ * This is because the pattern is used to describe packets,
+ * and packets normally use network byte order.
+ */
+
+/**
+ * Parse the rule to see if it is a n-tuple rule.
+ * And get the n-tuple filter info BTW.
+ * pattern:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[],
+			 struct rte_eth_ntuple_filter *filter,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item can be MAC or IPv4 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error,
+			  EINVAL,
+			  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			  item, "Not supported last point for range");
+			return -rte_errno;
+
+		}
+		/* if the first item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is IPv4 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+			rte_flow_error_set(error,
+			  EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			  item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+	}
+
+	/* get the IPv4 info */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
+	/**
+	 * Only support src & dst addresses, protocol,
+	 * others should be masked.
+	 */
+	if (ipv4_mask->hdr.version_ihl ||
+	    ipv4_mask->hdr.type_of_service ||
+	    ipv4_mask->hdr.total_length ||
+	    ipv4_mask->hdr.packet_id ||
+	    ipv4_mask->hdr.fragment_offset ||
+	    ipv4_mask->hdr.time_to_live ||
+	    ipv4_mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
+	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
+	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
+
+	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
+	filter->dst_ip = ipv4_spec->hdr.dst_addr;
+	filter->src_ip = ipv4_spec->hdr.src_addr;
+	filter->proto  = ipv4_spec->hdr.next_proto_id;
+
+	/* check if the next not void item is TCP or UDP */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* get the TCP/UDP info */
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+
+		/**
+		 * Only support src & dst ports, tcp flags,
+		 * others should be masked.
+		 */
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
+		filter->src_port_mask  = tcp_mask->hdr.src_port;
+		if (tcp_mask->hdr.tcp_flags == 0xFF) {
+			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else if (!tcp_mask->hdr.tcp_flags) {
+			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+		filter->dst_port  = tcp_spec->hdr.dst_port;
+		filter->src_port  = tcp_spec->hdr.src_port;
+		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
+	} else {
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+
+		/**
+		 * Only support src & dst ports,
+		 * others should be masked.
+		 */
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask = udp_mask->hdr.dst_port;
+		filter->src_port_mask = udp_mask->hdr.src_port;
+
+		udp_spec = (const struct rte_flow_item_udp *)item->spec;
+		filter->dst_port = udp_spec->hdr.dst_port;
+		filter->src_port = udp_spec->hdr.src_port;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/**
+	 * n-tuple only supports forwarding,
+	 * check if the first not void action is QUEUE.
+	 */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			item, "Not supported action.");
+		return -rte_errno;
+	}
+	filter->queue =
+		((const struct rte_flow_action_queue *)act->conf)->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	if (attr->priority > 0xFFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr, "Error priority.");
+		return -rte_errno;
+	}
+	filter->priority = (uint16_t)attr->priority;
+	if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    attr->priority > IXGBE_MAX_N_TUPLE_PRIO)
+		filter->priority = 1;
+
+	return 0;
+}
+
+/* a specific function for ixgbe because the flags is specific */
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			  const struct rte_flow_item pattern[],
+			  const struct rte_flow_action actions[],
+			  struct rte_eth_ntuple_filter *filter,
+			  struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support tcp flags. */
+	if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* Ixgbe doesn't support many priorities. */
+	if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    filter->priority > IXGBE_MAX_N_TUPLE_PRIO) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Priority not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM ||
+	    filter->priority > IXGBE_5TUPLE_MAX_PRI ||
+	    filter->priority < IXGBE_5TUPLE_MIN_PRI) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* fixed value for ixgbe */
+	filter->flags = RTE_5TUPLE_FLAGS;
+	return 0;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
@@ -99,15 +516,17 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 
 	ret = ixgbe_clear_all_fdir_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
 	ret = ixgbe_clear_all_l2_tn_filter(dev);
 	if (ret < 0) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
-					NULL, "Failed to flush rule");
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to flush rule");
 		return ret;
 	}
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 12/18] net/ixgbe: parse ethertype filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (10 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12 15:39           ` Ferruh Yigit
  2017-01-12  9:17         ` [PATCH v5 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
                           ` (8 subsequent siblings)
  20 siblings, 1 reply; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check whether the rule is an ethertype rule, and get the ethertype info.
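
As an illustration, a rule this parser accepts: ARP frames steered to
a queue (the ether type and queue index are arbitrary, and the full
0xFFFF ether-type mask is mandatory):

  #include <rte_byteorder.h>
  #include <rte_flow.h>

  static int
  validate_arp_rule(uint8_t port_id, struct rte_flow_error *err)
  {
  	struct rte_flow_attr attr = { .ingress = 1 };
  	struct rte_flow_item_eth eth_spec = {
  		.type = rte_cpu_to_be_16(0x0806),	/* ARP */
  	};
  	struct rte_flow_item_eth eth_mask = {
  		.type = 0xFFFF,	/* ether type must be fully masked */
  	};
  	struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
  		  .spec = &eth_spec, .mask = &eth_mask },
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_queue queue = { .index = 3 };
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};

  	return rte_flow_validate(port_id, &attr, pattern, actions, err);
  }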

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_flow.c | 278 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 278 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index ac9f663..2f97129 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -90,6 +90,18 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 					struct rte_eth_ntuple_filter *filter,
 					struct rte_flow_error *error);
 static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error);
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_ethertype_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -480,6 +492,265 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is a ethertype rule.
+ * And get the ethertype filter info BTW.
+ * pattern:
+ * The first not void item can be ETH.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* Parse pattern */
+	index = 0;
+
+	/* The first non-void item should be MAC. */
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	eth_spec = (const struct rte_flow_item_eth *)item->spec;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Mask bits of source MAC address must be full of 0.
+	 * Mask bits of destination MAC address must be full
+	 * of 1 or full of 0.
+	 */
+	if (!is_zero_ether_addr(&eth_mask->src) ||
+	    (!is_zero_ether_addr(&eth_mask->dst) &&
+	     !is_broadcast_ether_addr(&eth_mask->dst))) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ether address mask");
+		return -rte_errno;
+	}
+
+	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ethertype mask");
+		return -rte_errno;
+	}
+
+	/* If mask bits of destination MAC address
+	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
+	 */
+	if (is_broadcast_ether_addr(&eth_mask->dst)) {
+		filter->mac_addr = eth_spec->dst;
+		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
+	} else {
+		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
+	}
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+
+	/* Check if the next non-void item is END. */
+	index++;
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter.");
+		return -rte_errno;
+	}
+
+	/* Parse action */
+
+	index = 0;
+	/* Check if the first non-void action is QUEUE or DROP. */
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->queue = act_q->index;
+	} else {
+		filter->flags |= RTE_ETHTYPE_FLAGS_DROP;
+	}
+
+	/* Check if the next non-void item is END */
+	index++;
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* Parse attr */
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				attr, "Not support group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_ethertype_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ethertype_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support MAC address. */
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "queue index much too big");
+		return -rte_errno;
+	}
+
+	if (filter->ether_type == ETHER_TYPE_IPv4 ||
+		filter->ether_type == ETHER_TYPE_IPv6) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "IPv4/IPv6 not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "drop option is unsupported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
@@ -492,6 +763,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -500,6 +772,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 13/18] net/ixgbe: parse TCP SYN filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (11 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
                           ` (7 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check whether the rule is a TCP SYN rule, and get the SYN info.
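
As an illustration, a rule this parser accepts, steering TCP SYN
packets to a queue (0x02 is assumed to be the TCP SYN flag bit, and
the queue index is arbitrary):

  #include <rte_flow.h>

  static int
  validate_syn_rule(uint8_t port_id, struct rte_flow_error *err)
  {
  	struct rte_flow_attr attr = { .ingress = 1 };
  	/* only the SYN bit may be set in spec and mask */
  	struct rte_flow_item_tcp tcp_spec = { .hdr = { .tcp_flags = 0x02 } };
  	struct rte_flow_item_tcp tcp_mask = { .hdr = { .tcp_flags = 0x02 } };
  	struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* no spec/mask */
  		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },	/* no spec/mask */
  		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
  		  .spec = &tcp_spec, .mask = &tcp_mask },
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_queue queue = { .index = 3 };
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};

  	return rte_flow_validate(port_id, &attr, pattern, actions, err);
  }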

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_flow.c | 264 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 264 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 2f97129..317deed 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -102,6 +102,18 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_ethertype_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_syn_filter *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -751,6 +763,251 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is a TCP SYN rule,
+ * and if so, get the TCP SYN filter info.
+ * pattern:
+ * The first not void item can be ETH, IPV4, IPV6 or TCP.
+ * ETH, if present, must be followed by IPV4 or IPV6.
+ * IPV4 or IPV6, if present, must be followed by TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ */
+static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* The first not void item should be ETH, IPv4, IPv6 or TCP */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+	/* The "last" point of a range is not supported */
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* if the item is ETH, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN address mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is IPv4 or IPv6 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* if the item is IP, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is TCP */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. Only support SYN. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+		return -rte_errno;
+	}
+	/* The "last" point of a range is not supported */
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+	tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+	if (!(tcp_spec->hdr.tcp_flags & TCP_SYN_FLAG) ||
+	    tcp_mask->hdr.src_port ||
+	    tcp_mask->hdr.dst_port ||
+	    tcp_mask->hdr.sent_seq ||
+	    tcp_mask->hdr.recv_ack ||
+	    tcp_mask->hdr.data_off ||
+	    tcp_mask->hdr.tcp_flags != TCP_SYN_FLAG ||
+	    tcp_mask->hdr.rx_win ||
+	    tcp_mask->hdr.cksum ||
+	    tcp_mask->hdr.tcp_urp) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->queue = act_q->index;
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Support 2 priorities, the lowest or highest. */
+	if (!attr->priority) {
+		filter->hig_pri = 0;
+	} else if (attr->priority == (uint32_t)~0U) {
+		filter->hig_pri = 1;
+	} else {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_syn_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_syn_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for the rule.
@@ -764,6 +1021,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -778,6 +1036,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (12 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 15/18] net/ixgbe: parse flow director filter Wei Zhao
                           ` (6 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check whether the rule is an L2 tunnel rule and, if so, get the L2 tunnel
info.
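
For example, an E-tag rule that this parser accepts could look like the
sketch below (illustrative only; the GRP/E-CID value 0x1001, the queue
index and port_id are arbitrary assumptions, and an X550-class NIC is
assumed since the validate step rejects other MACs):

	struct rte_flow_attr attr = { .ingress = 1 };
	/* Only GRP (2b) and E-CID base (12b) may be matched, so the mask
	 * must be exactly the 14 low bits, in big endian.
	 */
	struct rte_flow_item_e_tag spec = {
		.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x1001),
	};
	struct rte_flow_item_e_tag mask = {
		.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x3FFF),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 }; /* dest pool */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	int ret = rte_flow_validate(port_id, &attr, pattern, actions, &err);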

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
 drivers/net/ixgbe/ixgbe_flow.c   | 203 +++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.h      |  48 +++++++++
 3 files changed, 253 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2a67462..14a88ee 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -8089,7 +8089,8 @@ ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
 }
 
 /* remove all the L2 tunnel filters */
-int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+int
+ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 {
 	struct ixgbe_l2_tn_info *l2_tn_info =
 		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 317deed..6e1c0bc 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -114,6 +114,19 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_syn_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_l2_tunnel_conf *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *rule,
+			struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1008,6 +1021,191 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an L2 tunnel rule,
+ * and if so, get the L2 tunnel filter info.
+ * Only E-tag is supported now.
+ */
+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *filter,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_e_tag *e_tag_spec;
+	const struct rte_flow_item_e_tag *e_tag_mask;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+	/* parse pattern */
+	index = 0;
+
+	/* The first not void item should be e-tag. */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* The "last" point of a range is not supported */
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
+	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
+
+	/* Only care about GRP and E-CID base. */
+	if (e_tag_mask->epcp_edei_in_ecid_b ||
+	    e_tag_mask->in_ecid_e ||
+	    e_tag_mask->ecid_e ||
+	    e_tag_mask->rsvd_grp_ecid_b != rte_cpu_to_be_16(0x3FFF)) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	/**
+	 * grp and e_cid_base are bit fields and only use 14 bits.
+	 * The E-tag ID is taken as little endian by the HW.
+	 */
+	filter->tunnel_id = rte_be_to_cpu_16(e_tag_spec->rsvd_grp_ecid_b);
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->pool = act_q->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+				actions, l2_tn_filter, error);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+		hw->mac.type != ixgbe_mac_X550EM_x &&
+		hw->mac.type != ixgbe_mac_X550EM_a) {
+		memset(l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	return ret;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for the rule.
@@ -1022,6 +1220,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
 	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -1042,6 +1241,10 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = ixgbe_validate_l2_tn_filter(dev, attr, pattern,
+				actions, &l2_tn_filter, error);
+
 	return ret;
 }
 
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 98084ac..7142479 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -268,6 +268,20 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_vxlan.
 	 */
 	RTE_FLOW_ITEM_TYPE_VXLAN,
+
+	/**
+	 * Matches a E_TAG header.
+	 *
+	 * See struct rte_flow_item_e_tag.
+	 */
+	RTE_FLOW_ITEM_TYPE_E_TAG,
+
+	/**
+	 * Matches a NVGRE header.
+	 *
+	 * See struct rte_flow_item_nvgre.
+	 */
+	RTE_FLOW_ITEM_TYPE_NVGRE,
 };
 
 /**
@@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
 };
 
 /**
+ * RTE_FLOW_ITEM_TYPE_E_TAG.
+ *
+ * Matches a E-tag header.
+ */
+struct rte_flow_item_e_tag {
+	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
+	/** E-Tag control information (E-TCI). */
+	/** E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
+	uint16_t epcp_edei_in_ecid_b;
+	/** Reserved (2b), GRP (2b), E-CID base (12b). */
+	uint16_t rsvd_grp_ecid_b;
+	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
+	uint8_t ecid_e; /**< E-CID ext. */
+};
+
+/**
+ * RTE_FLOW_ITEM_TYPE_NVGRE.
+ *
+ * Matches a NVGRE header.
+ */
+struct rte_flow_item_nvgre {
+	/**
+	 * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
+	 * reserved 0 (9b), version (3b).
+	 *
+	 * c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
+	 */
+	uint16_t c_k_s_rsvd0_ver;
+	uint16_t protocol; /**< Protocol type (0x6558). */
+	uint8_t tni[3]; /**< Virtual subnet ID. */
+	uint8_t flow_id; /**< Flow ID. */
+};
+
+/**
  * Matching pattern item definition.
  *
  * A pattern is formed by stacking items starting from the lowest protocol
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 15/18] net/ixgbe: parse flow director filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (13 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 16/18] net/ixgbe: create consistent filter Wei Zhao
                           ` (5 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check whether the rule is a flow director rule and, if so, get the flow
director info.
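
For example, a perfect-mode rule that this parser accepts could look
like the sketch below (illustrative only; the addresses, ports, queue
index, mark id and port_id are arbitrary assumptions; IPv4() comes from
rte_ip.h and the UINT*_MAX constants from stdint.h):

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
		.hdr.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 2)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.src_addr = rte_cpu_to_be_32(UINT32_MAX),
		.hdr.dst_addr = rte_cpu_to_be_32(UINT32_MAX),
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = rte_cpu_to_be_16(4789),
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr.dst_port = rte_cpu_to_be_16(UINT16_MAX),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH }, /* no spec/mask: any */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action_mark mark = { .id = 0x1234 }; /* soft_id */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	int ret = rte_flow_validate(port_id, &attr, pattern, actions, &err);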

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
 drivers/net/ixgbe/ixgbe_fdir.c   |  253 +++++---
 drivers/net/ixgbe/ixgbe_flow.c   | 1218 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 1380 insertions(+), 109 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14a88ee..4b9f4aa 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1436,6 +1436,8 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	fdir_info->mask_added = FALSE;
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 5efa650..dee159f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -162,6 +162,17 @@ struct ixgbe_fdir_filter {
 /* list of fdir filters */
 TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
 
+struct ixgbe_fdir_rule {
+	struct ixgbe_hw_fdir_mask mask;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
+	bool b_spec; /* If TRUE, ixgbe_fdir, fdirflags, queue have meaning. */
+	bool b_mask; /* If TRUE, mask has meaning. */
+	enum rte_fdir_mode mode; /* IP, MAC VLAN, Tunnel */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t soft_id; /* a unique value for this rule */
+	uint8_t queue; /* assigned rx queue */
+};
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -177,6 +188,7 @@ struct ixgbe_hw_fdir_info {
 	/* store the pointers of the filters, index is the hash value. */
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
+	bool mask_added; /* If already got mask from consistent filter */
 };
 
 /* structure for interrupt relative data */
@@ -479,6 +491,10 @@ bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
  * Flow director function prototypes
  */
 int ixgbe_fdir_configure(struct rte_eth_dev *dev);
+int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			      struct ixgbe_fdir_rule *rule,
+			      bool del, bool update);
 
 void ixgbe_configure_dcb(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index e928ad7..3b9d60c 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -112,10 +112,8 @@
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
 static int fdir_set_input_mask(struct rte_eth_dev *dev,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-				    const struct rte_eth_fdir_masks *input_mask);
+static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
+static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -295,8 +293,7 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -308,8 +305,6 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 | IXGBE_FDIRM_FLEX;
 	uint32_t fdirtcpm;  /* TCP source and destination port masks. */
 	uint32_t fdiripv6m; /* IPv6 source and destination masks. */
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
 	volatile uint32_t *reg;
 
 	PMD_INIT_FUNC_TRACE();
@@ -320,31 +315,30 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
 	 * cannot be masked out in this implementation.
 	 */
-	if (input_mask->dst_port_mask == 0 && input_mask->src_port_mask == 0)
+	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
 		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(input_mask->dst_port_mask),
-			rte_be_to_cpu_16(input_mask->src_port_mask));
+			rte_be_to_cpu_16(info->mask.dst_port_mask),
+			rte_be_to_cpu_16(info->mask.src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -352,30 +346,23 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(input_mask->ipv4_mask.src_ip);
+	*reg = ~(info->mask.src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(input_mask->ipv4_mask.dst_ip);
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	*reg = ~(info->mask.dst_ipv4_mask);
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-		fdiripv6m = (dst_ipv6m << 16) | src_ipv6m;
+		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
+			    info->mask.src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
-		info->mask.src_ipv6_mask = src_ipv6m;
-		info->mask.dst_ipv6_mask = dst_ipv6m;
 	}
 
 	return IXGBE_SUCCESS;
@@ -386,8 +373,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-			 const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -410,20 +396,19 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -434,12 +419,11 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 				IXGBE_FDIRIP6M_TNI_VNI;
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
-		mac_mask = input_mask->mac_addr_byte_mask;
+		mac_mask = info->mask.mac_addr_byte_mask;
 		fdiripv6m |= (mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT)
 				& IXGBE_FDIRIP6M_INNER_MAC;
-		info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
 
-		switch (input_mask->tunnel_type_mask) {
+		switch (info->mask.tunnel_type_mask) {
 		case 0:
 			/* Mask tunnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -450,10 +434,8 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_type_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_type_mask =
-			input_mask->tunnel_type_mask;
 
-		switch (rte_be_to_cpu_32(input_mask->tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -467,8 +449,6 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_id_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_id_mask =
-			input_mask->tunnel_id_mask;
 	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, fdiripv6m);
@@ -482,22 +462,90 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 }
 
 static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
+ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
+				  const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	uint16_t dst_ipv6m = 0;
+	uint16_t src_ipv6m = 0;
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.src_port_mask = input_mask->src_port_mask;
+	info->mask.dst_port_mask = input_mask->dst_port_mask;
+	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
+	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
+	info->mask.src_ipv6_mask = src_ipv6m;
+	info->mask.dst_ipv6_mask = dst_ipv6m;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
+				 const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
+	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
+	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_masks *input_mask)
 {
 	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
+int
+ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
+{
+	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
+	    mode <= RTE_FDIR_MODE_PERFECT)
+		return fdir_set_input_mask_82599(dev);
+	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
+		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		return fdir_set_input_mask_x550(dev);
+
+	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
+	return -ENOTSUP;
+}
+
+static int
+fdir_set_input_mask(struct rte_eth_dev *dev,
+		    const struct rte_eth_fdir_masks *input_mask)
+{
+	int ret;
+
+	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
+	if (ret)
+		return ret;
+
+	return ixgbe_fdir_set_input_mask(dev);
+}
+
 /*
  * ixgbe_check_fdir_flex_conf - check if the flex payload and mask configuration
  * arguments are valid
@@ -1135,23 +1183,40 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 	return 0;
 }
 
-/*
- * ixgbe_add_del_fdir_filter - add or remove a flow director filter.
- * @dev: pointer to the structure rte_eth_dev
- * @fdir_filter: fdir filter entry
- * @del: 1 - delete, 0 - add
- * @update: 1 - update
- */
 static int
-ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
-			  const struct rte_eth_fdir_filter *fdir_filter,
+ixgbe_interpret_fdir_filter(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_filter *fdir_filter,
+			    struct ixgbe_fdir_rule *rule)
+{
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	int err;
+
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+
+	err = ixgbe_fdir_filter_to_atr_input(fdir_filter,
+					     &rule->ixgbe_fdir,
+					     fdir_mode);
+	if (err)
+		return err;
+
+	rule->mode = fdir_mode;
+	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT)
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	rule->queue = fdir_filter->action.rx_queue;
+	rule->soft_id = fdir_filter->soft_id;
+
+	return 0;
+}
+
+int
+ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
-	union ixgbe_atr_input input;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
@@ -1161,7 +1226,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
-	if (fdir_mode == RTE_FDIR_MODE_NONE)
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
 		return -ENOTSUP;
 
 	/*
@@ -1174,7 +1240,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    (hw->mac.type == ixgbe_mac_X550 ||
 	     hw->mac.type == ixgbe_mac_X550EM_x ||
 	     hw->mac.type == ixgbe_mac_X550EM_a) &&
-	    (fdir_filter->input.flow_type ==
+	    (rule->ixgbe_fdir.formatted.flow_type ==
 	     RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) &&
 	    (info->mask.src_port_mask != 0 ||
 	     info->mask.dst_port_mask != 0)) {
@@ -1188,29 +1254,23 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
 		is_perfect = TRUE;
 
-	memset(&input, 0, sizeof(input));
-
-	err = ixgbe_fdir_filter_to_atr_input(fdir_filter, &input,
-					     fdir_mode);
-	if (err)
-		return err;
-
 	if (is_perfect) {
-		if (input.formatted.flow_type & IXGBE_ATR_L4TYPE_IPV6_MASK) {
+		if (rule->ixgbe_fdir.formatted.flow_type &
+		    IXGBE_ATR_L4TYPE_IPV6_MASK) {
 			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
 				    " perfect mode!");
 			return -ENOTSUP;
 		}
-		fdirhash = atr_compute_perfect_hash_82599(&input,
+		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
 							  dev->data->dev_conf.fdir_conf.pballoc);
-		fdirhash |= fdir_filter->soft_id <<
+		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
-		fdirhash = atr_compute_sig_hash_82599(&input,
+		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
-		err = ixgbe_remove_fdir_filter(info, &input);
+		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 		if (err < 0)
 			return err;
 
@@ -1223,7 +1283,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 	/* add or update an fdir filter*/
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
-	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT) {
+	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
 			queue = dev->data->dev_conf.fdir_conf.drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
@@ -1232,13 +1292,12 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 				    " signature mode.");
 			return -EINVAL;
 		}
-	} else if (fdir_filter->action.behavior == RTE_ETH_FDIR_ACCEPT &&
-		   fdir_filter->action.rx_queue < IXGBE_MAX_RX_QUEUE_NUM)
-		queue = (uint8_t)fdir_filter->action.rx_queue;
+	} else if (rule->queue < IXGBE_MAX_RX_QUEUE_NUM)
+		queue = (uint8_t)rule->queue;
 	else
 		return -EINVAL;
 
-	node = ixgbe_fdir_filter_lookup(info, &input);
+	node = ixgbe_fdir_filter_lookup(info, &rule->ixgbe_fdir);
 	if (node) {
 		if (update) {
 			node->fdirflags = fdircmd_flags;
@@ -1256,7 +1315,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		if (!node)
 			return -ENOMEM;
 		(void)rte_memcpy(&node->ixgbe_fdir,
-				 &input,
+				 &rule->ixgbe_fdir,
 				 sizeof(union ixgbe_atr_input));
 		node->fdirflags = fdircmd_flags;
 		node->fdirhash = fdirhash;
@@ -1270,18 +1329,19 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 
 	if (is_perfect) {
-		err = fdir_write_perfect_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash,
-						      fdir_mode);
+		err = fdir_write_perfect_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash, fdir_mode);
 	} else {
-		err = fdir_add_signature_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash);
+		err = fdir_add_signature_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash);
 	}
 	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
 
 		if (add_node)
-			(void)ixgbe_remove_fdir_filter(info, &input);
+			(void)ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
 	}
@@ -1289,6 +1349,29 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return err;
 }
 
+/* ixgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @fdir_filter: fdir filter entry
+ * @del: 1 - delete, 0 - add
+ * @update: 1 - update
+ */
+static int
+ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+			  const struct rte_eth_fdir_filter *fdir_filter,
+			  bool del,
+			  bool update)
+{
+	struct ixgbe_fdir_rule rule;
+	int err;
+
+	err = ixgbe_interpret_fdir_filter(dev, fdir_filter, &rule);
+
+	if (err)
+		return err;
+
+	return ixgbe_fdir_filter_program(dev, &rule, del, update);
+}
+
 static int
 ixgbe_fdir_flush(struct rte_eth_dev *dev)
 {
@@ -1522,19 +1605,23 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
+	struct ixgbe_fdir_filter *filter_flag;
 	int ret = 0;
 
 	/* flush flow director */
 	rte_hash_reset(fdir_info->hash_handle);
 	memset(fdir_info->hash_map, 0,
 	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	filter_flag = TAILQ_FIRST(&fdir_info->fdir_list);
 	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
 		TAILQ_REMOVE(&fdir_info->fdir_list,
 			     fdir_filter,
 			     entries);
 		rte_free(fdir_filter);
 	}
-	ret = ixgbe_fdir_flush(dev);
+
+	if (filter_flag != NULL)
+		ret = ixgbe_fdir_flush(dev);
 
 	return ret;
 }
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 6e1c0bc..61f4cff 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -127,6 +127,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 			struct rte_eth_l2_tunnel_conf *rule,
 			struct rte_flow_error *error);
 static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1205,39 +1230,1180 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Parse to get the attr and action info of flow director rule. */
+static int
+ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  struct ixgbe_fdir_rule *rule,
+			  struct rte_flow_error *error)
+{
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_mark *mark;
+	uint32_t index;
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE or DROP. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		rule->queue = act_q->index;
+	} else { /* drop */
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	}
+
+	/* check if the next not void item is MARK */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) &&
+		(act->type != RTE_FLOW_ACTION_TYPE_END)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	rule->soft_id = 0;
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_MARK) {
+		mark = (const struct rte_flow_action_mark *)act->conf;
+		rule->soft_id = mark->id;
+		index++;
+		NEXT_ITEM_OF_ACTION(act, actions, index);
+	}
+
+	/* check if the next not void item is END */
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
 /**
- * Check if the flow rule is supported by ixgbe.
- * It only checks the format. It doesn't guarantee the rule can be programmed
- * into the HW, because there may not be enough room for the rule.
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule,
+ * and if so, get the flow director filter info.
+ * UDP/TCP/SCTP PATTERN:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP or SCTP.
+ * The next not void item must be END.
+ * MAC VLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be MAC VLAN.
+ * The next not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void action, which is optional, should be MARK
+ * (mark_id is a uint32_t number).
+ * The next not void action should be END.
  */
 static int
-ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
 {
-	struct rte_eth_ntuple_filter ntuple_filter;
-	struct rte_eth_ethertype_filter ethertype_filter;
-	struct rte_eth_syn_filter syn_filter;
-	struct rte_eth_l2_tunnel_conf l2_tn_filter;
-	int ret;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
 
-	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
-	ret = ixgbe_parse_ntuple_filter(attr, pattern,
-				actions, &ntuple_filter, error);
-	if (!ret)
-		return 0;
+	uint32_t index, j;
 
-	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
-	ret = ixgbe_parse_ethertype_filter(attr, pattern,
-				actions, &ethertype_filter, error);
-	if (!ret)
-		return 0;
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
 
-	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
-	ret = ixgbe_parse_syn_filter(attr, pattern,
-				actions, &syn_filter, error);
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so we do not need to do anything for the omitted fields later.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * ETH, IPv4, TCP, UDP or SCTP.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT;
+
+	/* The "last" point of a range is not supported */
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only VLAN and dst MAC address are supported;
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+			/* Get the dst MAC. */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				rule->ixgbe_fdir.formatted.inner_mac[j] =
+					eth_spec->dst.addr_bytes[j];
+			}
+		}
+
+
+		if (item->mask) {
+			/* If ethernet has meaning, it means MAC VLAN mode. */
+			rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+
+			rule->b_mask = TRUE;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Ether type should be masked. */
+			if (eth_mask->type) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/**
+			 * src MAC address must be masked out,
+			 * and the dst MAC address cannot be masked.
+			 */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				if (eth_mask->src.addr_bytes[j] ||
+					eth_mask->dst.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					sizeof(struct ixgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* With no VLAN item, treat the VLAN TCI as full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+		/**
+		 * If both spec and mask are NULL, it means
+		 * we don't care about ETH. Do nothing.
+		 */
+
+		/**
+		 * Check if the next not void item is vlan or ipv4.
+		 * IPv6 is not supported.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		}
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		/* The "last" point of a range is not supported */
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the IP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_IPV4;
+		/* The "last" point of a range is not supported */
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		ipv4_mask =
+			(const struct rte_flow_item_ipv4 *)item->mask;
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.next_proto_id ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			rule->ixgbe_fdir.formatted.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->ixgbe_fdir.formatted.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_TCPV4;
+		/* The "last" point of a range is not supported */
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				tcp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_UDPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				udp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				udp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_SCTPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask =
+			(const struct rte_flow_item_sctp *)item->mask;
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec =
+				(const struct rte_flow_item_sctp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				sctp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+	}
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
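+/* NVGRE uses the Transparent Ethernet Bridging type (0x6558) in GRE. */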
+#define NVGRE_PROTOCOL 0x6558
+
+/**
+ * Parse the rule to see if it is a VxLAN or NVGRE flow director rule,
+ * and fill in the flow director filter info along the way.
+ * VxLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4/ IPV6.
+ * The third not void item must be UDP.
+ * The fourth not void item must be VXLAN.
+ * The fifth not void item must be ETH (the inner MAC address).
+ * The next not void item may be VLAN; the last must be END.
+ * NVGRE PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4/ IPV6.
+ * The third not void item must be NVGRE.
+ * The fourth not void item must be ETH (the inner MAC address).
+ * The next not void item may be VLAN; the last must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
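+ * A VxLAN rule example, showing spec/last/mask usage (illustrative
+ * values only, not from this patch):
+ * ETH:   spec NULL, last NULL, mask NULL (only describes the stack)
+ * IPV4:  spec NULL, last NULL, mask NULL
+ * UDP:   spec NULL, last NULL, mask NULL
+ * VXLAN: spec->vni = {0x12, 0x34, 0x56}, last NULL,
+ *        mask->vni = {0xFF, 0xFF, 0xFF} (VNI fully masked or ignored)
+ * ETH:   spec->dst = the inner MAC, last NULL,
+ *        mask->dst = FF:FF:FF:FF:FF:FF, mask->src all zero
+ * VLAN (optional), then END.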
+ */
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+	uint32_t index, j;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value. So, we need not do anything for the not provided fields later.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or IPv6 or UDP or VxLAN.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+	/* Skip MAC. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is IPv4 or IPv6. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is UDP or NVGRE. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip UDP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is VxLAN. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the VxLAN info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_VXLAN;
+
+		/* Only care about VNI, others should be masked. */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		vxlan_mask =
+			(const struct rte_flow_item_vxlan *)item->mask;
+		if (vxlan_mask->flags) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* VNI must be totally masked or not. */
+		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
+			vxlan_mask->vni[2]) &&
+			((vxlan_mask->vni[0] != 0xFF) ||
+			(vxlan_mask->vni[1] != 0xFF) ||
+				(vxlan_mask->vni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
+			RTE_DIM(vxlan_mask->vni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			vxlan_spec = (const struct rte_flow_item_vxlan *)
+					item->spec;
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* Get the NVGRE info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_NVGRE;
+
+		/**
+		 * Only care about flags0, flags1, protocol and TNI,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		nvgre_mask =
+			(const struct rte_flow_item_nvgre *)item->mask;
+		if (nvgre_mask->flow_id) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (nvgre_mask->c_k_s_rsvd0_ver !=
+			rte_cpu_to_be_16(0x3000) ||
+		    nvgre_mask->protocol != 0xFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI must be totally masked or not. */
+		if ((nvgre_mask->tni[0] || nvgre_mask->tni[1] ||
+		    nvgre_mask->tni[2]) &&
+		    ((nvgre_mask->tni[0] != 0xFF) ||
+		    (nvgre_mask->tni[1] != 0xFF) ||
+		    (nvgre_mask->tni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI is a 24-bit field. */
+		rte_memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni,
+			RTE_DIM(nvgre_mask->tni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			if (nvgre_spec->c_k_s_rsvd0_ver !=
+			    rte_cpu_to_be_16(0x2000) ||
+			    nvgre_spec->protocol !=
+			    rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			/* TNI is a 24-bit field. */
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+			nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* check if the next not void item is MAC */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/**
+	 * Only support vlan and dst MAC address,
+	 * others should be masked.
+	 */
+
+	if (!item->mask) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+	rule->b_mask = TRUE;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Ether type should be masked. */
+	if (eth_mask->type) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/* src MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (eth_mask->src.addr_bytes[j]) {
+			memset(rule, 0,
+			       sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+	rule->mask.mac_addr_byte_mask = 0;
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		/* It's a per byte mask. */
+		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+			rule->mask.mac_addr_byte_mask |= 0x1 << j;
+		} else if (eth_mask->dst.addr_bytes[j]) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* When no vlan, considered as full mask. */
+	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+
+	if (item->spec) {
+		rule->b_spec = TRUE;
+		eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+		/* Get the dst MAC. */
+		for (j = 0; j < ETHER_ADDR_LEN; j++) {
+			rule->ixgbe_fdir.formatted.inner_mac[j] =
+				eth_spec->dst.addr_bytes[j];
+		}
+	}
+
+	/**
+	 * Check if the next not void item is vlan or ipv4.
+	 * IPv6 is not supported.
+	 */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
+		(item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/**
+	 * If no VLAN item is present, we don't care about the VLAN.
+	 * Do nothing.
+	 */
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
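+/**
+ * Parse an fdir rule and check that it matches the fdir mode the port
+ * was configured with.
+ */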
+static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	ret = ixgbe_parse_fdir_filter(attr, pattern, actions,
+				      rule, error);
+	if (ret)
+		return ret;
+
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
+		return -ENOTSUP;
+
+	return ret;
+}
+
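+/**
+ * Try the normal fdir parser first; if that fails, fall back to the
+ * VxLAN/NVGRE tunnel parser.
+ */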
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = ixgbe_parse_fdir_filter_normal(attr, pattern,
+					actions, rule, error);
+
+	if (!ret)
+		return 0;
+
+	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
+					actions, rule, error);
+
+	return ret;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_validate_fdir_filter(dev, attr, pattern,
+				actions, &fdir_rule, error);
 	if (!ret)
 		return 0;
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 16/18] net/ixgbe: create consistent filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (14 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 15/18] net/ixgbe: parse flow director filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 17/18] net/ixgbe: destroy " Wei Zhao
                           ` (4 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to create flow rules for all the filter
types supported by the consistent filter API.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
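
From the application side the new op is reached through rte_flow_create().
A minimal sketch (illustrative only, not part of the patch; it assumes a
valid port_id and the rte_flow.h definitions of this series) that programs
an ethertype filter steering ARP frames to queue 1:

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		.type = rte_cpu_to_be_16(0x0806), /* ARP */
	};
	struct rte_flow_item_eth eth_mask = { .type = 0xFFFF };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
	if (!flow)
		printf("flow creation failed: %s\n",
		       err.message ? err.message : "(no reason given)");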
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  25 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  61 ++++++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 194 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 266 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4b9f4aa..3648e12 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -305,9 +305,6 @@ static void ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
 static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static void ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 					     struct ether_addr *mac_addr);
-static int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
-			struct rte_eth_syn_filter *filter,
-			bool add);
 static int ixgbe_syn_filter_get(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter);
 static int ixgbe_syn_filter_handle(struct rte_eth_dev *dev,
@@ -317,17 +314,11 @@ static int ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
 static void ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
-static int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ntuple_filter *filter,
-			bool add);
 static int ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
 static int ixgbe_get_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *filter);
-static int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ethertype_filter *filter,
-			bool add);
 static int ixgbe_ethertype_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
@@ -1292,6 +1283,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* initialize l2 tunnel filter list & hash */
 	ixgbe_l2_tn_filter_init(eth_dev);
+
+	TAILQ_INIT(&filter_ntuple_list);
+	TAILQ_INIT(&filter_ethertype_list);
+	TAILQ_INIT(&filter_syn_list);
+	TAILQ_INIT(&filter_fdir_list);
+	TAILQ_INIT(&filter_l2_tunnel_list);
+	TAILQ_INIT(&ixgbe_flow_list);
+
 	return 0;
 }
 
@@ -5742,7 +5741,7 @@ ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr)
 		return -ENOTSUP;\
 } while (0)
 
-static int
+int
 ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter,
 			bool add)
@@ -6121,7 +6120,7 @@ ntuple_filter_to_5tuple(struct rte_eth_ntuple_filter *filter,
  *    - On success, zero.
  *    - On failure, a negative value.
  */
-static int
+int
 ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *ntuple_filter,
 			bool add)
@@ -6266,7 +6265,7 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static int
+int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
 			bool add)
@@ -7345,7 +7344,7 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 }
 
 /* Add l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index dee159f..6908c3d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -329,6 +329,54 @@ struct ixgbe_l2_tn_info {
 	bool e_tag_ether_type; /* ether type for e-tag */
 };
 
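+/**
+ * A generic flow handle: records which filter type the rule was
+ * programmed as and points at the type-specific SW list entry.
+ */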
+struct rte_flow {
+	enum rte_filter_type filter_type;
+	void *rule;
+};
+/* ntuple filter list structure */
+struct ixgbe_ntuple_filter_ele {
+	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
+	struct rte_eth_ntuple_filter filter_info;
+};
+/* ethertype filter list structure */
+struct ixgbe_ethertype_filter_ele {
+	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
+	struct rte_eth_ethertype_filter filter_info;
+};
+/* syn filter list structure */
+struct ixgbe_eth_syn_filter_ele {
+	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
+	struct rte_eth_syn_filter filter_info;
+};
+/* fdir filter list structure */
+struct ixgbe_fdir_rule_ele {
+	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
+	struct ixgbe_fdir_rule filter_info;
+};
+/* l2_tunnel filter list structure */
+struct ixgbe_eth_l2_tunnel_conf_ele {
+	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
+	struct rte_eth_l2_tunnel_conf filter_info;
+};
+/* ixgbe_flow memory list structure */
+struct ixgbe_flow_mem {
+	TAILQ_ENTRY(ixgbe_flow_mem) entries;
+	struct rte_flow *flow;
+};
+
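+/* SW lists mirroring every filter created through the generic flow API. */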
+TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
+struct ixgbe_ntuple_filter_list filter_ntuple_list;
+TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
+struct ixgbe_ethertype_filter_list filter_ethertype_list;
+TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
+struct ixgbe_syn_filter_list filter_syn_list;
+TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
+struct ixgbe_fdir_rule_filter_list filter_fdir_list;
+TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
+struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
+TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
+struct ixgbe_flow_mem_list ixgbe_flow_list;
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -487,6 +535,19 @@ uint32_t ixgbe_rssrk_reg_get(enum ixgbe_mac_type mac_type, uint8_t i);
 
 bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
 
+int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ntuple_filter *filter,
+			bool add);
+int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ethertype_filter *filter,
+			bool add);
+int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+			struct rte_eth_syn_filter *filter,
+			bool add);
+int
+ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 61f4cff..908293a 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -157,10 +157,15 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
-	NULL,
+	ixgbe_flow_create,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
@@ -2365,6 +2370,193 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Create a flow rule.
+ * Theoretically one rule can match more than one filter type.
+ * We will let it use the filter type it hits first.
+ * So, the sequence matters.
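+ * The order tried here is: n-tuple, ethertype, TCP SYN, fdir, L2 tunnel.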
+ */
+static struct rte_flow *
+ixgbe_flow_create(struct rte_eth_dev *dev,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct rte_flow *flow = NULL;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
+			sizeof(struct ixgbe_flow_mem), 0);
+	if (!ixgbe_flow_mem_ptr) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		rte_free(flow);
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr->flow = flow;
+	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+			actions, &ntuple_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
+		if (!ret) {
+			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
+				sizeof(struct ixgbe_ntuple_filter_ele), 0);
+			(void)rte_memcpy(&ntuple_filter_ptr->filter_info,
+				&ntuple_filter,
+				sizeof(struct rte_eth_ntuple_filter));
+			TAILQ_INSERT_TAIL(&filter_ntuple_list,
+				ntuple_filter_ptr, entries);
+			flow->rule = ntuple_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, TRUE);
+		if (!ret) {
+			ethertype_filter_ptr = rte_zmalloc(
+				"ixgbe_ethertype_filter",
+				sizeof(struct ixgbe_ethertype_filter_ele), 0);
+			(void)rte_memcpy(&ethertype_filter_ptr->filter_info,
+				&ethertype_filter,
+				sizeof(struct rte_eth_ethertype_filter));
+			TAILQ_INSERT_TAIL(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			flow->rule = ethertype_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter, error);
+	if (!ret) {
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
+		if (!ret) {
+			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
+				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
+			(void)rte_memcpy(&syn_filter_ptr->filter_info,
+				&syn_filter,
+				sizeof(struct rte_eth_syn_filter));
+			TAILQ_INSERT_TAIL(&filter_syn_list,
+				syn_filter_ptr,
+				entries);
+			flow->rule = syn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_SYN;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_parse_fdir_filter(attr, pattern,
+				actions, &fdir_rule, error);
+	if (!ret) {
+		/* A mask cannot be deleted. */
+		if (fdir_rule.b_mask) {
+			if (!fdir_info->mask_added) {
+				/* It's the first time the mask is set. */
+				rte_memcpy(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				ret = ixgbe_fdir_set_input_mask(dev);
+				if (ret)
+					goto out;
+
+				fdir_info->mask_added = TRUE;
+			} else {
+				/**
+				 * Only support one global mask,
+				 * all the masks should be the same.
+				 */
+				ret = memcmp(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				if (ret)
+					goto out;
+			}
+		}
+
+		if (fdir_rule.b_spec) {
+			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
+					FALSE, FALSE);
+			if (!ret) {
+				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
+					sizeof(struct ixgbe_fdir_rule_ele), 0);
+				(void)rte_memcpy(&fdir_rule_ptr->filter_info,
+					&fdir_rule,
+					sizeof(struct ixgbe_fdir_rule));
+				TAILQ_INSERT_TAIL(&filter_fdir_list,
+					fdir_rule_ptr, entries);
+				flow->rule = fdir_rule_ptr;
+				flow->filter_type = RTE_ETH_FILTER_FDIR;
+
+				return flow;
+			}
+
+			if (ret)
+				goto out;
+		}
+
+		goto out;
+	}
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+					actions, &l2_tn_filter, error);
+	if (!ret) {
+		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
+		if (!ret) {
+			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
+				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
+			(void)rte_memcpy(&l2_tn_filter_ptr->filter_info,
+				&l2_tn_filter,
+				sizeof(struct rte_eth_l2_tunnel_conf));
+			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			flow->rule = l2_tn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
+			return flow;
+		}
+	}
+
+out:
+	TAILQ_REMOVE(&ixgbe_flow_list,
+		ixgbe_flow_mem_ptr, entries);
+	rte_free(ixgbe_flow_mem_ptr);
+	rte_free(flow);
+	return NULL;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 17/18] net/ixgbe: destroy consistent filter
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (15 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 16/18] net/ixgbe: create consistent filter Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12  9:17         ` [PATCH v5 18/18] net/ixgbe: flush all the filter list Wei Zhao
                           ` (3 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to destroy a flow rule.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
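
A minimal sketch of the application side (illustrative only, not part of
the patch; it assumes a valid port_id and a flow handle returned by
rte_flow_create()):

	struct rte_flow_error err;
	int ret;

	ret = rte_flow_destroy(port_id, flow, &err);
	if (ret)
		printf("flow destroy failed: %s\n",
		       err.message ? err.message : "(no reason given)");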
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 drivers/net/ixgbe/ixgbe_flow.c   | 117 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3648e12..1112a3e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7401,7 +7401,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 }
 
 /* Delete l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 6908c3d..01c18cd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -548,6 +548,9 @@ int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore);
+int
+ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 908293a..175543c 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -162,11 +162,14 @@ static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static int ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
 	ixgbe_flow_create,
-	NULL,
+	ixgbe_flow_destroy,
 	ixgbe_flow_flush,
 	NULL,
 };
@@ -2606,6 +2609,118 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Destroy a flow rule on ixgbe. */
+static int
+ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_flow *pmd_flow = flow;
+	enum rte_filter_type filter_type = pmd_flow->filter_type;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_NTUPLE:
+		ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ntuple_filter,
+			&ntuple_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ntuple_filter));
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ntuple_list,
+			ntuple_filter_ptr, entries);
+			rte_free(ntuple_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ethertype_filter,
+			&ethertype_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ethertype_filter));
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			rte_free(ethertype_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_SYN:
+		syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&syn_filter,
+			&syn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_syn_filter));
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_syn_list,
+				syn_filter_ptr, entries);
+			rte_free(syn_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_FDIR:
+		fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
+		(void)rte_memcpy(&fdir_rule,
+			&fdir_rule_ptr->filter_info,
+			sizeof(struct ixgbe_fdir_rule));
+		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_fdir_list,
+				fdir_rule_ptr, entries);
+			rte_free(fdir_rule_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_L2_TUNNEL:
+		l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_l2_tunnel_conf));
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			rte_free(l2_tn_filter_ptr);
+		}
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to destroy flow");
+		return ret;
+	}
+
+	TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
+		if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
+			TAILQ_REMOVE(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+			rte_free(ixgbe_flow_mem_ptr);
+		}
+	}
+	rte_free(flow);
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v5 18/18] net/ixgbe: flush all the filter list
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (16 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 17/18] net/ixgbe: destroy " Wei Zhao
@ 2017-01-12  9:17         ` Wei Zhao
  2017-01-12 11:38         ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Xing, Beilei
                           ` (2 subsequent siblings)
  20 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-12  9:17 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to flush all the filter lists on a port.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
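
A minimal sketch of the application side (illustrative only, not part of
the patch; it assumes a valid port_id):

	struct rte_flow_error err;

	if (rte_flow_flush(port_id, &err))
		printf("flow flush failed: %s\n",
		       err.message ? err.message : "(no reason given)");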
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  3 +++
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_flow.c   | 56 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1112a3e..f46507b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1341,6 +1341,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* Remove all ntuple filters of the device */
 	ixgbe_ntuple_filter_uninit(eth_dev);
 
+	/* clear all the filter lists */
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 01c18cd..5b31149 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -551,6 +551,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
+void ixgbe_filterlist_flush(void);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 175543c..8823f75 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2372,6 +2372,60 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 	return ret;
 }
 
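+/* Release every entry on the SW filter lists and every flow handle. */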
+void
+ixgbe_filterlist_flush(void)
+{
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
+		TAILQ_REMOVE(&filter_ntuple_list,
+				 ntuple_filter_ptr,
+				 entries);
+		rte_free(ntuple_filter_ptr);
+	}
+
+	while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
+		TAILQ_REMOVE(&filter_ethertype_list,
+				 ethertype_filter_ptr,
+				 entries);
+		rte_free(ethertype_filter_ptr);
+	}
+
+	while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
+		TAILQ_REMOVE(&filter_syn_list,
+				 syn_filter_ptr,
+				 entries);
+		rte_free(syn_filter_ptr);
+	}
+
+	while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
+		TAILQ_REMOVE(&filter_l2_tunnel_list,
+				 l2_tn_filter_ptr,
+				 entries);
+		rte_free(l2_tn_filter_ptr);
+	}
+
+	while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
+		TAILQ_REMOVE(&filter_fdir_list,
+				 fdir_rule_ptr,
+				 entries);
+		rte_free(fdir_rule_ptr);
+	}
+
+	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
+		TAILQ_REMOVE(&ixgbe_flow_list,
+				 ixgbe_flow_mem_ptr,
+				 entries);
+		rte_free(ixgbe_flow_mem_ptr->flow);
+		rte_free(ixgbe_flow_mem_ptr);
+	}
+}
+
 /**
  * Create or destroy a flow rule.
  * Theorically one rule can match more than one filters.
@@ -2748,5 +2802,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 		return ret;
 	}
 
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* Re: [PATCH v5 00/18] net/ixgbe: Consistent filter API
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (17 preceding siblings ...)
  2017-01-12  9:17         ` [PATCH v5 18/18] net/ixgbe: flush all the filter list Wei Zhao
@ 2017-01-12 11:38         ` Xing, Beilei
  2017-01-13  6:27           ` Dai, Wei
  2017-01-12 15:43         ` Ferruh Yigit
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
  20 siblings, 1 reply; 149+ messages in thread
From: Xing, Beilei @ 2017-01-12 11:38 UTC (permalink / raw)
  To: Zhao1, Wei, dev


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> Sent: Thursday, January 12, 2017 5:17 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v5 00/18] net/ixgbe: Consistent filter API
> 
> The patches mainly finish following functions:
> 1) Store and restore all kinds of filters.
> 2) Parse all kinds of filters.
> 3) Add flow validate function.
> 4) Add flow create function.
> 5) Add flow destroy function.
> 6) Add flow flush function.
> 
> v2 changes:
>  fix git log error
>  Modify some function call relationship
>  Change return value type of all parse flow functions  Update error info for all
> flow ops  Add ixgbe_filterlist_flush to flush flows and rules created
> 
> v3 change:
>  add new file ixgbe_flow.c to store generic API parser related functions  add
> more comment about pattern and action rules  add attr check in parser
> functions  change struct name ixgbe_flow to rte_flow  change SYN to TCP
> SYN  change to use memset initizlize struct ixgbe_filter_info  break down
> filter uninit process to 3 indepedent functions in eth_ixgbe_dev_uninit()
> change struct rte_flow_item_nvgre definition  change struct
> rte_flow_item_e_tag definition  fix one bug in function ixgbe_dev_filter_ctrl
> add goto in function ixgbe_flow_create  delete some useless initialization
> eliminate some git log check warning
> 
> v4 change:
>  fix some check patch warning
> 
> v5 change:
>  fix some git log warning
> 
> zhao wei (18):
>   net/ixgbe: store TCP SYN filter
>   net/ixgbe: store flow director filter
>   net/ixgbe: store L2 tunnel filter
>   net/ixgbe: restore n-tuple filter
>   net/ixgbe: restore ether type filter
>   net/ixgbe: restore TCP SYN filter
>   net/ixgbe: restore flow director filter
>   net/ixgbe: restore L2 tunnel filter
>   net/ixgbe: store and restore L2 tunnel configuration
>   net/ixgbe: flush all the filters
>   net/ixgbe: parse n-tuple filter
>   net/ixgbe: parse ethertype filter
>   net/ixgbe: parse TCP SYN filter
>   net/ixgbe: parse L2 tunnel filter
>   net/ixgbe: parse flow director filter
>   net/ixgbe: create consistent filter
>   net/ixgbe: destroy consistent filter
>   net/ixgbe: flush all the filter list
> 
>  drivers/net/ixgbe/Makefile       |    2 +
>  drivers/net/ixgbe/ixgbe_ethdev.c |  667 +++++++--
> drivers/net/ixgbe/ixgbe_ethdev.h |  203 ++-
>  drivers/net/ixgbe/ixgbe_fdir.c   |  407 ++++--
>  drivers/net/ixgbe/ixgbe_flow.c   | 2808
> ++++++++++++++++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
>  lib/librte_ether/rte_flow.h      |   48 +
>  7 files changed, 3950 insertions(+), 211 deletions(-)  create mode 100644
> drivers/net/ixgbe/ixgbe_flow.c
> 
Acked-by: Beilei Xing <beilei.xing@intel.com>

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v5 12/18] net/ixgbe: parse ethertype filter
  2017-01-12  9:17         ` [PATCH v5 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2017-01-12 15:39           ` Ferruh Yigit
  0 siblings, 0 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-12 15:39 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 1/12/2017 9:17 AM, Wei Zhao wrote:
> check if the rule is an ethertype rule, and get the ethertype info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_flow.c | 278 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 278 insertions(+)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
> index ac9f663..2f97129 100644
> --- a/drivers/net/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/ixgbe/ixgbe_flow.c
> @@ -90,6 +90,18 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
>  					struct rte_eth_ntuple_filter *filter,
>  					struct rte_flow_error *error);
>  static int
> +cons_parse_ethertype_filter(const struct rte_flow_attr *attr,

Although this is a static function, you may prefer to keep the ixgbe_
namespace prefix in the function name.

> +			    const struct rte_flow_item *pattern,
> +			    const struct rte_flow_action *actions,
> +			    struct rte_eth_ethertype_filter *filter,
> +			    struct rte_flow_error *error);
> +static int
> +ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
> +				const struct rte_flow_item pattern[],
> +				const struct rte_flow_action actions[],
> +				struct rte_eth_ethertype_filter *filter,
> +				struct rte_flow_error *error);
> +static int
>  ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
>  		const struct rte_flow_attr *attr,
>  		const struct rte_flow_item pattern[],
> @@ -480,6 +492,265 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
>  }
>  
>  /**
> + * Parse the rule to see if it is an ethertype rule.
> + * And get the ethertype filter info BTW.
> + * pattern:
> + * The first not void item can be ETH.
> + * The next not void item must be END.

This documents item->type; can you please also document the expected/valid
item->spec, item->last and item->mask values?

Same for all parse functions.

I am aware this is a lot of comments when you consider all the parse
functions, but I believe it can be very useful in the future.
And it is very unlikely to be done later if it is not done now.

Thanks,
ferruh

> + * action:
> + * The first not void action should be QUEUE.
> + * The next not void action should be END.
> + */
> +static int
> +cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
> +			    const struct rte_flow_item *pattern,
> +			    const struct rte_flow_action *actions,
> +			    struct rte_eth_ethertype_filter *filter,
> +			    struct rte_flow_error *error)

<...>
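
For reference, the kind of spec/last/mask documentation requested above
could look like the following sketch (illustrative wording only, not the
actual v6 text):

    /**
     * pattern:
     * The first not void item can be ETH.
     * ETH: spec and mask must select a full ether type match, e.g.
     *      spec->type = 0x0806 with mask->type = 0xFFFF; the src and
     *      dst MAC masks must be zero; item->last is not supported.
     * The next not void item must be END.
     */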

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v5 11/18] net/ixgbe: parse n-tuple filter
  2017-01-12  9:17         ` [PATCH v5 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2017-01-12 15:40           ` Ferruh Yigit
  0 siblings, 0 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-12 15:40 UTC (permalink / raw)
  To: Wei Zhao, dev; +Cc: Wenzhuo Lu

On 1/12/2017 9:17 AM, Wei Zhao wrote:
> Add rule validate function and check if the rule is an n-tuple rule,
> and get the n-tuple info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
<...>
>  /*  Destroy all flow rules associated with a port on ixgbe. */
>  static int
>  ixgbe_flow_flush(struct rte_eth_dev *dev,
> @@ -99,15 +516,17 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
>  
>  	ret = ixgbe_clear_all_fdir_filter(dev);
>  	if (ret < 0) {
> -		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
> -					NULL, "Failed to flush rule");
> +		rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_HANDLE,
> +				NULL, "Failed to flush rule");
>  		return ret;
>  	}
>  
>  	ret = ixgbe_clear_all_l2_tn_filter(dev);
>  	if (ret < 0) {
> -		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
> -					NULL, "Failed to flush rule");
> +		rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_HANDLE,
> +				NULL, "Failed to flush rule");
>  		return ret;
>  	}

These are syntax modifications to the updates added in the previous patch.
They can be fixed where they were first added.

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v5 00/18] net/ixgbe: Consistent filter API
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (18 preceding siblings ...)
  2017-01-12 11:38         ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Xing, Beilei
@ 2017-01-12 15:43         ` Ferruh Yigit
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
  20 siblings, 0 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-12 15:43 UTC (permalink / raw)
  To: Wei Zhao, dev

Hi Wei,

On 1/12/2017 9:17 AM, Wei Zhao wrote:
> The patches mainly finish following functions:
> 1) Store and restore all kinds of filters.
> 2) Parse all kinds of filters.
> 3) Add flow validate function.
> 4) Add flow create function.
> 5) Add flow destroy function.
> 6) Add flow flush function.

Overall the patchset looks good; the only major comment is to add more
comments on the parse_filter functions for expected/valid rules.

You can keep Beilei's Ack for next version of the patchset.

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v5 00/18] net/ixgbe: Consistent filter API
  2017-01-12 11:38         ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Xing, Beilei
@ 2017-01-13  6:27           ` Dai, Wei
  0 siblings, 0 replies; 149+ messages in thread
From: Dai, Wei @ 2017-01-13  6:27 UTC (permalink / raw)
  To: Xing, Beilei, Zhao1, Wei, dev

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xing, Beilei
> Sent: Thursday, January 12, 2017 7:39 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v5 00/18] net/ixgbe: Consistent filter API
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Zhao
> > Sent: Thursday, January 12, 2017 5:17 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v5 00/18] net/ixgbe: Consistent filter API
> >
> > The patches mainly finish following functions:
> > 1) Store and restore all kinds of filters.
> > 2) Parse all kinds of filters.
> > 3) Add flow validate function.
> > 4) Add flow create function.
> > 5) Add flow destroy function.
> > 6) Add flow flush function.
> >
> > v2 changes:
> >  fix git log error
> >  Modify some function call relationship  Change return value type of
> > all parse flow functions  Update error info for all flow ops  Add
> > ixgbe_filterlist_flush to flush flows and rules created
> >
> > v3 change:
> >  add new file ixgbe_flow.c to store generic API parser related
> > functions  add more comment about pattern and action rules  add attr
> > check in parser functions  change struct name ixgbe_flow to rte_flow
> > change SYN to TCP SYN  change to use memset initizlize struct
> > ixgbe_filter_info  break down filter uninit process to 3 indepedent
> > functions in eth_ixgbe_dev_uninit() change struct rte_flow_item_nvgre
> > definition  change struct rte_flow_item_e_tag definition  fix one bug
> > in function ixgbe_dev_filter_ctrl add goto in function
> > ixgbe_flow_create  delete some useless initialization eliminate some
> > git log check warning
> >
> > v4 change:
> >  fix some check patch warning
> >
> > v5 change:
> >  fix some git log warning
> >
> > zhao wei (18):
> >   net/ixgbe: store TCP SYN filter
> >   net/ixgbe: store flow director filter
> >   net/ixgbe: store L2 tunnel filter
> >   net/ixgbe: restore n-tuple filter
> >   net/ixgbe: restore ether type filter
> >   net/ixgbe: restore TCP SYN filter
> >   net/ixgbe: restore flow director filter
> >   net/ixgbe: restore L2 tunnel filter
> >   net/ixgbe: store and restore L2 tunnel configuration
> >   net/ixgbe: flush all the filters
> >   net/ixgbe: parse n-tuple filter
> >   net/ixgbe: parse ethertype filter
> >   net/ixgbe: parse TCP SYN filter
> >   net/ixgbe: parse L2 tunnel filter
> >   net/ixgbe: parse flow director filter
> >   net/ixgbe: create consistent filter
> >   net/ixgbe: destroy consistent filter
> >   net/ixgbe: flush all the filter list
> >
> >  drivers/net/ixgbe/Makefile       |    2 +
> >  drivers/net/ixgbe/ixgbe_ethdev.c |  667 +++++++--
> > drivers/net/ixgbe/ixgbe_ethdev.h |  203 ++-
> >  drivers/net/ixgbe/ixgbe_fdir.c   |  407 ++++--
> >  drivers/net/ixgbe/ixgbe_flow.c   | 2808 ++++++++++++++++++++++++++++++++++++++
> >  drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
> >  lib/librte_ether/rte_flow.h      |   48 +
> >  7 files changed, 3950 insertions(+), 211 deletions(-)
> >  create mode 100644 drivers/net/ixgbe/ixgbe_flow.c
> >
> Acked-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wei Dai <wei.dai@intel.com>

^ permalink raw reply	[flat|nested] 149+ messages in thread

* [PATCH v6 00/18] net/ixgbe: Consistent filter API
  2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
                           ` (19 preceding siblings ...)
  2017-01-12 15:43         ` Ferruh Yigit
@ 2017-01-13  8:12         ` Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
                             ` (18 more replies)
  20 siblings, 19 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:12 UTC (permalink / raw)
  To: dev

The patches mainly finish following functions:
1) Store and restore all kinds of filters.
2) Parse all kinds of filters.
3) Add flow validate function.
4) Add flow create function.
5) Add flow destroy function.
6) Add flow flush function.

v2 changes:
 fix git log error
 Modify some function call relationship
 Change return value type of all parse flow functions
 Update error info for all flow ops
 Add ixgbe_filterlist_flush to flush flows and rules created

v3 change:
 add new file ixgbe_flow.c to store generic API parser related functions
 add more comments about pattern and action rules
 add attr check in parser functions
 change struct name ixgbe_flow to rte_flow
 change SYN to TCP SYN
 change to use memset to initialize struct ixgbe_filter_info
 break down filter uninit process to 3 independent functions in eth_ixgbe_dev_uninit()
 change struct rte_flow_item_nvgre definition
 change struct rte_flow_item_e_tag definition
 fix one bug in function ixgbe_dev_filter_ctrl
 add goto in function ixgbe_flow_create
 delete some useless initialization
 eliminate some git log check warnings

v4 change:
 fix some check patch warning

v5 change:
 fix some git log warning

v6 change:
 add more comments on pattern rule examples, explaining the usage of spec,
 last and mask

zhao wei (18):
  net/ixgbe: store TCP SYN filter
  net/ixgbe: store flow director filter
  net/ixgbe: store L2 tunnel filter
  net/ixgbe: restore n-tuple filter
  net/ixgbe: restore ether type filter
  net/ixgbe: restore TCP SYN filter
  net/ixgbe: restore flow director filter
  net/ixgbe: restore L2 tunnel filter
  net/ixgbe: store and restore L2 tunnel configuration
  net/ixgbe: flush all the filters
  net/ixgbe: parse n-tuple filter
  net/ixgbe: parse ethertype filter
  net/ixgbe: parse TCP SYN filter
  net/ixgbe: parse L2 tunnel filter
  net/ixgbe: parse flow director filter
  net/ixgbe: create consistent filter
  net/ixgbe: destroy consistent filter
  net/ixgbe: flush all the filter list

 drivers/net/ixgbe/Makefile       |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  667 +++++++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  203 ++-
 drivers/net/ixgbe/ixgbe_fdir.c   |  407 ++++--
 drivers/net/ixgbe/ixgbe_flow.c   | 2878 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   26 +-
 lib/librte_ether/rte_flow.h      |   48 +
 7 files changed, 4020 insertions(+), 211 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

Acked-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wei Dai <wei.dai@intel.com>
-- 
2.5.5

^ permalink raw reply	[flat|nested] 149+ messages in thread
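
As a companion to the cover letter, here is a minimal sketch of the generic
flow ops table the series fills in for ixgbe. The sketch_* names and stub
bodies are invented for illustration, and the struct rte_flow_ops layout
(validate/create/destroy/flush/query) is assumed from rte_flow_driver.h of
this period; only the shape of the wiring mirrors the patches.

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_flow_driver.h>

static int
sketch_flow_validate(struct rte_eth_dev *dev,
		     const struct rte_flow_attr *attr,
		     const struct rte_flow_item pattern[],
		     const struct rte_flow_action actions[],
		     struct rte_flow_error *error)
{
	/* the real driver tries each parser in turn:
	 * n-tuple, ethertype, TCP SYN, fdir, L2 tunnel */
	(void)dev; (void)attr; (void)pattern; (void)actions; (void)error;
	return -ENOTSUP;
}

static struct rte_flow *
sketch_flow_create(struct rte_eth_dev *dev,
		   const struct rte_flow_attr *attr,
		   const struct rte_flow_item pattern[],
		   const struct rte_flow_action actions[],
		   struct rte_flow_error *error)
{
	/* the real driver parses, programs HW, and keeps a SW copy on a
	 * per-type filter list so the rule can be restored after reset */
	(void)dev; (void)attr; (void)pattern; (void)actions; (void)error;
	return NULL;
}

static int
sketch_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
		    struct rte_flow_error *error)
{
	(void)dev; (void)flow; (void)error;
	return 0;
}

static int
sketch_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
{
	(void)dev; (void)error;
	return 0;
}

const struct rte_flow_ops sketch_flow_ops = {
	sketch_flow_validate,
	sketch_flow_create,
	sketch_flow_destroy,
	sketch_flow_flush,
	NULL,	/* query: not supported by this sketch */
};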

* [PATCH v6 01/18] net/ixgbe: store TCP SYN filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
@ 2017-01-13  8:12           ` Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 02/18] net/ixgbe: store flow director filter Wei Zhao
                             ` (17 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:12 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 +++++++++++++-----
 drivers/net/ixgbe/ixgbe_ethdev.h |  2 ++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b7ddd4f..719eddd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1272,10 +1272,12 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* enable support intr */
 	ixgbe_enable_intr(eth_dev);
 
+	/* initialize filter info */
+	memset(filter_info, 0,
+	       sizeof(struct ixgbe_filter_info));
+
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
-	memset(filter_info->fivetuple_mask, 0,
-	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
 
 	return 0;
 }
@@ -5603,15 +5605,18 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			bool add)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t syn_info;
 	uint32_t synqf;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
 
-	synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+	syn_info = filter_info->syn_info;
 
 	if (add) {
-		if (synqf & IXGBE_SYN_FILTER_ENABLE)
+		if (syn_info & IXGBE_SYN_FILTER_ENABLE)
 			return -EINVAL;
 		synqf = (uint32_t)(((filter->queue << IXGBE_SYN_FILTER_QUEUE_SHIFT) &
 			IXGBE_SYN_FILTER_QUEUE) | IXGBE_SYN_FILTER_ENABLE);
@@ -5621,10 +5626,13 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 		else
 			synqf &= ~IXGBE_SYN_FILTER_SYNQFP;
 	} else {
-		if (!(synqf & IXGBE_SYN_FILTER_ENABLE))
+		synqf = IXGBE_READ_REG(hw, IXGBE_SYNQF);
+		if (!(syn_info & IXGBE_SYN_FILTER_ENABLE))
 			return -ENOENT;
 		synqf &= ~(IXGBE_SYN_FILTER_QUEUE | IXGBE_SYN_FILTER_ENABLE);
 	}
+
+	filter_info->syn_info = synqf;
 	IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
 	IXGBE_WRITE_FLUSH(hw);
 	return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 69b276f..90a89ec 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -262,6 +262,8 @@ struct ixgbe_filter_info {
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
+	/* store the SYN filter info */
+	uint32_t syn_info;
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
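
The essence of this patch is that SYNQF is no longer read back from HW as
the source of truth; a SW shadow (syn_info) is written first and can be
replayed after a reset. A standalone analog of that shadow-register
pattern follows; every name below is invented, and bit 0 stands in for the
enable flag (like IXGBE_SYN_FILTER_ENABLE):

#include <stdint.h>
#include <stdio.h>

static uint32_t hw_synqf;      /* stands in for the IXGBE_SYNQF register */
static uint32_t sw_syn_info;   /* SW shadow, like filter_info->syn_info */

static void
reg_write(uint32_t v)
{
	hw_synqf = v;
}

static void
syn_filter_set(uint32_t v)
{
	sw_syn_info = v;       /* record in SW first ... */
	reg_write(v);          /* ... then program the HW */
}

static void
hw_reset(void)
{
	hw_synqf = 0;          /* a reset wipes register state, not SW state */
}

static void
syn_filter_restore(void)
{
	if (sw_syn_info & 0x1) /* replay only if a filter was enabled */
		reg_write(sw_syn_info);
}

int
main(void)
{
	syn_filter_set(0x80000001u);
	hw_reset();
	syn_filter_restore();
	printf("SYNQF after restore: 0x%08x\n", hw_synqf);
	return 0;
}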

* [PATCH v6 02/18] net/ixgbe: store flow director filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
@ 2017-01-13  8:12           ` Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
                             ` (16 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:12 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  64 ++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  19 ++++++-
 drivers/net/ixgbe/ixgbe_fdir.c   | 105 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 185 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 719eddd..9796c4f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -60,6 +60,7 @@
 #include <rte_malloc.h>
 #include <rte_random.h>
 #include <rte_dev.h>
+#include <rte_hash_crc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -165,6 +166,8 @@ enum ixgbevf_xcast_modes {
 
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1279,6 +1282,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize 5tuple filter list */
 	TAILQ_INIT(&filter_info->fivetuple_list);
 
+	/* initialize flow director filter list & hash */
+	ixgbe_fdir_filter_init(eth_dev);
+
 	return 0;
 }
 
@@ -1320,9 +1326,67 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->hash_mac_addrs);
 	eth_dev->data->hash_mac_addrs = NULL;
 
+	/* remove all the fdir filters & hash */
+	ixgbe_fdir_filter_uninit(eth_dev);
+
 	return 0;
 }
 
+static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	if (fdir_info->hash_map)
+		rte_free(fdir_info->hash_map);
+	if (fdir_info->hash_handle)
+		rte_hash_free(fdir_info->hash_handle);
+
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+
+	return 0;
+}
+
+static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(eth_dev->data->dev_private);
+	char fdir_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters fdir_hash_params = {
+		.name = fdir_hash_name,
+		.entries = IXGBE_MAX_FDIR_FILTER_NUM,
+		.key_len = sizeof(union ixgbe_atr_input),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&fdir_info->fdir_list);
+	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE,
+		 "fdir_%s", eth_dev->data->name);
+	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
+	if (!fdir_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create fdir hash table!");
+		return -EINVAL;
+	}
+	fdir_info->hash_map = rte_zmalloc("ixgbe",
+					  sizeof(struct ixgbe_fdir_filter *) *
+					  IXGBE_MAX_FDIR_FILTER_NUM,
+					  0);
+	if (!fdir_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for fdir hash map!");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
 /*
  * Negotiate mailbox API version with the PF.
  * After reset API version is always set to the basic one (ixgbe_mbox_api_10).
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 90a89ec..300542e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
 #include "base/ixgbe_dcb_82598.h"
 #include "ixgbe_bypass.h"
 #include <rte_time.h>
+#include <rte_hash.h>
 
 /* need update link, bit flag */
 #define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -130,10 +131,11 @@
 #define IXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
+#define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+
 /*
  * Information about the fdir mode.
  */
-
 struct ixgbe_hw_fdir_mask {
 	uint16_t vlan_tci_mask;
 	uint32_t src_ipv4_mask;
@@ -148,6 +150,17 @@ struct ixgbe_hw_fdir_mask {
 	uint8_t  tunnel_type_mask;
 };
 
+struct ixgbe_fdir_filter {
+	TAILQ_ENTRY(ixgbe_fdir_filter) entries;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter*/
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t fdirhash; /* hash value for fdir */
+	uint8_t queue; /* assigned rx queue */
+};
+
+/* list of fdir filters */
+TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -159,6 +172,10 @@ struct ixgbe_hw_fdir_info {
 	uint64_t    remove;
 	uint64_t    f_add;
 	uint64_t    f_remove;
+	struct ixgbe_fdir_filter_list fdir_list; /* filter list*/
+	/* store the pointers of the filters, index is the hash value. */
+	struct ixgbe_fdir_filter **hash_map;
+	struct rte_hash *hash_handle; /* cuckoo hash handler */
 };
 
 /* structure for interrupt relative data */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 4b81ee3..8bf5705 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -43,6 +43,7 @@
 #include <rte_pci.h>
 #include <rte_ether.h>
 #include <rte_ethdev.h>
+#include <rte_malloc.h>
 
 #include "ixgbe_logs.h"
 #include "base/ixgbe_api.h"
@@ -1075,6 +1076,65 @@ fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash)
 
 }
 
+static inline struct ixgbe_fdir_filter *
+ixgbe_fdir_filter_lookup(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(fdir_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return fdir_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 struct ixgbe_fdir_filter *fdir_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(fdir_info->hash_handle,
+			       &fdir_filter->ixgbe_fdir);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert fdir filter to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	fdir_info->hash_map[ret] = fdir_filter;
+
+	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
+			 union ixgbe_atr_input *key)
+{
+	int ret;
+	struct ixgbe_fdir_filter *fdir_filter;
+
+	ret = rte_hash_del_key(fdir_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "No such fdir filter to delete %d!", ret);
+		return ret;
+	}
+
+	fdir_filter = fdir_info->hash_map[ret];
+	fdir_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_free(fdir_filter);
+
+	return 0;
+}
+
 /*
  * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
  * @dev: pointer to the structure rte_eth_dev
@@ -1098,6 +1158,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	struct ixgbe_fdir_filter *node;
+	bool add_node = FALSE;
 
 	if (fdir_mode == RTE_FDIR_MODE_NONE)
 		return -ENOTSUP;
@@ -1148,6 +1210,10 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
+		err = ixgbe_remove_fdir_filter(info, &input);
+		if (err < 0)
+			return err;
+
 		err = fdir_erase_filter_82599(hw, fdirhash);
 		if (err < 0)
 			PMD_DRV_LOG(ERR, "Fail to delete FDIR filter!");
@@ -1172,6 +1238,37 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	else
 		return -EINVAL;
 
+	node = ixgbe_fdir_filter_lookup(info, &input);
+	if (node) {
+		if (update) {
+			node->fdirflags = fdircmd_flags;
+			node->fdirhash = fdirhash;
+			node->queue = queue;
+		} else {
+			PMD_DRV_LOG(ERR, "Conflict with existing fdir filter!");
+			return -EINVAL;
+		}
+	} else {
+		add_node = TRUE;
+		node = rte_zmalloc("ixgbe_fdir",
+				   sizeof(struct ixgbe_fdir_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
+		(void)rte_memcpy(&node->ixgbe_fdir,
+				 &input,
+				 sizeof(union ixgbe_atr_input));
+		node->fdirflags = fdircmd_flags;
+		node->fdirhash = fdirhash;
+		node->queue = queue;
+
+		err = ixgbe_insert_fdir_filter(info, node);
+		if (err < 0) {
+			rte_free(node);
+			return err;
+		}
+	}
+
 	if (is_perfect) {
 		err = fdir_write_perfect_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash,
@@ -1180,10 +1277,14 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		err = fdir_add_signature_filter_82599(hw, &input, queue,
 						      fdircmd_flags, fdirhash);
 	}
-	if (err < 0)
+	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
-	else
+
+		if (add_node)
+			(void)ixgbe_remove_fdir_filter(info, &input);
+	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
+	}
 
 	return err;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
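
A property of rte_hash worth noting here: rte_hash_add_key() and
rte_hash_lookup() return the same non-negative slot index for a given key,
so a flat pointer array (hash_map above) can serve as the value store.
A self-contained sketch of that index-keyed pattern, assuming an
initialized EAL; the key struct, table name, and sizes are invented:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>

struct flow_key {	/* hypothetical key; the driver keys on
			 * union ixgbe_atr_input instead */
	uint32_t src_ip;
	uint32_t dst_ip;
};

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	struct rte_hash_parameters params = {
		.name = "demo_fdir",
		.entries = 64,
		.key_len = sizeof(struct flow_key),
		.hash_func = rte_hash_crc,
		.hash_func_init_val = 0,
		.socket_id = rte_socket_id(),
	};
	struct rte_hash *h = rte_hash_create(&params);
	static int object = 42;		/* stands in for a filter node */
	void *map[64] = { NULL };	/* slot index -> node, like hash_map */
	struct flow_key key = { .src_ip = 0x0a000001, .dst_ip = 0x0a000002 };

	if (h == NULL)
		return 1;

	int slot = rte_hash_add_key(h, &key);	/* returns a stable slot index */
	if (slot >= 0)
		map[slot] = &object;

	slot = rte_hash_lookup(h, &key);	/* same index comes back */
	if (slot >= 0)
		printf("found node: %d\n", *(int *)map[slot]);

	rte_hash_free(h);
	return 0;
}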

* [PATCH v6 03/18] net/ixgbe: store L2 tunnel filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 02/18] net/ixgbe: store flow director filter Wei Zhao
@ 2017-01-13  8:12           ` Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
                             ` (15 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:12 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  24 ++++++
 2 files changed, 193 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 9796c4f..e63b635 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -168,6 +168,8 @@ static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -1285,6 +1287,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow director filter list & hash */
 	ixgbe_fdir_filter_init(eth_dev);
 
+	/* initialize l2 tunnel filter list & hash */
+	ixgbe_l2_tn_filter_init(eth_dev);
 	return 0;
 }
 
@@ -1329,6 +1333,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the fdir filters & hash */
 	ixgbe_fdir_filter_uninit(eth_dev);
 
+	/* remove all the L2 tunnel filters & hash */
+	ixgbe_l2_tn_filter_uninit(eth_dev);
+
 	return 0;
 }
 
@@ -1353,6 +1360,27 @@ static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	if (l2_tn_info->hash_map)
+		rte_free(l2_tn_info->hash_map);
+	if (l2_tn_info->hash_handle)
+		rte_hash_free(l2_tn_info->hash_handle);
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		TAILQ_REMOVE(&l2_tn_info->l2_tn_list,
+			     l2_tn_filter,
+			     entries);
+		rte_free(l2_tn_filter);
+	}
+
+	return 0;
+}
+
 static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 {
 	struct ixgbe_hw_fdir_info *fdir_info =
@@ -1384,6 +1412,40 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	return 0;
+}
+
+static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(eth_dev->data->dev_private);
+	char l2_tn_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters l2_tn_hash_params = {
+		.name = l2_tn_hash_name,
+		.entries = IXGBE_MAX_L2_TN_FILTER_NUM,
+		.key_len = sizeof(struct ixgbe_l2_tn_key),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+		.socket_id = rte_socket_id(),
+	};
+
+	TAILQ_INIT(&l2_tn_info->l2_tn_list);
+	snprintf(l2_tn_hash_name, RTE_HASH_NAMESIZE,
+		 "l2_tn_%s", eth_dev->data->name);
+	l2_tn_info->hash_handle = rte_hash_create(&l2_tn_hash_params);
+	if (!l2_tn_info->hash_handle) {
+		PMD_INIT_LOG(ERR, "Failed to create L2 TN hash table!");
+		return -EINVAL;
+	}
+	l2_tn_info->hash_map = rte_zmalloc("ixgbe",
+				   sizeof(struct ixgbe_l2_tn_filter *) *
+				   IXGBE_MAX_L2_TN_FILTER_NUM,
+				   0);
+	if (!l2_tn_info->hash_map) {
+		PMD_INIT_LOG(ERR,
+			"Failed to allocate memory for L2 TN hash map!");
+		return -ENOMEM;
+	}
 
 	return 0;
 }
@@ -7210,12 +7272,104 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+static inline struct ixgbe_l2_tn_filter *
+ixgbe_l2_tn_filter_lookup(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+
+	ret = rte_hash_lookup(l2_tn_info->hash_handle, (const void *)key);
+	if (ret < 0)
+		return NULL;
+
+	return l2_tn_info->hash_map[ret];
+}
+
+static inline int
+ixgbe_insert_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_filter *l2_tn_filter)
+{
+	int ret;
+
+	ret = rte_hash_add_key(l2_tn_info->hash_handle,
+			       &l2_tn_filter->key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to insert L2 tunnel filter"
+			    " to hash table %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_info->hash_map[ret] = l2_tn_filter;
+
+	TAILQ_INSERT_TAIL(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+
+	return 0;
+}
+
+static inline int
+ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
+			  struct ixgbe_l2_tn_key *key)
+{
+	int ret;
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+
+	ret = rte_hash_del_key(l2_tn_info->hash_handle, key);
+
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "No such L2 tunnel filter to delete %d!",
+			    ret);
+		return ret;
+	}
+
+	l2_tn_filter = l2_tn_info->hash_map[ret];
+	l2_tn_info->hash_map[ret] = NULL;
+
+	TAILQ_REMOVE(&l2_tn_info->l2_tn_list, l2_tn_filter, entries);
+	rte_free(l2_tn_filter);
+
+	return 0;
+}
+
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+	struct ixgbe_l2_tn_filter *node;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+
+	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+
+	if (node) {
+		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
+		return -EINVAL;
+	}
+
+	node = rte_zmalloc("ixgbe_l2_tn",
+			   sizeof(struct ixgbe_l2_tn_filter),
+			   0);
+	if (!node)
+		return -ENOMEM;
+
+	(void)rte_memcpy(&node->key,
+			 &key,
+			 sizeof(struct ixgbe_l2_tn_key));
+	node->pool = l2_tunnel->pool;
+	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+	if (ret < 0) {
+		rte_free(node);
+		return ret;
+	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7227,6 +7381,9 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
+	if (ret < 0)
+		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+
 	return ret;
 }
 
@@ -7235,7 +7392,16 @@ static int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
-	int ret = 0;
+	int ret;
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_key key;
+
+	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+	key.tn_id = l2_tunnel->tunnel_id;
+	ret = ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
+	if (ret < 0)
+		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
@@ -7261,7 +7427,7 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 				  enum rte_filter_op filter_op,
 				  void *arg)
 {
-	int ret = 0;
+	int ret;
 
 	if (filter_op == RTE_ETH_FILTER_NOP)
 		return 0;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 300542e..b2cf789 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -132,6 +132,7 @@
 #define IXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
 #define IXGBE_MAX_FDIR_FILTER_NUM       (1024 * 32)
+#define IXGBE_MAX_L2_TN_FILTER_NUM      128
 
 /*
  * Information about the fdir mode.
@@ -283,6 +284,25 @@ struct ixgbe_filter_info {
 	uint32_t syn_info;
 };
 
+struct ixgbe_l2_tn_key {
+	enum rte_eth_tunnel_type          l2_tn_type;
+	uint32_t                          tn_id;
+};
+
+struct ixgbe_l2_tn_filter {
+	TAILQ_ENTRY(ixgbe_l2_tn_filter)    entries;
+	struct ixgbe_l2_tn_key             key;
+	uint32_t                           pool;
+};
+
+TAILQ_HEAD(ixgbe_l2_tn_filter_list, ixgbe_l2_tn_filter);
+
+struct ixgbe_l2_tn_info {
+	struct ixgbe_l2_tn_filter_list      l2_tn_list;
+	struct ixgbe_l2_tn_filter         **hash_map;
+	struct rte_hash                    *hash_handle;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -302,6 +322,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_NIC_BYPASS */
 	struct ixgbe_filter_info    filter;
+	struct ixgbe_l2_tn_info     l2_tn;
 
 	bool rx_bulk_alloc_allowed;
 	bool rx_vec_allowed;
@@ -349,6 +370,9 @@ struct ixgbe_adapter {
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
+#define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
+	(&((struct ixgbe_adapter *)adapter)->l2_tn)
+
 /*
  * RX/TX function prototypes
  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
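
One caveat with struct keys like ixgbe_l2_tn_key: the hash sees key_len raw
bytes, so any compiler padding inside the struct must be zeroed or two
logically equal keys can produce different hashes. A standalone sketch of
the defensive fill pattern; the struct below is a hypothetical stand-in:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* stand-in for a key struct that may contain compiler padding */
struct tn_key_like {
	int      l2_tn_type;	/* enum-sized field */
	uint32_t tn_id;
};

static void
tn_key_fill(struct tn_key_like *key, int type, uint32_t id)
{
	/* zero first: stale padding bytes would otherwise make two
	 * logically equal keys hash differently */
	memset(key, 0, sizeof(*key));
	key->l2_tn_type = type;
	key->tn_id = id;
}

int
main(void)
{
	struct tn_key_like a, b;

	tn_key_fill(&a, 1, 1000);
	tn_key_fill(&b, 1, 1000);
	printf("raw keys equal: %d\n",
	       memcmp(&a, &b, sizeof(a)) == 0);	/* 1, thanks to memset */
	return 0;
}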

* [PATCH v6 04/18] net/ixgbe: restore n-tuple filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (2 preceding siblings ...)
  2017-01-13  8:12           ` [PATCH v6 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
@ 2017-01-13  8:12           ` Wei Zhao
  2017-01-13  8:12           ` [PATCH v6 05/18] net/ixgbe: restore ether type filter Wei Zhao
                             ` (14 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:12 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring n-tuple filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 140 +++++++++++++++++++++++++--------------
 1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index e63b635..1630e65 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -170,6 +170,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_fdir_filter_uninit(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev);
 static int ixgbe_l2_tn_filter_uninit(struct rte_eth_dev *eth_dev);
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev);
 static int  ixgbe_dev_configure(struct rte_eth_dev *dev);
 static int  ixgbe_dev_start(struct rte_eth_dev *dev);
 static void ixgbe_dev_stop(struct rte_eth_dev *dev);
@@ -387,6 +388,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
+static int ixgbe_filter_restore(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1336,6 +1338,27 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* remove all the L2 tunnel filters & hash */
 	ixgbe_l2_tn_filter_uninit(eth_dev);
 
+	/* Remove all ntuple filters of the device */
+	ixgbe_ntuple_filter_uninit(eth_dev);
+
+	return 0;
+}
+
+static int ixgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list))) {
+		TAILQ_REMOVE(&filter_info->fivetuple_list,
+			     p_5tuple,
+			     entries);
+		rte_free(p_5tuple);
+	}
+	memset(filter_info->fivetuple_mask, 0,
+	       sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
+
 	return 0;
 }
 
@@ -2504,6 +2527,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_filter_restore(dev);
 
 	return 0;
 
@@ -2524,9 +2548,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
-	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
-	struct ixgbe_5tuple_filter *p_5tuple, *p_5tuple_next;
 	struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	int vf;
@@ -2564,17 +2585,6 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_ixgbe_dev_atomic_write_link_status(dev, &link);
 
-	/* Remove all ntuple filters of the device */
-	for (p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list);
-	     p_5tuple != NULL; p_5tuple = p_5tuple_next) {
-		p_5tuple_next = TAILQ_NEXT(p_5tuple, entries);
-		TAILQ_REMOVE(&filter_info->fivetuple_list,
-			     p_5tuple, entries);
-		rte_free(p_5tuple);
-	}
-	memset(filter_info->fivetuple_mask, 0,
-		sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-
 	if (!rte_intr_allow_others(intr_handle))
 		/* resume to the default handler */
 		rte_intr_callback_register(intr_handle,
@@ -5836,6 +5846,52 @@ convert_protocol_type(uint8_t protocol_value)
 		return IXGBE_FILTER_PROTOCOL_NONE;
 }
 
+/* inject a 5-tuple filter to HW */
+static inline void
+ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+			   struct ixgbe_5tuple_filter *filter)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+	uint32_t ftqf, sdpqf;
+	uint32_t l34timir = 0;
+	uint8_t mask = 0xff;
+
+	i = filter->index;
+
+	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
+				IXGBE_SDPQF_DSTPORT_SHIFT);
+	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
+
+	ftqf = (uint32_t)(filter->filter_info.proto &
+		IXGBE_FTQF_PROTOCOL_MASK);
+	ftqf |= (uint32_t)((filter->filter_info.priority &
+		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
+	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
+		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= IXGBE_FTQF_DEST_PORT_MASK;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
+	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
+	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
+	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
+
+	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
+	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
+	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+
+	l34timir |= IXGBE_L34T_IMIR_RESERVE;
+	l34timir |= (uint32_t)(filter->queue <<
+				IXGBE_L34T_IMIR_QUEUE_SHIFT);
+	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
+}
+
 /*
  * add a 5tuple filter
  *
@@ -5853,13 +5909,9 @@ static int
 ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	int i, idx, shift;
-	uint32_t ftqf, sdpqf;
-	uint32_t l34timir = 0;
-	uint8_t mask = 0xff;
 
 	/*
 	 * look for an unused 5tuple filter index,
@@ -5882,37 +5934,8 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	sdpqf = (uint32_t)(filter->filter_info.dst_port <<
-				IXGBE_SDPQF_DSTPORT_SHIFT);
-	sdpqf = sdpqf | (filter->filter_info.src_port & IXGBE_SDPQF_SRCPORT);
-
-	ftqf = (uint32_t)(filter->filter_info.proto &
-		IXGBE_FTQF_PROTOCOL_MASK);
-	ftqf |= (uint32_t)((filter->filter_info.priority &
-		IXGBE_FTQF_PRIORITY_MASK) << IXGBE_FTQF_PRIORITY_SHIFT);
-	if (filter->filter_info.src_ip_mask == 0) /* 0 means compare. */
-		mask &= IXGBE_FTQF_SOURCE_ADDR_MASK;
-	if (filter->filter_info.dst_ip_mask == 0)
-		mask &= IXGBE_FTQF_DEST_ADDR_MASK;
-	if (filter->filter_info.src_port_mask == 0)
-		mask &= IXGBE_FTQF_SOURCE_PORT_MASK;
-	if (filter->filter_info.dst_port_mask == 0)
-		mask &= IXGBE_FTQF_DEST_PORT_MASK;
-	if (filter->filter_info.proto_mask == 0)
-		mask &= IXGBE_FTQF_PROTOCOL_COMP_MASK;
-	ftqf |= mask << IXGBE_FTQF_5TUPLE_MASK_SHIFT;
-	ftqf |= IXGBE_FTQF_POOL_MASK_EN;
-	ftqf |= IXGBE_FTQF_QUEUE_ENABLE;
-
-	IXGBE_WRITE_REG(hw, IXGBE_DAQF(i), filter->filter_info.dst_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SAQF(i), filter->filter_info.src_ip);
-	IXGBE_WRITE_REG(hw, IXGBE_SDPQF(i), sdpqf);
-	IXGBE_WRITE_REG(hw, IXGBE_FTQF(i), ftqf);
+	ixgbe_inject_5tuple_filter(dev, filter);
 
-	l34timir |= IXGBE_L34T_IMIR_RESERVE;
-	l34timir |= (uint32_t)(filter->queue <<
-				IXGBE_L34T_IMIR_QUEUE_SHIFT);
-	IXGBE_WRITE_REG(hw, IXGBE_L34T_IMIR(i), l34timir);
 	return 0;
 }
 
@@ -7925,6 +7948,27 @@ ixgbevf_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle,
 	ixgbevf_dev_interrupt_action(dev);
 }
 
+/* restore n-tuple filter */
+static inline void
+ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *node;
+
+	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
+		ixgbe_inject_5tuple_filter(dev, node);
+	}
+}
+
+static int
+ixgbe_filter_restore(struct rte_eth_dev *dev)
+{
+	ixgbe_ntuple_filter_restore(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
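
The restore path introduced here is a plain list walk: every filter
accepted at add time was copied onto a tailq, and after a reset the list is
replayed into HW. A standalone sketch of that walk using the same
sys/queue.h macros; inject() is a stand-in for ixgbe_inject_5tuple_filter:

#include <sys/queue.h>
#include <stdio.h>
#include <stdlib.h>

struct filter {
	TAILQ_ENTRY(filter) entries;
	int id;
};
TAILQ_HEAD(filter_list, filter);

static void
inject(const struct filter *f)
{
	printf("programming HW with filter %d\n", f->id);
}

int
main(void)
{
	struct filter_list list = TAILQ_HEAD_INITIALIZER(list);
	struct filter *node;
	int i;

	for (i = 0; i < 3; i++) {
		struct filter *f = calloc(1, sizeof(*f));

		if (f == NULL)
			return 1;
		f->id = i;
		TAILQ_INSERT_TAIL(&list, f, entries);	/* SW copy at add time */
	}

	/* restore path after a reset: walk the SW list, rewrite each HW entry */
	TAILQ_FOREACH(node, &list, entries)
		inject(node);

	return 0;
}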

* [PATCH v6 05/18] net/ixgbe: restore ether type filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (3 preceding siblings ...)
  2017-01-13  8:12           ` [PATCH v6 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
@ 2017-01-13  8:12           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
                             ` (13 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:12 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring ether type filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 79 ++++++++++++++++------------------------
 drivers/net/ixgbe/ixgbe_ethdev.h | 57 ++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_pf.c     | 25 ++++++++-----
 3 files changed, 104 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1630e65..6c46354 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6259,47 +6259,6 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static inline int
-ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (filter_info->ethertype_filters[i] == ethertype &&
-		    (filter_info->ethertype_mask & (1 << i)))
-			return i;
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
-			uint16_t ethertype)
-{
-	int i;
-
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] = ethertype;
-			return i;
-		}
-	}
-	return -1;
-}
-
-static inline int
-ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
-			uint8_t idx)
-{
-	if (idx >= IXGBE_MAX_ETQF_FILTERS)
-		return -1;
-	filter_info->ethertype_mask &= ~(1 << idx);
-	filter_info->ethertype_filters[idx] = 0;
-	return idx;
-}
-
 static int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
@@ -6311,6 +6270,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
 		return -EINVAL;
@@ -6344,18 +6304,22 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 	}
 
 	if (add) {
-		ret = ixgbe_ethertype_filter_insert(filter_info,
-			filter->ether_type);
-		if (ret < 0) {
-			PMD_DRV_LOG(ERR, "ethertype filters are full.");
-			return -ENOSYS;
-		}
 		etqf = IXGBE_ETQF_FILTER_EN;
 		etqf |= (uint32_t)filter->ether_type;
 		etqs |= (uint32_t)((filter->queue <<
 				    IXGBE_ETQS_RX_QUEUE_SHIFT) &
 				    IXGBE_ETQS_RX_QUEUE);
 		etqs |= IXGBE_ETQS_QUEUE_EN;
+
+		ethertype_filter.ethertype = filter->ether_type;
+		ethertype_filter.etqf = etqf;
+		ethertype_filter.etqs = etqs;
+		ret = ixgbe_ethertype_filter_insert(filter_info,
+						    &ethertype_filter);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "ethertype filters are full.");
+			return -ENOSPC;
+		}
 	} else {
 		ret = ixgbe_ethertype_filter_remove(filter_info, (uint8_t)ret);
 		if (ret < 0)
@@ -7961,10 +7925,31 @@ ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore ethernet type filter */
+static inline void
+ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i)) {
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i),
+					filter_info->ethertype_filters[i].etqf);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i),
+					filter_info->ethertype_filters[i].etqs);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
+	ixgbe_ethertype_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b2cf789..36aae01 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -270,13 +270,19 @@ struct ixgbe_5tuple_filter {
 	(RTE_ALIGN(IXGBE_MAX_FTQF_FILTERS, (sizeof(uint32_t) * NBBY)) / \
 	 (sizeof(uint32_t) * NBBY))
 
+struct ixgbe_ethertype_filter {
+	uint16_t ethertype;
+	uint32_t etqf;
+	uint32_t etqs;
+};
+
 /*
  * Structure to store filters' info.
  */
 struct ixgbe_filter_info {
 	uint8_t ethertype_mask;  /* Bit mask for every used ethertype filter */
 	/* store used ethertype filters*/
-	uint16_t ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
+	struct ixgbe_ethertype_filter ethertype_filters[IXGBE_MAX_ETQF_FILTERS];
 	/* Bit mask for every used 5tuple filter */
 	uint32_t fivetuple_mask[IXGBE_5TUPLE_ARRAY_SIZE];
 	struct ixgbe_5tuple_filter_list fivetuple_list;
@@ -491,4 +497,53 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+
+static inline int
+ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
+			      uint16_t ethertype)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_filters[i].ethertype == ethertype &&
+		    (filter_info->ethertype_mask & (1 << i)))
+			return i;
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
+			      struct ixgbe_ethertype_filter *ethertype_filter)
+{
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (!(filter_info->ethertype_mask & (1 << i))) {
+			filter_info->ethertype_mask |= 1 << i;
+			filter_info->ethertype_filters[i].ethertype =
+				ethertype_filter->ethertype;
+			filter_info->ethertype_filters[i].etqf =
+				ethertype_filter->etqf;
+			filter_info->ethertype_filters[i].etqs =
+				ethertype_filter->etqs;
+			return i;
+		}
+	}
+	return -1;
+}
+
+static inline int
+ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
+			      uint8_t idx)
+{
+	if (idx >= IXGBE_MAX_ETQF_FILTERS)
+		return -1;
+	filter_info->ethertype_mask &= ~(1 << idx);
+	filter_info->ethertype_filters[idx].ethertype = 0;
+	filter_info->ethertype_filters[idx].etqf = 0;
+	filter_info->ethertype_filters[idx].etqs = 0;
+	return idx;
+}
+
 #endif /* _IXGBE_ETHDEV_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index cb10265..4b33130 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -178,6 +178,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
 	uint16_t vf_num;
 	int i;
+	struct ixgbe_ethertype_filter ethertype_filter;
 
 	if (!hw->mac.ops.set_ethertype_anti_spoofing) {
 		RTE_LOG(INFO, PMD, "ether type anti-spoofing is not"
@@ -185,16 +186,22 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 		return;
 	}
 
-	/* occupy an entity of ether type filter */
-	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
-		if (!(filter_info->ethertype_mask & (1 << i))) {
-			filter_info->ethertype_mask |= 1 << i;
-			filter_info->ethertype_filters[i] =
-				IXGBE_ETHERTYPE_FLOW_CTRL;
-			break;
-		}
+	i = ixgbe_ethertype_filter_lookup(filter_info,
+					  IXGBE_ETHERTYPE_FLOW_CTRL);
+	if (i >= 0) {
+		RTE_LOG(ERR, PMD, "An ether type filter"
+			" entity for flow control already exists!\n");
+		return;
 	}
-	if (i == IXGBE_MAX_ETQF_FILTERS) {
+
+	ethertype_filter.ethertype = IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqf = IXGBE_ETQF_FILTER_EN |
+				IXGBE_ETQF_TX_ANTISPOOF |
+				IXGBE_ETHERTYPE_FLOW_CTRL;
+	ethertype_filter.etqs = 0;
+	i = ixgbe_ethertype_filter_insert(filter_info,
+					  &ethertype_filter);
+	if (i < 0) {
 		RTE_LOG(ERR, PMD, "Cannot find an unused ether type filter"
 			" entity for flow control.\n");
 		return;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
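
The ethertype table management behind this patch is a small bitmask slot
allocator: bit i of ethertype_mask set means ETQF slot i is occupied, and
the bit index doubles as the register index. A standalone sketch; eight
slots are assumed here, matching the uint8_t mask in the diff:

#include <stdint.h>
#include <stdio.h>

#define MAX_SLOTS 8	/* assumed ETQF entry count */

static int
slot_alloc(uint8_t *mask)
{
	int i;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (!(*mask & (1 << i))) {
			*mask |= 1 << i;	/* claim slot i */
			return i;		/* doubles as ETQF(i) index */
		}
	}
	return -1;				/* table full */
}

static void
slot_free(uint8_t *mask, int i)
{
	if (i >= 0 && i < MAX_SLOTS)
		*mask &= ~(1 << i);
}

int
main(void)
{
	uint8_t mask = 0;
	int a = slot_alloc(&mask);
	int b = slot_alloc(&mask);

	slot_free(&mask, a);
	printf("second slot %d, mask 0x%02x\n", b, mask);
	return 0;
}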

* [PATCH v6 06/18] net/ixgbe: restore TCP SYN filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (4 preceding siblings ...)
  2017-01-13  8:12           ` [PATCH v6 05/18] net/ixgbe: restore ether type filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 07/18] net/ixgbe: restore flow director filter Wei Zhao
                             ` (12 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring TCP SYN filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6c46354..6cd5975 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7945,11 +7945,29 @@ ixgbe_ethertype_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore SYN filter */
+static inline void
+ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint32_t synqf;
+
+	synqf = filter_info->syn_info;
+
+	if (synqf & IXGBE_SYN_FILTER_ENABLE) {
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, synqf);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
+	ixgbe_syn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 07/18] net/ixgbe: restore flow director filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (5 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
                             ` (11 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring flow director filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_fdir.c   | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 6cd5975..f48e30c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7968,6 +7968,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ntuple_filter_restore(dev);
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
+	ixgbe_fdir_filter_restore(dev);
 
 	return 0;
 }
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 36aae01..4f281e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -497,6 +497,7 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
+void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 8bf5705..627c51a 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1479,3 +1479,38 @@ ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 	}
 	return ret;
 }
+
+/* restore flow director filter */
+void
+ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *node;
+	bool is_perfect = FALSE;
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
+	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		is_perfect = TRUE;
+
+	if (is_perfect) {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_write_perfect_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash,
+							      fdir_mode);
+		}
+	} else {
+		TAILQ_FOREACH(node, &fdir_info->fdir_list, entries) {
+			(void)fdir_add_signature_filter_82599(hw,
+							      &node->ixgbe_fdir,
+							      node->queue,
+							      node->fdirflags,
+							      node->fdirhash);
+		}
+	}
+}
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
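
The restore walk here has to mirror the add path: perfect-mode and
signature-mode filters are written through different HW routines, so the
mode is decided once and the whole list goes through the matching writer.
A condensed standalone sketch of that branch; all names are invented:

#include <stdbool.h>
#include <stdio.h>

enum fdir_mode { MODE_NONE, MODE_SIGNATURE, MODE_PERFECT };

static void write_perfect(int id)   { printf("perfect filter %d\n", id); }
static void write_signature(int id) { printf("signature filter %d\n", id); }

static void
fdir_restore(enum fdir_mode mode, const int *ids, int n)
{
	/* pick the writer once; every stored node takes the same path */
	bool is_perfect = (mode == MODE_PERFECT);
	int i;

	for (i = 0; i < n; i++) {
		if (is_perfect)
			write_perfect(ids[i]);
		else
			write_signature(ids[i]);
	}
}

int
main(void)
{
	int ids[] = { 1, 2, 3 };

	fdir_restore(MODE_PERFECT, ids, 3);
	return 0;
}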

* [PATCH v6 08/18] net/ixgbe: restore L2 tunnel filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (6 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 07/18] net/ixgbe: restore flow director filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
                             ` (10 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for restoring L2 tunnel filter in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 69 ++++++++++++++++++++++++++--------------
 1 file changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index f48e30c..0ad613b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7324,7 +7324,8 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 /* Add l2 tunnel filter */
 static int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
-			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
@@ -7332,30 +7333,33 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
-	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
-	key.tn_id = l2_tunnel->tunnel_id;
+	if (!restore) {
+		key.l2_tn_type = l2_tunnel->l2_tunnel_type;
+		key.tn_id = l2_tunnel->tunnel_id;
 
-	node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
+		node = ixgbe_l2_tn_filter_lookup(l2_tn_info, &key);
 
-	if (node) {
-		PMD_DRV_LOG(ERR, "The L2 tunnel filter already exists!");
-		return -EINVAL;
-	}
+		if (node) {
+			PMD_DRV_LOG(ERR,
+				    "The L2 tunnel filter already exists!");
+			return -EINVAL;
+		}
 
-	node = rte_zmalloc("ixgbe_l2_tn",
-			   sizeof(struct ixgbe_l2_tn_filter),
-			   0);
-	if (!node)
-		return -ENOMEM;
+		node = rte_zmalloc("ixgbe_l2_tn",
+				   sizeof(struct ixgbe_l2_tn_filter),
+				   0);
+		if (!node)
+			return -ENOMEM;
 
-	(void)rte_memcpy(&node->key,
-			 &key,
-			 sizeof(struct ixgbe_l2_tn_key));
-	node->pool = l2_tunnel->pool;
-	ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
-	if (ret < 0) {
-		rte_free(node);
-		return ret;
+		(void)rte_memcpy(&node->key,
+				 &key,
+				 sizeof(struct ixgbe_l2_tn_key));
+		node->pool = l2_tunnel->pool;
+		ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
+		if (ret < 0) {
+			rte_free(node);
+			return ret;
+		}
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
@@ -7368,7 +7372,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 		break;
 	}
 
-	if (ret < 0)
+	if ((!restore) && (ret < 0))
 		(void)ixgbe_remove_l2_tn_filter(l2_tn_info, &key);
 
 	return ret;
@@ -7429,7 +7433,8 @@ ixgbe_dev_l2_tunnel_filter_handle(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_ADD:
 		ret = ixgbe_dev_l2_tunnel_filter_add
 			(dev,
-			 (struct rte_eth_l2_tunnel_conf *)arg);
+			 (struct rte_eth_l2_tunnel_conf *)arg,
+			 FALSE);
 		break;
 	case RTE_ETH_FILTER_DELETE:
 		ret = ixgbe_dev_l2_tunnel_filter_del
@@ -7962,6 +7967,23 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 	}
 }
 
+/* restore L2 tunnel filter */
+static inline void
+ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *node;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+
+	TAILQ_FOREACH(node, &l2_tn_info->l2_tn_list, entries) {
+		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = node->key.tn_id;
+		l2_tn_conf.pool           = node->pool;
+		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+	}
+}
+
 static int
 ixgbe_filter_restore(struct rte_eth_dev *dev)
 {
@@ -7969,6 +7991,7 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	ixgbe_ethertype_filter_restore(dev);
 	ixgbe_syn_filter_restore(dev);
 	ixgbe_fdir_filter_restore(dev);
+	ixgbe_l2_tn_filter_restore(dev);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
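
This patch reuses the add path for restore by threading a restore flag
through it: SW bookkeeping runs only on a genuine add, while the HW
programming runs on both. A bare-bones standalone analog of that control
flow; the names below are invented:

#include <stdbool.h>
#include <stdio.h>

static int sw_list_insert(int id) { printf("SW insert %d\n", id); return 0; }
static int hw_program(int id)     { printf("HW program %d\n", id); return 0; }

/* one entry point: a fresh add records SW state first; a restore call
 * replays HW from SW state that already exists and skips the bookkeeping */
static int
l2_tn_filter_add(int id, bool restore)
{
	if (!restore) {
		int ret = sw_list_insert(id);

		if (ret < 0)
			return ret;
	}
	return hw_program(id);
}

int
main(void)
{
	l2_tn_filter_add(7, false);	/* normal add: SW + HW */
	l2_tn_filter_add(7, true);	/* restore after reset: HW only */
	return 0;
}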

* [PATCH v6 09/18] net/ixgbe: store and restore L2 tunnel configuration
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (7 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 10/18] net/ixgbe: flush all the filters Wei Zhao
                             ` (9 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for storing and restoring L2 tunnel configuration in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  3 +++
 2 files changed, 39 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0ad613b..3059c64 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -389,6 +389,7 @@ static int ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 					 struct rte_eth_udp_tunnel *udp_tunnel);
 static int ixgbe_filter_restore(struct rte_eth_dev *dev);
+static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -1469,6 +1470,9 @@ static int ixgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
 			"Failed to allocate memory for L2 TN hash map!");
 		return -ENOMEM;
 	}
+	l2_tn_info->e_tag_en = FALSE;
+	l2_tn_info->e_tag_fwd_en = FALSE;
+	l2_tn_info->e_tag_ether_type = DEFAULT_ETAG_ETYPE;
 
 	return 0;
 }
@@ -2527,6 +2531,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* resume enabled intr since hw reset */
 	ixgbe_enable_intr(dev);
+	ixgbe_l2_tunnel_conf(dev);
 	ixgbe_filter_restore(dev);
 
 	return 0;
@@ -7083,12 +7088,15 @@ ixgbe_dev_l2_tunnel_eth_type_conf(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	if (l2_tunnel == NULL)
 		return -EINVAL;
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_ether_type = l2_tunnel->ether_type;
 		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
 		break;
 	default:
@@ -7127,9 +7135,12 @@ ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = TRUE;
 		ret = ixgbe_e_tag_enable(hw);
 		break;
 	default:
@@ -7168,9 +7179,12 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_en = FALSE;
 		ret = ixgbe_e_tag_disable(hw);
 		break;
 	default:
@@ -7477,10 +7491,13 @@ ixgbe_dev_l2_tunnel_forwarding_enable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = TRUE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
 		break;
 	default:
@@ -7498,10 +7515,13 @@ ixgbe_dev_l2_tunnel_forwarding_disable
 	(struct rte_eth_dev *dev,
 	 enum rte_eth_tunnel_type l2_tunnel_type)
 {
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	int ret = 0;
 
 	switch (l2_tunnel_type) {
 	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		l2_tn_info->e_tag_fwd_en = FALSE;
 		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
 		break;
 	default:
@@ -7996,6 +8016,22 @@ ixgbe_filter_restore(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tn_info->e_tag_en)
+		(void)ixgbe_e_tag_enable(hw);
+
+	if (l2_tn_info->e_tag_fwd_en)
+		(void)ixgbe_e_tag_forwarding_en_dis(dev, 1);
+
+	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4f281e8..4e0264c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -307,6 +307,9 @@ struct ixgbe_l2_tn_info {
 	struct ixgbe_l2_tn_filter_list      l2_tn_list;
 	struct ixgbe_l2_tn_filter         **hash_map;
 	struct rte_hash                    *hash_handle;
+	bool e_tag_en; /* e-tag enabled */
+	bool e_tag_fwd_en; /* e-tag based forwarding enabled */
+	uint16_t e_tag_ether_type; /* ether type for e-tag */
 };
 
 /*
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 10/18] net/ixgbe: flush all the filters
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (8 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
                             ` (8 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add support for flushing all the filters in SW.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
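
For context (not part of the patch), a minimal sketch of how an
application reaches this flush callback through the generic flow API;
flush_all_rules() is a hypothetical helper name:

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* rte_flow_flush() resolves the driver's rte_flow_ops through the
 * RTE_ETH_FILTER_GENERIC filter type and invokes the flush callback,
 * i.e. ixgbe_flow_flush() below on an ixgbe port.
 */
static int
flush_all_rules(uint8_t port_id)
{
	struct rte_flow_error err;

	if (rte_flow_flush(port_id, &err) < 0) {
		printf("port %u flush failed: %s\n", (unsigned)port_id,
		       err.message ? err.message : "(no message)");
		return -1;
	}
	return 0;
}
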
---
 drivers/net/ixgbe/Makefile       |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c |  79 ++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.h |  16 ++++++
 drivers/net/ixgbe/ixgbe_fdir.c   |  24 ++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 115 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_pf.c     |   1 +
 6 files changed, 236 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_flow.c

diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index a3e6a52..38b9fbd 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_fdir.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_pf.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_flow.c
 ifeq ($(CONFIG_RTE_ARCH_ARM64),y)
 SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
 else
@@ -127,5 +128,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_eal lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_mempool lib/librte_mbuf
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_hash
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3059c64..2a67462 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6319,6 +6319,7 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 		ethertype_filter.ethertype = filter->ether_type;
 		ethertype_filter.etqf = etqf;
 		ethertype_filter.etqs = etqs;
+		ethertype_filter.conf = FALSE;
 		ret = ixgbe_ethertype_filter_insert(filter_info,
 						    &ethertype_filter);
 		if (ret < 0) {
@@ -6420,7 +6421,7 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_op filter_op,
 		     void *arg)
 {
-	int ret = -EINVAL;
+	int ret = 0;
 
 	switch (filter_type) {
 	case RTE_ETH_FILTER_NTUPLE:
@@ -6438,9 +6439,15 @@ ixgbe_dev_filter_ctrl(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_L2_TUNNEL:
 		ret = ixgbe_dev_l2_tunnel_filter_handle(dev, filter_op, arg);
 		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &ixgbe_flow_ops;
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 							filter_type);
+		ret = -EINVAL;
 		break;
 	}
 
@@ -8032,6 +8039,76 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 	(void)ixgbe_update_e_tag_eth_type(hw, l2_tn_info->e_tag_ether_type);
 }
 
+/* remove all the n-tuple filters */
+void
+ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_5tuple_filter *p_5tuple;
+
+	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
+		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+}
+
+/* remove all the ether type filters */
+void
+ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_ETQF_FILTERS; i++) {
+		if (filter_info->ethertype_mask & (1 << i) &&
+		    !filter_info->ethertype_filters[i].conf) {
+			(void)ixgbe_ethertype_filter_remove(filter_info,
+							    (uint8_t)i);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQF(i), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_ETQS(i), 0);
+			IXGBE_WRITE_FLUSH(hw);
+		}
+	}
+}
+
+/* remove the SYN filter */
+void
+ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+
+	if (filter_info->syn_info & IXGBE_SYN_FILTER_ENABLE) {
+		filter_info->syn_info = 0;
+
+		IXGBE_WRITE_REG(hw, IXGBE_SYNQF, 0);
+		IXGBE_WRITE_FLUSH(hw);
+	}
+}
+
+/* remove all the L2 tunnel filters */
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_l2_tn_info *l2_tn_info =
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+	struct ixgbe_l2_tn_filter *l2_tn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_conf;
+	int ret = 0;
+
+	while ((l2_tn_filter = TAILQ_FIRST(&l2_tn_info->l2_tn_list))) {
+		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
+		l2_tn_conf.tunnel_id      = l2_tn_filter->key.tn_id;
+		l2_tn_conf.pool           = l2_tn_filter->pool;
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ixgbe, rte_ixgbe_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(net_ixgbe, pci_id_ixgbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ixgbe, "* igb_uio | uio_pci_generic | vfio");
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4e0264c..5efa650 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -274,6 +274,11 @@ struct ixgbe_ethertype_filter {
 	uint16_t ethertype;
 	uint32_t etqf;
 	uint32_t etqs;
+	/**
+	 * If this filter is added by configuration,
+	 * it should not be removed.
+	 */
+	bool     conf;
 };
 
 /*
@@ -501,6 +506,14 @@ uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
 int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
 			enum rte_filter_op filter_op, void *arg);
 void ixgbe_fdir_filter_restore(struct rte_eth_dev *dev);
+int ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev);
+
+extern const struct rte_flow_ops ixgbe_flow_ops;
+
+void ixgbe_clear_all_ethertype_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev);
+void ixgbe_clear_syn_filter(struct rte_eth_dev *dev);
+int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev);
 
 static inline int
 ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
@@ -531,6 +544,8 @@ ixgbe_ethertype_filter_insert(struct ixgbe_filter_info *filter_info,
 				ethertype_filter->etqf;
 			filter_info->ethertype_filters[i].etqs =
 				ethertype_filter->etqs;
+			filter_info->ethertype_filters[i].conf =
+				ethertype_filter->conf;
 			return i;
 		}
 	}
@@ -547,6 +562,7 @@ ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
 	filter_info->ethertype_filters[idx].ethertype = 0;
 	filter_info->ethertype_filters[idx].etqf = 0;
 	filter_info->ethertype_filters[idx].etqs = 0;
+	filter_info->ethertype_filters[idx].conf = FALSE;
 	return idx;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 627c51a..e928ad7 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -1514,3 +1514,27 @@ ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 		}
 	}
 }
+
+/* remove all the flow director filters */
+int
+ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_fdir_filter *fdir_filter;
+	int ret = 0;
+
+	/* flush flow director */
+	rte_hash_reset(fdir_info->hash_handle);
+	memset(fdir_info->hash_map, 0,
+	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
+		TAILQ_REMOVE(&fdir_info->fdir_list,
+			     fdir_filter,
+			     entries);
+		rte_free(fdir_filter);
+	}
+	ret = ixgbe_fdir_flush(dev);
+
+	return ret;
+}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
new file mode 100644
index 0000000..1499391
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+#include <rte_hash_crc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+
+#include "ixgbe_logs.h"
+#include "base/ixgbe_api.h"
+#include "base/ixgbe_vf.h"
+#include "base/ixgbe_common.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_bypass.h"
+#include "ixgbe_rxtx.h"
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_phy.h"
+#include "rte_pmd_ixgbe.h"
+
+static int ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error);
+
+const struct rte_flow_ops ixgbe_flow_ops = {
+	NULL,
+	NULL,
+	NULL,
+	ixgbe_flow_flush,
+	NULL,
+};
+
+/*  Destroy all flow rules associated with a port on ixgbe. */
+static int
+ixgbe_flow_flush(struct rte_eth_dev *dev,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	ixgbe_clear_all_ntuple_filter(dev);
+	ixgbe_clear_all_ethertype_filter(dev);
+	ixgbe_clear_syn_filter(dev);
+
+	ret = ixgbe_clear_all_fdir_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	ret = ixgbe_clear_all_l2_tn_filter(dev);
+	if (ret < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
+					NULL, "Failed to flush rule");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 4b33130..4715045 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -199,6 +199,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 				IXGBE_ETQF_TX_ANTISPOOF |
 				IXGBE_ETHERTYPE_FLOW_CTRL;
 	ethertype_filter.etqs = 0;
+	ethertype_filter.conf = TRUE;
 	i = ixgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 11/18] net/ixgbe: parse n-tuple filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (9 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 10/18] net/ixgbe: flush all the filters Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 12/18] net/ixgbe: parse ethertype filter Wei Zhao
                             ` (7 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Add a rule validate function, check if the rule is an n-tuple rule,
and get the n-tuple info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
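
To make the accepted shape concrete, a hedged sketch (not part of the
patch) of a rule this parser should accept; the addresses, ports and
queue index mirror the pattern example documented in the parser
comment, validate_ntuple_rule() is a hypothetical helper name, and the
IPv4() helper is assumed from rte_ip.h:

#include <stdint.h>
#include <netinet/in.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

/* Hypothetical helper: ETH (empty) / IPV4 src+dst+proto fully masked /
 * UDP ports fully masked / END, with a single QUEUE action.
 * Items are in network order; attr is in CPU order.
 */
static int
validate_ntuple_rule(uint8_t port_id, struct rte_flow_error *err)
{
	/* priority must be within 1..7 for the ixgbe n-tuple path */
	struct rte_flow_attr attr = { .ingress = 1, .priority = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = {
			.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 1, 20)),
			.dst_addr = rte_cpu_to_be_32(IPv4(192, 167, 3, 50)),
			.next_proto_id = IPPROTO_UDP,
		},
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = {
			.src_addr = UINT32_MAX,
			.dst_addr = UINT32_MAX,
			.next_proto_id = 0xFF,
		},
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr = {
			.src_port = rte_cpu_to_be_16(80),
			.dst_port = rte_cpu_to_be_16(80),
		},
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr = {
			.src_port = UINT16_MAX,
			.dst_port = UINT16_MAX,
		},
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_validate(port_id, &attr, pattern, actions, err);
}
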
---
 drivers/net/ixgbe/ixgbe_flow.c | 430 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 429 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 1499391..417728b 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -77,15 +77,443 @@
 
 static int ixgbe_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error);
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+					const struct rte_flow_item pattern[],
+					const struct rte_flow_action actions[],
+					struct rte_eth_ntuple_filter *filter,
+					struct rte_flow_error *error);
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
-	NULL,
+	ixgbe_flow_validate,
 	NULL,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
 };
 
+#define IXGBE_MIN_N_TUPLE_PRIO 1
+#define IXGBE_MAX_N_TUPLE_PRIO 7
+#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\
+	do {						\
+		item = pattern + index;			\
+		while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\
+			index++;			\
+			item = pattern + index;		\
+		}					\
+	} while (0)
+
+#define NEXT_ITEM_OF_ACTION(act, actions, index)\
+	do {						\
+		act = actions + index;			\
+		while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\
+			index++;			\
+			act = actions + index;		\
+		}					\
+	} while (0)
+
+/**
+ * Please be aware there's an assumption for all the parsers.
+ * rte_flow_item is using big endian, rte_flow_attr and
+ * rte_flow_action are using CPU order.
+ * Because the pattern is used to describe the packets,
+ * normally the packets should use network order.
+ */
+
+/**
+ * Parse the rule to see if it is an n-tuple rule.
+ * And get the n-tuple filter info BTW.
+ * pattern:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ * pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4		src_addr 192.168.1.20	0xFFFFFFFF
+ *		dst_addr 192.167.3.50	0xFFFFFFFF
+ *		next_proto_id	17	0xFF
+ * UDP/TCP	src_port	80	0xFFFF
+ *		dst_port	80	0xFFFF
+ * END
+ * other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[],
+			 struct rte_eth_ntuple_filter *filter,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item can be MAC or IPv4 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error,
+			  EINVAL,
+			  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			  item, "Not supported last point for range");
+			return -rte_errno;
+
+		}
+		/* if the first item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+		/* check if the next not void item is IPv4 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+			rte_flow_error_set(error,
+				EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+	}
+
+	/* get the IPv4 info */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
+	/**
+	 * Only support src & dst addresses, protocol,
+	 * others should be masked.
+	 */
+	if (ipv4_mask->hdr.version_ihl ||
+	    ipv4_mask->hdr.type_of_service ||
+	    ipv4_mask->hdr.total_length ||
+	    ipv4_mask->hdr.packet_id ||
+	    ipv4_mask->hdr.fragment_offset ||
+	    ipv4_mask->hdr.time_to_live ||
+	    ipv4_mask->hdr.hdr_checksum) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
+	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
+	filter->proto_mask  = ipv4_mask->hdr.next_proto_id;
+
+	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
+	filter->dst_ip = ipv4_spec->hdr.dst_addr;
+	filter->src_ip = ipv4_spec->hdr.src_addr;
+	filter->proto  = ipv4_spec->hdr.next_proto_id;
+
+	/* check if the next not void item is TCP or UDP */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* get the TCP/UDP info */
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Invalid ntuple mask");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+
+		/**
+		 * Only support src & dst ports, tcp flags,
+		 * others should be masked.
+		 */
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask  = tcp_mask->hdr.dst_port;
+		filter->src_port_mask  = tcp_mask->hdr.src_port;
+		if (tcp_mask->hdr.tcp_flags == 0xFF) {
+			filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else if (!tcp_mask->hdr.tcp_flags) {
+			filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
+		} else {
+			memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+		filter->dst_port  = tcp_spec->hdr.dst_port;
+		filter->src_port  = tcp_spec->hdr.src_port;
+		filter->tcp_flags = tcp_spec->hdr.tcp_flags;
+	} else {
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+
+		/**
+		 * Only support src & dst ports,
+		 * others should be masked.
+		 */
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
+
+		filter->dst_port_mask = udp_mask->hdr.dst_port;
+		filter->src_port_mask = udp_mask->hdr.src_port;
+
+		udp_spec = (const struct rte_flow_item_udp *)item->spec;
+		filter->dst_port = udp_spec->hdr.dst_port;
+		filter->src_port = udp_spec->hdr.src_port;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/**
+	 * n-tuple only supports forwarding,
+	 * check if the first not void action is QUEUE.
+	 */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			item, "Not supported action.");
+		return -rte_errno;
+	}
+	filter->queue =
+		((const struct rte_flow_action_queue *)act->conf)->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	if (attr->priority > 0xFFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr, "Error priority.");
+		return -rte_errno;
+	}
+	filter->priority = (uint16_t)attr->priority;
+	if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    attr->priority > IXGBE_MAX_N_TUPLE_PRIO)
+		filter->priority = 1;
+
+	return 0;
+}
+
+/* a specific function for ixgbe because the flags is specific */
+static int
+ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			  const struct rte_flow_item pattern[],
+			  const struct rte_flow_action actions[],
+			  struct rte_eth_ntuple_filter *filter,
+			  struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support tcp flags. */
+	if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM,
+				   NULL, "Not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	/* Ixgbe doesn't support many priorities. */
+	if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO ||
+	    filter->priority > IXGBE_MAX_N_TUPLE_PRIO) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Priority not supported by ntuple filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM ||
+		filter->priority > IXGBE_5TUPLE_MAX_PRI ||
+		filter->priority < IXGBE_5TUPLE_MIN_PRI)
+		return -rte_errno;
+
+	/* fixed value for ixgbe */
+	filter->flags = RTE_5TUPLE_FLAGS;
+	return 0;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee the rule can be programmed
+ * into the HW, because there may not be enough room for the rule.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 12/18] net/ixgbe: parse ethertype filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (10 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
                             ` (6 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an ethertype rule, and get the ethertype info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
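
As an illustration only (hypothetical helper, not part of the patch),
a rule this parser should accept: a single ETH item whose type field
is fully masked, plus a QUEUE action; 0x0807 is just the example value
from the parser comment, and the later ixgbe checks also reject MAC
compare, DROP and the IPv4/IPv6 EtherTypes:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Hypothetical helper: match one EtherType and steer it to a queue.
 * attr.priority and attr.group must stay 0 for this parser.
 */
static int
validate_ethertype_rule(uint8_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		.type = rte_cpu_to_be_16(0x0807),
	};
	struct rte_flow_item_eth eth_mask = {
		.type = UINT16_MAX,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 3 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_validate(port_id, &attr, pattern, actions, err);
}
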
---
 drivers/net/ixgbe/ixgbe_flow.c | 284 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 284 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 417728b..2e2cafd 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -90,6 +90,18 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 					struct rte_eth_ntuple_filter *filter,
 					struct rte_flow_error *error);
 static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error);
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_ethertype_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -491,6 +503,271 @@ ixgbe_parse_ntuple_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an ethertype rule.
+ * And get the ethertype filter info BTW.
+ * pattern:
+ * The first not void item can be ETH.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ * pattern example:
+ * ITEM		Spec			Mask
+ * ETH		type	0x0807		0xFFFF
+ * END
+ * other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item *pattern,
+			    const struct rte_flow_action *actions,
+			    struct rte_eth_ethertype_filter *filter,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* Parse pattern */
+	index = 0;
+
+	/* The first non-void item should be MAC. */
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	eth_spec = (const struct rte_flow_item_eth *)item->spec;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Mask bits of source MAC address must be full of 0.
+	 * Mask bits of destination MAC address must be full
+	 * of 1 or full of 0.
+	 */
+	if (!is_zero_ether_addr(&eth_mask->src) ||
+	    (!is_zero_ether_addr(&eth_mask->dst) &&
+	     !is_broadcast_ether_addr(&eth_mask->dst))) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ether address mask");
+		return -rte_errno;
+	}
+
+	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ethertype mask");
+		return -rte_errno;
+	}
+
+	/* If mask bits of destination MAC address
+	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
+	 */
+	if (is_broadcast_ether_addr(&eth_mask->dst)) {
+		filter->mac_addr = eth_spec->dst;
+		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
+	} else {
+		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
+	}
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+
+	/* Check if the next non-void item is END. */
+	index++;
+	item = pattern + index;
+	while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+		index++;
+		item = pattern + index;
+	}
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ethertype filter.");
+		return -rte_errno;
+	}
+
+	/* Parse action */
+
+	index = 0;
+	/* Check if the first non-void action is QUEUE or DROP. */
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->queue = act_q->index;
+	} else {
+		filter->flags |= RTE_ETHTYPE_FLAGS_DROP;
+	}
+
+	/* Check if the next non-void item is END */
+	index++;
+	act = actions + index;
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {
+		index++;
+		act = actions + index;
+	}
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* Parse attr */
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				attr, "Not support group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_ethertype_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_ethertype_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	/* Ixgbe doesn't support MAC address. */
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "queue index much too big");
+		return -rte_errno;
+	}
+
+	if (filter->ether_type == ETHER_TYPE_IPv4 ||
+		filter->ether_type == ETHER_TYPE_IPv6) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "IPv4/IPv6 not supported by ethertype filter");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "mac compare is unsupported");
+		return -rte_errno;
+	}
+
+	if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
+		memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "drop option is unsupported");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for the rule.
@@ -503,6 +780,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -511,6 +789,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 13/18] net/ixgbe: parse TCP SYN filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (11 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 12/18] net/ixgbe: parse ethertype filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
                             ` (5 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a TCP SYN rule, and get the SYN info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
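
A hedged sketch (hypothetical helper, not part of the patch) of a rule
this parser should accept: empty ETH and IPV4 items followed by a TCP
item whose flags spec and mask are both exactly the SYN bit (0x02);
any other non-zero TCP field in the mask is rejected by the code
below, and attr.priority 0 selects the low SYN priority while ~0U
selects the high one:

#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical helper: steer TCP SYN packets to a queue. */
static int
validate_syn_rule(uint8_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_tcp tcp_spec = {
		.hdr = { .tcp_flags = 0x02 },
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr = { .tcp_flags = 0x02 },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_validate(port_id, &attr, pattern, actions, err);
}
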
---
 drivers/net/ixgbe/ixgbe_flow.c | 272 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 272 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 2e2cafd..7557dfa 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -102,6 +102,18 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_ethertype_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_syn_filter *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -768,6 +780,259 @@ ixgbe_parse_ethertype_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is a TCP SYN rule.
+ * And get the TCP SYN filter info BTW.
+ * pattern:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4 or IPV6.
+ * The third not void item must be TCP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ * pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4/IPV6	NULL			NULL
+ * TCP		tcp_flags	0x02	0x02
+ * END
+ * other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+cons_parse_syn_filter(const struct rte_flow_attr *attr,
+				const struct rte_flow_item pattern[],
+				const struct rte_flow_action actions[],
+				struct rte_eth_syn_filter *filter,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/* parse pattern */
+	index = 0;
+
+	/* the first not void item should be MAC or IPv4 or IPv6 or TCP */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Skip Ethernet */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* if the item is MAC, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN address mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is IPv4 or IPv6 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* if the item is IP, the content should be NULL */
+		if (item->spec || item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+			return -rte_errno;
+		}
+
+		/* check if the next not void item is TCP */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. Only support SYN. */
+	if (!item->spec || !item->mask) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid SYN mask");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+	tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+	if (!(tcp_spec->hdr.tcp_flags & TCP_SYN_FLAG) ||
+	    tcp_mask->hdr.src_port ||
+	    tcp_mask->hdr.dst_port ||
+	    tcp_mask->hdr.sent_seq ||
+	    tcp_mask->hdr.recv_ack ||
+	    tcp_mask->hdr.data_off ||
+	    tcp_mask->hdr.tcp_flags != TCP_SYN_FLAG ||
+	    tcp_mask->hdr.rx_win ||
+	    tcp_mask->hdr.cksum ||
+	    tcp_mask->hdr.tcp_urp) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by syn filter");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->queue = act_q->index;
+	if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION,
+				act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* Support 2 priorities, the lowest or highest. */
+	if (!attr->priority) {
+		filter->hig_pri = 0;
+	} else if (attr->priority == (uint32_t)~0U) {
+		filter->hig_pri = 1;
+	} else {
+		memset(filter, 0, sizeof(struct rte_eth_syn_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_eth_syn_filter *filter,
+			     struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = cons_parse_syn_filter(attr, pattern,
+					actions, filter, error);
+
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for the rule.
@@ -781,6 +1046,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 {
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -795,6 +1061,12 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
 	return ret;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (12 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13 11:18             ` Ferruh Yigit
  2017-01-16 13:03             ` Adrien Mazarguil
  2017-01-13  8:13           ` [PATCH v6 15/18] net/ixgbe: parse flow director filter Wei Zhao
                             ` (4 subsequent siblings)
  18 siblings, 2 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
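
A hedged sketch (hypothetical helper, not part of the patch) using the
rte_flow_item_e_tag layout added by this patch: GRP and E-CID base
share rsvd_grp_ecid_b (2b reserved, 2b GRP, 12b E-CID base), so the
documented GRP 0x1 + E-CID base 0x309 becomes 0x1309 under a 0x3FFF
mask, both in network order; the QUEUE action index is taken as the
destination pool, and validation only succeeds on X550-family devices:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Hypothetical helper: match one E-tag and steer it to a pool. */
static int
validate_l2_tn_rule(uint8_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_e_tag e_tag_spec = {
		.rsvd_grp_ecid_b = rte_cpu_to_be_16((0x1 << 12) | 0x309),
	};
	struct rte_flow_item_e_tag e_tag_mask = {
		.rsvd_grp_ecid_b = rte_cpu_to_be_16(0x3FFF),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
		  .spec = &e_tag_spec, .mask = &e_tag_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_validate(port_id, &attr, pattern, actions, err);
}
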
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
 drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.h      |  48 +++++++++
 3 files changed, 266 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2a67462..14a88ee 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -8089,7 +8089,8 @@ ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
 }
 
 /* remove all the L2 tunnel filters */
-int ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
+int
+ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 {
 	struct ixgbe_l2_tn_info *l2_tn_info =
 		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 7557dfa..4006084 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -114,6 +114,19 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 				struct rte_eth_syn_filter *filter,
 				struct rte_flow_error *error);
 static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_eth_l2_tunnel_conf *filter,
+		struct rte_flow_error *error);
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *rule,
+			struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1033,6 +1046,204 @@ ixgbe_parse_syn_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Parse the rule to see if it is an L2 tunnel rule.
+ * And get the L2 tunnel filter info BTW.
+ * Only support E-tag now.
+ * pattern:
+ * The first not void item can be E_TAG.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be QUEUE.
+ * The next not void action should be END.
+ * pattern example:
+ * ITEM		Spec			Mask
+ * E_TAG	grp		0x1	0x3
+ *		e_cid_base	0x309	0xFFF
+ * END
+ * other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *filter,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_e_tag *e_tag_spec;
+	const struct rte_flow_item_e_tag *e_tag_mask;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+	/* parse pattern */
+	index = 0;
+
+	/* The first not void item should be e-tag. */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	if (!item->spec || !item->mask) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	e_tag_spec = (const struct rte_flow_item_e_tag *)item->spec;
+	e_tag_mask = (const struct rte_flow_item_e_tag *)item->mask;
+
+	/* Only care about GRP and E cid base. */
+	if (e_tag_mask->epcp_edei_in_ecid_b ||
+	    e_tag_mask->in_ecid_e ||
+	    e_tag_mask->ecid_e ||
+	    e_tag_mask->rsvd_grp_ecid_b != rte_cpu_to_be_16(0x3FFF)) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	/**
+	 * grp and e_cid_base are bit fields and only use 14 bits.
+	 * e-tag id is taken as little endian by HW.
+	 */
+	filter->tunnel_id = rte_be_to_cpu_16(e_tag_spec->rsvd_grp_ecid_b);
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	act_q = (const struct rte_flow_action_queue *)act->conf;
+	filter->pool = act_q->index;
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_l2_tunnel_conf *l2_tn_filter,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+				actions, l2_tn_filter, error);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+		hw->mac.type != ixgbe_mac_X550EM_x &&
+		hw->mac.type != ixgbe_mac_X550EM_a) {
+		memset(l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			NULL, "Not supported by L2 tunnel filter");
+		return -rte_errno;
+	}
+
+	return ret;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee the rule can be programmed
  * into the HW, because there may not be enough room for it.
@@ -1047,6 +1258,7 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
 	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
 	int ret;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -1067,6 +1279,10 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	if (!ret)
 		return 0;
 
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = ixgbe_validate_l2_tn_filter(dev, attr, pattern,
+				actions, &l2_tn_filter, error);
+
 	return ret;
 }
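
For reference, an application reaches this validation chain through the
generic flow API; a minimal sketch, reusing the attr/pattern/actions arrays
from the E-tag sketch above (port 0 is a placeholder):

	struct rte_flow_error flow_error;
	int ret;

	ret = rte_flow_validate(0, &attr, pattern, actions, &flow_error);
	if (ret)
		printf("rule rejected: %s\n",
		       flow_error.message ? flow_error.message : "(no message)");
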
 
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 98084ac..7142479 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -268,6 +268,20 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_vxlan.
 	 */
 	RTE_FLOW_ITEM_TYPE_VXLAN,
+
+	/**
+	 * Matches an E-Tag header.
+	 *
+	 * See struct rte_flow_item_e_tag.
+	 */
+	RTE_FLOW_ITEM_TYPE_E_TAG,
+
+	/**
+	 * Matches an NVGRE header.
+	 *
+	 * See struct rte_flow_item_nvgre.
+	 */
+	RTE_FLOW_ITEM_TYPE_NVGRE,
 };
 
 /**
@@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
 };
 
 /**
+ * RTE_FLOW_ITEM_TYPE_E_TAG.
+ *
+ * Matches an E-Tag header.
+ */
+struct rte_flow_item_e_tag {
+	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
+	/**
+	 * E-Tag control information (E-TCI).
+	 * E-PCP (3b), E-DEI (1b), ingress E-CID base (12b).
+	 */
+	uint16_t epcp_edei_in_ecid_b;
+	/** Reserved (2b), GRP (2b), E-CID base (12b). */
+	uint16_t rsvd_grp_ecid_b;
+	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
+	uint8_t ecid_e; /**< E-CID ext. */
+};
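
As a short illustration of the E-TCI layout (a sketch; `item` is a
hypothetical pattern item carrying this struct), GRP and E-CID base can be
unpacked from the 16-bit field after a byte swap:

	const struct rte_flow_item_e_tag *e_tag = item->spec;
	uint16_t v = rte_be_to_cpu_16(e_tag->rsvd_grp_ecid_b);
	uint8_t grp = (v >> 12) & 0x3;     /* GRP: bits 13:12 */
	uint16_t e_cid_base = v & 0x0FFF;  /* E-CID base: bits 11:0 */
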
+
+/**
+ * RTE_FLOW_ITEM_TYPE_NVGRE.
+ *
+ * Matches an NVGRE header.
+ */
+struct rte_flow_item_nvgre {
+	/**
+	 * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
+	 * reserved 0 (9b), version (3b).
+	 *
+	 * c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
+	 */
+	uint16_t c_k_s_rsvd0_ver;
+	uint16_t protocol; /**< Protocol type (0x6558). */
+	uint8_t tni[3]; /**< Virtual subnet ID. */
+	uint8_t flow_id; /**< Flow ID. */
+};
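
For illustration, a spec/mask pair for this item as the ixgbe parser later in
this series expects it: only the key bit set in the spec, protocol 0x6558, and
full masks on the flag, protocol and TNI fields (the TNI value is a
placeholder):

	struct rte_flow_item_nvgre nvgre_spec = {
		.c_k_s_rsvd0_ver = rte_cpu_to_be_16(0x2000), /* key bit only */
		.protocol = rte_cpu_to_be_16(0x6558),
		.tni = { 0x00, 0x32, 0x54 },
	};
	struct rte_flow_item_nvgre nvgre_mask = {
		.c_k_s_rsvd0_ver = rte_cpu_to_be_16(0x3000), /* C and K bits */
		.protocol = 0xFFFF,
		.tni = { 0xFF, 0xFF, 0xFF },
	};
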
+
+/**
  * Matching pattern item definition.
  *
  * A pattern is formed by stacking items starting from the lowest protocol
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread

* [PATCH v6 15/18] net/ixgbe: parse flow director filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (13 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 16/18] net/ixgbe: create consistent filter Wei Zhao
                             ` (3 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

Check if the rule is a flow director rule, and get the flow director info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |    2 +
 drivers/net/ixgbe/ixgbe_ethdev.h |   16 +
 drivers/net/ixgbe/ixgbe_fdir.c   |  253 +++++---
 drivers/net/ixgbe/ixgbe_flow.c   | 1252 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 1414 insertions(+), 109 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14a88ee..4b9f4aa 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1436,6 +1436,8 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	fdir_info->mask_added = FALSE;
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 5efa650..dee159f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -162,6 +162,17 @@ struct ixgbe_fdir_filter {
 /* list of fdir filters */
 TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
 
+struct ixgbe_fdir_rule {
+	struct ixgbe_hw_fdir_mask mask;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter */
+	bool b_spec; /* If TRUE, ixgbe_fdir, fdirflags, queue have meaning. */
+	bool b_mask; /* If TRUE, mask has meaning. */
+	enum rte_fdir_mode mode; /* IP, MAC VLAN, Tunnel */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t soft_id; /* a unique value for this rule */
+	uint8_t queue; /* assigned rx queue */
+};
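
As a rough sketch of how this structure is meant to be consumed (the field
names are from this patch; `dev`, the port value and the queue are
placeholders), a perfect-mode TCPv4 rule could be filled in and handed to the
programming routine:

	struct ixgbe_fdir_rule rule;
	int ret;

	memset(&rule, 0, sizeof(rule));
	rule.mode = RTE_FDIR_MODE_PERFECT;
	rule.b_spec = TRUE;
	rule.b_mask = TRUE;
	rule.ixgbe_fdir.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_TCPV4;
	rule.ixgbe_fdir.formatted.dst_port = rte_cpu_to_be_16(80);
	rule.mask.dst_port_mask = 0xFFFF;
	rule.queue = 1;    /* forward to rx queue 1 */
	rule.soft_id = 1;  /* unique id reported back via the fdir hash */

	ret = ixgbe_fdir_filter_program(dev, &rule, FALSE, FALSE);
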
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -177,6 +188,7 @@ struct ixgbe_hw_fdir_info {
 	/* store the pointers of the filters, index is the hash value. */
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
+	bool mask_added; /* If already got mask from consistent filter */
 };
 
 /* structure for interrupt relative data */
@@ -479,6 +491,10 @@ bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
  * Flow director function prototypes
  */
 int ixgbe_fdir_configure(struct rte_eth_dev *dev);
+int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			      struct ixgbe_fdir_rule *rule,
+			      bool del, bool update);
 
 void ixgbe_configure_dcb(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index e928ad7..3b9d60c 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -112,10 +112,8 @@
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
 static int fdir_set_input_mask(struct rte_eth_dev *dev,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-				    const struct rte_eth_fdir_masks *input_mask);
+static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
+static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -295,8 +293,7 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -308,8 +305,6 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 | IXGBE_FDIRM_FLEX;
 	uint32_t fdirtcpm;  /* TCP source and destination port masks. */
 	uint32_t fdiripv6m; /* IPv6 source and destination masks. */
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
 	volatile uint32_t *reg;
 
 	PMD_INIT_FUNC_TRACE();
@@ -320,31 +315,30 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
 	 * cannot be masked out in this implementation.
 	 */
-	if (input_mask->dst_port_mask == 0 && input_mask->src_port_mask == 0)
+	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
 		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(input_mask->dst_port_mask),
-			rte_be_to_cpu_16(input_mask->src_port_mask));
+			rte_be_to_cpu_16(info->mask.dst_port_mask),
+			rte_be_to_cpu_16(info->mask.src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -352,30 +346,23 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(input_mask->ipv4_mask.src_ip);
+	*reg = ~(info->mask.src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(input_mask->ipv4_mask.dst_ip);
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	*reg = ~(info->mask.dst_ipv4_mask);
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-		fdiripv6m = (dst_ipv6m << 16) | src_ipv6m;
+		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
+			    info->mask.src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
-		info->mask.src_ipv6_mask = src_ipv6m;
-		info->mask.dst_ipv6_mask = dst_ipv6m;
 	}
 
 	return IXGBE_SUCCESS;
@@ -386,8 +373,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev,
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-			 const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -410,20 +396,19 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -434,12 +419,11 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 				IXGBE_FDIRIP6M_TNI_VNI;
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
-		mac_mask = input_mask->mac_addr_byte_mask;
+		mac_mask = info->mask.mac_addr_byte_mask;
 		fdiripv6m |= (mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT)
 				& IXGBE_FDIRIP6M_INNER_MAC;
-		info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
 
-		switch (input_mask->tunnel_type_mask) {
+		switch (info->mask.tunnel_type_mask) {
 		case 0:
 			/* Mask tunnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -450,10 +434,8 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_type_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_type_mask =
-			input_mask->tunnel_type_mask;
 
-		switch (rte_be_to_cpu_32(input_mask->tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -467,8 +449,6 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_id_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_id_mask =
-			input_mask->tunnel_id_mask;
 	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, fdiripv6m);
@@ -482,22 +462,90 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 }
 
 static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
+ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
+				  const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	uint16_t dst_ipv6m = 0;
+	uint16_t src_ipv6m = 0;
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.src_port_mask = input_mask->src_port_mask;
+	info->mask.dst_port_mask = input_mask->dst_port_mask;
+	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
+	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
+	info->mask.src_ipv6_mask = src_ipv6m;
+	info->mask.dst_ipv6_mask = dst_ipv6m;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
+				 const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
+	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
+	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_masks *input_mask)
 {
 	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev, input_mask);
+		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
+int
+ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
+{
+	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
+	    mode <= RTE_FDIR_MODE_PERFECT)
+		return fdir_set_input_mask_82599(dev);
+	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
+		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		return fdir_set_input_mask_x550(dev);
+
+	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
+	return -ENOTSUP;
+}
+
+static int
+fdir_set_input_mask(struct rte_eth_dev *dev,
+		    const struct rte_eth_fdir_masks *input_mask)
+{
+	int ret;
+
+	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
+	if (ret)
+		return ret;
+
+	return ixgbe_fdir_set_input_mask(dev);
+}
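
The point of splitting the old mask handling into a store step and an apply
step is that the flow API parsers fill an ixgbe_hw_fdir_mask directly, so the
flow path only needs the register-programming half. Roughly (a sketch; the
mask_added handling itself lands in a later patch of this series):

	if (!fdir_info->mask_added) {
		/* take the mask parsed out of the flow pattern as is */
		(void)rte_memcpy(&fdir_info->mask, &rule->mask,
				 sizeof(struct ixgbe_hw_fdir_mask));
		ret = ixgbe_fdir_set_input_mask(dev);
		fdir_info->mask_added = TRUE;
	}
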
+
 /*
  * ixgbe_check_fdir_flex_conf - check if the flex payload and mask configuration
  * arguments are valid
@@ -1135,23 +1183,40 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 	return 0;
 }
 
-/*
- * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
- * @dev: pointer to the structure rte_eth_dev
- * @fdir_filter: fdir filter entry
- * @del: 1 - delete, 0 - add
- * @update: 1 - update
- */
 static int
-ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
-			  const struct rte_eth_fdir_filter *fdir_filter,
+ixgbe_interpret_fdir_filter(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_filter *fdir_filter,
+			    struct ixgbe_fdir_rule *rule)
+{
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	int err;
+
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+
+	err = ixgbe_fdir_filter_to_atr_input(fdir_filter,
+					     &rule->ixgbe_fdir,
+					     fdir_mode);
+	if (err)
+		return err;
+
+	rule->mode = fdir_mode;
+	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT)
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	rule->queue = fdir_filter->action.rx_queue;
+	rule->soft_id = fdir_filter->soft_id;
+
+	return 0;
+}
+
+int
+ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
-	union ixgbe_atr_input input;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
@@ -1161,7 +1226,8 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
-	if (fdir_mode == RTE_FDIR_MODE_NONE)
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
 		return -ENOTSUP;
 
 	/*
@@ -1174,7 +1240,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    (hw->mac.type == ixgbe_mac_X550 ||
 	     hw->mac.type == ixgbe_mac_X550EM_x ||
 	     hw->mac.type == ixgbe_mac_X550EM_a) &&
-	    (fdir_filter->input.flow_type ==
+	    (rule->ixgbe_fdir.formatted.flow_type ==
 	     RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) &&
 	    (info->mask.src_port_mask != 0 ||
 	     info->mask.dst_port_mask != 0)) {
@@ -1188,29 +1254,23 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
 		is_perfect = TRUE;
 
-	memset(&input, 0, sizeof(input));
-
-	err = ixgbe_fdir_filter_to_atr_input(fdir_filter, &input,
-					     fdir_mode);
-	if (err)
-		return err;
-
 	if (is_perfect) {
-		if (input.formatted.flow_type & IXGBE_ATR_L4TYPE_IPV6_MASK) {
+		if (rule->ixgbe_fdir.formatted.flow_type &
+		    IXGBE_ATR_L4TYPE_IPV6_MASK) {
 			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
 				    " perfect mode!");
 			return -ENOTSUP;
 		}
-		fdirhash = atr_compute_perfect_hash_82599(&input,
+		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
 							  dev->data->dev_conf.fdir_conf.pballoc);
-		fdirhash |= fdir_filter->soft_id <<
+		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
-		fdirhash = atr_compute_sig_hash_82599(&input,
+		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
-		err = ixgbe_remove_fdir_filter(info, &input);
+		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 		if (err < 0)
 			return err;
 
@@ -1223,7 +1283,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 	/* add or update an fdir filter*/
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
-	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT) {
+	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
 			queue = dev->data->dev_conf.fdir_conf.drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
@@ -1232,13 +1292,12 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 				    " signature mode.");
 			return -EINVAL;
 		}
-	} else if (fdir_filter->action.behavior == RTE_ETH_FDIR_ACCEPT &&
-		   fdir_filter->action.rx_queue < IXGBE_MAX_RX_QUEUE_NUM)
-		queue = (uint8_t)fdir_filter->action.rx_queue;
+	} else if (rule->queue < IXGBE_MAX_RX_QUEUE_NUM)
+		queue = (uint8_t)rule->queue;
 	else
 		return -EINVAL;
 
-	node = ixgbe_fdir_filter_lookup(info, &input);
+	node = ixgbe_fdir_filter_lookup(info, &rule->ixgbe_fdir);
 	if (node) {
 		if (update) {
 			node->fdirflags = fdircmd_flags;
@@ -1256,7 +1315,7 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		if (!node)
 			return -ENOMEM;
 		(void)rte_memcpy(&node->ixgbe_fdir,
-				 &input,
+				 &rule->ixgbe_fdir,
 				 sizeof(union ixgbe_atr_input));
 		node->fdirflags = fdircmd_flags;
 		node->fdirhash = fdirhash;
@@ -1270,18 +1329,19 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 
 	if (is_perfect) {
-		err = fdir_write_perfect_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash,
-						      fdir_mode);
+		err = fdir_write_perfect_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash, fdir_mode);
 	} else {
-		err = fdir_add_signature_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash);
+		err = fdir_add_signature_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash);
 	}
 	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
 
 		if (add_node)
-			(void)ixgbe_remove_fdir_filter(info, &input);
+			(void)ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
 	}
@@ -1289,6 +1349,29 @@ ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return err;
 }
 
+/* ixgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @fdir_filter: fdir filter entry
+ * @del: 1 - delete, 0 - add
+ * @update: 1 - update
+ */
+static int
+ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+			  const struct rte_eth_fdir_filter *fdir_filter,
+			  bool del,
+			  bool update)
+{
+	struct ixgbe_fdir_rule rule;
+	int err;
+
+	err = ixgbe_interpret_fdir_filter(dev, fdir_filter, &rule);
+
+	if (err)
+		return err;
+
+	return ixgbe_fdir_filter_program(dev, &rule, del, update);
+}
+
 static int
 ixgbe_fdir_flush(struct rte_eth_dev *dev)
 {
@@ -1522,19 +1605,23 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
+	struct ixgbe_fdir_filter *filter_flag; /* non-NULL if the list was not empty */
 	int ret = 0;
 
 	/* flush flow director */
 	rte_hash_reset(fdir_info->hash_handle);
 	memset(fdir_info->hash_map, 0,
 	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
+	filter_flag = TAILQ_FIRST(&fdir_info->fdir_list);
 	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
 		TAILQ_REMOVE(&fdir_info->fdir_list,
 			     fdir_filter,
 			     entries);
 		rte_free(fdir_filter);
 	}
-	ret = ixgbe_fdir_flush(dev);
+
+	if (filter_flag != NULL)
+		ret = ixgbe_fdir_flush(dev);
 
 	return ret;
 }
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 4006084..257cd6f 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -127,6 +127,31 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 			struct rte_eth_l2_tunnel_conf *rule,
 			struct rte_flow_error *error);
 static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error);
+static int
 ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
@@ -1243,39 +1268,1214 @@ ixgbe_validate_l2_tn_filter(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Parse to get the attr and action info of flow director rule. */
+static int
+ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  struct ixgbe_fdir_rule *rule,
+			  struct rte_flow_error *error)
+{
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_mark *mark;
+	uint32_t index;
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Not support egress.");
+		return -rte_errno;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Not support priority.");
+		return -rte_errno;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/* check if the first not void action is QUEUE or DROP. */
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		rule->queue = act_q->index;
+	} else { /* drop */
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	}
+
+	/* check if the next not void item is MARK or END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) &&
+		(act->type != RTE_FLOW_ACTION_TYPE_END)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	rule->soft_id = 0;
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_MARK) {
+		mark = (const struct rte_flow_action_mark *)act->conf;
+		rule->soft_id = mark->id;
+		index++;
+		NEXT_ITEM_OF_ACTION(act, actions, index);
+	}
+
+	/* check if the next not void item is END */
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
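
For illustration, an actions array that this parser accepts, QUEUE followed by
an optional MARK (the queue index and mark id are placeholders):

	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action_mark mark = { .id = 0x1234 }; /* becomes rule->soft_id */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
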
+
 /**
- * Check if the flow rule is supported by ixgbe.
- * It only checkes the format. Don't guarantee the rule can be programmed into
- * the HW. Because there can be no enough room for the rule.
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule.
+ * And get the flow director filter info as well.
+ * UDP/TCP/SCTP PATTERN:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP or TCP or SCTP.
+ * The next not void item must be END.
+ * MAC VLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be MAC VLAN.
+ * The next not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
+ * UDP/TCP/SCTP pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4		src_addr 192.168.1.20	0xFFFFFFFF
+ *		dst_addr 192.167.3.50	0xFFFFFFFF
+ * UDP/TCP/SCTP	src_port	80	0xFFFF
+ *		dst_port	80	0xFFFF
+ * END
+ * MAC VLAN pattern example:
+ * ITEM		Spec			Mask
+ * ETH		dst_addr
+ *		{0xAC, 0x7B, 0xA1,	{0xFF, 0xFF, 0xFF,
+ *		0x2C, 0x6D, 0x36}	0xFF, 0xFF, 0xFF}
+ * MAC VLAN	tci	0x2016		0xFFFF
+ *		tpid	0x8100		0xFFFF
+ * END
+ * Other members in mask and spec should be set to 0x00.
+ * Item->last should be NULL.
  */
 static int
-ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
 {
-	struct rte_eth_ntuple_filter ntuple_filter;
-	struct rte_eth_ethertype_filter ethertype_filter;
-	struct rte_eth_syn_filter syn_filter;
-	struct rte_eth_l2_tunnel_conf l2_tn_filter;
-	int ret;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
 
-	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
-	ret = ixgbe_parse_ntuple_filter(attr, pattern,
-				actions, &ntuple_filter, error);
-	if (!ret)
-		return 0;
+	uint32_t index, j;
 
-	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
-	ret = ixgbe_parse_ethertype_filter(attr, pattern,
-				actions, &ethertype_filter, error);
-	if (!ret)
-		return 0;
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -rte_errno;
+	}
 
-	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
-	ret = ixgbe_parse_syn_filter(attr, pattern,
-				actions, &syn_filter, error);
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so we need not do anything for the fields not provided later.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or TCP or UDP or SCTP.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT;
+
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only support vlan and dst MAC address,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+			/* Get the dst MAC. */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				rule->ixgbe_fdir.formatted.inner_mac[j] =
+					eth_spec->dst.addr_bytes[j];
+			}
+		}
+
+
+		if (item->mask) {
+			/* If ethernet has meaning, it means MAC VLAN mode. */
+			rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+
+			rule->b_mask = TRUE;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Ether type should be masked. */
+			if (eth_mask->type) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/**
+			 * src MAC address must be masked out,
+			 * and dst MAC address must be fully matched.
+			 */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				if (eth_mask->src.addr_bytes[j] ||
+					eth_mask->dst.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					sizeof(struct ixgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* When no VLAN, considered as full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+		/** If both spec and mask are NULL,
+		 * it means don't care about ETH.
+		 * Do nothing.
+		 */
+
+		/**
+		 * Check if the next not void item is vlan or ipv4.
+		 * IPv6 is not supported.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		}
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one tag is not supported. */
+
+		/**
+		 * Check if the next not void item is not vlan.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the IP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_IPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		ipv4_mask =
+			(const struct rte_flow_item_ipv4 *)item->mask;
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.next_proto_id ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			rule->ixgbe_fdir.formatted.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->ixgbe_fdir.formatted.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_TCPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				tcp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_UDPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				udp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				udp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_SCTPV4;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask =
+			(const struct rte_flow_item_sctp *)item->mask;
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec =
+				(const struct rte_flow_item_sctp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				sctp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+	}
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
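
For illustration, the UDP/TCP/SCTP example documented above this function,
expressed as rte_flow item arrays (a sketch; the IPv4() address macro from
rte_ip.h is assumed):

	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = {
			.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 1, 20)),
			.dst_addr = rte_cpu_to_be_32(IPv4(192, 167, 3, 50)),
		},
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = { .src_addr = 0xFFFFFFFF, .dst_addr = 0xFFFFFFFF },
	};
	struct rte_flow_item_tcp tcp_spec = {
		.hdr = {
			.src_port = rte_cpu_to_be_16(80),
			.dst_port = rte_cpu_to_be_16(80),
		},
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr = { .src_port = 0xFFFF, .dst_port = 0xFFFF },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
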
+
+#define NVGRE_PROTOCOL 0x6558
+
+/**
+ * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
+ * And get the flow director filter info BTW.
+ * VxLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4/ IPV6.
+ * The third not void item must be UDP.
+ * The fourth not void item must be VxLAN.
+ * The fifth not void item must be the inner ETH (dst MAC),
+ * optionally followed by VLAN.
+ * The next not void item must be END.
+ * NVGRE PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be IPV4/ IPV6.
+ * The third not void item must be NVGRE.
+ * The fourth not void item must be the inner ETH (dst MAC),
+ * optionally followed by VLAN.
+ * The next not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
+ * VxLAN pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4/IPV6	NULL			NULL
+ * UDP		NULL			NULL
+ * VxLAN	vni{0x00, 0x32, 0x54}	{0xFF, 0xFF, 0xFF}
+ * END
+ * NVGRE pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4/IPV6	NULL			NULL
+ * NVGRE	protocol	0x6558	0xFFFF
+ *		tni{0x00, 0x32, 0x54}	{0xFF, 0xFF, 0xFF}
+ * END
+ * Other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule,
+			       struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+	uint32_t index, j;
+
+	if (!pattern) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL, "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "NULL attribute.");
+		return -rte_errno;
+	}
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * value, so we need not do anything for the fields not provided later.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/* parse pattern */
+	index = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or IPv6 or UDP or VxLAN or NVGRE.
+	 */
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+	/* Skip MAC. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is IPv4 or IPv6. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip IP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is UDP or NVGRE. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip UDP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is VxLAN. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Get the VxLAN info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_VXLAN;
+
+		/* Only care about VNI, others should be masked. */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		vxlan_mask =
+			(const struct rte_flow_item_vxlan *)item->mask;
+		if (vxlan_mask->flags) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* VNI must be totally masked or not. */
+		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
+			vxlan_mask->vni[2]) &&
+			((vxlan_mask->vni[0] != 0xFF) ||
+			(vxlan_mask->vni[1] != 0xFF) ||
+				(vxlan_mask->vni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
+			RTE_DIM(vxlan_mask->vni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			vxlan_spec = (const struct rte_flow_item_vxlan *)
+					item->spec;
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* Get the NVGRE info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_NVGRE;
+
+		/**
+		 * Only care about the flag bits (c_k_s_rsvd0_ver),
+		 * protocol and TNI; others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		nvgre_mask =
+			(const struct rte_flow_item_nvgre *)item->mask;
+		if (nvgre_mask->flow_id) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (nvgre_mask->c_k_s_rsvd0_ver !=
+			rte_cpu_to_be_16(0x3000) ||
+		    nvgre_mask->protocol != 0xFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* TNI must be totally masked or not. */
+		if ((nvgre_mask->tni[0] || nvgre_mask->tni[1] ||
+		    nvgre_mask->tni[2]) &&
+		    ((nvgre_mask->tni[0] != 0xFF) ||
+		    (nvgre_mask->tni[1] != 0xFF) ||
+		    (nvgre_mask->tni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		/* tni is a 24-bit field */
+		rte_memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni,
+			RTE_DIM(nvgre_mask->tni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			if (nvgre_spec->c_k_s_rsvd0_ver !=
+			    rte_cpu_to_be_16(0x2000) ||
+			    nvgre_spec->protocol !=
+			    rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			/* tni is a 24-bit field */
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+			nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* check if the next not void item is MAC */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/**
+	 * Only support vlan and dst MAC address,
+	 * others should be masked.
+	 */
+
+	if (!item->mask) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/*Not supported last point for range*/
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+	rule->b_mask = TRUE;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Ether type should be masked. */
+	if (eth_mask->type) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+
+	/* src MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (eth_mask->src.addr_bytes[j]) {
+			memset(rule, 0,
+			       sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+	rule->mask.mac_addr_byte_mask = 0;
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		/* It's a per byte mask. */
+		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+			rule->mask.mac_addr_byte_mask |= 0x1 << j;
+		} else if (eth_mask->dst.addr_bytes[j]) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* When no VLAN is given, consider the TCI mask as full. */
+	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+
+	if (item->spec) {
+		rule->b_spec = TRUE;
+		eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+		/* Get the dst MAC. */
+		for (j = 0; j < ETHER_ADDR_LEN; j++) {
+			rule->ixgbe_fdir.formatted.inner_mac[j] =
+				eth_spec->dst.addr_bytes[j];
+		}
+	}
+
+	/**
+	 * Check if the next not void item is vlan or ipv4.
+	 * IPv6 is not supported.
+	 */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
+		(item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by fdir filter");
+		return -rte_errno;
+	}
+	/* Ranges ("last") are not supported. */
+	if (item->last) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			item, "Not supported last point for range");
+		return -rte_errno;
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one VLAN tag is not supported. */
+
+		/* Check that the next not void item is END. */
+		index++;
+		NEXT_ITEM_OF_PATTERN(item, pattern, index);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/**
+	 * If the tag is 0, the VLAN is a don't-care.
+	 * Nothing to do.
+	 */
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
+}
+
+static int
+ixgbe_validate_fdir_filter(struct rte_eth_dev *dev,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+
+	ret = ixgbe_parse_fdir_filter(attr, pattern, actions,
+				rule, error);
+	if (ret)
+		return ret;
+
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
+		return -ENOTSUP;
+
+	return ret;
+}
+
+static int
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule,
+			struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = ixgbe_parse_fdir_filter_normal(attr, pattern,
+					actions, rule, error);
+
+	if (!ret)
+		return 0;
+
+	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
+					actions, rule, error);
+
+	return ret;
+}
+
+/**
+ * Check if the flow rule is supported by ixgbe.
+ * It only checks the format. It doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	int ret;
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+				actions, &ntuple_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = ixgbe_parse_syn_filter(attr, pattern,
+				actions, &syn_filter, error);
+	if (!ret)
+		return 0;
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_validate_fdir_filter(dev, attr, pattern,
+				actions, &fdir_rule, error);
 	if (!ret)
 		return 0;
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
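
For reference, a minimal application-side sketch of the validate path added
above (not from the patch set; the port and queue numbers are illustrative,
and the port_id type of rte_flow_validate() differs across DPDK releases):

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_udp.h>

/* Ask the PMD whether an ETH / IPv4 / UDP rule is acceptable. */
static int
check_rule(uint8_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec, ip_mask;
	struct rte_flow_item_udp udp_spec, udp_mask;
	struct rte_flow_item pattern[4];
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	memset(&ip_spec, 0, sizeof(ip_spec));
	memset(&ip_mask, 0, sizeof(ip_mask));
	/* IPv4() is the old rte_ip.h helper; newer DPDK calls it RTE_IPV4(). */
	ip_spec.hdr.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
	ip_mask.hdr.dst_addr = rte_cpu_to_be_32(UINT32_MAX);

	memset(&udp_spec, 0, sizeof(udp_spec));
	memset(&udp_mask, 0, sizeof(udp_mask));
	udp_spec.hdr.dst_port = rte_cpu_to_be_16(4789);
	udp_mask.hdr.dst_port = rte_cpu_to_be_16(UINT16_MAX);

	memset(pattern, 0, sizeof(pattern));
	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
	pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
	pattern[1].spec = &ip_spec;
	pattern[1].mask = &ip_mask;
	pattern[2].type = RTE_FLOW_ITEM_TYPE_UDP;
	pattern[2].spec = &udp_spec;
	pattern[2].mask = &udp_mask;
	pattern[3].type = RTE_FLOW_ITEM_TYPE_END;

	/* 0 means one of the parsers described above accepted the rule. */
	return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}

Whether such a rule lands in the n-tuple or the flow director table depends
on the parser order shown in ixgbe_flow_validate().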

* [PATCH v6 16/18] net/ixgbe: create consistent filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (14 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 15/18] net/ixgbe: parse flow director filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 17/18] net/ixgbe: destroy " Wei Zhao
                             ` (2 subsequent siblings)
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to create a flow rule for any of the
supported filter types.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  25 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h |  61 ++++++++++++
 drivers/net/ixgbe/ixgbe_flow.c   | 194 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 266 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4b9f4aa..3648e12 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -305,9 +305,6 @@ static void ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
 static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static void ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 					     struct ether_addr *mac_addr);
-static int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
-			struct rte_eth_syn_filter *filter,
-			bool add);
 static int ixgbe_syn_filter_get(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter);
 static int ixgbe_syn_filter_handle(struct rte_eth_dev *dev,
@@ -317,17 +314,11 @@ static int ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
 static void ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
 			struct ixgbe_5tuple_filter *filter);
-static int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ntuple_filter *filter,
-			bool add);
 static int ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
 static int ixgbe_get_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *filter);
-static int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
-			struct rte_eth_ethertype_filter *filter,
-			bool add);
 static int ixgbe_ethertype_filter_handle(struct rte_eth_dev *dev,
 				enum rte_filter_op filter_op,
 				void *arg);
@@ -1292,6 +1283,14 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* initialize l2 tunnel filter list & hash */
 	ixgbe_l2_tn_filter_init(eth_dev);
+
+	TAILQ_INIT(&filter_ntuple_list);
+	TAILQ_INIT(&filter_ethertype_list);
+	TAILQ_INIT(&filter_syn_list);
+	TAILQ_INIT(&filter_fdir_list);
+	TAILQ_INIT(&filter_l2_tunnel_list);
+	TAILQ_INIT(&ixgbe_flow_list);
+
 	return 0;
 }
 
@@ -5742,7 +5741,7 @@ ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr)
 		return -ENOTSUP;\
 } while (0)
 
-static int
+int
 ixgbe_syn_filter_set(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter,
 			bool add)
@@ -6121,7 +6120,7 @@ ntuple_filter_to_5tuple(struct rte_eth_ntuple_filter *filter,
  *    - On success, zero.
  *    - On failure, a negative value.
  */
-static int
+int
 ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ntuple_filter *ntuple_filter,
 			bool add)
@@ -6266,7 +6265,7 @@ ixgbe_ntuple_filter_handle(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static int
+int
 ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 			struct rte_eth_ethertype_filter *filter,
 			bool add)
@@ -7345,7 +7344,7 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 }
 
 /* Add l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index dee159f..6908c3d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -329,6 +329,54 @@ struct ixgbe_l2_tn_info {
 	bool e_tag_ether_type; /* ether type for e-tag */
 };
 
+struct rte_flow {
+	enum rte_filter_type filter_type;
+	void *rule;
+};
+/* ntuple filter list structure */
+struct ixgbe_ntuple_filter_ele {
+	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
+	struct rte_eth_ntuple_filter filter_info;
+};
+/* ethertype filter list structure */
+struct ixgbe_ethertype_filter_ele {
+	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
+	struct rte_eth_ethertype_filter filter_info;
+};
+/* syn filter list structure */
+struct ixgbe_eth_syn_filter_ele {
+	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
+	struct rte_eth_syn_filter filter_info;
+};
+/* fdir filter list structure */
+struct ixgbe_fdir_rule_ele {
+	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
+	struct ixgbe_fdir_rule filter_info;
+};
+/* l2_tunnel filter list structure */
+struct ixgbe_eth_l2_tunnel_conf_ele {
+	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
+	struct rte_eth_l2_tunnel_conf filter_info;
+};
+/* ixgbe_flow memory list structure */
+struct ixgbe_flow_mem {
+	TAILQ_ENTRY(ixgbe_flow_mem) entries;
+	struct rte_flow *flow;
+};
+
+TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
+struct ixgbe_ntuple_filter_list filter_ntuple_list;
+TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
+struct ixgbe_ethertype_filter_list filter_ethertype_list;
+TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
+struct ixgbe_syn_filter_list filter_syn_list;
+TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
+struct ixgbe_fdir_rule_filter_list filter_fdir_list;
+TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
+struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
+TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
+struct ixgbe_flow_mem_list ixgbe_flow_list;
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -487,6 +535,19 @@ uint32_t ixgbe_rssrk_reg_get(enum ixgbe_mac_type mac_type, uint8_t i);
 
 bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
 
+int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ntuple_filter *filter,
+			bool add);
+int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+			struct rte_eth_ethertype_filter *filter,
+			bool add);
+int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+			struct rte_eth_syn_filter *filter,
+			bool add);
+int
+ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
+			       bool restore);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 257cd6f..16b43d9 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -157,10 +157,15 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
-	NULL,
+	ixgbe_flow_create,
 	NULL,
 	ixgbe_flow_flush,
 	NULL,
@@ -2437,6 +2442,193 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 }
 
 /**
+ * Create a flow rule.
+ * Theoretically one rule can match more than one filter type.
+ * We will let it use the filter type it hits first,
+ * so the sequence of the parsers below matters.
+ */
+static struct rte_flow *
+ixgbe_flow_create(struct rte_eth_dev *dev,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_hw_fdir_info *fdir_info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct rte_flow *flow = NULL;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
+			sizeof(struct ixgbe_flow_mem), 0);
+	if (!ixgbe_flow_mem_ptr) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		rte_free(flow);
+		return NULL;
+	}
+	ixgbe_flow_mem_ptr->flow = flow;
+	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+
+	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+	ret = ixgbe_parse_ntuple_filter(attr, pattern,
+			actions, &ntuple_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
+		if (!ret) {
+			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
+				sizeof(struct ixgbe_ntuple_filter_ele), 0);
+			(void)rte_memcpy(&ntuple_filter_ptr->filter_info,
+				&ntuple_filter,
+				sizeof(struct rte_eth_ntuple_filter));
+			TAILQ_INSERT_TAIL(&filter_ntuple_list,
+				ntuple_filter_ptr, entries);
+			flow->rule = ntuple_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
+	ret = ixgbe_parse_ethertype_filter(attr, pattern,
+				actions, &ethertype_filter, error);
+	if (!ret) {
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, TRUE);
+		if (!ret) {
+			ethertype_filter_ptr = rte_zmalloc(
+				"ixgbe_ethertype_filter",
+				sizeof(struct ixgbe_ethertype_filter_ele), 0);
+			(void)rte_memcpy(&ethertype_filter_ptr->filter_info,
+				&ethertype_filter,
+				sizeof(struct rte_eth_ethertype_filter));
+			TAILQ_INSERT_TAIL(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			flow->rule = ethertype_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
+	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter, error);
+	if (!ret) {
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
+		if (!ret) {
+			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
+				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
+			(void)rte_memcpy(&syn_filter_ptr->filter_info,
+				&syn_filter,
+				sizeof(struct rte_eth_syn_filter));
+			TAILQ_INSERT_TAIL(&filter_syn_list,
+				syn_filter_ptr,
+				entries);
+			flow->rule = syn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_SYN;
+			return flow;
+		}
+		goto out;
+	}
+
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_parse_fdir_filter(attr, pattern,
+				actions, &fdir_rule, error);
+	if (!ret) {
+		/* The fdir mask is global; once set it cannot be deleted. */
+		if (fdir_rule.b_mask) {
+			if (!fdir_info->mask_added) {
+				/* It's the first time the mask is set. */
+				rte_memcpy(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				ret = ixgbe_fdir_set_input_mask(dev);
+				if (ret)
+					goto out;
+
+				fdir_info->mask_added = TRUE;
+			} else {
+				/**
+				 * Only one global mask is supported;
+				 * all rules must use the same mask.
+				 */
+				ret = memcmp(&fdir_info->mask,
+					&fdir_rule.mask,
+					sizeof(struct ixgbe_hw_fdir_mask));
+				if (ret)
+					goto out;
+			}
+		}
+
+		if (fdir_rule.b_spec) {
+			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
+					FALSE, FALSE);
+			if (!ret) {
+				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
+					sizeof(struct ixgbe_fdir_rule_ele), 0);
+				(void)rte_memcpy(&fdir_rule_ptr->filter_info,
+					&fdir_rule,
+					sizeof(struct ixgbe_fdir_rule));
+				TAILQ_INSERT_TAIL(&filter_fdir_list,
+					fdir_rule_ptr, entries);
+				flow->rule = fdir_rule_ptr;
+				flow->filter_type = RTE_ETH_FILTER_FDIR;
+
+				return flow;
+			}
+
+			if (ret)
+				goto out;
+		}
+
+		goto out;
+	}
+
+	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
+	ret = cons_parse_l2_tn_filter(attr, pattern,
+					actions, &l2_tn_filter, error);
+	if (!ret) {
+		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
+		if (!ret) {
+			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
+				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
+			(void)rte_memcpy(&l2_tn_filter_ptr->filter_info,
+				&l2_tn_filter,
+				sizeof(struct rte_eth_l2_tunnel_conf));
+			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			flow->rule = l2_tn_filter_ptr;
+			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
+			return flow;
+		}
+	}
+
+out:
+	TAILQ_REMOVE(&ixgbe_flow_list,
+		ixgbe_flow_mem_ptr, entries);
+	rte_free(ixgbe_flow_mem_ptr);
+	rte_free(flow);
+	return NULL;
+}
+
+/**
  * Check if the flow rule is supported by ixgbe.
  * It only checks the format. It doesn't guarantee that the rule can be
  * programmed into the HW, because there may not be enough room for it.
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
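
As a usage note (illustrative only, not part of the patch set), the
application side of the new create hook is just:

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/*
 * Install one rule and keep the opaque handle. On failure the PMD has
 * filled the error structure through rte_flow_error_set().
 */
static struct rte_flow *
install_rule(uint8_t port_id,
	     const struct rte_flow_attr *attr,
	     const struct rte_flow_item pattern[],
	     const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, attr, pattern, actions, &error);
	if (flow == NULL)
		fprintf(stderr, "flow create failed: %s\n",
			error.message ? error.message : "(no reason given)");
	return flow;
}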

* [PATCH v6 17/18] net/ixgbe: destroy consistent filter
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (15 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 16/18] net/ixgbe: create consistent filter Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13  8:13           ` [PATCH v6 18/18] net/ixgbe: flush all the filter list Wei Zhao
  2017-01-13 15:54           ` [PATCH v6 00/18] net/ixgbe: Consistent filter API Ferruh Yigit
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to destroy a flow filter.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 drivers/net/ixgbe/ixgbe_flow.c   | 117 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3648e12..1112a3e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -7401,7 +7401,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 }
 
 /* Delete l2 tunnel filter */
-static int
+int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel)
 {
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 6908c3d..01c18cd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -548,6 +548,9 @@ int
 ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel,
 			       bool restore);
+int
+ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 16b43d9..20aefa6 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -162,11 +162,14 @@ static struct rte_flow *ixgbe_flow_create(struct rte_eth_dev *dev,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
+static int ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error);
 
 const struct rte_flow_ops ixgbe_flow_ops = {
 	ixgbe_flow_validate,
 	ixgbe_flow_create,
-	NULL,
+	ixgbe_flow_destroy,
 	ixgbe_flow_flush,
 	NULL,
 };
@@ -2678,6 +2681,118 @@ ixgbe_flow_validate(__rte_unused struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Destroy a flow rule on ixgbe. */
+static int
+ixgbe_flow_destroy(struct rte_eth_dev *dev,
+		struct rte_flow *flow,
+		struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_flow *pmd_flow = flow;
+	enum rte_filter_type filter_type = pmd_flow->filter_type;
+	struct rte_eth_ntuple_filter ntuple_filter;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct rte_eth_syn_filter syn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
+	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_NTUPLE:
+		ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ntuple_filter,
+			&ntuple_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ntuple_filter));
+		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ntuple_list,
+			ntuple_filter_ptr, entries);
+			rte_free(ntuple_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
+					pmd_flow->rule;
+		(void)rte_memcpy(&ethertype_filter,
+			&ethertype_filter_ptr->filter_info,
+			sizeof(struct rte_eth_ethertype_filter));
+		ret = ixgbe_add_del_ethertype_filter(dev,
+				&ethertype_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_ethertype_list,
+				ethertype_filter_ptr, entries);
+			rte_free(ethertype_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_SYN:
+		syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&syn_filter,
+			&syn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_syn_filter));
+		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_syn_list,
+				syn_filter_ptr, entries);
+			rte_free(syn_filter_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_FDIR:
+		fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
+		(void)rte_memcpy(&fdir_rule,
+			&fdir_rule_ptr->filter_info,
+			sizeof(struct ixgbe_fdir_rule));
+		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_fdir_list,
+				fdir_rule_ptr, entries);
+			rte_free(fdir_rule_ptr);
+		}
+		break;
+	case RTE_ETH_FILTER_L2_TUNNEL:
+		l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
+				pmd_flow->rule;
+		(void)rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+			sizeof(struct rte_eth_l2_tunnel_conf));
+		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_l2_tunnel_list,
+				l2_tn_filter_ptr, entries);
+			rte_free(l2_tn_filter_ptr);
+		}
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
+			    filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE,
+				NULL, "Failed to destroy flow");
+		return ret;
+	}
+
+	TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
+		if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
+			TAILQ_REMOVE(&ixgbe_flow_list,
+				ixgbe_flow_mem_ptr, entries);
+			rte_free(ixgbe_flow_mem_ptr);
+			/* The node was freed; iterating further is unsafe. */
+			break;
+		}
+	}
+	rte_free(flow);
+
+	return ret;
+}
+
 /*  Destroy all flow rules associated with a port on ixgbe. */
 static int
 ixgbe_flow_flush(struct rte_eth_dev *dev,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
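
A hedged sketch of the matching application-side call (illustrative only;
the handle must be one previously returned by rte_flow_create() on the
same port):

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Tear down one rule previously returned by rte_flow_create(). */
static int
remove_rule(uint8_t port_id, struct rte_flow *flow)
{
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_destroy(port_id, flow, &error);
	if (ret)
		fprintf(stderr, "flow destroy failed: %s\n",
			error.message ? error.message : "(no reason given)");
	/* On success the PMD freed the handle; do not reuse it. */
	return ret;
}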

* [PATCH v6 18/18] net/ixgbe: flush all the filter list
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (16 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 17/18] net/ixgbe: destroy " Wei Zhao
@ 2017-01-13  8:13           ` Wei Zhao
  2017-01-13 15:54           ` [PATCH v6 00/18] net/ixgbe: Consistent filter API Ferruh Yigit
  18 siblings, 0 replies; 149+ messages in thread
From: Wei Zhao @ 2017-01-13  8:13 UTC (permalink / raw)
  To: dev; +Cc: Wei Zhao, Wenzhuo Lu

This patch adds a function to flush all the filter lists
on a port.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  3 +++
 drivers/net/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/ixgbe/ixgbe_flow.c   | 56 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 1112a3e..f46507b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1341,6 +1341,9 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	/* Remove all ntuple filters of the device */
 	ixgbe_ntuple_filter_uninit(eth_dev);
 
+	/* clear all the filters list */
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 01c18cd..5b31149 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -551,6 +551,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct rte_eth_l2_tunnel_conf *l2_tunnel);
+void ixgbe_filterlist_flush(void);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 20aefa6..82aceed 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2444,6 +2444,60 @@ ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
 	return ret;
 }
 
+void
+ixgbe_filterlist_flush(void)
+{
+	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
+	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
+	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
+	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
+	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
+	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+
+	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
+		TAILQ_REMOVE(&filter_ntuple_list,
+				 ntuple_filter_ptr,
+				 entries);
+		rte_free(ntuple_filter_ptr);
+	}
+
+	while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
+		TAILQ_REMOVE(&filter_ethertype_list,
+				 ethertype_filter_ptr,
+				 entries);
+		rte_free(ethertype_filter_ptr);
+	}
+
+	while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
+		TAILQ_REMOVE(&filter_syn_list,
+				 syn_filter_ptr,
+				 entries);
+		rte_free(syn_filter_ptr);
+	}
+
+	while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
+		TAILQ_REMOVE(&filter_l2_tunnel_list,
+				 l2_tn_filter_ptr,
+				 entries);
+		rte_free(l2_tn_filter_ptr);
+	}
+
+	while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
+		TAILQ_REMOVE(&filter_fdir_list,
+				 fdir_rule_ptr,
+				 entries);
+		rte_free(fdir_rule_ptr);
+	}
+
+	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
+		TAILQ_REMOVE(&ixgbe_flow_list,
+				 ixgbe_flow_mem_ptr,
+				 entries);
+		rte_free(ixgbe_flow_mem_ptr->flow);
+		rte_free(ixgbe_flow_mem_ptr);
+	}
+}
+
 /**
  * Create or destroy a flow rule.
  * Theorically one rule can match more than one filters.
@@ -2818,5 +2872,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 		return ret;
 	}
 
+	ixgbe_filterlist_flush();
+
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 149+ messages in thread
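
The application-side counterpart is a single call (a sketch, not part of
the patch set):

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Drop every rule installed on the port in one call. */
static int
remove_all_rules(uint8_t port_id)
{
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_flush(port_id, &error);
	if (ret)
		fprintf(stderr, "flow flush failed: %s\n",
			error.message ? error.message : "(no reason given)");
	/* Every handle from rte_flow_create() on this port is now invalid. */
	return ret;
}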

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-13  8:13           ` [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
@ 2017-01-13 11:18             ` Ferruh Yigit
  2017-01-16 13:03             ` Adrien Mazarguil
  1 sibling, 0 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-13 11:18 UTC (permalink / raw)
  To: Wei Zhao, dev, Adrien Mazarguil; +Cc: Wenzhuo Lu

On 1/13/2017 8:13 AM, Wei Zhao wrote:
> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
>  drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
>  lib/librte_ether/rte_flow.h      |  48 +++++++++
>  3 files changed, 266 insertions(+), 1 deletion(-)

> <...>

Hi Adrien,

Can you please review rte_flow related part of the patch below?

I am OK with rest of the patch, and planning to apply to next-net if you
don't have any objection.

Thanks,
ferruh


>  
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index 98084ac..7142479 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
>  	 * See struct rte_flow_item_vxlan.
>  	 */
>  	RTE_FLOW_ITEM_TYPE_VXLAN,
> +
> +	/**
> +	 * Matches a E_TAG header.
> +	 *
> +	 * See struct rte_flow_item_e_tag.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_E_TAG,
> +
> +	/**
> +	 * Matches a NVGRE header.
> +	 *
> +	 * See struct rte_flow_item_nvgre.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_NVGRE,
>  };
>  
>  /**
> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
>  };
>  
>  /**
> + * RTE_FLOW_ITEM_TYPE_E_TAG.
> + *
> + * Matches a E-tag header.
> + */
> +struct rte_flow_item_e_tag {
> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
> +	/** E-Tag control information (E-TCI). */
> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
> +	uint16_t epcp_edei_in_ecid_b;
> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
> +	uint16_t rsvd_grp_ecid_b;
> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
> +	uint8_t ecid_e; /**< E-CID ext. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_NVGRE.
> + *
> + * Matches a NVGRE header.
> + */
> +struct rte_flow_item_nvgre {
> +     /**
> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
> +      * reserved 0 (9b), version (3b).
> +      *
> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
> +      */
> +	uint16_t c_k_s_rsvd0_ver;
> +	uint16_t protocol; /**< Protocol type (0x6558). */
> +	uint8_t tni[3]; /**< Virtual subnet ID. */
> +	uint8_t flow_id; /**< Flow ID. */
> +};
> +
> +/**
>   * Matching pattern item definition.
>   *
>   * A pattern is formed by stacking items starting from the lowest protocol
> 

^ permalink raw reply	[flat|nested] 149+ messages in thread
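
To illustrate the new item definitions (a sketch only, assuming the network
byte order used elsewhere in rte_flow; nvgre_item_fill() is a hypothetical
helper, not part of any patch):

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Fill an NVGRE item that matches one 24-bit TNI, fully masked. */
static void
nvgre_item_fill(struct rte_flow_item *item,
		struct rte_flow_item_nvgre *spec,
		struct rte_flow_item_nvgre *mask,
		uint32_t tni)
{
	memset(spec, 0, sizeof(*spec));
	memset(mask, 0, sizeof(*mask));
	/* Mandatory header bits per RFC 7637: key bit set, version 0. */
	spec->c_k_s_rsvd0_ver = rte_cpu_to_be_16(0x2000);
	/* Transparent Ethernet Bridging. */
	spec->protocol = rte_cpu_to_be_16(0x6558);
	spec->tni[0] = (tni >> 16) & 0xFF;
	spec->tni[1] = (tni >> 8) & 0xFF;
	spec->tni[2] = tni & 0xFF;
	/* The ixgbe parser requires the TNI fully masked or not masked. */
	memset(mask->tni, 0xFF, sizeof(mask->tni));

	item->type = RTE_FLOW_ITEM_TYPE_NVGRE;
	item->spec = spec;
	item->last = NULL;
	item->mask = mask;
}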

* Re: [PATCH v6 00/18] net/ixgbe: Consistent filter API
  2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
                             ` (17 preceding siblings ...)
  2017-01-13  8:13           ` [PATCH v6 18/18] net/ixgbe: flush all the filter list Wei Zhao
@ 2017-01-13 15:54           ` Ferruh Yigit
  2017-01-15  2:44             ` Lu, Wenzhuo
  2017-01-18 11:04             ` Thomas Monjalon
  18 siblings, 2 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-13 15:54 UTC (permalink / raw)
  To: Wei Zhao, dev

On 1/13/2017 8:12 AM, Wei Zhao wrote:
> The patches mainly finish following functions:
> 1) Store and restore all kinds of filters.
> 2) Parse all kinds of filters.
> 3) Add flow validate function.
> 4) Add flow create function.
> 5) Add flow destroy function.
> 6) Add flow flush function.
> 
<...>
> 
> zhao wei (18):
>   net/ixgbe: store TCP SYN filter
>   net/ixgbe: store flow director filter
>   net/ixgbe: store L2 tunnel filter
>   net/ixgbe: restore n-tuple filter
>   net/ixgbe: restore ether type filter
>   net/ixgbe: restore TCP SYN filter
>   net/ixgbe: restore flow director filter
>   net/ixgbe: restore L2 tunnel filter
>   net/ixgbe: store and restore L2 tunnel configuration
>   net/ixgbe: flush all the filters
>   net/ixgbe: parse n-tuple filter
>   net/ixgbe: parse ethertype filter
>   net/ixgbe: parse TCP SYN filter
>   net/ixgbe: parse L2 tunnel filter
>   net/ixgbe: parse flow director filter
>   net/ixgbe: create consistent filter
>   net/ixgbe: destroy consistent filter
>   net/ixgbe: flush all the filter list
> 
<...>
> 
> Acked-by: Beilei Xing <beilei.xing@intel.com>
> Acked-by: Wei Dai <wei.dai@intel.com>
> 

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v6 00/18] net/ixgbe: Consistent filter API
  2017-01-13 15:54           ` [PATCH v6 00/18] net/ixgbe: Consistent filter API Ferruh Yigit
@ 2017-01-15  2:44             ` Lu, Wenzhuo
  2017-01-18 11:04             ` Thomas Monjalon
  1 sibling, 0 replies; 149+ messages in thread
From: Lu, Wenzhuo @ 2017-01-15  2:44 UTC (permalink / raw)
  To: Zhao1, Wei, dev

Hi Ferruh,

> Series applied to dpdk-next-net/master, thanks.
Great, thanks for applying this patch set!

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-13  8:13           ` [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
  2017-01-13 11:18             ` Ferruh Yigit
@ 2017-01-16 13:03             ` Adrien Mazarguil
  2017-01-16 16:39               ` Ferruh Yigit
  1 sibling, 1 reply; 149+ messages in thread
From: Adrien Mazarguil @ 2017-01-16 13:03 UTC (permalink / raw)
  To: Wei Zhao; +Cc: dev, Wenzhuo Lu

Hi,

On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
>  drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
>  lib/librte_ether/rte_flow.h      |  48 +++++++++
>  3 files changed, 266 insertions(+), 1 deletion(-)
[...]
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index 98084ac..7142479 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
>  	 * See struct rte_flow_item_vxlan.
>  	 */
>  	RTE_FLOW_ITEM_TYPE_VXLAN,
> +
> +	/**
> +	 * Matches a E_TAG header.
> +	 *
> +	 * See struct rte_flow_item_e_tag.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_E_TAG,
> +
> +	/**
> +	 * Matches a NVGRE header.
> +	 *
> +	 * See struct rte_flow_item_nvgre.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_NVGRE,
>  };
>  
>  /**
> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
>  };
>  
>  /**
> + * RTE_FLOW_ITEM_TYPE_E_TAG.
> + *
> + * Matches a E-tag header.
> + */
> +struct rte_flow_item_e_tag {
> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
> +	/** E-Tag control information (E-TCI). */
> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
> +	uint16_t epcp_edei_in_ecid_b;
> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
> +	uint16_t rsvd_grp_ecid_b;
> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
> +	uint8_t ecid_e; /**< E-CID ext. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_NVGRE.
> + *
> + * Matches a NVGRE header.
> + */
> +struct rte_flow_item_nvgre {
> +     /**
> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
> +      * reserved 0 (9b), version (3b).
> +      *
> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
> +      */
> +	uint16_t c_k_s_rsvd0_ver;
> +	uint16_t protocol; /**< Protocol type (0x6558). */
> +	uint8_t tni[3]; /**< Virtual subnet ID. */
> +	uint8_t flow_id; /**< Flow ID. */
> +};
> +
> +/**
>   * Matching pattern item definition.
>   *
>   * A pattern is formed by stacking items starting from the lowest protocol
> -- 
> 2.5.5
> 

OK for these definitions, however API documentation
(doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it would
be great if testpmd support for these new items was added simultaneously
(changes in app/test-pmd/cmdline.c, app/test-pmd/cmdline_flow.c and
doc/guides/testpmd_app_ug/testpmd_funcs.rst).

How about putting all rte_flow changes (API & testpmd) in their own separate
patch?

You could use VLAN PCP/DEI/VID definitions as an example to expose partial
bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:

 1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")

Now if re-spinning this series yet again is too much work, you can go ahead
with this commit as long as you do not forget to submit the rest later,
thanks.

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 149+ messages in thread
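
To make the partial bit-field point concrete, a hypothetical helper (not
from any patch; it assumes the struct layout quoted above and network byte
order) that matches only the 12-bit ingress E-CID base while leaving E-PCP
and E-DEI wildcarded:

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

static void
e_tag_ecid_match(struct rte_flow_item_e_tag *spec,
		 struct rte_flow_item_e_tag *mask,
		 uint16_t ecid_base)
{
	memset(spec, 0, sizeof(*spec));
	memset(mask, 0, sizeof(*mask));
	/* The low 12 bits of the E-TCI carry the ingress E-CID base. */
	spec->epcp_edei_in_ecid_b = rte_cpu_to_be_16(ecid_base & 0x0FFF);
	mask->epcp_edei_in_ecid_b = rte_cpu_to_be_16(0x0FFF);
}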

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-16 13:03             ` Adrien Mazarguil
@ 2017-01-16 16:39               ` Ferruh Yigit
  2017-01-16 18:26                 ` Adrien Mazarguil
  2017-01-17  9:27                 ` Zhao1, Wei
  0 siblings, 2 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-16 16:39 UTC (permalink / raw)
  To: Adrien Mazarguil, Wei Zhao; +Cc: dev, Wenzhuo Lu

On 1/16/2017 1:03 PM, Adrien Mazarguil wrote:
> Hi,
> 
> On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
>> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
>>
>> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>> ---
>>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
>>  drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
>>  lib/librte_ether/rte_flow.h      |  48 +++++++++
>>  3 files changed, 266 insertions(+), 1 deletion(-)
> [...]
>> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
>> index 98084ac..7142479 100644
>> --- a/lib/librte_ether/rte_flow.h
>> +++ b/lib/librte_ether/rte_flow.h
>> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
>>  	 * See struct rte_flow_item_vxlan.
>>  	 */
>>  	RTE_FLOW_ITEM_TYPE_VXLAN,
>> +
>> +	/**
>> +	 * Matches a E_TAG header.
>> +	 *
>> +	 * See struct rte_flow_item_e_tag.
>> +	 */
>> +	RTE_FLOW_ITEM_TYPE_E_TAG,
>> +
>> +	/**
>> +	 * Matches a NVGRE header.
>> +	 *
>> +	 * See struct rte_flow_item_nvgre.
>> +	 */
>> +	RTE_FLOW_ITEM_TYPE_NVGRE,
>>  };
>>  
>>  /**
>> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
>>  };
>>  
>>  /**
>> + * RTE_FLOW_ITEM_TYPE_E_TAG.
>> + *
>> + * Matches a E-tag header.
>> + */
>> +struct rte_flow_item_e_tag {
>> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
>> +	/** E-Tag control information (E-TCI). */
>> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
>> +	uint16_t epcp_edei_in_ecid_b;
>> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
>> +	uint16_t rsvd_grp_ecid_b;
>> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
>> +	uint8_t ecid_e; /**< E-CID ext. */
>> +};
>> +
>> +/**
>> + * RTE_FLOW_ITEM_TYPE_NVGRE.
>> + *
>> + * Matches a NVGRE header.
>> + */
>> +struct rte_flow_item_nvgre {
>> +     /**
>> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
>> +      * reserved 0 (9b), version (3b).
>> +      *
>> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
>> +      */
>> +	uint16_t c_k_s_rsvd0_ver;
>> +	uint16_t protocol; /**< Protocol type (0x6558). */
>> +	uint8_t tni[3]; /**< Virtual subnet ID. */
>> +	uint8_t flow_id; /**< Flow ID. */
>> +};
>> +
>> +/**
>>   * Matching pattern item definition.
>>   *
>>   * A pattern is formed by stacking items starting from the lowest protocol
>> -- 
>> 2.5.5
>>
> 
> OK for these definitions, however API documentation
> (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it would
> be great if testpmd support for these new items was added simultaneously
> (changes in app/test-pmd/cmdline.c, app/test-pmd/cmdline_flow.c and
> doc/guides/testpmd_app_ug/testpmd_funcs.rst).
> 
> How about putting all rte_flow changes (API & testpmd) in their own separate
> patch?

I thought it can be more useful to have library and its user updated in
same patch, gives more context. But missed rte_flow documentation ...

> 
> You could use VLAN PCP/DEI/VID definitions as an example to expose partial
> bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
> 
>  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
> 
> Now if re-spinning this series yet again is too much work, you can go ahead
> with this commit as long as you do not forget to submit the rest later,
> thanks.
> 

Is following todo list complete:
1- update rte_flow document, doc/guides/prog_guide/rte_flow.rst,
document two new item types: E_TAG & NVGRE.

2- Add testpmd sample implementation and documentation.


Hi Wei,

Would you mind working on a patch to cover above items?

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-16 16:39               ` Ferruh Yigit
@ 2017-01-16 18:26                 ` Adrien Mazarguil
  2017-01-17  9:27                 ` Zhao1, Wei
  1 sibling, 0 replies; 149+ messages in thread
From: Adrien Mazarguil @ 2017-01-16 18:26 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Wei Zhao, dev, Wenzhuo Lu

On Mon, Jan 16, 2017 at 04:39:08PM +0000, Ferruh Yigit wrote:
> On 1/16/2017 1:03 PM, Adrien Mazarguil wrote:
> > Hi,
> > 
> > On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
> >> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
> >>
> >> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> >> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >> ---
> >>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
> >>  drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
> >>  lib/librte_ether/rte_flow.h      |  48 +++++++++
> >>  3 files changed, 266 insertions(+), 1 deletion(-)
> > [...]
> >> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> >> index 98084ac..7142479 100644
> >> --- a/lib/librte_ether/rte_flow.h
> >> +++ b/lib/librte_ether/rte_flow.h
> >> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
> >>  	 * See struct rte_flow_item_vxlan.
> >>  	 */
> >>  	RTE_FLOW_ITEM_TYPE_VXLAN,
> >> +
> >> +	/**
> >> +	 * Matches a E_TAG header.
> >> +	 *
> >> +	 * See struct rte_flow_item_e_tag.
> >> +	 */
> >> +	RTE_FLOW_ITEM_TYPE_E_TAG,
> >> +
> >> +	/**
> >> +	 * Matches a NVGRE header.
> >> +	 *
> >> +	 * See struct rte_flow_item_nvgre.
> >> +	 */
> >> +	RTE_FLOW_ITEM_TYPE_NVGRE,
> >>  };
> >>  
> >>  /**
> >> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
> >>  };
> >>  
> >>  /**
> >> + * RTE_FLOW_ITEM_TYPE_E_TAG.
> >> + *
> >> + * Matches a E-tag header.
> >> + */
> >> +struct rte_flow_item_e_tag {
> >> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
> >> +	/** E-Tag control information (E-TCI). */
> >> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
> >> +	uint16_t epcp_edei_in_ecid_b;
> >> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
> >> +	uint16_t rsvd_grp_ecid_b;
> >> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
> >> +	uint8_t ecid_e; /**< E-CID ext. */
> >> +};
> >> +
> >> +/**
> >> + * RTE_FLOW_ITEM_TYPE_NVGRE.
> >> + *
> >> + * Matches a NVGRE header.
> >> + */
> >> +struct rte_flow_item_nvgre {
> >> +     /**
> >> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
> >> +      * reserved 0 (9b), version (3b).
> >> +      *
> >> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
> >> +      */
> >> +	uint16_t c_k_s_rsvd0_ver;
> >> +	uint16_t protocol; /**< Protocol type (0x6558). */
> >> +	uint8_t tni[3]; /**< Virtual subnet ID. */
> >> +	uint8_t flow_id; /**< Flow ID. */
> >> +};
> >> +
> >> +/**
> >>   * Matching pattern item definition.
> >>   *
> >>   * A pattern is formed by stacking items starting from the lowest protocol
> >> -- 
> >> 2.5.5
> >>
> > 
> > OK for these definitions, however API documentation
> > (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it would
> > be great if testpmd support for these new items was added simultaneously
> > (changes in app/test-pmd/cmdline.c, app/test-pmd/cmdline_flow.c and
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst).
> > 
> > How about putting all rte_flow changes (API & testpmd) in their own separate
> > patch?
> 
> I thought it can be more useful to have library and its user updated in
> same patch, gives more context. But missed rte_flow documentation ...

Not much of an issue as long as it's updated, also I did not notice the
series was already applied before replying.

> > You could use VLAN PCP/DEI/VID definitions as an example to expose partial
> > bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
> > 
> >  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
> > 
> > Now if re-spinning this series yet again is too much work, you can go ahead
> > with this commit as long as you do not forget to submit the rest later,
> > thanks.
> > 
> 
> Is following todo list complete:
> 1- update rte_flow document, doc/guides/prog_guide/rte_flow.rst,
> document two new item types: E_TAG & NVGRE.
> 
> 2- Add testpmd sample implementation and documentation.

Right, everything at once in a single patch is fine. The existing item
definitions and testpmd commands can be used as templates.

Thanks.

> Hi Wei,
> 
> Would you mind working on a patch to cover above items?
> 
> Thanks,
> ferruh
> 

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-16 16:39               ` Ferruh Yigit
  2017-01-16 18:26                 ` Adrien Mazarguil
@ 2017-01-17  9:27                 ` Zhao1, Wei
  2017-01-17 10:03                   ` Ferruh Yigit
  1 sibling, 1 reply; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-17  9:27 UTC (permalink / raw)
  To: Yigit, Ferruh, Adrien Mazarguil; +Cc: dev, Lu, Wenzhuo

Hi, Ferruh

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, January 17, 2017 12:39 AM
> To: Adrien Mazarguil <adrien.mazarguil@6wind.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
> 
> On 1/16/2017 1:03 PM, Adrien Mazarguil wrote:
> > Hi,
> >
> > On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
> >> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
> >>
> >> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> >> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >> ---
> >>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
> >>  drivers/net/ixgbe/ixgbe_flow.c   | 216
> +++++++++++++++++++++++++++++++++++++++
> >>  lib/librte_ether/rte_flow.h      |  48 +++++++++
> >>  3 files changed, 266 insertions(+), 1 deletion(-)
> > [...]
> >> diff --git a/lib/librte_ether/rte_flow.h
> >> b/lib/librte_ether/rte_flow.h index 98084ac..7142479 100644
> >> --- a/lib/librte_ether/rte_flow.h
> >> +++ b/lib/librte_ether/rte_flow.h
> >> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
> >>  	 * See struct rte_flow_item_vxlan.
> >>  	 */
> >>  	RTE_FLOW_ITEM_TYPE_VXLAN,
> >> +
> >> +	/**
> >> +	 * Matches a E_TAG header.
> >> +	 *
> >> +	 * See struct rte_flow_item_e_tag.
> >> +	 */
> >> +	RTE_FLOW_ITEM_TYPE_E_TAG,
> >> +
> >> +	/**
> >> +	 * Matches a NVGRE header.
> >> +	 *
> >> +	 * See struct rte_flow_item_nvgre.
> >> +	 */
> >> +	RTE_FLOW_ITEM_TYPE_NVGRE,
> >>  };
> >>
> >>  /**
> >> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {  };
> >>
> >>  /**
> >> + * RTE_FLOW_ITEM_TYPE_E_TAG.
> >> + *
> >> + * Matches a E-tag header.
> >> + */
> >> +struct rte_flow_item_e_tag {
> >> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
> >> +	/** E-Tag control information (E-TCI). */
> >> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
> >> +	uint16_t epcp_edei_in_ecid_b;
> >> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
> >> +	uint16_t rsvd_grp_ecid_b;
> >> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
> >> +	uint8_t ecid_e; /**< E-CID ext. */
> >> +};
> >> +
> >> +/**
> >> + * RTE_FLOW_ITEM_TYPE_NVGRE.
> >> + *
> >> + * Matches a NVGRE header.
> >> + */
> >> +struct rte_flow_item_nvgre {
> >> +     /**
> >> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
> >> +      * reserved 0 (9b), version (3b).
> >> +      *
> >> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
> >> +      */
> >> +	uint16_t c_k_s_rsvd0_ver;
> >> +	uint16_t protocol; /**< Protocol type (0x6558). */
> >> +	uint8_t tni[3]; /**< Virtual subnet ID. */
> >> +	uint8_t flow_id; /**< Flow ID. */
> >> +};
> >> +
> >> +/**
> >>   * Matching pattern item definition.
> >>   *
> >>   * A pattern is formed by stacking items starting from the lowest
> >> protocol
> >> --
> >> 2.5.5
> >>
> >
> > OK for these definitions, however API documentation
> > (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it
> > would be great if testpmd support for these new items was added
> > simultaneously (changes in app/test-pmd/cmdline.c,
> > app/test-pmd/cmdline_flow.c and
> doc/guides/testpmd_app_ug/testpmd_funcs.rst).
> >
> > How about putting all rte_flow changes (API & testpmd) in their own
> > separate patch?
> 
> I thought it can be more useful to have library and its user updated in same
> patch, gives more context. But missed rte_flow documentation ...
> 
> >
> > You could use VLAN PCP/DEI/VID definitions as an example to expose
> > partial bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
> >
> >  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
> >
> > Now if re-spinning this series yet again is too much work, you can go
> > ahead with this commit as long as you do not forget to submit the rest
> > later, thanks.
> >
> 
> Is following todo list complete:
> 1- update rte_flow document, doc/guides/prog_guide/rte_flow.rst,
> document two new item types: E_TAG & NVGRE.
> 
> 2- Add testpmd sample implementation and documentation.
> 

I am sorry for missing the rte_flow.rst document update; I didn't know such
a new file existed.
Also, the E_TAG and NVGRE types were added to the code after the rte_flow
patch, so the testpmd implementation does not support them yet.
We are now busy with the 17.05 tasks.
How about finishing (1) first as one patch, then adding (2) as another patch
after the busy 17.05 work?
Or, if these two items are merged into one patch set, I may only be able to
start after the 17.05 tasks finish.
Which one is OK for you?

Thank you.
> 
> Hi Wei,
> 
> Would you mind working on a patch to cover above items?
> 
> Thanks,
> ferruh

^ permalink raw reply	[flat|nested] 149+ messages in thread

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-17  9:27                 ` Zhao1, Wei
@ 2017-01-17 10:03                   ` Ferruh Yigit
  2017-01-18  1:59                     ` Zhao1, Wei
  0 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-17 10:03 UTC (permalink / raw)
  To: Zhao1, Wei, Adrien Mazarguil; +Cc: dev, Lu, Wenzhuo

On 1/17/2017 9:27 AM, Zhao1, Wei wrote:
> Hi, Ferruh
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Tuesday, January 17, 2017 12:39 AM
>> To: Adrien Mazarguil <adrien.mazarguil@6wind.com>; Zhao1, Wei
>> <wei.zhao1@intel.com>
>> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
>>
>> On 1/16/2017 1:03 PM, Adrien Mazarguil wrote:
>>> Hi,
>>>
>>> On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
>>>> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
>>>>
>>>> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
>>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>>>> ---
>>>>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
>>>>  drivers/net/ixgbe/ixgbe_flow.c   | 216
>> +++++++++++++++++++++++++++++++++++++++
>>>>  lib/librte_ether/rte_flow.h      |  48 +++++++++
>>>>  3 files changed, 266 insertions(+), 1 deletion(-)
>>> [...]
>>>> diff --git a/lib/librte_ether/rte_flow.h
>>>> b/lib/librte_ether/rte_flow.h index 98084ac..7142479 100644
>>>> --- a/lib/librte_ether/rte_flow.h
>>>> +++ b/lib/librte_ether/rte_flow.h
>>>> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
>>>>  	 * See struct rte_flow_item_vxlan.
>>>>  	 */
>>>>  	RTE_FLOW_ITEM_TYPE_VXLAN,
>>>> +
>>>> +	/**
>>>> +	 * Matches a E_TAG header.
>>>> +	 *
>>>> +	 * See struct rte_flow_item_e_tag.
>>>> +	 */
>>>> +	RTE_FLOW_ITEM_TYPE_E_TAG,
>>>> +
>>>> +	/**
>>>> +	 * Matches a NVGRE header.
>>>> +	 *
>>>> +	 * See struct rte_flow_item_nvgre.
>>>> +	 */
>>>> +	RTE_FLOW_ITEM_TYPE_NVGRE,
>>>>  };
>>>>
>>>>  /**
>>>> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {  };
>>>>
>>>>  /**
>>>> + * RTE_FLOW_ITEM_TYPE_E_TAG.
>>>> + *
>>>> + * Matches a E-tag header.
>>>> + */
>>>> +struct rte_flow_item_e_tag {
>>>> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
>>>> +	/** E-Tag control information (E-TCI). */
>>>> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
>>>> +	uint16_t epcp_edei_in_ecid_b;
>>>> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
>>>> +	uint16_t rsvd_grp_ecid_b;
>>>> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
>>>> +	uint8_t ecid_e; /**< E-CID ext. */
>>>> +};
>>>> +
>>>> +/**
>>>> + * RTE_FLOW_ITEM_TYPE_NVGRE.
>>>> + *
>>>> + * Matches a NVGRE header.
>>>> + */
>>>> +struct rte_flow_item_nvgre {
>>>> +     /**
>>>> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
>>>> +      * reserved 0 (9b), version (3b).
>>>> +      *
>>>> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
>>>> +      */
>>>> +	uint16_t c_k_s_rsvd0_ver;
>>>> +	uint16_t protocol; /**< Protocol type (0x6558). */
>>>> +	uint8_t tni[3]; /**< Virtual subnet ID. */
>>>> +	uint8_t flow_id; /**< Flow ID. */
>>>> +};
>>>> +
>>>> +/**
>>>>   * Matching pattern item definition.
>>>>   *
>>>>   * A pattern is formed by stacking items starting from the lowest
>>>> protocol
>>>> --
>>>> 2.5.5
>>>>
>>>
>>> OK for these definitions, however API documentation
>>> (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it
>>> would be great if testpmd support for these new items was added
>>> simultaneously (changes in app/test-pmd/cmdline.c,
>>> app/test-pmd/cmdline_flow.c and
>> doc/guides/testpmd_app_ug/testpmd_funcs.rst).
>>>
>>> How about putting all rte_flow changes (API & testpmd) in their own
>>> separate patch?
>>
>> I thought it can be more useful to have library and its user updated in same
>> patch, gives more context. But missed rte_flow documentation ...
>>
>>>
>>> You could use VLAN PCP/DEI/VID definitions as an example to expose
>>> partial bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
>>>
>>>  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
>>>
>>> Now if re-spinning this series yet again is too much work, you can go
>>> ahead with this commit as long as you do not forget to submit the rest
>>> later, thanks.
>>>
>>
>> Is following todo list complete:
>> 1- update rte_flow document, doc/guides/prog_guide/rte_flow.rst,
>> document two new item types: E_TAG & NVGRE.
>>
>> 2- Add testpmd sample implementation and documentation.
>>
> 
> I am sorry for missing the rte_flow.rst document update; I didn't know such
> a new file existed.
> Also, the E_TAG and NVGRE types were added to the code after the rte_flow
> patch, so the testpmd implementation does not support them yet.
> We are now busy with the 17.05 tasks.
> How about finishing (1) first as one patch, then adding (2) as another patch
> after the busy 17.05 work?
> Or, if these two items are merged into one patch set, I may only be able to
> start after the 17.05 tasks finish.
> Which one is OK for you?

The changes can be in two separate patches, and if possible, can this get
priority over the 17.05 tasks?
This is a 17.02 feature and it is not completely finished; only the last
touches are missing.

Thanks,
ferruh

> 
> Thank you.
>>
>> Hi Wei,
>>
>> Would you mind working on a patch to cover above items?
>>
>> Thanks,
>> ferruh
> 

^ permalink raw reply	[flat|nested] 149+ messages in thread
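
As context for the thread above, here is a minimal sketch of how an application could use the new NVGRE item, assuming the 17.02-era rte_flow API (where rte_flow_create() still took a uint8_t port_id). The helper name and the port, queue and TNI values are illustrative, not taken from the series, and whether a given PMD accepts this exact pattern depends on its parser:

#include <string.h>
#include <rte_flow.h>

/* Sketch only: steer NVGRE packets carrying one virtual subnet ID (TNI)
 * to a given Rx queue. */
static struct rte_flow *
nvgre_to_queue(uint8_t port_id, uint16_t queue, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_queue queue_conf = { .index = queue };
	struct rte_flow_item_nvgre nvgre_spec, nvgre_mask;

	memset(&nvgre_spec, 0, sizeof(nvgre_spec));
	memset(&nvgre_mask, 0, sizeof(nvgre_mask));
	nvgre_spec.tni[2] = 0x01;	/* match TNI 0x000001 */
	memset(nvgre_mask.tni, 0xff, sizeof(nvgre_mask.tni));

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_NVGRE,
		  .spec = &nvgre_spec, .mask = &nvgre_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

An application would typically call rte_flow_validate() with the same attribute, pattern and action arrays before creating the rule.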

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-17 10:03                   ` Ferruh Yigit
@ 2017-01-18  1:59                     ` Zhao1, Wei
  2017-01-18 17:49                       ` Ferruh Yigit
  0 siblings, 1 reply; 149+ messages in thread
From: Zhao1, Wei @ 2017-01-18  1:59 UTC (permalink / raw)
  To: Yigit, Ferruh, Adrien Mazarguil; +Cc: dev, Lu, Wenzhuo

Hi, Ferruh

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, January 17, 2017 6:03 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
> 
> On 1/17/2017 9:27 AM, Zhao1, Wei wrote:
> > Hi, Ferruh
> >
> >> -----Original Message-----
> >> From: Yigit, Ferruh
> >> Sent: Tuesday, January 17, 2017 12:39 AM
> >> To: Adrien Mazarguil <adrien.mazarguil@6wind.com>; Zhao1, Wei
> >> <wei.zhao1@intel.com>
> >> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> >> Subject: Re: [dpdk-dev] [PATCH v6 14/18] net/ixgbe: parse L2 tunnel
> >> filter
> >>
> >> On 1/16/2017 1:03 PM, Adrien Mazarguil wrote:
> >>> Hi,
> >>>
> >>> On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
> >>>> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
> >>>>
> >>>> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> >>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >>>> ---
> >>>>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
> >>>>  drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
> >>>>  lib/librte_ether/rte_flow.h      |  48 +++++++++
> >>>>  3 files changed, 266 insertions(+), 1 deletion(-)
> >>> [...]
> >>>> diff --git a/lib/librte_ether/rte_flow.h
> >>>> b/lib/librte_ether/rte_flow.h index 98084ac..7142479 100644
> >>>> --- a/lib/librte_ether/rte_flow.h
> >>>> +++ b/lib/librte_ether/rte_flow.h
> >>>> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
> >>>>  	 * See struct rte_flow_item_vxlan.
> >>>>  	 */
> >>>>  	RTE_FLOW_ITEM_TYPE_VXLAN,
> >>>> +
> >>>> +	/**
> >>>> +	 * Matches a E_TAG header.
> >>>> +	 *
> >>>> +	 * See struct rte_flow_item_e_tag.
> >>>> +	 */
> >>>> +	RTE_FLOW_ITEM_TYPE_E_TAG,
> >>>> +
> >>>> +	/**
> >>>> +	 * Matches a NVGRE header.
> >>>> +	 *
> >>>> +	 * See struct rte_flow_item_nvgre.
> >>>> +	 */
> >>>> +	RTE_FLOW_ITEM_TYPE_NVGRE,
> >>>>  };
> >>>>
> >>>>  /**
> >>>> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
> >>>>  };
> >>>>
> >>>>  /**
> >>>> + * RTE_FLOW_ITEM_TYPE_E_TAG.
> >>>> + *
> >>>> + * Matches a E-tag header.
> >>>> + */
> >>>> +struct rte_flow_item_e_tag {
> >>>> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
> >>>> +	/** E-Tag control information (E-TCI). */
> >>>> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
> >>>> +	uint16_t epcp_edei_in_ecid_b;
> >>>> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
> >>>> +	uint16_t rsvd_grp_ecid_b;
> >>>> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
> >>>> +	uint8_t ecid_e; /**< E-CID ext. */
> >>>> +};
> >>>> +
> >>>> +/**
> >>>> + * RTE_FLOW_ITEM_TYPE_NVGRE.
> >>>> + *
> >>>> + * Matches a NVGRE header.
> >>>> + */
> >>>> +struct rte_flow_item_nvgre {
> >>>> +     /**
> >>>> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
> >>>> +      * reserved 0 (9b), version (3b).
> >>>> +      *
> >>>> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
> >>>> +      */
> >>>> +	uint16_t c_k_s_rsvd0_ver;
> >>>> +	uint16_t protocol; /**< Protocol type (0x6558). */
> >>>> +	uint8_t tni[3]; /**< Virtual subnet ID. */
> >>>> +	uint8_t flow_id; /**< Flow ID. */
> >>>> +};
> >>>> +
> >>>> +/**
> >>>>   * Matching pattern item definition.
> >>>>   *
> >>>>   * A pattern is formed by stacking items starting from the lowest
> >>>> protocol
> >>>> --
> >>>> 2.5.5
> >>>>
> >>>
> >>> OK for these definitions, however API documentation
> >>> (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it
> >>> would be great if testpmd support for these new items was added
> >>> simultaneously (changes in app/test-pmd/cmdline.c,
> >>> app/test-pmd/cmdline_flow.c and
> >>> doc/guides/testpmd_app_ug/testpmd_funcs.rst).
> >>>
> >>> How about putting all rte_flow changes (API & testpmd) in their own
> >>> separate patch?
> >>
> >> I thought it would be more useful to have the library and its user updated
> >> in the same patch, as it gives more context. But I missed the rte_flow documentation ...
> >>
> >>>
> >>> You could use VLAN PCP/DEI/VID definitions as an example to expose
> >>> partial bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
> >>>
> >>>  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
> >>>
> >>> Now if re-spinning this series yet again is too much work, you can
> >>> go ahead with this commit as long as you do not forget to submit the
> >>> rest later, thanks.
> >>>
> >>
> >> Is the following todo list complete:
> >> 1- Update the rte_flow document, doc/guides/prog_guide/rte_flow.rst, to
> >> document the two new item types: E_TAG & NVGRE.
> >>
> >> 2- Add a testpmd sample implementation and documentation.
> >>
> >
> > I am sorry for missing the rte_flow.rst document update; I DIDN'T know there
> > was such a new file as rte_flow.rst.
> > Also, these two types, E_TAG & NVGRE, were added to the code after the
> > rte_flow patch, so the testpmd implementation does not support these types.
> > Now we are working on the 17.05 tasks.
> > How about finishing (1) first as one patch, and adding (2) as another patch
> > after the busy 17.05 work?
> > OR, if these two pieces of work are merged into one patch set, I may only be
> > able to start after the 17.05 tasks finish.
> > Which one is OK for you?
> 
> The changes can be in two separate patches, and if possible, can this get
> priority over the 17.05 tasks?
> This is a 17.02 feature and it is not completely finished; only the last
> touches are missing.
> 
> Thanks,
> ferruh
> 
> >
> > Thank you.
> >>
> >> Hi Wei,
> >>
> >> Would you mind working on a patch to cover above items?
> >>
> >> Thanks,
> >> ferruh
> >

Adding a testpmd implementation example was not in our plan from the beginning of the generic filter API task,
because this part was not assigned to us (Intel) when the work was allocated.
BUT if this component is needed by the community, I will report it to my leader, add it to our next work plan, and try to finish it ASAP.

Thank you.

^ permalink raw reply	[flat|nested] 149+ messages in thread
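
On the partial bit-fields Adrien refers to (e.g. epcp_edei_in_ecid_b): the E-TCI half-word follows the layout given in the struct comments, E-PCP (3b) | E-DEI (1b) | ingress E-CID base (12b), and like other rte_flow item fields it is kept in network byte order. A sketch of how the sub-fields could be packed; the helper name is made up and this is not code from the series:

#include <stdint.h>
#include <rte_byteorder.h>

/* Pack E-PCP (3 bits), E-DEI (1 bit) and the 12-bit ingress E-CID base
 * into the big-endian E-TCI half-word. */
static inline uint16_t
etag_pack_epcp_edei_in_ecid_b(uint8_t pcp, uint8_t dei, uint16_t in_ecid_base)
{
	uint16_t v = ((uint16_t)(pcp & 0x07) << 13) |
		     ((uint16_t)(dei & 0x01) << 12) |
		     (in_ecid_base & 0x0fff);

	return rte_cpu_to_be_16(v);
}

testpmd would then expose each sub-field as its own command token and merge such shifted values into the item spec and mask, the way commit 1419fd5a6c9f does for the VLAN PCP/DEI/VID fields.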

* Re: [PATCH v6 00/18] net/ixgbe: Consistent filter API
  2017-01-13 15:54           ` [PATCH v6 00/18] net/ixgbe: Consistent filter API Ferruh Yigit
  2017-01-15  2:44             ` Lu, Wenzhuo
@ 2017-01-18 11:04             ` Thomas Monjalon
  1 sibling, 0 replies; 149+ messages in thread
From: Thomas Monjalon @ 2017-01-18 11:04 UTC (permalink / raw)
  To: Ferruh Yigit, Wei Zhao; +Cc: dev

2017-01-13 15:54, Ferruh Yigit:
> On 1/13/2017 8:12 AM, Wei Zhao wrote:
> > zhao wei (18):
> >   net/ixgbe: store TCP SYN filter
> >   net/ixgbe: store flow director filter
> >   net/ixgbe: store L2 tunnel filter
> >   net/ixgbe: restore n-tuple filter
> >   net/ixgbe: restore ether type filter
> >   net/ixgbe: restore TCP SYN filter
> >   net/ixgbe: restore flow director filter
> >   net/ixgbe: restore L2 tunnel filter
> >   net/ixgbe: store and restore L2 tunnel configuration
> >   net/ixgbe: flush all the filters
> >   net/ixgbe: parse n-tuple filter
> >   net/ixgbe: parse ethertype filter
> >   net/ixgbe: parse TCP SYN filter
> >   net/ixgbe: parse L2 tunnel filter
> >   net/ixgbe: parse flow director filter
> >   net/ixgbe: create consistent filter
> >   net/ixgbe: destroy consistent filter
> >   net/ixgbe: flush all the filter list
> > 
> <...>
> > 
> > Acked-by: Beilei Xing <beilei.xing@intel.com>
> > Acked-by: Wei Dai <wei.dai@intel.com>
> > 
> 
> Series applied to dpdk-next-net/master, thanks.

There is a build issue:
patch 2/18 uses rte_hash
which is linked to ixgbe only in patch 10/18.

I am fixing it in mainline.

^ permalink raw reply	[flat|nested] 149+ messages in thread
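
For readers wondering where the rte_hash dependency comes from: the flow director store added in patch 2/18 keeps a software copy of the filters in a hash table. The sketch below shows the general shape of such a store using the rte_hash API; the key layout, table name and size are illustrative rather than the actual ixgbe code:

#include <stdint.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>

/* Illustrative flow director key; the real driver hashes its own
 * filter input structure. */
struct fdir_key {
	uint32_t src_ip;
	uint32_t dst_ip;
	uint16_t src_port;
	uint16_t dst_port;
};

static struct rte_hash *
fdir_store_create(int socket_id)
{
	struct rte_hash_parameters params = {
		.name = "fdir_store",	/* illustrative name */
		.entries = 1024,	/* illustrative table size */
		.key_len = sizeof(struct fdir_key),
		.hash_func = rte_hash_crc,
		.hash_func_init_val = 0,
		.socket_id = socket_id,
	};

	return rte_hash_create(&params);
}

Because rte_hash_create() is already called in patch 2/18 while the hash library is only added to the ixgbe link dependencies in patch 10/18, a build at any intermediate patch fails, which is what the fix in mainline addresses.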

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-18  1:59                     ` Zhao1, Wei
@ 2017-01-18 17:49                       ` Ferruh Yigit
  2017-01-25 12:17                         ` Ferruh Yigit
  0 siblings, 1 reply; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-18 17:49 UTC (permalink / raw)
  To: Zhao1, Wei, Adrien Mazarguil; +Cc: dev, Lu, Wenzhuo

On 1/18/2017 1:59 AM, Zhao1, Wei wrote:
> Hi, Ferruh
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Tuesday, January 17, 2017 6:03 PM
>> To: Zhao1, Wei <wei.zhao1@intel.com>; Adrien Mazarguil
>> <adrien.mazarguil@6wind.com>
>> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
>>
>> On 1/17/2017 9:27 AM, Zhao1, Wei wrote:
>>> Hi, Ferruh
>>>
>>>> -----Original Message-----
>>>> From: Yigit, Ferruh
>>>> Sent: Tuesday, January 17, 2017 12:39 AM
>>>> To: Adrien Mazarguil <adrien.mazarguil@6wind.com>; Zhao1, Wei
>>>> <wei.zhao1@intel.com>
>>>> Cc: dev@dpdk.org; Lu, Wenzhuo <wenzhuo.lu@intel.com>
>>>> Subject: Re: [dpdk-dev] [PATCH v6 14/18] net/ixgbe: parse L2 tunnel
>>>> filter
>>>>
>>>> On 1/16/2017 1:03 PM, Adrien Mazarguil wrote:
>>>>> Hi,
>>>>>
>>>>> On Fri, Jan 13, 2017 at 04:13:08PM +0800, Wei Zhao wrote:
>>>>>> check if the rule is a L2 tunnel rule, and get the L2 tunnel info.
>>>>>>
>>>>>> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
>>>>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>>>>>> ---
>>>>>>  drivers/net/ixgbe/ixgbe_ethdev.c |   3 +-
>>>>>>  drivers/net/ixgbe/ixgbe_flow.c   | 216 +++++++++++++++++++++++++++++++++++++++
>>>>>>  lib/librte_ether/rte_flow.h      |  48 +++++++++
>>>>>>  3 files changed, 266 insertions(+), 1 deletion(-)
>>>>> [...]
>>>>>> diff --git a/lib/librte_ether/rte_flow.h
>>>>>> b/lib/librte_ether/rte_flow.h index 98084ac..7142479 100644
>>>>>> --- a/lib/librte_ether/rte_flow.h
>>>>>> +++ b/lib/librte_ether/rte_flow.h
>>>>>> @@ -268,6 +268,20 @@ enum rte_flow_item_type {
>>>>>>  	 * See struct rte_flow_item_vxlan.
>>>>>>  	 */
>>>>>>  	RTE_FLOW_ITEM_TYPE_VXLAN,
>>>>>> +
>>>>>> +	/**
>>>>>> +	 * Matches a E_TAG header.
>>>>>> +	 *
>>>>>> +	 * See struct rte_flow_item_e_tag.
>>>>>> +	 */
>>>>>> +	RTE_FLOW_ITEM_TYPE_E_TAG,
>>>>>> +
>>>>>> +	/**
>>>>>> +	 * Matches a NVGRE header.
>>>>>> +	 *
>>>>>> +	 * See struct rte_flow_item_nvgre.
>>>>>> +	 */
>>>>>> +	RTE_FLOW_ITEM_TYPE_NVGRE,
>>>>>>  };
>>>>>>
>>>>>>  /**
>>>>>> @@ -454,6 +468,40 @@ struct rte_flow_item_vxlan {
>>>>>>  };
>>>>>>
>>>>>>  /**
>>>>>> + * RTE_FLOW_ITEM_TYPE_E_TAG.
>>>>>> + *
>>>>>> + * Matches a E-tag header.
>>>>>> + */
>>>>>> +struct rte_flow_item_e_tag {
>>>>>> +	uint16_t tpid; /**< Tag protocol identifier (0x893F). */
>>>>>> +	/** E-Tag control information (E-TCI). */
>>>>>> +	/**< E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). */
>>>>>> +	uint16_t epcp_edei_in_ecid_b;
>>>>>> +	/**< Reserved (2b), GRP (2b), E-CID base (12b). */
>>>>>> +	uint16_t rsvd_grp_ecid_b;
>>>>>> +	uint8_t in_ecid_e; /**< Ingress E-CID ext. */
>>>>>> +	uint8_t ecid_e; /**< E-CID ext. */
>>>>>> +};
>>>>>> +
>>>>>> +/**
>>>>>> + * RTE_FLOW_ITEM_TYPE_NVGRE.
>>>>>> + *
>>>>>> + * Matches a NVGRE header.
>>>>>> + */
>>>>>> +struct rte_flow_item_nvgre {
>>>>>> +     /**
>>>>>> +      * Checksum (1b), undefined (1b), key bit (1b), sequence number (1b),
>>>>>> +      * reserved 0 (9b), version (3b).
>>>>>> +      *
>>>>>> +      * \c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
>>>>>> +      */
>>>>>> +	uint16_t c_k_s_rsvd0_ver;
>>>>>> +	uint16_t protocol; /**< Protocol type (0x6558). */
>>>>>> +	uint8_t tni[3]; /**< Virtual subnet ID. */
>>>>>> +	uint8_t flow_id; /**< Flow ID. */
>>>>>> +};
>>>>>> +
>>>>>> +/**
>>>>>>   * Matching pattern item definition.
>>>>>>   *
>>>>>>   * A pattern is formed by stacking items starting from the lowest
>>>>>> protocol
>>>>>> --
>>>>>> 2.5.5
>>>>>>
>>>>>
>>>>> OK for these definitions, however API documentation
>>>>> (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it
>>>>> would be great if testpmd support for these new items was added
>>>>> simultaneously (changes in app/test-pmd/cmdline.c,
>>>>> app/test-pmd/cmdline_flow.c and
>>>>> doc/guides/testpmd_app_ug/testpmd_funcs.rst).
>>>>>
>>>>> How about putting all rte_flow changes (API & testpmd) in their own
>>>>> separate patch?
>>>>
>>>> I thought it would be more useful to have the library and its user updated
>>>> in the same patch, as it gives more context. But I missed the rte_flow documentation ...
>>>>
>>>>>
>>>>> You could use VLAN PCP/DEI/VID definitions as an example to expose
>>>>> partial bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
>>>>>
>>>>>  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
>>>>>
>>>>> Now if re-spinning this series yet again is too much work, you can
>>>>> go ahead with this commit as long as you do not forget to submit the
>>>>> rest later, thanks.
>>>>>
>>>>
>>>> Is the following todo list complete:
>>>> 1- Update the rte_flow document, doc/guides/prog_guide/rte_flow.rst, to
>>>> document the two new item types: E_TAG & NVGRE.
>>>>
>>>> 2- Add a testpmd sample implementation and documentation.
>>>>
>>>
>>> I am sorry for missing the rte_flow.rst document update; I DIDN'T know there
>>> was such a new file as rte_flow.rst.
>>> Also, these two types, E_TAG & NVGRE, were added to the code after the
>>> rte_flow patch, so the testpmd implementation does not support these types.
>>> Now we are working on the 17.05 tasks.
>>> How about finishing (1) first as one patch, and adding (2) as another patch
>>> after the busy 17.05 work?
>>> OR, if these two pieces of work are merged into one patch set, I may only be
>>> able to start after the 17.05 tasks finish.
>>> Which one is OK for you?
>>
>> The changes can be in two separate patches, and if possible, can this get
>> priority over the 17.05 tasks?
>> This is a 17.02 feature and it is not completely finished; only the last
>> touches are missing.
>>
>> Thanks,
>> ferruh
>>
>>>
>>> Thank you.
>>>>
>>>> Hi Wei,
>>>>
>>>> Would you mind working on a patch to cover above items?
>>>>
>>>> Thanks,
>>>> ferruh
>>>
> 
> Adding a testpmd implementation example was not in our plan from the beginning of the generic filter API task,
> because this part was not assigned to us (Intel) when the work was allocated.

Right, a sample testpmd implementation is not directly related to
adding rte_flow support to the driver.

But this feature required an rte_flow library update, and a sample testpmd
implementation is part of any rte_flow update.

> BUT if this component is needed by the community, I will report it to my leader, add it to our next work plan, and try to finish it ASAP.

Yes please, I believe it is needed.

Thanks,
ferruh

> 
> Thank you.
> 

^ permalink raw reply	[flat|nested] 149+ messages in thread
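
Once that testpmd support exists, matching the new items from the testpmd command line could look something like the following hypothetical command, with syntax modeled on the existing vxlan item (the final token names depend on the actual cmdline_flow.c patch):

testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre tni is 0x000001 / end actions queue index 1 / end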

* Re: [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter
  2017-01-18 17:49                       ` Ferruh Yigit
@ 2017-01-25 12:17                         ` Ferruh Yigit
  0 siblings, 0 replies; 149+ messages in thread
From: Ferruh Yigit @ 2017-01-25 12:17 UTC (permalink / raw)
  To: Zhao1, Wei, Adrien Mazarguil; +Cc: dev, Lu, Wenzhuo

On 1/18/2017 5:49 PM, Ferruh Yigit wrote:
<...>

>>>>>>>
>>>>>>
>>>>>> OK for these definitions, however API documentation
>>>>>> (doc/guides/prog_guide/rte_flow.rst) must be kept up to date, and it
>>>>>> would be great if testpmd support for these new items was added
>>>>>> simultaneously (changes in app/test-pmd/cmdline.c,
>>>>>> app/test-pmd/cmdline_flow.c and
>>>>>> doc/guides/testpmd_app_ug/testpmd_funcs.rst).
>>>>>>
>>>>>> How about putting all rte_flow changes (API & testpmd) in their own
>>>>>> separate patch?
>>>>>
>>>>> I thought it would be more useful to have the library and its user updated
>>>>> in the same patch, as it gives more context. But I missed the rte_flow documentation ...
>>>>>
>>>>>>
>>>>>> You could use VLAN PCP/DEI/VID definitions as an example to expose
>>>>>> partial bit-fields (e.g. epcp_edei_in_ecid_b) in testpmd, see:
>>>>>>
>>>>>>  1419fd5a6c9f ("app/testpmd: add protocol fields to flow command")
>>>>>>
>>>>>> Now if re-spinning this series yet again is too much work, you can
>>>>>> go ahead with this commit as long as you do not forget to submit the
>>>>>> rest later, thanks.
>>>>>>
>>>>>
>>>>> Is the following todo list complete:
>>>>> 1- Update the rte_flow document, doc/guides/prog_guide/rte_flow.rst, to
>>>>> document the two new item types: E_TAG & NVGRE.
>>>>>
>>>>> 2- Add a testpmd sample implementation and documentation.
>>>>>
>>>>
>>>> I am sorry for missing the rte_flow.rst document update; I DIDN'T know there
>>>> was such a new file as rte_flow.rst.
>>>> Also, these two types, E_TAG & NVGRE, were added to the code after the
>>>> rte_flow patch, so the testpmd implementation does not support these types.
>>>> Now we are working on the 17.05 tasks.
>>>> How about finishing (1) first as one patch, and adding (2) as another patch
>>>> after the busy 17.05 work?
>>>> OR, if these two pieces of work are merged into one patch set, I may only be
>>>> able to start after the 17.05 tasks finish.
>>>> Which one is OK for you?
>>>
>>> The changes can be in two separate patches, and if possible, can this get
>>> priority over the 17.05 tasks?
>>> This is a 17.02 feature and it is not completely finished; only the last
>>> touches are missing.
>>>
>>> Thanks,
>>> ferruh
>>>
>>>>
>>>> Thank you.
>>>>>
>>>>> Hi Wei,
>>>>>
>>>>> Would you mind working on a patch to cover above items?
>>>>>
>>>>> Thanks,
>>>>> ferruh
>>>>
>>
>> Adding a testpmd implementation example was not in our plan from the beginning of the generic filter API task,
>> because this part was not assigned to us (Intel) when the work was allocated.
> 
> Right, a sample testpmd implementation is not directly related to
> adding rte_flow support to the driver.
> 
> But this feature required an rte_flow library update, and a sample testpmd
> implementation is part of any rte_flow update.
> 
>> BUT if this component is needed by the community, I will report it to my leader, add it to our next work plan, and try to finish it ASAP.
> 
> Yes please, I believe it is needed.

Is there any update on this work?

> 
> Thanks,
> ferruh
> 
>>
>> Thank you.
>>
> 

^ permalink raw reply	[flat|nested] 149+ messages in thread

end of thread, other threads:[~2017-01-25 12:17 UTC | newest]

Thread overview: 149+ messages
2016-12-30  7:52 [PATCH v2 00/18] net/ixgbe: Consistent filter API Wei Zhao
2016-12-30  7:52 ` [PATCH v2 01/18] net/ixgbe: store SYN filter Wei Zhao
2017-01-03 14:33   ` Dai, Wei
2017-01-04  1:46     ` Zhao1, Wei
2017-01-06 16:28   ` Ferruh Yigit
2017-01-10  5:33     ` Zhao1, Wei
2017-01-12  8:12   ` [PATCH v3 00/18] net/ixgbe: Consistent filter API Wei Zhao
2017-01-12  8:13   ` [PATCH v3 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 02/18] net/ixgbe: store flow director filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 04/18] net/ixgbe: restore n-tuple filter Add support for restoring n-tuple filter in SW Wei Zhao
2017-01-12  8:13     ` [PATCH v3 4/9] " Wei Zhao
2017-01-12  8:13     ` [PATCH v3 05/18] net/ixgbe: restore ether type filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 07/18] net/ixgbe: restore flow director filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
2017-01-12  8:13     ` [PATCH v3 10/18] net/ixgbe: flush all the filters Wei Zhao
2017-01-12  8:13     ` [PATCH v3 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 12/18] net/ixgbe: parse ethertype filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 13/18] net/ixgbe: parse TCP SYN filter check if the rule is a TCP SYN rule, and get the SYN info Wei Zhao
2017-01-12  8:13     ` [PATCH v3 14/18] net/ixgbe: parse L2 tunnel filter check if the rule is a L2 tunnel rule, and get the L2 tunnel info Wei Zhao
2017-01-12  8:13     ` [PATCH v3 15/18] net/ixgbe: parse flow director filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 16/18] net/ixgbe: create consistent filter Wei Zhao
2017-01-12  8:13     ` [PATCH v3 17/18] net/ixgbe: destroy " Wei Zhao
2017-01-12  8:13     ` [PATCH v3 18/18] net/ixgbe: flush all the filter list Wei Zhao
2017-01-12  8:40     ` [PATCH v4 00/18] net/ixgbe: Consistent filter API Wei Zhao
2017-01-12  8:40       ` [PATCH v4 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 02/18] net/ixgbe: store flow director filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 04/18] net/ixgbe: restore n-tuple filter Add support for restoring n-tuple filter in SW Wei Zhao
2017-01-12  8:40       ` [PATCH v4 05/18] net/ixgbe: restore ether type filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 07/18] net/ixgbe: restore flow director filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
2017-01-12  8:40       ` [PATCH v4 10/18] net/ixgbe: flush all the filters Wei Zhao
2017-01-12  8:40       ` [PATCH v4 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 12/18] net/ixgbe: parse ethertype filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 13/18] net/ixgbe: parse TCP SYN filter check if the rule is a TCP SYN rule, and get the SYN info Wei Zhao
2017-01-12  8:40       ` [PATCH v4 14/18] net/ixgbe: parse L2 tunnel filter check if the rule is a L2 tunnel rule, and get the L2 tunnel info Wei Zhao
2017-01-12  8:40       ` [PATCH v4 15/18] net/ixgbe: parse flow director filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 16/18] net/ixgbe: create consistent filter Wei Zhao
2017-01-12  8:40       ` [PATCH v4 17/18] net/ixgbe: destroy " Wei Zhao
2017-01-12  8:40       ` [PATCH v4 18/18] net/ixgbe: flush all the filter list Wei Zhao
2017-01-12  9:17       ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Wei Zhao
2017-01-12  9:17         ` [PATCH v5 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 02/18] net/ixgbe: store flow director filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 05/18] net/ixgbe: restore ether type filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 07/18] net/ixgbe: restore flow director filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
2017-01-12  9:17         ` [PATCH v5 10/18] net/ixgbe: flush all the filters Wei Zhao
2017-01-12  9:17         ` [PATCH v5 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
2017-01-12 15:40           ` Ferruh Yigit
2017-01-12  9:17         ` [PATCH v5 12/18] net/ixgbe: parse ethertype filter Wei Zhao
2017-01-12 15:39           ` Ferruh Yigit
2017-01-12  9:17         ` [PATCH v5 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 15/18] net/ixgbe: parse flow director filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 16/18] net/ixgbe: create consistent filter Wei Zhao
2017-01-12  9:17         ` [PATCH v5 17/18] net/ixgbe: destroy " Wei Zhao
2017-01-12  9:17         ` [PATCH v5 18/18] net/ixgbe: flush all the filter list Wei Zhao
2017-01-12 11:38         ` [PATCH v5 00/18] net/ixgbe: Consistent filter API Xing, Beilei
2017-01-13  6:27           ` Dai, Wei
2017-01-12 15:43         ` Ferruh Yigit
2017-01-13  8:12         ` [PATCH v6 " Wei Zhao
2017-01-13  8:12           ` [PATCH v6 01/18] net/ixgbe: store TCP SYN filter Wei Zhao
2017-01-13  8:12           ` [PATCH v6 02/18] net/ixgbe: store flow director filter Wei Zhao
2017-01-13  8:12           ` [PATCH v6 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
2017-01-13  8:12           ` [PATCH v6 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
2017-01-13  8:12           ` [PATCH v6 05/18] net/ixgbe: restore ether type filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 07/18] net/ixgbe: restore flow director filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
2017-01-13  8:13           ` [PATCH v6 10/18] net/ixgbe: flush all the filters Wei Zhao
2017-01-13  8:13           ` [PATCH v6 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 12/18] net/ixgbe: parse ethertype filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
2017-01-13 11:18             ` Ferruh Yigit
2017-01-16 13:03             ` Adrien Mazarguil
2017-01-16 16:39               ` Ferruh Yigit
2017-01-16 18:26                 ` Adrien Mazarguil
2017-01-17  9:27                 ` Zhao1, Wei
2017-01-17 10:03                   ` Ferruh Yigit
2017-01-18  1:59                     ` Zhao1, Wei
2017-01-18 17:49                       ` Ferruh Yigit
2017-01-25 12:17                         ` Ferruh Yigit
2017-01-13  8:13           ` [PATCH v6 15/18] net/ixgbe: parse flow director filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 16/18] net/ixgbe: create consistent filter Wei Zhao
2017-01-13  8:13           ` [PATCH v6 17/18] net/ixgbe: destroy " Wei Zhao
2017-01-13  8:13           ` [PATCH v6 18/18] net/ixgbe: flush all the filter list Wei Zhao
2017-01-13 15:54           ` [PATCH v6 00/18] net/ixgbe: Consistent filter API Ferruh Yigit
2017-01-15  2:44             ` Lu, Wenzhuo
2017-01-18 11:04             ` Thomas Monjalon
2016-12-30  7:52 ` [PATCH v2 02/18] net/ixgbe: store flow director filter Wei Zhao
2017-01-02  9:59   ` Xing, Beilei
2017-01-03  3:14     ` Zhao1, Wei
2017-01-03 14:28   ` Dai, Wei
2017-01-04  2:03     ` Zhao1, Wei
2017-01-06 16:31   ` Ferruh Yigit
2017-01-10  5:30     ` Zhao1, Wei
2016-12-30  7:52 ` [PATCH v2 03/18] net/ixgbe: store L2 tunnel filter Wei Zhao
2017-01-02 10:06   ` Xing, Beilei
2016-12-30  7:52 ` [PATCH v2 04/18] net/ixgbe: restore n-tuple filter Wei Zhao
2016-12-30  7:52 ` [PATCH v2 05/18] net/ixgbe: restore ether type filter Wei Zhao
2016-12-30  7:52 ` [PATCH v2 06/18] net/ixgbe: restore TCP SYN filter Wei Zhao
2016-12-30  7:52 ` [PATCH v2 07/18] net/ixgbe: restore flow director filter Wei Zhao
2016-12-30  7:53 ` [PATCH v2 08/18] net/ixgbe: restore L2 tunnel filter Wei Zhao
2016-12-30  7:53 ` [PATCH v2 09/18] net/ixgbe: store and restore L2 tunnel configuration Wei Zhao
2017-01-02 10:18   ` Xing, Beilei
2016-12-30  7:53 ` [PATCH v2 10/18] net/ixgbe: flush all the filters Wei Zhao
2017-01-06 16:40   ` Ferruh Yigit
2017-01-11  7:51     ` Zhao1, Wei
2016-12-30  7:53 ` [PATCH v2 11/18] net/ixgbe: parse n-tuple filter Wei Zhao
2017-01-02 10:41   ` Xing, Beilei
2017-01-02 10:45   ` Xing, Beilei
2017-01-06 16:55   ` Ferruh Yigit
2017-01-11  8:27     ` Zhao1, Wei
2016-12-30  7:53 ` [PATCH v2 12/18] net/ixgbe: parse ethertype filter Wei Zhao
2017-01-06 17:11   ` Ferruh Yigit
2017-01-11  8:54     ` Zhao1, Wei
2016-12-30  7:53 ` [PATCH v2 13/18] net/ixgbe: parse TCP SYN filter Wei Zhao
2017-01-06 17:19   ` Ferruh Yigit
2017-01-10  5:46     ` Zhao1, Wei
2017-01-11  9:11     ` Zhao1, Wei
2016-12-30  7:53 ` [PATCH v2 14/18] net/ixgbe: parse L2 tunnel filter Wei Zhao
2017-01-03 14:07   ` Adrien Mazarguil
2017-01-05  3:12     ` Zhao1, Wei
2017-01-05  8:52       ` Adrien Mazarguil
2016-12-30  7:53 ` [PATCH v2 15/18] net/ixgbe: parse flow director filter Wei Zhao
2017-01-02 15:24   ` Xing, Beilei
2017-01-03  3:05     ` Zhao1, Wei
2017-01-03  3:19     ` Zhao1, Wei
2017-01-03 14:08   ` Adrien Mazarguil
2016-12-30  7:53 ` [PATCH v2 16/18] net/ixgbe: create consistent filter Wei Zhao
2017-01-03  2:04   ` Xing, Beilei
2017-01-03  3:11     ` Zhao1, Wei
     [not found]   ` <94479800C636CB44BD422CB454846E013158D036@SHSMSX101.ccr.corp.intel.com>
2017-01-03  3:09     ` Zhao1, Wei
2017-01-03  5:24   ` Xing, Beilei
2017-01-06 17:26   ` Ferruh Yigit
2017-01-10  5:50     ` Zhao1, Wei
2016-12-30  7:53 ` [PATCH v2 17/18] net/ixgbe: destroy " Wei Zhao
2016-12-30  7:53 ` [PATCH v2 18/18] net/ixgbe: flush all the filter list Wei Zhao
